Full Release Notes Index
This section contains a single page with all release notes on the same page.
Important
If you are using this page to look for specific release note entries or changes, please use the Search Release Notes page instead, which provides a much richer set of functionality for finding specific entry types and searching across specific versions.
Falcon LogScale 1.169.0 GA (2024-12-17)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
---|---|---|---|---|---|---|---
1.169.0 | GA | 2024-12-17 | Cloud | Next LTS | No | 1.136 | No
Available for download two days after release.
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
GraphQL API
The new parameter `strict` has been added to the input of the `analyzeQuery()` GraphQL query. When set to the default value `true`, query validation will always validate uses of saved queries and query parameters. When set to `false`, it will attempt to skip validation of saved query and query parameter uses. This is a breaking change because previously, validation behaved as if `strict` were set to `false`. To achieve the legacy behavior, set `strict=false`.
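As a sketch, a call opting into the legacy behavior might look like the following; apart from `strict`, the input and selection field names here are illustrative and may differ from the actual `analyzeQuery()` schema:

```graphql
query {
  analyzeQuery(input: {
    # illustrative query string
    queryString: "#host=github | groupBy(repo)",
    # opt out of strict validation to get the pre-1.169 behavior
    strict: false
  }) {
    validateQuery {
      isValid
    }
  }
}
```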
Storage
There is a change to the archiving logic so that LogScale no longer splits a given segment into multiple bucket objects based on ungrouped tag combinations in the segment. Tag groups were introduced to limit the number of datasources if a given tag had too many different values, but the previous implementation of archiving split the different tag combinations contained in a given segment back out into one bucket object per tag combination, which was a scalability issue and could also affect mini-segment merging. The new approach uploads one object per segment. As a visible impact for the user, there will be fewer objects in the archiving bucket, and the naming schema for the objects will change to no longer include the tags that were grouped into the tag groups that the datasource is based on. The set of events in the bucket remains the same. Because the previous behavior posed a cluster risk, the change is released immediately.
For self-hosted customers: if you need time to adapt external systems that read from the archive to the naming changes, you may disable the `DontSplitSegmentsForArchiving` feature flag (see Enabling & Disabling Feature Flags).

For more information, see Tag Grouping.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
New features and improvements
Administration and Management
`Usage` is now logged to the humio repository.
Ingestion
Clicking on the parser editor page now produces events that are more similar to what an ingested event would look like in certain edge cases.

You can now validate whether your parser complies with the CPS schema by clicking the checkbox in the parser editor. For more information, see Normalize and Validate Against CPS Schema.
Functions
Introducing a new query function `array:dedup()` for deduplicating elements of an array. For more information, see `array:dedup()`.
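A minimal sketch of the new function, assuming the array-field syntax used by other `array:*` functions (the field name is illustrative):

```logscale
// Remove duplicate entries from the emails[] array in each event
array:dedup("emails[]")
```

Given an event with emails[0]=a@example.com, emails[1]=a@example.com and emails[2]=b@example.com, the array would be reduced to two entries.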
Fixed in this release
Queries
The query table endpoint client has been fixed: it was unable to receive responses for tables larger than 128 MB, resulting in an error.

A performance regression in the query scheduler that could lead to query starvation and slow searches has been fixed.
Improvement
Storage
Improved performance when syncing IOCs internally within nodes in a cluster.
Improved the performance of ingest queue message handling that immediately follows a change in the Kafka partition count. Without this improvement, changing the partition count could substantially slow down processing of events ingested before the repartitioning.
Relocation of datasources after a partition count change will now be restarted if the Kafka partition count changes again while the cluster is executing relocations. This ensures that datasource placement always reflects the latest partition count.
Functions
Improved the error message for missing time zones in the `parseTimestamp()` function.
Falcon LogScale 1.168.0 GA (2024-12-10)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
---|---|---|---|---|---|---|---
1.168.0 | GA | 2024-12-10 | Cloud | Next LTS | No | 1.136 | No
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
New features and improvements
Administration and Management
Metrics made available on the Prometheus HTTP API have been modified so that the internal metrics that represent "meters" no longer become type=COUNTER in Prometheus, but instead are type=SUMMARY. The suffix on the name changes from `_total` to `_count` as a result. This adds reporting of 1, 5 and 15 minute rates.
Storage
Cluster statistics such as compressed byte size and compressed file size of the merged subset now count `aux` files at most once. Previously, the statistic counted every local `aux` file in the cluster, which would increase with the replication factor; that sum of `aux` file sizes was added to a sum of segment file sizes which did not consider the replication factor.

From the user point of view, this change does not affect ingest accounting and measurements, but it does affect the following other items:

The semantics of the `compressedByteSize`, `compressedByteSizeOfMerged` and `dataVolumeCompressed` fields in the `ClusterStatsType`, `RepositoryType` and `OrganizationStats` GraphQL types have changed: file sizes of both segments and `aux` files are now only counted once. These values are shown, for example, on the front page, and will be smaller than the old values.

Retention by compressed file size will keep more segments, since segments are deleted to stay under the actual limit, which is calculated as the configured limit minus the `aux` file sizes.

For more information, see Cluster statistics.
Configuration
Clusters using an HTTP proxy can now choose to have calls to the token endpoint for Google, Bitbucket, GitHub and Auth0 providers go through this proxy. This is configured using new configuration values whose default is `false`, so there is no change to how existing clusters are configured to use Google, Bitbucket, GitHub or Auth0.
Dashboards and Widgets
The `Table` widget cells will now show a warning along with the original value if decimal places are configured to be below 0 or above 20.
Fixed in this release
UI Changes
The dialog for creating a new group did not close automatically after successfully creating a group. This issue has been fixed.
The Saved query dialog has been fixed so that the saved queries are now sorted.
The Filter Match Highlighting feature could be deactivated for some regular expression results due to a stack overflow issue in the JavaScript Regular Expression engine. This issue has been fixed and the highlighting now works as expected.
API
`filterQuery` in API Query `metaData` was incorrect when using filters with implicit `AND` after aggregators. For example, `groupBy(x) | y=* z=*` would incorrectly give `y=* z=*` for the `filterQuery`, whereas `*` is the correct `filterQuery`. This issue has existed since 1.160.0 and has now been fixed. You can work around the issue by explicitly adding `|` between filters.
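To illustrate the workaround, the trailing implicit-AND filters can be rewritten with explicit pipes:

```logscale
// Instead of: groupBy(x) | y=* z=*
// write the filters after the aggregator with explicit pipes:
groupBy(x) | y=* | z=*
```

The two forms should return the same results; only the `filterQuery` computed in the query metadata was affected by the bug.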
Dashboards and Widgets
In the `Time Chart` widget, the Step after interpolation method would not display the line or area correctly when used with the Show gaps method for handling missing values.

In the `Time Chart` widget, an issue has been fixed where values below the minimum value of a Logarithmic axis would not be displayed, but values below 0 would.
Queries
Some queries (especially live queries) would continuously send a warning about missing data. This could happen if the query was planned at a time when there were cluster topology changes. This issue has been fixed and, instead of sending the warning, the query will now automatically restart since there might be more data to search.
Queries could sometimes fail and return an `IndexOutOfBoundsException` error. This issue has been fixed.
Functions
Fixed an issue where `parseCEF()` would stop a parser or query upon encountering invalid key-value pairs in the CEF extensions field. For example, in:

Jun 09 02:26:06 zscaler-nss CEF:0||||||| xx==

since the CEF specification dictates that `=` must be escaped if it is meant as a value, the second `=` would trigger the issue as it is no longer a valid key-value pair. If such an error is encountered, the event is left unparsed and a parser error field is added.
Known Issues
Functions
A known issue in the implementation of the `defineTable()` function means it is not possible to transfer generated tables larger than 128 MB. The user receives an error if the generated table exceeds that size.
Improvement
Storage
Improved performance of replicating IOC files to allow faster replication.
Falcon LogScale 1.167.0 GA (2024-12-03)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
---|---|---|---|---|---|---|---
1.167.0 | GA | 2024-12-03 | Cloud | Next LTS | No | 1.136 | No
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
New features and improvements
Installation and Deployment
Added support for communicating between PDF Render Service and LogScale using an HTTP client rather than requiring HTTPS.
UI Changes
In the Inspection panel, case-insensitive search is now allowed when searching for field names. For example, `repo` and `Repo` will now match repo if this field is present.
Storage
The frequency of Kafka deletions has been reduced from once per minute to once per 10 minutes with the aim of reducing the load on global. As a consequence of this change, Kafka will retain slightly more data.
API
`filterQuery` in API Query `metaData` now searches using the same timestamp field as the original query, that is, the one set in the UI Time field selection. For example, it returns `useIngestTime=true` if the original query used the @ingesttimestamp field.
Configuration
Two new metrics, `global-reader-occupancy` and `chatter-reader-occupancy`, have been added to measure occupancy of the global-events loop and the transientChatter-events loop.

Additionally, global now also starts logging errors if roundtrips take more than 10 seconds while the occupancy of the consumer part is below 90%. This includes a small update to the `global-publish-wait-for-value` metric to also measure time spent publishing the message to Kafka.
Ingestion
The error preview for test cases on the Parsers page now shows if there are additional errors.
Functions
The `wildcard()` function has an additional parameter: `includeEverythingOnAsterisk`. When this parameter is set to `true` and `pattern` is set to `*`, the function will also match events that are missing the field specified in the `field` parameter. For more information, see `wildcard()`.
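A brief hedged sketch of the new parameter in use (the field name is illustrative):

```logscale
// Match events where class has any value, including events
// that are missing the class field entirely
wildcard(field=class, pattern="*", includeEverythingOnAsterisk=true)
```

Without the parameter set to true, a pattern of `*` only matches events where the field is actually present.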
Fixed in this release
UI Changes
The Events tab in `Search` results would generate an error when using @ingesttimestamp in the Time field selection. This issue has now been fixed.
Storage
An issue has been fixed which could in rare cases cause data loss of recently digested events due to improper cache invalidation of the digester state.
Queries
An error in the query execution could lead to a query that would not progress and not stop, and would appear to hang indefinitely. This could happen when hosts were removed from the cluster. This issue has now been fixed.
Known Issues
Functions
A known issue in the implementation of the `defineTable()` function means it is not possible to transfer generated tables larger than 128 MB. The user receives an error if the generated table exceeds that size.
Falcon LogScale 1.166.0 GA (2024-11-26)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
---|---|---|---|---|---|---|---
1.166.0 | GA | 2024-11-26 | Cloud | Next LTS | No | 1.136 | No
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The minimum supported version that LogScale can be upgraded from has increased from 1.112 to 1.136. This change allows for removal of some obsolete data from the LogScale database.
The Kafka client has been upgraded to 3.9.0.
New features and improvements
Security
Users granted the `ReadAccess` permission on the repository can now read files in read-only mode.
Automation and Alerts
Updated the wording on a number of error and warning messages shown in the UI for alerts and scheduled searches.
Dashboards and Widgets
Sections in the Styling panel for all widgets are now collapsible.
Functions
When the @timestamp field is used in `collect()`, a warning has been added, because collecting @timestamp will usually not return any results unless there is only one unique timestamp or the `limit` parameter has been given an argument of `1`. A workaround is to rename or create a new field with the value of @timestamp and collect that field instead, for example:

```logscale
timestamp := @timestamp | collect(timestamp)
```
Other
Added `organization` to logs from building parsers.

When logging organizations, the name is now logged with key `organizationName` instead of `name`.
Fixed in this release
UI Changes
The layout of the `Table` widget has been fixed due to a vertical scroll bar that was appearing inside the table even when rows took up minimal space. This would lead to users having to scroll in the table to see the last row.
Queries
The `Query stats` panel on the Organization Query Monitor was reporting misleading information about the total number of running queries, total number of live queries, etc., when more than 1,000 queries matched the search term. This has been fixed by changing the global part of the result of the runningQueries GraphQL query, although the list of specific queries used to populate the table on the page is still capped at 1,000.
Functions
Matching on multiple rows in `glob` mode missed some matching rows. This happened in cases where rows with different `glob` patterns matched the same event. For example, using a file `example.csv`:

```csv
column1, column2
ab*, one
a*, two
a*, three
```

And the query:

```logscale
match(example.csv, field=column1, mode=glob, nrows=3)
```

An event with the field column1=abc would only match the last two rows. This issue has been fixed so that all three rows match the event.
`objectArray:eval()` has been fixed as it did not work on array names containing an array index, for example `objectArray:eval(array="myArray[0].foo[]", ...)`.

The `defineTable()` function in ad-hoc tables has been fixed as it did not use the ingest timestamp for the time range specification provided by the primary query, using the event timestamp instead. This issue only affected queries where the primary query used ingest timestamps.

The `defineTable()` function in ad-hoc tables has been fixed as it incorrectly used the UTC time zone for query start and end timestamps, regardless of the primary query's time zone. This issue only affected queries where the primary query used a non-UTC time zone and either of the following:

the primary query's time interval used calendar-based presets (like `calendar:2d` or `now@week`), or:

the sub-query used any query function that uses the time zone, for example `timeChart()`, `bucket()`, and any `time:*` function.
Known Issues
Functions
A known issue in the implementation of the `defineTable()` function means it is not possible to transfer generated tables larger than 128 MB. The user receives an error if the generated table exceeds that size.
Improvement
UI Changes
The Search Link dashboard interaction now allows you to specify the target view/repository as . This setting allows for exporting and importing the dashboard in another view, while allowing the Search Link interaction to execute in the same view as the dashboard was imported to. is now the first suggested option in the drop-down list in Dashboard Link or Search Link interaction types.
Queries
In cases where a streaming query is unable to start — for example, if it refers to a file that does not exist — an error message is now returned instead of an empty string.
Falcon LogScale 1.165.1 LTS (2024-12-17)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
---|---|---|---|---|---|---|---
1.165.1 | LTS | 2024-12-17 | Cloud, On-Prem | 2025-12-31 | Yes | 1.112 | No
Filename | Hashtype | File Hash
---|---|---
server-alpine_x64 | SHA256 | f8a30db3009f7fb34d5d4fc23e12bbdd4b90b913931369b6146808b29541b79b
server-linux_x64 | SHA256 | 6509786ea0df0c87fb4712e3bb92c96252c2438e110f376d8ce12fb5453a0ac7

Docker Image | SHA256 Checksum
---|---
humio-core | f0fe82c6e6f3d9560a9c1b928393345c3471f72dbdc0832f429cc6719f84ec7a
humio-single-node-demo | 17c4dbb564ce98e73cbda45eea0e13609e8a83b509cc7670667c105afbf2ecb1
Bug fixes and updates.
Removed
Items that have been removed as of this release.
GraphQL API
Removed the following deprecated fields from the `Cluster` GraphQL type:

`ingestPartitionsWarnings`
`suggestedIngestPartitions`
`storagePartitions`
`storagePartitionsWarnings`
`suggestedStoragePartitions`
Configuration
The dynamic configuration and related GraphQL API `AstDepthLimit` has been removed.

The `UNSAFE_ALLOW_FEDERATED_CIDR`, `UNSAFE_ALLOW_FEDERATED_MATCH`, and `ALLOW_MULTI_CLUSTER_TABLE_SYNCHRONIZATION` environment variables have been removed; the system now behaves as if they are always enabled.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The JDK has been upgraded to 23.0.1.
New features and improvements
Security
Users can now view actions in restricted read-only mode when they have the `Data read access` permission on the repository or view.

Users can now see and use saved queries without needing the `CreateSavedQueries` and `UpdateSavedQueries` permissions.

Users can now see actions in restricted read-only mode when they have the `ReadAccess` permission on the repository or view.
Installation and Deployment
Bumped the lowest compatible version for `UNSAFE_RELAX_FEDERATED_PROTOCOL_VERSION_CHECK` to 1.163.0. LogScale Multi-Cluster Search can only be used when all clusters are on 1.163 or above.
UI Changes
PDF Render Service now supports proxy communication between the service and LogScale. Adding the environment variable `http_proxy` or `https_proxy` to the PDF Render Service environment will add a proxy agent to all requests from the service to LogScale.

Documentation is now displayed on hover in the LogScale query editor within Falcon. The full syntax usage and a link to the documentation are now visible for any keyword in a query.

The `Files` page now features a new table view with enhanced search and filtering, making it easier to find and manage your files. You can now import multiple files at once. For more information, see Lookup Files.

When saving queries, saved queries now appear in sorted order and are also searchable.

Users with the `ReadAccess` permission on the repository or view can now view scheduled reports in read-only mode.

Files grouped by package are now displayed again on the `Files` page, including the Package Name column, which was temporarily unavailable after the recent page overhaul.

A custom dialog now helps users save their widget changes on the `Dashboard` page before continuing on the `Search` page.
Automation and Alerts
In the activity logs, the exception field now only contains the name of the exception class, as the remainder of what used to be there is already present in the exceptionMessage field.
Three alert messages were deprecated and replaced with new, more accurate alert messages.
For Legacy Alerts: The query result is currently incomplete. The alert will not be polled in this loop replaces Starting the query for the alert has not finished. The alert will not be polled in this loop.
For Filter Alerts and Aggregate Alerts: The query result is currently incomplete. The alert will not be polled in this run replaces Starting the alert query has not finished. The alert will not be polled in this run in some situations where it is more correct.
The alert message was updated for filter and aggregate alerts in some cases where the live query was stopped due to the alert being behind.
For more information, see Monitoring Alert Execution through the humio-activity Repository.
The queryStart and queryEnd fields have been added for two aggregate alert log lines:
Alert found results, but no actions were invoked since the alert is throttled
Alert found no results and will not trigger
and removed from three others, as they did not contain the correct value:
Alert is behind. Will stop live query and start running historic queries to catch up
Alert query took too long to start and the result are now too old. LogScale will stop the live query and start running historic queries to catch up
Running a historic query to catch up took too long and the result is now outside the retry limit. LogScale will skip this data and start a query for events within the retry limit
The `Alerts` page now shows the following UI changes:

A new column, Last modified, is added in the `Alerts` overview to display when the alert was last updated and by whom. The same column is also added in the alert properties side panel and on the `Search` page.

The Package column is no longer displayed by default on the `Alerts` overview page.
For more information, see Creating an Alert from the Alerts Overview.
GraphQL API
The disableFieldAliasSchemaOnViews GraphQL mutation has been added. This mutation allows you to disable a schema on multiple views or repositories at once, instead of running multiple disableFieldAliasSchemaOnView mutations.
For more information, see disableFieldAliasSchemaOnViews().
New yamlTemplate fields have been created for the `Dashboard` and `SavedQuery` datatypes. They replace the deprecated templateYaml fields. For more information, see `Dashboard`, `SavedQuery`.

GraphQL introspection queries now require authentication. Setting the configuration parameter `API_EXPLORER_ENABLED` to `false` will still reject all introspection queries.

Added the permissionType field to the `Group` GraphQL type. This field identifies the level of permissions the group has (view, organization or system).

Added the following mutations:

`createSystemPermissionsTokenV2`

These mutations extend the functionality of the previous versions (without the `V2` suffix) by returning additional information about the token such as the id, name, permissions, expiry and IP filters.
Storage
The `WriteNewSegmentFileFormat` feature flag has been removed and the feature is enabled by default, improving compression of segment files.

The number of autoshard increase requests allowed has been reduced, to lower the pressure these requests put on global traffic.
API
Implemented support for returning a result over 1 GB in size on the `/api/v1/globalsubset/clustervhost` endpoint. The size of the returned result is now limited to 8 GB.
Configuration
A new boolean dynamic configuration parameter, `DisableNewRegexEngine`, has been added for disabling the LogScale Regular Expression Engine V2 globally on the cluster. This parameter does not stop queries that are already running and using the engine, but prevents the submission of new ones. See Setting a Dynamic Configuration Value for an example of how to set dynamic configurations.

The default value of the `INGEST_OCCUPANCY_QUERY_PERMIT_LIMIT` variable has been changed from 90% to 20%.

The default value for `MINISEGMENT_PREMERGE_MIN_FILES` has been increased from 4 to 12. This results in less global traffic from merges and reduces churn in bucket storage from mini-segments being replaced.
Dashboards and Widgets
Numbers in the `Table` widget can now be displayed with trailing zeros to maintain a consistent number of decimal places.

When configuring series for a widget, suggestions for series are now available in a dropdown list, rather than having to type the series out.

The `Bar Chart` widget can now be configured in the style panel with a horizontal or vertical orientation.
Ingestion
Query resources will now also account for reading segment files in addition to scanning files. This will enable better control of CPU resources between search and the data pipeline operations (ingest, digest, storage).
Increased a timeout for loading new CSV files used in parsers to reduce the likelihood of having the parser fail.
The way query resources are handled with respect to ingest occupancy has changed. If the maximum occupancy over all the ingest readers is less than the limit set (90% by default), LogScale will not reduce resources for queries. The new configuration variable `INGEST_OCCUPANCY_QUERY_PERMIT_LIMIT` allows changing this default limit of 90% to adjust how busy ingest readers should be in order to limit query resources.

The toolbar of the Parser editor has been modified to be more in line with the design of the LogScale layout. Some buttons are now found under the ellipsis menu. For more information, see Parsing Data.
Added logging when a parser fails to build and ingest defaults to ingesting without parsing. The log lines start with Failed compiling parser.
Log Collector
LogScale Collector can now enable internal logging of instances through `Fleet Management`. For more information, see Fleet Management Internal Logging.
Queries
LogScale Regular Expression Engine V2 is now optimized to support any-character matching in single-line mode, e.g. `/.*/s`.

The ad-hoc tables feature is introduced for easier joins. Use the `defineTable()` function to define temporary lookup tables, then join them with the results of the primary query using the `match()` function. The feature offers several benefits:

An intuitive approach that allows writing join-like queries in the order of execution

A step-by-step workflow to create complex, nested joins easily

A workflow that is consistent with the model used when working with Lookup Files

Easy troubleshooting while building queries, using the `readFile()` function

Expanded join use cases, providing support for:

inner joins with `match(... strict=true)`

left joins with `match(... strict=false)`

right joins with `readFile() | match(... strict=false)`

join capabilities in LogScale Multi-Cluster Search environments (self-hosted users only)

When `match()` or similar functions are used, additional tabs from the files and/or tables used in the primary query now appear in order in `Search` next to the Results tab. The tab names are prefixed by "Table: " to make it clearer what they refer to. For more information, see Using Ad-hoc Tables.
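The workflow described above can be sketched as follows; the repository, field, and table names are illustrative, and the exact parameter shapes should be checked against the `defineTable()` and `match()` reference pages:

```logscale
// Define a temporary lookup table of suspicious users over the query's time range
defineTable(name="suspects", query={#repo=auth loginFailures>5 | groupBy(username)}, include=[username])
// Primary query: enrich web events by joining against the ad-hoc table
| #repo=web
| match(table=suspects, field=username, strict=false)  // strict=false keeps events with no match (left join)
```

With strict=true, match() instead behaves like an inner join, dropping events without a matching row in the table.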
Changed the internal submit endpoint such that the request logs correct information on whether the request is internal or not.
Functions
Improvements in the `sort()`, `head()`, and `tail()` functions: the error message when entering an incorrect value in the `limit` parameter now mentions both the minimum and the maximum configured value for the limit.

Introducing the new query function `array:rename()`. This function renames all consecutive entries of an array starting at index 0. For more information, see `array:rename()`.

A new parameter, `trim`, has been added to the `parseCsv()` function to ignore whitespace before and after values. In particular, it allows quotes to appear after whitespace. This is a non-standard extension useful for parsing data created by sources that do not adhere to the CSV standard.

The following new functions have been added:

`bitfield:extractFlagsAsString()` collects the names of the flags appearing in a bitfield in a string.

`bitfield:extractFlagsAsArray()` collects the names of the flags appearing in a bitfield in an array.
`bitfield:extractFlags()` can now handle unsigned 64-bit input. It can also handle larger integers, but only the lowest 64 bits will be extracted.

The `wildcard()` function has an additional parameter: `includeEverythingOnAsterisk`. When this parameter is set to `true` and `pattern` is set to `*`, the function will also match events that are missing the field specified in the `field` parameter. For more information, see `wildcard()`.
The following query function limits now have their minimum value set to `1`. In particular:

The `bucket()` and `timeChart()` query functions now require that the value given as their `buckets` argument is at least `1`. For example, `bucket(buckets=0)` will produce an error.

The `collect()`, `hash()`, `readFile()`, `selfJoin()`, `top()` and `transpose()` query functions now require their `limit` argument to be at least `1`. For example, `top([aid], limit=0)` will produce an error.

The `series()` query function now requires the `memlimit` argument to be at least `1`, if provided. For example, `| series(collect=aid, memlimit=0)` will produce an error.
The new query functions `crypto:sha1()` and `crypto:sha256()` have been added. These functions compute a cryptographic SHA hash of the given fields and output a `hex` string as the result.
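A brief hedged example; the field names are illustrative, and the parameter shapes are assumed from the description above rather than from the function reference:

```logscale
// Compute a SHA-256 hex digest over the value of the rawEmail field
// and store it in a new field named emailHash
crypto:sha256(field=[rawEmail], as="emailHash")
```

Hashing a stable field like this is a common way to pseudonymize values while still allowing grouping and joining on the hashed output.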
Fixed in this release
Security
OIDC authentication would fail if certain characters in the `state` variable were not properly URL-encoded when redirecting back to LogScale. This issue has been fixed.
UI Changes
The Event List has been fixed, as it would not take sorting from the query API into consideration when sorting events based on UI configuration.
The red border appearing in the `Table` widget when invalid changes are made to a dashboard interaction has been fixed, as it would not display correctly.

Dragging would stop working on the `Dashboard` page in cases where invalid changes were made and saved to a widget and the user then clicked . This issue has been fixed and dragging now works correctly in this case.
Automation and Alerts
Fixed an issue where the `Action` overview page would not load if it contained a large number of actions.
GraphQL API
role.users query has been fixed as it would return duplicate users in some cases.
Storage
Mini-segments would not be prioritized correctly when fetching them from bucket storage. This issue has now been fixed.
Segments were not being fetched on an owner node. This issue could lead to temporary under-replication and keeping events in Kafka.
Resolved a defect that could lead to corrupted JSON messages on the internal Kafka queue.
A NullPointerException error occurring since version 1.156.0 when closing segment readers during `redactEvent` processing has now been fixed.

Several issues have been fixed which could cause LogScale to replay either too much or too little data from Kafka if segments with `topOffsets` were deleted at inopportune times. LogScale will now delay deleting newly written segments, even if they violate retention, until the `topOffsets` field has been cleared, which indicates that the segments cannot be replayed from Kafka later. Segment bytes being held onto in this way are logged by the `RetentionJob` as part of its periodic logging.

An extremely rare data loss issue has been fixed: file corruption on a digester could cause the cluster to delete all copies of the affected segments, even if some copies were not corrupt. When a digester detects a corrupt recently-written segment file during bootup, it will no longer delete that segment from Global. It will instead only remove the local file copy. If the segment needs to be deleted in Global because it is being replayed from Kafka, the new digest leader will handle that as part of taking over the partition.

Recently ingested data could be lost when the cluster has bucket storage enabled, `USING_EPHEMERAL_DISKS` is set to `false`, and a recently ingested segment only exists in bucket storage. This issue has now been fixed.

LogScale could spuriously log Found mini segment without replacedBy and a merge target that already exists errors when a repository is undeleted. This issue has been fixed.
API
An issue has been fixed in the computation of the `digestFlow` property of the query response. The information contained there would be stale in cases where the query started from a cached state or there were digest leadership changes (for example, in case of node restarts).

For more information, see Polling a Query Job.
Dashboards and Widgets
Long values rendered in the `Single Value` widget would overflow the widget container. This issue has now been fixed.

Dashboard parameter values were mistakenly not used by saved queries in scenarios with parameter naming overlap and no saved query arguments provided.
Ingestion
Parser Assertions have been fixed, as some would be marked as passing even though they should be failing.
An erroneous array gap detection has been fixed, as it would detect gaps where there were none.
An error is no longer returned when running parser tests without test cases.
An issue has been fixed that could cause the starting position for digest to get stuck in rare cases.
Queries
Backtracking checks are now added to the optimized instructions for `(?s).*?` in the LogScale Regular Expression Engine V2. This prevents regexes of this type from getting stuck in infinite loops, which are ultimately detrimental to a cluster's health.

Fixed an issue which could cause live query results from some workers to be temporarily represented in the final result twice. The situation was transient and could only occur during digester changes.
Fixed an issue where a query would fail to start in some cases when the query cache was available. The user would see the error Recent events overlap span excluded from query using historicStartMin.
Stopping alerts and scheduled searches could create a Could not cancel alert query entry in the activity logs. This issue has now been fixed. The queries were still correctly stopped previously, but this bug led to incorrect logging in the activity log.
The query scheduler has been fixed for an issue that could cause queries to get stuck in rare cases.
Functions
In `defineTable()`, the `start` and `end` parameters did not work correctly when the primary query's end time was a relative timestamp: the sub-query's time was relative to `now`. It has now been fixed to be relative to the primary query's end time.

Error messages produced by the `match()` function could reference the wrong file. This issue has now been fixed.
Other
Query result highlighting would crash cluster nodes when getting filter matches for some regexes. This issue has been fixed.
Known Issues
Functions
A known issue in the implementation of the `defineTable()` function means it is not possible to transfer generated tables larger than 128MB. The user receives an error if the generated table exceeds that size.

The `match()` function misses some matching rows when matching on multiple rows in `glob` mode. This happens in cases where there are rows with different glob patterns matching on the same event. For example, using a file `example.csv`:

```
column1,column2
ab*,one
a*,two
a*,three
```

and the query:

```logscale
match(example.csv, field=column1, mode=glob, nrows=3)
```

an event with the field `column1=abc` will only match on the last two rows.
The `match()` function misses some matching rows when matching on multiple rows in `cidr` mode. This happens in cases where there are rows with different subnets matching the same event. For example, using a file `example.csv`:

```
subnet,value
1.2.3.4/24,monkey
1.2.3.4/25,horse
```

and the query:

```logscale
match(example.csv, field=subnet, mode=cidr, nrows=3)
```

an input event with ip = 1.2.3.10 will only output:

```
ip,value
1.2.3.10,horse
```

whereas the correct output should actually be:

```
ip,value
1.2.3.10,horse
1.2.3.10,monkey
```
Improvement
UI Changes
Improved the information messages displayed in the query editor when errors occur with lookup files used in queries.
Improved the warnings given when performing multi-cluster searches across clusters running different LogScale versions.
API
Improved the efficiency of the autosharding rules store.
Queries
Worker query prioritization is improved in specific cases where a query starts off highly resource-consuming but becomes more efficient as it progresses. In such cases, the scheduler could severely penalize the query, leading to it being unfairly deprioritized.
Queries that refer to fields in the event are now more efficient due to an improvement made in the query engine.
Falcon LogScale 1.165.0 GA (2024-11-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.165.0 | GA | 2024-11-19 | Cloud | 2025-12-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
New features and improvements
Security
Users can now see and use saved queries without needing the `CreateSavedQueries` and `UpdateSavedQueries` permissions.

Users can now see actions in restricted read-only mode when they have the `ReadAccess` permission on the repository or view.
UI Changes
Users with the `ReadAccess` permission on the repository or view can now view scheduled reports in read-only mode.

Files grouped by package are now displayed again on the `Files` page, including the Package Name column, which was temporarily unavailable after the recent page overhaul.
GraphQL API
New yamlTemplate fields have been created for the `Dashboard` and `SavedQuery` datatypes. They now replace the deprecated templateYaml fields.

For more information, see `Dashboard` and `SavedQuery`.
API
Implemented support for returning a result over 1GB in size on the `/api/v1/globalsubset/clustervhost` endpoint. The size of the returned result is now limited to 8GB.
Configuration
The default value of the `INGEST_OCCUPANCY_QUERY_PERMIT_LIMIT` variable has been changed from `90 %` to `20 %`.
Ingestion
Increased a timeout for loading new CSV files used in parsers to reduce the likelihood of having the parser fail.
Added logging when a parser fails to build and ingest defaults to ingesting without parsing. The log lines start with Failed compiling parser.
Functions
A new parameter `trim` has been added to the `parseCsv()` function to ignore whitespace before and after values. In particular, it allows quotes to appear after whitespace. This is a non-standard extension useful for parsing data created by sources that do not adhere to the CSV standard.

The following new functions have been added:

- `bitfield:extractFlagsAsString()` collects the names of the flags appearing in a bitfield in a string.
- `bitfield:extractFlagsAsArray()` collects the names of the flags appearing in a bitfield in an array.

`bitfield:extractFlags()` can now handle unsigned 64 bit input. It can also handle larger integers, but only the lowest 64 bits will be extracted.
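A sketch of how the new `trim` parameter might be used (the input field name is hypothetical; `field` and `columns` follow standard `parseCsv()` usage):

```logscale
// Parse a CSV-ish field where values are padded with whitespace,
// e.g. csv_line contains: alice , "admin" , 42
// With trim=true, the quoted value after whitespace is still parsed as quoted.
parseCsv(field=csv_line, columns=[user, role, count], trim=true)
```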
Fixed in this release
Security
OIDC authentication would fail if certain characters in the `state` variable were not properly URL-encoded when redirecting back to LogScale. This issue has been fixed.
GraphQL API
The role.users query would return duplicate users in some cases. This issue has been fixed.
Storage
Recently ingested data could be lost when the cluster has bucket storage enabled, `USING_EPHEMERAL_DISKS` is set to `false`, and a recently ingested segment only exists in bucket storage. This issue has now been fixed.

LogScale could spuriously log Found mini segment without replacedBy and a merge target that already exists errors when a repository is undeleted. This issue has been fixed.
Functions
In `defineTable()`, the `start` and `end` parameters did not work correctly when the primary query's end time was a relative timestamp: the sub-query's time was relative to `now`. It has now been fixed to be relative to the primary query's end time.
Other
Query result highlighting would crash cluster nodes when getting filter matches for some regexes. This issue has been fixed.
Known Issues
Functions
A known issue in the implementation of the `defineTable()` function means it is not possible to transfer generated tables larger than 128MB. The user receives an error if the generated table exceeds that size.
Falcon LogScale 1.164.0 GA (2024-11-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.164.0 | GA | 2024-11-12 | Cloud | 2025-12-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Removed
Items that have been removed as of this release.
Configuration
The `AstDepthLimit` dynamic configuration and its related GraphQL API have been removed.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
New features and improvements
UI Changes
The `Files` page now features a new table view with enhanced search and filtering, making it easier to find and manage your files. You can now import multiple files at once.

For more information, see Lookup Files.
When Saving Queries, saved queries now appear in sorted order and are also searchable.
Automation and Alerts
In the activity logs, the exception field now only contains the name of the exception class, as the remainder of what used to be there is already present in the exceptionMessage field.
GraphQL API
The disableFieldAliasSchemaOnViews GraphQL mutation has been added. This mutation allows you to disable a schema on multiple views or repositories at once, instead of running multiple disableFieldAliasSchemaOnView mutations.
For more information, see disableFieldAliasSchemaOnViews().
Storage
The number of autoshard increase requests allowed has been reduced, to lessen the pressure on global traffic from these requests.
Ingestion
The toolbar of the Parser editor has been modified to be more in line with the design of the LogScale layout. You can now find the , and buttons under the ellipsis menu.

For more information, see Parsing Data.
Fixed in this release
Dashboards and Widgets
Dashboard parameter values were mistakenly not used by saved queries in scenarios with parameter naming overlap and no saved query arguments provided.
Falcon LogScale 1.163.0 GA (2024-11-05)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.163.0 | GA | 2024-11-05 | Cloud | 2025-12-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Removed
Items that have been removed as of this release.
GraphQL API
Removed the following deprecated fields from the `Cluster` GraphQL type:

- `ingestPartitionsWarnings`
- `suggestedIngestPartitions`
- `storagePartitions`
- `storagePartitionsWarnings`
- `suggestedStoragePartitions`
Configuration
The `UNSAFE_ALLOW_FEDERATED_CIDR`, `UNSAFE_ALLOW_FEDERATED_MATCH`, and `ALLOW_MULTI_CLUSTER_TABLE_SYNCHRONIZATION` environment variables have been removed; LogScale now behaves as if they are always enabled.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
New features and improvements
Installation and Deployment
Bumped the lowest compatible version for `UNSAFE_RELAX_FEDERATED_PROTOCOL_VERSION_CHECK` to 1.163.0. LogScale Multi-Cluster Search can only be used when all clusters are running 1.163 or above.
GraphQL API
Added the permissionType field to the `Group` GraphQL type. This field identifies the level of permissions the group has (view, organization or system).

Added the following mutations:

- `createSystemPermissionsTokenV2`

These mutations extend the functionality of the previous versions (without the `V2` suffix) by returning additional information about the token, such as the id, name, permissions, expiry and IP filters.
Ingestion
Query resources will now also account for reading segment files in addition to scanning files. This will enable better control of CPU resources between search and the data pipeline operations (ingest, digest, storage).
Queries
The ad-hoc tables feature is introduced for easier joins. Use the `defineTable()` function to define temporary lookup tables, then join them with the results of the primary query using the `match()` function. The feature offers several benefits:

- An intuitive approach that allows writing join-like queries in the order of execution
- A step-by-step workflow to create complex, nested joins easily
- A workflow that is consistent with the model used when working with Lookup Files
- Easy troubleshooting while building queries, using the `readFile()` function
- Expanded join use cases, providing support for:
  - inner joins with `match(... strict=true)`
  - left joins with `match(... strict=false)`
  - right joins with `readFile() | match(... strict=false)`
  - join capabilities in LogScale Multi-Cluster Search environments (Self-Hosted users only)

When `match()` or similar functions are used, additional tabs from the files and/or tables used in the primary query now appear in order in `Search` next to the Results tab. The tab names are prefixed by "Table: " to make it clearer what they refer to.

For more information, see Using Ad-hoc Tables.
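A minimal sketch of an ad-hoc table join built from these functions (the sub-query filter and field names are hypothetical):

```logscale
// Build a temporary table of blocked users, then inner-join it
// against the primary query's events on the username field.
defineTable(name="blocked_users", query={status = "blocked"}, include=[username])
| match(table=blocked_users, field=username, strict=true)
```

Using `strict=false` instead would keep events with no matching table row, giving left-join behaviour as described above.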
Changed the internal submit endpoint such that the request logs correct information on whether the request is internal or not.
Functions
The following query function limits now have a minimum value of `1`. In particular:

The `bucket()` and `timeChart()` query functions now require that the value given as their `bucket` argument is at least `1`. For example, `bucket(buckets=0)` will produce an error.

The `collect()`, `hash()`, `readFile()`, `selfJoin()`, `top()` and `transpose()` query functions now require their `limit` argument to be at least `1`. For example, `top([aid], limit=0)` will produce an error.

The `series()` query function now requires the `memlimit` argument to be at least `1`, if provided. For example, `| series(collect=aid, memlimit=0)` will produce an error.
Fixed in this release
Automation and Alerts
Fixed an issue where the `Action` overview page would not load if it contained a large number of actions.
Storage
Segments were not being fetched on an owner node. This issue could lead to temporary under-replication and keeping events in Kafka.
Resolved a defect that could lead to corrupted JSON messages on the internal Kafka queue.
Ingestion
An error is no longer returned when running parser tests without test cases.
Queries
Fixed an issue which could cause live query results from some workers to be temporarily represented in the final result twice. The situation was transient and could only occur during digester changes.
Fixed an issue where a query would fail to start in some cases when the query cache was available. The user would see the error Recent events overlap span excluded from query using historicStartMin.
Falcon LogScale 1.162.0 GA (2024-10-29)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.162.0 | GA | 2024-10-29 | Cloud | 2025-12-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
New features and improvements
Security
Users can now view actions in restricted read-only mode when they have the `Data read access` permission on the repository or view.
Storage
The `WriteNewSegmentFileFormat` feature flag has been removed and the feature is now enabled by default, to improve compression of segment files.
Configuration
The default value for `MINISEGMENT_PREMERGE_MIN_FILES` has been increased from `4` to `12`. This results in less global traffic from merges, and reduces churn in bucket storage from mini-segments being replaced.
Dashboards and Widgets
When configuring series for a widget, suggestions for series are now available in a dropdown list, rather than having to type the series out.
Ingestion
The way query resources are handled with respect to ingest occupancy has changed. If the maximum occupancy over all the ingest readers is less than the limit set (90 % by default), LogScale will not reduce resources for queries. The new configuration variable `INGEST_OCCUPANCY_QUERY_PERMIT_LIMIT` allows changing this default limit of 90 % to adjust how busy ingest readers should be before query resources are limited.
Fixed in this release
Storage
A NullPointerException error occurring since version 1.156.0 when closing segment readers during `redactEvent` processing has now been fixed.

Several issues have been fixed which could cause LogScale to replay either too much or too little data from Kafka if segments with `topOffsets` were deleted at inopportune times. LogScale will now delay deleting newly written segments, even if they violate retention, until the `topOffsets` field has been cleared, which indicates that the segments cannot be replayed from Kafka later. Segment bytes being held onto in this way are logged by the `RetentionJob` as part of the periodic logging.

An extremely rare data loss issue has been fixed: file corruption on a digester could cause the cluster to delete all copies of the affected segments, even if some copies were not corrupt. When a digester detects a corrupt recently-written segment file during bootup, it will no longer delete that segment from Global. It will instead only remove the local file copy. If the segment needs to be deleted in Global because it's being replayed from Kafka, the new digest leader will handle that as part of taking over the partition.
Ingestion
An issue has been fixed that could cause the starting position for digest to get stuck in rare cases.
Queries
Backtracking checks are now added to the optimized instructions for `(?s).*?` in the LogScale Regular Expression Engine V2. This prevents regexes of this type from getting stuck in infinite loops, which are ultimately detrimental to a cluster's health.

Stopping alerts and scheduled searches could create a Could not cancel alert query entry in the activity logs. This issue has now been fixed. The queries were still correctly stopped previously, but this bug led to incorrect logging in the activity log.
Functions
Error messages produced by the `match()` function could reference the wrong file. This issue has now been fixed.
Improvement
API
Improved the efficiency of the autosharding rules store.
Queries
Queries that refer to fields in the event are now more efficient due to an improvement made in the query engine.
Falcon LogScale 1.161.0 GA (2024-10-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.161.0 | GA | 2024-10-22 | Cloud | 2025-12-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The JDK has been upgraded to 23.0.1
New features and improvements
UI Changes
A custom dialog now helps users save their widget changes on the `Dashboard` page before continuing on the `Search` page.
Configuration
A new boolean dynamic configuration parameter, `DisableNewRegexEngine`, has been added for disabling the LogScale Regular Expression Engine V2 globally on the cluster. This parameter does not stop queries that are already running and using the engine, but prevents the submission of new ones. See Setting a Dynamic Configuration Value for an example of how to set dynamic configurations.
Dashboards and Widgets
The `Bar Chart` widget can now be configured in the style panel with a horizontal or vertical orientation.
Functions
The new query functions `crypto:sha1()` and `crypto:sha256()` have been added. These functions compute a cryptographic SHA hash of the given fields and output a `hex` string as the result.
Fixed in this release
Storage
Mini-segments would not be prioritized correctly when fetching them from bucket storage. This issue has now been fixed.
Dashboards and Widgets
Long values rendered in the `Single Value` widget would overflow the widget container. This issue has now been fixed.
Queries
The query scheduler has been fixed for an issue that could cause queries to get stuck in rare cases.
Improvement
UI Changes
Improved the information messages displayed in the query editor when errors occur with lookup files used in queries.
Queries
Worker query prioritization is improved in specific cases where a query starts off highly resource-consuming but becomes more efficient as it progresses. In such cases, the scheduler could severely penalize the query, leading to it being unfairly deprioritized.
Falcon LogScale 1.160.0 GA (2024-10-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.160.0 | GA | 2024-10-15 | Cloud | 2025-12-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
New features and improvements
UI Changes
PDF Render Service now supports proxy communication between the service and LogScale. Adding the environment variable `http_proxy` or `https_proxy` to the PDF Render Service environment will add a proxy agent to all requests from the service to LogScale.

Documentation is now displayed on hover in the LogScale query editor within Falcon. The full syntax usage and a link to the documentation are now visible for any keyword in a query.
Automation and Alerts
Three alert messages were deprecated and replaced with new, more accurate alert messages.
For Legacy Alerts: The query result is currently incomplete. The alert will not be polled in this loop replaces Starting the query for the alert has not finished. The alert will not be polled in this loop.
For Filter Alerts and Aggregate Alerts: The query result is currently incomplete. The alert will not be polled in this run replaces Starting the alert query has not finished. The alert will not be polled in this run in some situations where it is more correct.
The alert message was updated for filter and aggregate alerts in some cases where the live query was stopped due to the alert being behind.
For more information, see Monitoring Alert Execution through the humio-activity Repository.
The queryStart and queryEnd fields have been added for two aggregate alert log lines:
Alert found results, but no actions were invoked since the alert is throttled
Alert found no results and will not trigger
and removed for three others as they did not contain the correct value:
Alert is behind. Will stop live query and start running historic queries to catch up
Alert query took too long to start and the result are now too old. LogScale will stop the live query and start running historic queries to catch up
Running a historic query to catch up took too long and the result is now outside the retry limit. LogScale will skip this data and start a query for events within the retry limit
The `Alerts` page now shows the following UI changes:

- A new column Last modified is added in the `Alerts` overview to display when the alert was last updated and by whom.
- The same column is added both in the alert properties side panel and in the `Search` page.
- The Package column is no longer displayed by default on the `Alerts` overview page.
For more information, see Creating an Alert from the Alerts Overview.
GraphQL API
GraphQL introspection queries now require authentication. Setting the configuration parameter `API_EXPLORER_ENABLED` to `false` will still reject all introspection queries.
Dashboards and Widgets
Numbers in the `Table` widget can now be displayed with trailing zeros to maintain a consistent number of decimal places.
Log Collector
LogScale Collector can now enable internal logging of instances through `Fleet Management`.

For more information, see Fleet Management Internal Logging.
Queries
LogScale Regular Expression Engine V2 is now optimized to support character match within a single line, e.g. `/.*/s`.
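For illustration, a filter of this shape exercises the optimized pattern (the field name is hypothetical):

```logscale
// The s flag makes . match newline characters as well;
// V2 now optimizes this dotall form.
message = /error.*timeout/s
```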
Functions
Improvements in the `sort()`, `head()`, and `tail()` functions: the error message when entering an incorrect value in the `limit` parameter now mentions both the minimum and the maximum configured value for the limit.

Introducing the new query function `array:rename()`. This function renames all consecutive entries of an array starting at index 0.

For more information, see `array:rename()`.
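A sketch of the likely usage (the `array` and `asArray` parameter names are assumptions based on naming in other `array:` functions; the field names are hypothetical):

```logscale
// Rename entries mail[0], mail[1], ... to email[0], email[1], ...
array:rename(array="mail[]", asArray="email[]")
```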
Fixed in this release
UI Changes
The Event List would not take sorting from the query API into consideration when sorting events based on the UI configuration. This issue has been fixed.
The red border appearing in the `Table` widget when invalid changes are made to a dashboard interaction would not display correctly. This issue is now fixed.

Dragging would stop working on the `Dashboard` page in cases where invalid changes were made and saved to a widget and the user would then click . This issue has been fixed and dragging now works correctly in this case as well.
Storage
A regression introduced with the upgrade to Java 23 in version 1.158.0 has now been fixed. The issue broke SASL support for Kafka, see Kafka documentation for more information.
API
An issue has been fixed in the computation of the `digestFlow` property of the query response. The information contained there would be stale in cases where the query started from a cached state or there were digest leadership changes (for example, in case of node restarts).

For more information, see Polling a Query Job.
Ingestion
Parser Assertions have been fixed, as some would be marked as passing even though they should be failing.
An erroneous array gap detection has been fixed, as it would detect gaps where there were none.
Queries
Fixed an issue where non-greedy repetition and repetition of fixed width patterns would not adhere to the backtracking limit in the LogScale Regular Expression Engine V2.
Improvement
UI Changes
Improved the warnings given when performing multi-cluster searches across clusters running different LogScale versions.
Falcon LogScale 1.159.1 LTS (2024-10-31)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.159.1 | LTS | 2024-10-31 | Cloud | 2025-10-31 | Yes | 1.112 | No |
Filename | Hashtype | File Hash |
---|---|---|
server-alpine_x64 | SHA256 | c39192a6e78307694965fda7b36f0be4044b5385a7edaff84e539599a7cd8e70 |
server-linux_x64 | SHA256 | 1a472c88cfdd1bff9b82c6adb495bdde7eb1530274eaed0cedf73640b7892b33 |
Docker Image | SHA256 Checksum |
---|---|
humio-core | d57a3645e5870838097a0128d4e5d2f57d747e544926df486d28cbd6d9ea41f0 |
humio-single-node-demo | 36a40a62e626ba51e52281f871439532ef764fa4a010d8b1f5768c071357e697 |
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following GraphQL mutations and field have been deprecated, since the starring functionality is no longer in use for alerts and scheduled searches:

- isStarred field on the `Alert` and `ScheduledSearch` types.

The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.

The deprecated JDK-less `server.tar.gz` tarball release is no longer being published. Users should switch to either `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz` depending on their operating system.
Behavior Changes
Scripts or environments which make use of these tools should be checked and updated for the new configuration:
Automation and Alerts
Aggregate Alerts and Filter Alerts as well as Scheduled Searches will now stop the query, if it has become outdated before it finishes.
Storage
LogScale now avoids moving mini-segments to follow the digest nodes if the mini-segments are available in Bucket Storage. Instead, mini-segments will now be fetched as needed, when the digest leader is ready to merge them. This reduces the load on Global Database in some cases following a digest reassignment.
During digest reassignment, LogScale will now ignore mini-segments in Bucket Storage when deciding whether to switch merge targets because some mini-segments are not present locally. This should slightly reduce the load on Global Database during digest reassignment.
Live query updates are now allowed to run on a new thread pool, `digestLive`, but only for datasources that spend more time on these updates than allowed in the digester pool on live queries, or for many datasources, if their total load exceeds the time available for digesters. This frees up time for the digesters, provided there is available CPU on the node.

LogScale now avoids moving merge targets to the digest leader during digest reassignment if those segments are already in Bucket Storage.
Ingestion
Falcon LogScale now improves decision-making around which segments a digest leader fetches as part of taking over leadership. This should reduce the incidence of small bits of data being replayed from Kafka unnecessarily, and may also reduce how often reassignment will trigger a restart of live queries.
For more information, see Ingestion: Digest Phase.
Queries
When a digest node is unavailable, a warning is now attached to queries, but the queries are allowed to proceed.
This way, the behaviour of a query is similar to the case where a segment cannot be searched, due to all the owning nodes being unavailable at the time of the query.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The JDK has been upgraded to 23.0.1.
Bundled JDK is now upgraded to Java 23.
Upgraded the Kafka clients to 3.8.0.
New features and improvements
Security
New view permissions have been added to allow for updating and deleting different types of assets in a view. For instance, granting a user the `UpdateFiles` permission in a view will allow the user to update files, but not delete or create files.

View permissions added:

- `UpdateActions` – Allow updating actions
- `UpdateDashboards` – Allow updating dashboards
- `UpdateFiles` – Allow updating CSV files
- `UpdateSavedQueries` – Allow updating saved queries
- `UpdateScheduledReports` – Allow updating scheduled reports
- `UpdateTriggers` – Allow updating alerts and scheduled searches
- `DeleteActions` – Allow deleting actions
- `DeleteDashboards` – Allow deleting dashboards
- `DeleteFiles` – Allow deleting CSV files
- `DeleteSavedQueries` – Allow deleting saved queries
- `DeleteScheduledReports` – Allow deleting scheduled reports
- `DeleteTriggers` – Allow deleting alerts and scheduled searches
These permissions can currently only be assigned using the LogScale GraphQL API and are not supported in the LogScale UI.
For more information, see Repository & View Permissions.
View permissions to allow for creating different types of assets in a view have been added. For instance, granting a user the `CreateFiles` permission in a view will allow the user to create new files, but not edit existing files.

- `CreateActions` - Allow creating actions
- `CreateDashboards` - Allow creating dashboards
- `CreateFiles` - Allow creating CSV files
- `CreateSavedQueries` - Allow creating saved queries
- `CreateScheduledReports` - Allow creating scheduled reports
- `CreateTriggers` - Allow creating alerts and scheduled searches
These permissions can currently only be assigned using the LogScale GraphQL API.
For more information, see Repository & View Permissions.
For multiple configured SAML IdP certificates, Falcon LogScale now enforces that at least one of them is valid and not expired. This prevents login failures that have occurred due to the expiration of one of the certificates.
For more information, see Certificate Rotation.
The purpose of the repository & view permission `ChangeTriggers` has changed: it is now intended for creating, deleting and updating alerts and scheduled searches. This permission is no longer needed to view alerts and scheduled searches in read-only mode from the Alerts page: instead, the `ReadAccess` permission is required for that.

Creating roles that have an empty set of permissions is now supported in the `role-permissions.json` file. To allow this, add the following line to the file:

```javascript
"options": { "allowRolesWithNoPermissions": true }
```

This ensures compatibility when migrating from a previous `view-group-permissions.json` file, should it contain roles without permissions.

For more information, see Setting up Roles in a File.
UI Changes
The Time Selector now allows setting advanced relative time ranges that include both a start and an end, as well as time anchoring.
For more information, see Changing Time Interval, Advanced Time Syntax.
The maximum number of fields that can be added in a Field Aliasing schema has been increased from 50 to 1,000.
The logging for LogScale Multi-Cluster Search network requests has been improved by adding new endpoints that have the `externalQueryId` in the path and the `federationId` in a query parameter.

The proxy endpoints for LogScale Multi-Cluster Search have changed. Specific internally marked endpoints that match the external endpoints for proxying have been added. This will improve the ability to track multi-cluster searches in the LogScale requests log.
Documentation
The naming structure and identification of release types has been updated. LogScale is available in two release types:
Generally Available (GA) releases — includes new functionality. Using a GA release gets you access to the latest features and functionality.
GA releases are deployed in LogScale SaaS environments.
Long Term Support (LTS) releases — stable releases that consolidate the features and functionality of earlier GA releases.
LogScale on-premise customers are advised to install the LTS releases. LTS releases are provided approximately every six weeks.
Security fixes are applied to the last three LTS releases.
GraphQL API
A new GraphQL API, onDefaultBucketConfigs, has been added for getting non-default bucket storage configurations for organizations. The intended use is to help manage a fleet of LogScale clusters.
Field aliases now have API support for being exported and imported as YAML.
Introducing the view field on the GraphQL `FileEntry` type, accessible through the entitiesSearch field.

The GA status has been removed from the following GraphQL mutations:

A modifiedInfo field has been added to the following GraphQL types, to provide information about when and by whom the asset was last modified:
If the Enable or Disable actions are used or edited within the UI, the modifiedInfo will also be updated.
Configuration
The new dynamic configuration parameter `ParserBacktrackingLimit` has been added to govern how many new events can be created from a single input event in parsers. This was previously controlled by the `QueryBacktrackingLimit` configuration parameter, which now applies only to queries, thus allowing for finer control.

Kafka resets described at Switching Kafka no longer occur by default. To safeguard against accidental misconfiguration, the `ALLOW_KAFKA_RESET_UNTIL_TIMESTAMP_MS` environment variable has been added; by default it ensures that Kafka resets are not allowed. With this variable unset, accidental Kafka resets are avoided until an administrator assents to having a Kafka reset performed. To intentionally perform a Kafka reset, administrators should set `ALLOW_KAFKA_RESET_UNTIL_TIMESTAMP_MS` to an epoch timestamp in the near future (for instance, now + one hour), which ensures that the setting is automatically disabled again once the reset is complete. For more information, see `ALLOW_KAFKA_RESET_UNTIL_TIMESTAMP_MS`.

Mini-segments now auto-tune their maximum block count, up to the limit from configuration. This allows bigger mini-segments for fast datasources, which reduces the number of mini-segments in the global change stream.
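As a sketch, the "now + one hour" timestamp mentioned above can be computed as follows. Assumption: the `_MS` suffix means the value is an epoch timestamp in milliseconds; confirm against the LogScale reference before use.

```python
import time

# Compute "now + one hour" as an epoch timestamp in milliseconds, a plausible
# value for ALLOW_KAFKA_RESET_UNTIL_TIMESTAMP_MS (milliseconds is an
# assumption based on the _MS suffix; verify in the LogScale docs).
reset_deadline_ms = int(time.time() * 1000) + 60 * 60 * 1000
print(reset_deadline_ms)
```

Once the computed deadline passes, the variable no longer permits a reset, matching the auto-disable behaviour described above.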
Dashboards and Widgets
Improved user experience for creating and configuring dashboard parameters, providing immediate feedback when the setup changes and improved error validation.
Saving changes in parameter settings no longer requires an additional step to apply the changes before saving the dashboard, making it consistent with saving all other dashboard configurations.
Changes in the Parameters settings side panel now give immediate feedback on the dashboard.
Errors in the parameter setup are now validated on dashboard save, informing users about identified issues.
In the Query Parameter type, the Query String field has been replaced with the LogScale Query editor, providing a rich query-writing experience as well as syntax validation.
In the File Parameter type, additional validation was added to display a warning if the lookup file used as a source of suggestions has been deleted.
Parameters now have additional states (error, warning, info) informing users about issues with their setup.
Added the ability to move dashboard parameters to a parameter panel from the configuration side panel.
Added the ability to drag widgets to Sections when in Editing dashboard mode.
Queries
Nested repetitions/quantifiers in the Falcon LogScale Regular Expression Engine v2 are now supported. Nested repetitions are constructions that repeat or quantify another regex expression that contains repetition/quantification. For instance, the regex:
/(?<ipv4>(?:\d{1,3}\.){3}\d{1,3})/
makes use of nested repetitions, namely:
(?:\d{1,3}\.){3}
For more information, see LogScale Regular Expression Engine V2.
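The nested-repetition construction above can be illustrated with Python's `re` module (used here purely for illustration; it is not the LogScale engine):

```python
import re

# The ipv4 pattern from the note uses a nested quantifier: the inner \d{1,3}
# repetition is itself repeated three times by the outer {3}.
ipv4 = re.compile(r"(?P<ipv4>(?:\d{1,3}\.){3}\d{1,3})")
match = ipv4.search("src=10.0.42.7 dst=192.168.1.1")
print(match.group("ipv4"))  # -> 10.0.42.7
```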
Added support for using the new experimental LogScale Regular Expression Engine v2 by specifying the `F` flag, for example:

/foo/F
The new engine is currently under development and while it can be faster in some cases, there may also be cases where it is slower.
For more information, see LogScale Regular Expression Engine V2.
LogScale Regular Expression Engine v2 now improves the optimizer's ability to turn alternations into decision trees.
For more information, see LogScale Regular Expression Engine V2.
Introducing a regex backtracking limit of 0.5 seconds per input for the Falcon LogScale Regex Engine v2. As soon as the regex starts backtracking to find matches, it is timed, and it is cancelled if the backtracking to find a match exceeds 0.5 seconds. This is done to avoid instances of practically infinite backtracking, as can be the case with some regexes.
For more information, see LogScale Regular Expression Engine V2.
Added optimizations for start-of-text regex expressions with LogScale Regular Expression Engine v2. In particular:
/^X/
and:
/\AX/
no longer try to match all positions in the string.
In tests on a large body of text, these optimizations have proven faster, showing improvements of ~202%, for example when tested against a collection of works by Mark Twain.
For more information, see LogScale Regular Expression Engine V2.
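Why anchoring helps can be sketched with Python's `re` module (again, illustration only, not the LogScale engine): an anchored pattern can only match at the start of the input, so a failure there is final, whereas an unanchored pattern may be attempted at every position.

```python
import re

text = "x" * 100 + "X"
# Anchored: only position 0 is a candidate, so the search fails immediately.
print(re.search(r"^X", text))    # -> None
print(re.search(r"\AX", text))   # -> None
# Unanchored: the engine tries successive positions until the final X.
print(re.search(r"X", text).start())  # -> 100
```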
Under-the-hood changes to how the size of certain events is estimated should now make query state size estimates more realistic.
Query warnings are now included in the activity logs for queries.
When a query is rejected due to a validation exception, an activity log is added.
Activity logs for queries are now generated for LogScale Self-Hosted.
Functions
Introducing the new query function `coalesce()`. This function accepts a list of fields and returns the first value that is not null or empty. Empty values can also be returned by setting a parameter in the function. For more information, see `coalesce()`.

Introducing the new query function `array:drop()`. This function drops all consecutive fields of a given array, starting from index 0. For more information, see `array:drop()`.

The new `objectArray:eval()` query function is now available for processing structured/nested arrays. For more information, see `objectArray:eval()`.

The `array:eval()` query function for processing flat arrays is no longer experimental. For more information, see `array:eval()`.
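The described semantics of `coalesce()` can be sketched in Python. This mirrors the prose above rather than LogScale's actual implementation, and the `ignore_empty` parameter name is illustrative, not the function's real parameter name:

```python
def coalesce(values, ignore_empty=True):
    # Return the first value that is not null (None) and, unless
    # ignore_empty is disabled, not the empty string.
    for v in values:
        if v is None:
            continue
        if ignore_empty and v == "":
            continue
        return v
    return None

print(coalesce([None, "", "10.0.0.1"]))                      # -> 10.0.0.1
print(coalesce([None, "", "10.0.0.1"], ignore_empty=False))  # -> ""
```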
Fixed in this release
UI Changes
The OIDC and SAML configuration pages under Organization settings have been fixed: a tooltip containing a link would close before users could click the link.
Entering new arguments for Multi-value Parameters in Dashboard Link would not actually insert the new argument into the list of arguments. This issue has now been fixed.
Suggestions for parameter values in the Interactions panel would not be able to find fields in the query result. This issue has now been fixed.
A minor UI issue in dropdown windows has been fixed: for example, the Time interval window popping up from the Time Selector would close if any text inside the window's fields was selected and the mouse click was released outside the window.
State for multi-cluster searches is now cleaned up; previously it could result in a build-up of memory usage.
Automation and Alerts
The severity of the log message Alert found no results and will not trigger for Aggregate Alerts has been adjusted from `Warning` to `Info`.
Storage
An issue has been fixed which could cause clusters with too few hosts online to reach the configured segment replication factor to run segment rebalancing repeatedly. Rebalancing now disables itself in such a situation until enough nodes come back online for rebalancing to actually reach the replication factor.
A NullPointerException occurring since version 1.156.0 when closing segment readers during `redactEvent` processing has now been fixed.

A regression introduced with the upgrade to Java 23 in version 1.158.0 has now been fixed. The issue broke SASL support for Kafka; see the Kafka documentation for more information.
API
An issue has been fixed in the computation of the `digestFlow` property of the query response. The information contained there would be stale in cases where the query started from a cached state or there were digest leadership changes (for example, in case of node restarts).

For more information, see Polling a Query Job.
Dashboards and Widgets
The tooltip description of a widget would be cut off if the widget took up the whole row. This issue has now been fixed.
Newline characters would not be escaped in the dashboard parameter input field, thus appearing as not being part of the value. This issue has now been fixed.
Ingestion
When creating a new event forwarding rule, the editor could not be edited in some cases. This issue has now been fixed.
Fixed issues related to searching for ingest timestamp:
Issues with the usage of the query state cache when searching by ingest timestamp.
Queries whose time interval starts before the UNIX epoch are now rejected, both when searching by ingest timestamp and by event timestamp. Previously, such a query by ingest timestamp would cause an error, while a query by event timestamp was allowed but not useful, as all events in LogScale have event timestamps after the UNIX epoch.
When searching by ingest timestamp, the `start()` and `end()` functions now report the correct search range.
The event timestamp is now used in place of the ingest timestamp if the latter is missing. In old versions of LogScale (prior to 1.15), the ingest timestamp was not stored with events. In order to support correct filtering when searching by ingest timestamp also for such old data, LogScale now considers the event timestamp to also be the ingest timestamp.
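The fallback in the last item can be sketched as follows; the field names here are illustrative, not LogScale's internal names:

```python
def effective_ingest_timestamp(event):
    # Events written before LogScale 1.15 lack an ingest timestamp; for
    # those, treat the event timestamp as the ingest timestamp so that
    # filtering by ingest time still behaves correctly.
    ts = event.get("ingest_timestamp")
    return ts if ts is not None else event["event_timestamp"]

print(effective_ingest_timestamp({"event_timestamp": 1700000000000}))  # -> 1700000000000
```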
Log Collector
Fixed a performance issue when sorting by config name in the Fleet Management overview which could result in 503s from the backend.
Queries
Fixed stale QuerySessions that could cause invalid queries to be re-used.
Queries stopped via early stopping criteria were wrongly reported as `Cancelled` instead of `Done`. The query status has now been fixed.

Fixed an issue where non-greedy repetition and repetition of fixed-width patterns would not adhere to the backtracking limit in the LogScale Regular Expression Engine V2.
A regression issue that occurred in LogScale version 1.142.0 has now been fixed: it could cause LogScale to exceed the limit on off-heap memory when running many queries concurrently.
Queries hitting the limit on off-heap memory could be deprioritized more strongly than intended. This issue has now been fixed.
Query poll would not be re-tried on dashboards if the request timed out.
Building tables for a query would block other tables from being built due to an internal cache implementation behaviour, which has now been fixed.
Functions
Fixed some cases where `writeJson()` would output fields as numbers that are not supported by the JSON standard. These fields are now represented as strings in the output to ensure that the resulting JSON is valid.

A regression issue has been fixed in the `match()` function in `cidr` mode, which made query submission significantly slower.
Other
Fixed an issue where off-heap memory limiting might not apply correctly.
A regression issue where some uploaded files close to 2GB could fail to load has now been fixed.
Early Access
Security
It is now possible to map one IdP group name to multiple Falcon LogScale groups during group synchronization. Activate the `OneToManyGroupSynchronization` feature flag for this functionality. With the feature flag enabled, Falcon LogScale will map a group name to all Falcon LogScale groups in the organization that have a matching `lookupName` or `displayName`, while also performing validation for identical groups. If the multiple mapping feature is not enabled, the existing one-to-one mapping functionality remains unchanged.

For more information on how feature flags are enabled, see Enabling & Disabling Feature Flags.
For more information, see Group Synchronization.
Configuration
A new dynamic configuration, `AggregatorOutputRowLimit`, has been added, along with the new organisation-level `CancelQueriesExceedingAggregateOutputRowLimit` configuration, which is currently under a feature flag. Aggregate Query Functions in queries that output more rows than the limit specified by the `AggregatorOutputRowLimit` configuration will get cancelled if the `CancelQueriesExceedingAggregateOutputRowLimit` configuration is enabled.

These configuration items are being added to allow LogScale administrators to protect the health of the cluster in cases where queries use runaway amounts of resources in the result phase of query execution, impacting cluster health and availability.
For more information, see Dynamic Configuration Parameters.
Improvement
UI Changes
The Amazon S3 archiving UI page now correctly points to the S3 Archiving documentation pages versioned for Self-Hosted and Cloud.
Automation and Alerts
The error message The alert query did not start within {timeout}. LogScale will retry starting the query. has been fixed to show the actual timeout instead of just {timeout}.
In the emails sent by email actions, the text `Open in Humio` has been replaced by `Open in LogScale`.
Dashboards and Widgets
Dashboard parameter suggestions of the FixedList Parameter type now follow the order in which they were configured.
Dashboard parameter suggestions of the Query Parameter type now follow the order of the query result.
Ingestion
Data ingest rate monitoring has been adjusted to ensure it reports from nodes across all node roles. Additionally, the number of nodes reporting in large clusters has been raised.
Queries
Some internal improvements have been made to query coordination to make it more robust in certain cases — in particular with failing queries — with an impact on the timing of some API responses.
Some internal improvements have been made to query caching and cache distribution.
The enforcement of the limit on off-heap buffers for segments being queried has been tightened: the limit should no longer exceed the size required for reading a single segment, even in cases where the scheduler is very busy.
Falcon LogScale 1.159.0 GA (2024-10-08)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.159.0 | GA | 2024-10-08 | Cloud | 2025-10-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environments which make use of these tools should be checked and updated for the new configuration:
Ingestion
Falcon LogScale now improves decision-making around which segments a digest leader fetches as part of taking over leadership. This should reduce the incidence of small bits of data being replayed from Kafka unnecessarily, and may also reduce how often reassignment will trigger a restart of live queries.
For more information, see Ingestion: Digest Phase.
New features and improvements
Security
For multiple configured SAML IdP certificates, Falcon LogScale now enforces that at least one of them is valid and not expired. This prevents login failures that have occurred due to the expiration of one of the certificates.
For more information, see Certificate Rotation.
The purpose of the repository & view permission `ChangeTriggers` has changed: it is now intended for creating, deleting and updating alerts and scheduled searches. This permission is no longer needed to view alerts and scheduled searches in read-only mode from the Alerts page: instead, the `ReadAccess` permission is required for that.

Creating roles that have an empty set of permissions is now supported in the `role-permissions.json` file. To allow this, add the following line to the file:

```javascript
"options": { "allowRolesWithNoPermissions": true }
```

This ensures compatibility when migrating from a previous `view-group-permissions.json` file, should it contain roles without permissions.

For more information, see Setting up Roles in a File.
Configuration
Kafka resets described at Switching Kafka no longer occur by default. To safeguard against accidental misconfiguration, the `ALLOW_KAFKA_RESET_UNTIL_TIMESTAMP_MS` environment variable has been added; by default it ensures that Kafka resets are not allowed. With this variable unset, accidental Kafka resets are avoided until an administrator assents to having a Kafka reset performed. To intentionally perform a Kafka reset, administrators should set `ALLOW_KAFKA_RESET_UNTIL_TIMESTAMP_MS` to an epoch timestamp in the near future (for instance, now + one hour), which ensures that the setting is automatically disabled again once the reset is complete.

For more information, see `ALLOW_KAFKA_RESET_UNTIL_TIMESTAMP_MS`.
Queries
Nested repetitions/quantifiers in the Falcon LogScale Regular Expression Engine v2 are now supported. Nested repetitions are constructions that repeat or quantify another regex expression that contains repetition/quantification. For instance, the regex:
/(?<ipv4>(?:\d{1,3}\.){3}\d{1,3})/
makes use of nested repetitions, namely:
(?:\d{1,3}\.){3}
For more information, see LogScale Regular Expression Engine V2.
Introducing a regex backtracking limit of 0.5 seconds per input for the Falcon LogScale Regex Engine v2. As soon as the regex starts backtracking to find matches, it is timed, and it is cancelled if the backtracking to find a match exceeds 0.5 seconds. This is done to avoid instances of practically infinite backtracking, as can be the case with some regexes.
For more information, see LogScale Regular Expression Engine V2.
Under-the-hood changes to how the size of certain events is estimated should now make query state size estimates more realistic.
Functions
Introducing the new query function `coalesce()`. This function accepts a list of fields and returns the first value that is not null or empty. Empty values can also be returned by setting a parameter in the function. For more information, see `coalesce()`.

Introducing the new query function `array:drop()`. This function drops all consecutive fields of a given array, starting from index 0. For more information, see `array:drop()`.
Fixed in this release
Queries
Building tables for a query would block other tables from being built due to an internal cache implementation behaviour, which has now been fixed.
Early Access
Security
It is now possible to map one IdP group name to multiple Falcon LogScale groups during group synchronization. Activate the `OneToManyGroupSynchronization` feature flag for this functionality. With the feature flag enabled, Falcon LogScale will map a group name to all Falcon LogScale groups in the organization that have a matching `lookupName` or `displayName`, while also performing validation for identical groups. If the multiple mapping feature is not enabled, the existing one-to-one mapping functionality remains unchanged.

For more information on how feature flags are enabled, see Enabling & Disabling Feature Flags.
For more information, see Group Synchronization.
Falcon LogScale 1.158.0 GA (2024-10-01)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.158.0 | GA | 2024-10-01 | Cloud | 2025-10-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following GraphQL mutations and field have been deprecated, since the starring functionality is no longer in use for alerts and scheduled searches:
isStarred field on the `Alert` and `ScheduledSearch` types.

The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environments which make use of these tools should be checked and updated for the new configuration:
Queries
When a digest node is unavailable, a warning is now attached to queries, but the queries are allowed to proceed.
This way, the behaviour of a query is similar to the case where a segment cannot be searched, due to all the owning nodes being unavailable at the time of the query.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Bundled JDK is now upgraded to Java 23.
New features and improvements
Security
New view permissions have been added to allow for updating and deleting different types of assets in a view. For instance, granting a user the `UpdateFiles` permission in a view will allow the user to update files, but not delete or create files.

View permissions added:

- `UpdateActions` – Allow updating actions
- `UpdateDashboards` – Allow updating dashboards
- `UpdateFiles` – Allow updating CSV files
- `UpdateSavedQueries` – Allow updating saved queries
- `UpdateScheduledReports` – Allow updating scheduled reports
- `UpdateTriggers` – Allow updating alerts and scheduled searches
- `DeleteActions` – Allow deleting actions
- `DeleteDashboards` – Allow deleting dashboards
- `DeleteFiles` – Allow deleting CSV files
- `DeleteSavedQueries` – Allow deleting saved queries
- `DeleteScheduledReports` – Allow deleting scheduled reports
- `DeleteTriggers` – Allow deleting alerts and scheduled searches
These permissions can currently only be assigned using the LogScale GraphQL API and are not supported in the LogScale UI.
For more information, see Repository & View Permissions.
UI Changes
The logging for LogScale Multi-Cluster Search network requests has been improved by adding new endpoints that have the `externalQueryId` in the path and the `federationId` in a query parameter.

The proxy endpoints for LogScale Multi-Cluster Search have changed. Specific internally marked endpoints that match the external endpoints for proxying have been added. This will improve the ability to track multi-cluster searches in the LogScale requests log.
Documentation
The naming structure and identification of release types has been updated. LogScale is available in two release types:
Generally Available (GA) releases — includes new functionality. Using a GA release gets you access to the latest features and functionality.
GA releases are deployed in LogScale SaaS environments.
Long Term Support (LTS) releases — stable releases that consolidate the features and functionality of earlier GA releases.
LogScale on-premise customers are advised to install the LTS releases. LTS releases are provided approximately every six weeks.
Security fixes are applied to the last three LTS releases.
Configuration
The new dynamic configuration parameter `ParserBacktrackingLimit` has been added to govern how many new events can be created from a single input event in parsers. This was previously controlled by the `QueryBacktrackingLimit` configuration parameter, which now applies only to queries, thus allowing for finer control.
Queries
LogScale Regular Expression Engine v2 now improves the optimizer's ability to turn alternations into decision trees.
For more information, see LogScale Regular Expression Engine V2.
Added optimizations for start-of-text regex expressions with LogScale Regular Expression Engine v2. In particular:
/^X/
and:
/\AX/
no longer try to match all positions in the string.
In tests on a large body of text, these optimizations have proven faster, showing improvements of ~202%, for example when tested against a collection of works by Mark Twain.
For more information, see LogScale Regular Expression Engine V2.
Fixed in this release
UI Changes
A minor UI issue in dropdown windows has been fixed: for example, the Time interval window popping up from the Time Selector would close if any text inside the window's fields was selected and the mouse click was released outside the window.
Dashboards and Widgets
The tooltip description of a widget would be cut off if the widget took up the whole row. This issue has now been fixed.
Ingestion
When creating a new event forwarding rule, the editor could not be edited in some cases. This issue has now been fixed.
Early Access
Configuration
A new dynamic configuration, `AggregatorOutputRowLimit`, has been added, along with the new organisation-level `CancelQueriesExceedingAggregateOutputRowLimit` configuration, which is currently under a feature flag. Aggregate Query Functions in queries that output more rows than the limit specified by the `AggregatorOutputRowLimit` configuration will get cancelled if the `CancelQueriesExceedingAggregateOutputRowLimit` configuration is enabled.

These configuration items are being added to allow LogScale administrators to protect the health of the cluster in cases where queries use runaway amounts of resources in the result phase of query execution, impacting cluster health and availability.
For more information, see Dynamic Configuration Parameters.
Improvement
Automation and Alerts
The error message The alert query did not start within {timeout}. LogScale will retry starting the query. has been fixed to show the actual timeout instead of just {timeout}.
In the emails sent by email actions, the text `Open in Humio` has been replaced by `Open in LogScale`.
Dashboards and Widgets
Dashboard parameter suggestions of the FixedList Parameter type now follow the order in which they were configured.
Dashboard parameter suggestions of the Query Parameter type now follow the order of the query result.
Falcon LogScale 1.157.0 GA (2024-09-24)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.157.0 | GA | 2024-09-24 | Cloud | 2025-10-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.

The deprecated JDK-less `server.tar.gz` tarball release is no longer being published. Users should switch to either `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz`, depending on their operating system.
Behavior Changes
Scripts or environments which make use of these tools should be checked and updated for the new configuration:
Storage
LogScale now avoids moving mini-segments to follow the digest nodes if the mini-segments are available in Bucket Storage. Instead, mini-segments will now be fetched as needed, when the digest leader is ready to merge them. This reduces the load on Global Database in some cases following a digest reassignment.
During digest reassignment, LogScale will now ignore mini-segments in Bucket Storage when deciding whether to switch merge targets because some mini-segments are not present locally. This should slightly reduce the load on Global Database during digest reassignment.
Live query updates are now allowed to run on a new thread pool, `digestLive`, but only for datasources that spend more time on these updates than the digester pool allows for live queries, or for many datasources whose total load exceeds the time available for digesters. This frees up time for the digesters, provided there is available CPU on the node.

LogScale now avoids moving merge targets to the digest leader during digest reassignment if those segments are already in Bucket Storage.
New features and improvements
GraphQL API
Field aliases now have API support for being exported and imported as YAML.
Fixed in this release
Dashboards and Widgets
Newline characters were not escaped in the dashboard parameter input field, making them appear as if they were not part of the value. This issue has now been fixed.
Queries
Queries stopped via early stopping criteria were wrongly reported as `Cancelled` instead of `Done`. The query status has now been fixed.
Other
Fixed an issue where off-heap memory limiting might not apply correctly.
Known Issues
Queries
Improvement
Ingestion
Data ingest rate monitoring has been adjusted to ensure it reports from nodes across all node roles. Additionally, the number of nodes reporting in large clusters has been raised.
Queries
Some internal improvements have been made to query coordination to make it more robust in certain cases — in particular with failing queries — with an impact on the timing of some API responses.
Falcon LogScale 1.156.0 GA (2024-09-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.156.0 | GA | 2024-09-17 | Cloud | 2025-10-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Upgraded the Kafka clients to 3.8.0.
New features and improvements
GraphQL API
The `GA` status has been removed from the following GraphQL mutations:
A modifiedInfo field has been added to the following GraphQL types, to provide information about when and by whom the asset was last modified:
If the Enable or Disable actions are used or edited within the UI, the modifiedInfo will also be updated.
Dashboards and Widgets
Added the ability to drag widgets to Sections when in Editing dashboard mode.
Fixed in this release
UI Changes
The OIDC and SAML configuration pages under Organization settings have been fixed: a tooltip containing a link would close before users could click the link.
Entering new arguments for Multi-value Parameters in Dashboard Link would not actually insert the new argument into the list of arguments. This issue has now been fixed.
Suggestions for parameter values in the Interactions panel would not be able to find fields in the query result. This issue has now been fixed.
Storage
An issue has been fixed where clusters with too few hosts online to reach the configured segment replication factor would run segment rebalancing repeatedly.
Rebalancing now disables itself in this situation, until enough nodes come back online for rebalancing to actually be able to reach the replication factor.
Queries
A regression issue that occurred in LogScale version 1.142.0 has now been fixed: it could cause LogScale to exceed the limit on off-heap memory when running many queries concurrently.
Queries hitting the limit on off-heap memory could be deprioritized more strongly than intended. This issue has now been fixed.
Other
A regression issue where some uploaded files close to 2GB could fail to load has now been fixed.
Known Issues
Queries
Improvement
UI Changes
The Amazon S3 archiving UI page now correctly points to the S3 Archiving documentation pages versioned for Self-Hosted and Cloud.
Queries
The enforcement of the limit on off-heap buffers for segments being queried has been tightened: the limit should no longer exceed the size required for reading a single segment, even in cases where the scheduler is very busy.
Falcon LogScale 1.155.0 GA (2024-09-10)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.155.0 | GA | 2024-09-10 | Cloud | 2025-10-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environments which make use of these tools should be checked and updated for the new configuration:
Automation and Alerts
Aggregate Alerts and Filter Alerts, as well as Scheduled Searches, will now stop the query if it has become outdated before it finishes.
Functions
Prior to LogScale v1.147, the `array:length()` function accepted a value in the `array` argument that did not contain brackets `[ ]`, so that `array:length("field")` would always produce the result `0` (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the `array` argument. Therefore, the function now requires the given array name to have `[ ]` brackets, since it only works on array fields.
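To illustrate the updated requirement, a minimal sketch (the field name `a` is illustrative, not from the release note):

```logscale
// Works after this change: "a[]" names an array field such as a[0], a[1], ...
array:length("a[]")
// Now throws an exception instead of returning 0:
// array:length("a")
```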
New features and improvements
Security
View permissions to allow for creating different types of assets in a view have been added.
For instance, granting a user the `CreateFiles` permission in a view will allow the user to create new files, but not edit existing files.
`CreateActions` - Allow creating actions
`CreateDashboards` - Allow creating dashboards
`CreateSavedQueries` - Allow creating saved queries
`CreateScheduledReports` - Allow creating scheduled reports
`CreateTriggers` - Allow creating alerts and scheduled searches
These permissions can currently only be assigned using the LogScale GraphQL API.
For more information, see Repository & View Permissions.
UI Changes
The maximum number of fields that can be added in a Field Aliasing schema has been increased from 50 to 1,000.
GraphQL API
Added a new GraphQL API, onDefaultBucketConfigs, for getting non-default bucket storage configurations for organizations. The intended use is to help manage a fleet of LogScale clusters.
Functions
The new `objectArray:eval()` query function is now available for processing structured/nested arrays. For more information, see `objectArray:eval()`.
The `array:eval()` query function for processing flat arrays is no longer experimental. For more information, see `array:eval()`.
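As a hedged sketch of the now-stable `array:eval()` (the field names are hypothetical, and the exact parameter spelling should be checked against the `array:eval()` reference documentation):

```logscale
// Double each element of values[] into a new flat array doubled[]
array:eval("values[]", asArray="doubled[]", var=x, function={doubled := x * 2})
```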
Fixed in this release
UI Changes
Cleaned up state for multi-cluster searches that could result in a build-up of memory used.
Automation and Alerts
The severity of the log message Alert found no results and will not trigger for Aggregate Alerts has been adjusted from `Warning` to `Info`.
Known Issues
Queries
Improvement
Queries
Some internal improvements have been made to query caching and cache distribution.
Falcon LogScale 1.154.0 GA (2024-09-03)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.154.0 | GA | 2024-09-03 | Cloud | 2025-10-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz`, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.
We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release to include the `server.tar.gz` artifact will be 1.154.0.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environments which make use of these tools should be checked and updated for the new configuration:
Functions
Prior to LogScale v1.147, the `array:length()` function accepted a value in the `array` argument that did not contain brackets `[ ]`, so that `array:length("field")` would always produce the result `0` (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the `array` argument. Therefore, the function now requires the given array name to have `[ ]` brackets, since it only works on array fields.
New features and improvements
UI Changes
The Time Selector now allows setting advanced relative time ranges that includes both a start and an end, and time anchoring
For more information, see Changing Time Interval, Advanced Time Syntax.
GraphQL API
Introducing the view field on the GraphQL `FileEntry` type, accessible through the entitiesSearch field.
Configuration
Mini-segments auto-tune their max block count, up to their limit from configuration. This allows bigger minis for fast datasources, which reduces the number of minis in the global change stream.
Dashboards and Widgets
Improved user experience for creating and configuring dashboard parameters, providing immediate feedback when the setup changes and improved error validation.
Saving changes in parameters settings does not require an additional step to apply the changes before saving the dashboard, making it consistent with saving all other dashboard configurations.
Changes in the Parameters settings side panel now give immediate feedback on the dashboard.
Errors in the parameters setup are now validated on dashboard save, informing users about identified issues.
In the Query Parameter type, the Query String field has been replaced with the LogScale Query editor, providing a rich query writing experience as well as syntax validation.
In the File Parameter type, additional validation was added to display a warning if the lookup file used as a source of suggestions was deleted.
Parameters have now additional states (error, warning, info) informing users about issues with the setup.
Added the ability to move dashboard parameters to a parameter panel from the configuration side panel.
Queries
Added support for using the new experimental LogScale Regular Expression Engine v2 by specifying the `F` flag, for example:

```logscale
/foo/F
```

The new engine is currently under development and while it can be faster in some cases, there may also be cases where it is slower.
For more information, see LogScale Regular Expression Engine V2.
Query warnings are now included in the activity logs for queries.
When a query is rejected due to a validation exception, an activity log is added.
Activity logs for queries are now generated for LogScale Self-Hosted.
Fixed in this release
Ingestion
Fixed issues related to searching for ingest timestamp:
Issues with the usage of the query state cache when searching by ingest timestamp.
Queries where the query time interval starts before the UNIX epoch are now rejected. This applies both when searching by ingest timestamp and by event timestamp. Previously, such a query by ingest timestamp would cause an error, while a query by event timestamp was allowed but not useful, as all events in LogScale have event timestamps after the UNIX epoch.
When searching by ingest timestamp, the `start()` and `end()` functions now report the correct search range.
The event timestamp is used in place of the ingest timestamp if the latter is missing. In old versions of LogScale (prior to 1.15), the ingest timestamp was not stored with events. In order to support correct filtering when searching via ingest timestamp also for such old data, LogScale now considers the event timestamp to be also the ingest timestamp.
Log Collector
Fixed a performance issue when sorting by config name in the Fleet Management overview which could result in 503s from the backend.
Queries
Fixed stale QuerySessions that could cause invalid queries to be re-used.
Query poll would not be re-tried on dashboards if the request timed out.
Functions
Fixed some cases where `writeJson()` would output fields as numbers that are not supported by the JSON standard. These fields are now represented as strings in the output to ensure that the resulting JSON is valid.
Known Issues
Falcon LogScale 1.153.4 LTS (2024-12-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.153.4 | LTS | 2024-12-17 | Cloud | 2025-09-30 | Yes | 1.112 | No |
TAR Checksum | Value |
---|---|
MD5 | 89da4587a8997aaced15238e84a88c29 |
SHA1 | 54a7c0d8481492beef15c2af19217a90ee415ca9 |
SHA256 | 1219274fef386c080ab7a02410a6dc16bed88b3a49c45f072c39c6bd5d071e73 |
SHA512 | 50effa4373b048ec699ddb0a40bf29da0865723e6e05d1b446ce3ffd6c05df8085c17be09652541ec507559fa6a42b38faddf3e224ed5975c55da4363cd7a75f |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 22 | 2bfe64dc45f561eef134c2d1d3c1c2fdd0173b474aef160aa3dcdf42a56e14a8 |
humio-core | 22 | b636b5f96d84b398a75b37aefef3162ca5b9a9213ce1834f4181a4b433801a05 |
kafka | 22 | a8c068517a4decedea274df8426b78f496e2382eca6388aa65ea63736bb70459 |
zookeeper | 22 | bad8d6ec1dfcb589bac951cb69a22169141d2fd514c0924bf3c8d31cedbd699c |
These notes include entries from the following previous releases: 1.153.1, 1.153.3
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Removed
Items that have been removed as of this release.
Installation and Deployment
The previously deprecated `jar` distribution of LogScale (e.g. `server-1.117.jar`) is no longer published starting from this version. For more information, see Falcon LogScale 1.130.0 GA (2024-03-19).
The previously deprecated `humio/kafka` and `humio/zookeeper` Docker images are now removed and no longer published.
API
The following previously deprecated KAFKA API endpoints have been removed:
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment`
GET `/api/v1/clusterconfig/kafka-queues/partition-assignment`
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults`
GET `/api/v1/clusterconfig/kafka-queues/partition-assignment/id`
Configuration
The obsolete configuration parameters `AUTOSHARDING_TRIGGER_DELAY_MS` and `AUTOSHARDING_CHECKINTERVAL_MS` have been removed, as autosharding is now handled by rate monitoring and no longer by ingest delay.
Other
The unnecessary `digest-coordinator-changes` and `desired-digest-coordinator-changes` metrics have been removed. Instead, the logging in the `IngestPartitionCoordinator` class has been improved to allow monitoring of when reassignment of desired and current digesters happens, by searching for `Wrote changes to desired digest partitions` / `Wrote changes to current digest partitions`.
Deprecation
Items that have been deprecated and may be removed in a future release.
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz`, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.
We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release to include the `server.tar.gz` artifact will be 1.154.0.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environments which make use of these tools should be checked and updated for the new configuration:
Installation and Deployment
The default `cleanup.policy` for the transientChatter-events topic has been switched from `compact` to `delete,compact`. This change will not apply to existing clusters. Changing this setting to `delete,compact` via Kafka's command line tools is particularly recommended if `transientChatter` is taking up excessive space on disk, whereas it is less relevant in production environments where Kafka's disks tend to be large.
Automation and Alerts
Aggregate and filter alert types now both display an Error (red) status if starting the alert query times out after 1 minute.
For more information on alert statuses, see Monitoring Alerts.
Storage
Reduced the waiting time for redactEvents background jobs to complete.
The background job will not complete until all mini-segments affected by the redaction have been merged into full segments. The job was pessimistically waiting for `MAX_HOURS_SEGMENT_OPEN` (30 days) before attempting the rewrite. This has been changed to wait for `FLUSH_BLOCK_SECONDS` (15 minutes) before attempting the rewrite; this means that while some mini-segments may not be rewritten for 30 days, it is uncommon. If a rewrite is attempted and encounters mini-segments, it is postponed and retried later.
For more information, see Redact Events API.
Configuration
When a global publish to Kafka timed out from digester threads, the system would previously initiate a failure shutdown. Instead, from version 1.144 the system retries the publish to the Global Database indefinitely for those specific global transactions that originate in a digester thread. If retries occur, they are logged with the error executeTransactionRetryingOnTimeout: unable to execute transaction for global, retrying.
Autoshards no longer respond to ingest delay by default, and now support round-robin instead.
Functions
Prior to LogScale v1.147, the
array:length()
function accepted a value in thearray
argument that did not contain brackets[ ]
so thatarray:length("field")
would always produce the result0
(since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in thearray
argument. Therefore, the function now requires the given array name to have[ ]
brackets, since it only works on array fields.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The minimum version of Java compatible with LogScale is now 21. Docker users, and users installing the release artifacts that bundle the JDK, are not affected.
It is recommended to switch to the release artifacts that bundle a JDK, because LogScale no longer supports bringing your own JDK as of release 1.138; see Falcon LogScale 1.138.0 GA (2024-05-14).
The JDK has been upgraded to 23.0.1.
New features and improvements
Security
When extending Retention span or size, any segments that were marked for deletion, but where the files remain in the system, are automatically resurrected. How much data you reclaim this way depends on the `backupAfterMillis` configuration on the repository.
For more information, see Audit Logging.
Installation and Deployment
The Docker containers have been configured to use the following environment variable values internally:
`DIRECTORY=/data/humio-data`
`HUMIO_AUDITLOG_DIR=/data/logs`
`HUMIO_DEBUGLOG_DIR=/data/logs`
`JVM_LOG_DIR=/data/logs`
`JVM_TMP_DIR=/data/humio-data/jvm-tmp`
This configuration replaces the following chains of internal symlinks, which have been removed:
`/app/humio/humio/humio-data` to `/app/humio/humio-data`
`/app/humio/humio-data` to `/data/humio-data`
`/app/humio/humio/logs` to `/app/humio/logs`
`/app/humio/logs` to `/data/logs`
This change is intended to allow the tool scripts in `/app/humio/humio/bin` to work correctly, as they were previously failing due to the presence of dangling symlinks when invoked via docker run if nothing was mounted at `/data`.
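Given the variable values above, a minimal sketch of a container invocation with a single host mount (the image name, tag, and host path are placeholders, not from the release note):

```shell
# Everything LogScale writes (humio-data, logs, JVM tmp) lands under /data,
# so a single mount covers all the internal paths listed above.
docker run -v /srv/logscale/data:/data humio/humio:stable
```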
UI Changes
LogScale administrators can now set the default timezone for their users.
For more information, see Setting Time Zone.
When exporting data to CSV, the Export to File dialog now offers the ability to select field names that are suggested based on the query results, or to select all fields in one click.
For more information, see Exporting Data.
The Time Interval panel now displays the @ingesttimestamp/@timestamp options selected when querying events for Aggregate Alerts.
For more information, see Changing Time Interval.
A new timestamp column has been added in the Event list displaying the alert timestamp selected (@ingesttimestamp or @timestamp). This will show as the new default column along with the usual @rawstring field column.
For more information, see Alert Properties.
When a file is referenced in a query, the Search page now shows a new tab next to the Results and Events tabs, bearing the name of the uploaded file. Activating the file tab will fetch the contents of the file and show them as a `Table` widget. Alternatively, if the file cannot be queried, a download link will be presented instead.
For more information, see Creating a File.
Sections can now be created inside dashboards, allowing for grouping relevant content together to maintain a clean and organized layout, making it easier for users to find and analyze related information. Sections can contain data visualizations as well as Parameter Panels. Additionally, they offer more flexibility when using the Time Selector, enabling users to apply a time setting across multiple widgets.
For more information, see Sections.
The `Users` page has been redesigned so that the Repository and view roles are displayed in a right-hand side panel which opens when a repository or view is selected. The repository and view roles panel shows the roles that give permissions to the user for the selected repository or view, together with the groups that apply to them and the corresponding query prefixes.
For more information, see Manage Users.
An organization administrator can now update a user's role on a repository or view from the `Users` page.
For more information, see Manage User Roles.
The design of the file editor for Lookup Files has been improved. The editor is now also more responsive and has support for tab navigation.
The Client type item in the Query details tab has been removed. Previously, Dashboard was incorrectly displayed as the value for both live dashboard and alert query types.
For more information, see Query Monitor — Query Details.
In Organization settings, layout changes have been made to the `Groups` page for viewing and updating repository and view permissions on a group.
UI workflow updates have been made in the `Groups` page for managing permissions and roles.
For more information, see Manage Groups.
Automation and Alerts
A maximum limit of 1 week has been added on the throttle period for Filter Alerts and Standard Alerts. Any existing alert with a higher throttle time will continue to run, but when edited, lowering the throttle time to 1 week at most will be required.
Standard Alerts have been renamed to Legacy Alerts. It is recommended to use Filter Alerts or Aggregate Alerts instead of legacy alerts.
For more information, see Alerts.
The `{action_invocation_id}` message template has been added: it contains a unique id for the invocation of the action that can be correlated with the activity logs.
For more information, see Message Templates and Variables, Monitoring Alert Execution through the humio-activity Repository.
It is no longer possible to use @id as throttle field in filter alerts, as this has no effect. Any existing filter alerts with @id as throttle field will continue to run, but the next time the filter alert is updated, the throttle field must be changed or removed.
For more information, see Field-Based Throttling.
Audit logs for Alerts and Scheduled Searches now contain the package, if installed from a package.
The following UI changes have been introduced for alerts:
The Alerts overview page now presents a table with search and filtering options.
An alert-specific version of the Search page is now available for creating and refining your query before saving it as an alert.
The alert's properties are opened in a side panel when creating or editing an alert.
In the side panel, the recommended alert type to choose is suggested based on the query.
For aggregate alerts, the side panel allows you to select the timestamp (@ingesttimestamp or @timestamp).
For more information, see Creating Alerts, Alert Properties.
A new Disabled actions status has been added and is visible from the `Alerts` overview table. This status will be displayed when there is an alert (or scheduled search) with only disabled actions attached.
For more information, see Alerts Overview.
Audit logs for Filter Alerts now contain the language version of the alert query.
A new aggregate alert type is introduced. The aggregate alert is now the recommended alert type for any queries containing aggregate functions. Like filter alerts, aggregate alerts use ingest timestamps and run back-to-back searches, guaranteeing at least once delivery to the actions for more robust results, even in case of ingest delays of up to 24 hours.
For more information, see Aggregate Alerts.
The following adjustments have been made for Scheduled PDF Reports:
If the feature is disabled for the cluster, then the menu item under will not show.
If the feature is disabled or the render service is in an error state, users who are granted the `ChangeScheduledReport` permission and try to access it will be presented with a banner on the `Scheduled reports` overview page.
The permissions overview in the UI now informs that the feature must be enabled and configured correctly for the cluster in order for the `ChangeScheduledReport` permission to have any effect.
Users can now see warnings and errors associated with alerts in the `Alerts` page opened in read-only mode.
GraphQL API
The new environmentVariableUsage() GraphQL API has been introduced for listing non-secret environment variables used by a node. This is intended as an aid to help do configuration discovery when managing a large number of LogScale clusters.
The getFileContent() and newFile() GraphQL endpoint responses will change for empty files. The return type is still `UploadedFileSnapshot!`, but the lines field will be changed to return `[]` when the file is empty. Previously, the return value was a list containing an empty list, `[[]]`. This change applies both for empty files, and when the provided filter string doesn't match any rows in the file.
The log line containing `Executed GraphQL query` in the humio repository, which is logged for every GraphQL call, now contains the name of the mutations and queries that are executed.
The new concatenateQueries() GraphQL API has been introduced for programmatically concatenating multiple queries into one. This is intended to eliminate errors that might occur if queries are combined naively.
The preview tag has been removed from the following GraphQL mutations:
DeleteIngestFeed
resetQuota
testAwsS3SqsIngestFeed
The stopStreamingQueries() GraphQL mutation is no longer in preview.
The getFileContent() GraphQL query will now filter CSV file rows case-insensitively and allow partial text matches when the filterString input argument is provided. This makes it possible to search for rows without knowing the full column values, and while ignoring case.
The defaultTimeZone GraphQL field on the `UserSettings` GraphQL type no longer defaults to the organization default time zone if the user has no default time zone set. To get the default organization time zone through the API, use the defaultTimeZone field on the `OrganizationConfigs` GraphQL type.
The new startFromDateTime argument has been added to the s3ConfigureArchiving GraphQL mutation. When set, S3 Archiving does not consider segment files that have a start time before this point in time. In particular, this allows enabling S3 archiving only from a point in time going forward, without archiving all the older files too.
A new field named searchUsers has been added on the group() output type in graphql, which is used to search users in the group. The field also allows for pagination, ordering and sorting of the result set.
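As a hedged sketch of the new startFromDateTime argument on s3ConfigureArchiving (the repository name, the remaining arguments, and the selection set are placeholders; the actual GraphQL schema should be consulted):

```graphql
mutation {
  s3ConfigureArchiving(
    repositoryName: "my-repo"                 # hypothetical repository
    startFromDateTime: "2024-09-01T00:00:00Z" # skip segments starting earlier
    # ... bucket, region, and format arguments omitted
  ) {
    __typename
  }
}
```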
Storage
An alternative S3 client is now available and enabled by default. It handles file uploads more efficiently by setting the Content-MD5 header during upload, thus allowing S3 to perform file validation instead of having LogScale do it via post-upload validation steps. This form of validation should work for all uploads, including when server-side encryption is enabled. The new S3 client only supports this validation mode, so setting the following variables will have no effect:
In case of issues, the S3 client can be disabled by setting `USE_AWS_SDK=false`, which will set LogScale back to the previous default client. Should you need to do this, please reach out to Support to have the issue addressed, because the previous client will eventually be deprecated and removed.
Support for bucket storage upload validation has changed. LogScale now supports the following three validation modes:
Checking the ETag HTTP response header on the upload response. This mode is the default, and can be opted out of via the `BUCKET_STORAGE_IGNORE_ETAG_UPLOAD` configuration parameter.
Checking the ETag HTTP response header on a HEAD request done for the uploaded file. This is the second preferred mode, and can be opted out of via the `BUCKET_STORAGE_IGNORE_ETAG_AFTER_UPLOAD` configuration parameter.
Downloading the file that was uploaded, in order to validate the `checksum` file. This mode is enabled if neither of the other modes is enabled.
Previous validation modes that did not compare checksums have been removed, as they were not reliable indicators of the uploaded file's integrity.
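For example, the first two validation modes could be opted out of via their configuration parameters (the `true` values assume boolean-style flags and should be verified against the configuration reference):

```ini
# Skip ETag validation of the upload response; fall through to the HEAD check.
BUCKET_STORAGE_IGNORE_ETAG_UPLOAD=true
# Also skip the HEAD-request ETag check, leaving download-based validation.
BUCKET_STORAGE_IGNORE_ETAG_AFTER_UPLOAD=true
```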
The size of the queue for segments being uploaded to bucket storage has been increased. This reduces how often a scan of the Global Database for changes is needed.
For more information, see Bucket Storage.
For better efficiency, more than one object is now deleted from Bucket Storage per request to S3 in order to reduce the number of requests to S3.
Support has been implemented for returning a result over 1GB in size on the `queryjobs` endpoint. There is now a limit of 8GB on the size of the returned result. The limits on state sizes for queries remain unaltered, so the effect of this change is that some queries that previously failed to return their results due to reaching 1GB, even though the query completed, now work.
API
Support for array and object handling in the `fields` object has been added for Ingesting with HTTP Event Collector (HEC) events.
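A sketch of a HEC event payload using an array and an object under `fields` (the field names and values are illustrative only; endpoint and token handling are omitted):

```json
{
  "event": "user login",
  "time": 1725400000,
  "fields": {
    "tags": ["prod", "eu-west"],
    "context": { "app": "portal", "version": 2 }
  }
}
```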
Configuration
A new dynamic configuration variable
GraphQlDirectivesAmountLimit
has been added to restrict how many GraphQL directives can be in a query. Valid values are integers from 5 to 1,000. The default value is 25.The
QueryBacktrackingLimit
feature is now enabled by default. The default value for the max number of backtracks (number of times a single event can be processed) a query can do has been reduced to2,000
.Adjusted launcher script handling of the
CORES
environment variable:If
CORES
is set, the launcher will now pass-XX:ActiveProcessorCount=$CORES
to the JVM. IfCORES
is not set, the launcher will pass-XX:ActiveProcessorCount
to the JVM with a value determined by the launcher. This ensures that the core count configured for LogScale is always the same as the core count configured for internal JVM thread pools.-XX:ActiveProcessorCount
will be ignored if passed directly via other environment variables, such asHUMIO_OPTS
. Administrators currently configuring their clusters this way should remove-XX:ActiveProcessorCount
from their variables and setCORES
instead.The default
retention.bytes
has been modified for global topic from 1 GB to 20 GB. This is applied only when the topic is being created by LogScale initially. For existing clusters you should raise retention on the global topic so that it has room for at least a few hours of flow. This is only relevant for large clusters, as small clusters do not produce enough to exceed 1 GB per few hours. It is ideal to have room for at least 1 day in the global topic for better resilience against large spikes in traffic combined with losing global snapshot files.Cluster-wide configuration of S3 Archiving is introduced, in addition to the existing repo-specific configurations. This feature allows the cluster admin to setup archiving to a (single) bucket for a subset of repositories on the cluster, fully independent of the S3 Archiving available to end users via the UI. This feature adds the following new configuration parameters:
S3_CLUSTERWIDE_ARCHIVING_ACCESSKEY
(required)S3_CLUSTERWIDE_ARCHIVING_SECRETKEY
(required)S3_CLUSTERWIDE_ARCHIVING_REGION
(required)S3_CLUSTERWIDE_ARCHIVING_BUCKET
(required)S3_CLUSTERWIDE_ARCHIVING_PREFIX
(defaults to empty string)S3_CLUSTERWIDE_ARCHIVING_PATH_STYLE_ACCESS
(default isfalse
)S3_CLUSTERWIDE_ARCHIVING_KMS_KEY_ARN
S3_CLUSTERWIDE_ARCHIVING_ENDPOINT_BASE
S3_CLUSTERWIDE_ARCHIVING_WORKERCOUNT
(default iscores/4
)S3_CLUSTERWIDE_ARCHIVING_USE_HTTP_PROXY
(default isfalse
)S3_CLUSTERWIDE_ARCHIVING_IBM_COMPAT
(default isfalse
)
Most of these configuration variables work as they do for S3 Archiving, except that the region and bucket are selected here via configuration rather than dynamically by end users, and authentication is via an explicit access key and secret rather than IAM roles or any other means.
The following dynamic configurations are added for this feature:
S3ArchivingClusterWideDisabled
(defaults tofalse
when not set) — allows temporarily pausing the archiving in case of issues triggered by, for example, the traffic this creates.S3ArchivingClusterWideEndAt
andS3ArchivingClusterWideStartFrom
— timestamps in milliseconds of the "cut" that selects segment files and events in them to include. When these configuration variables are unset (which is the default) the effect is to not filter by time.S3ArchivingClusterWideRegexForRepoName
(defaults tonot match
if not set) — the repository name regex must be set in order to enable the feature. When set, all repositories that have a name that matches the regex (unanchored) will be archived using the cluster-wide configuration from this variable.
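Putting the parameters above together, a minimal cluster-wide archiving configuration might look as follows — a sketch only, with bucket name, region, and credentials as illustrative placeholders:

```shell
# Required parameters (values are placeholders):
S3_CLUSTERWIDE_ARCHIVING_ACCESSKEY=AKIAEXAMPLEKEY
S3_CLUSTERWIDE_ARCHIVING_SECRETKEY=example-secret
S3_CLUSTERWIDE_ARCHIVING_REGION=us-east-1
S3_CLUSTERWIDE_ARCHIVING_BUCKET=example-archive-bucket
# Optional: key prefix within the bucket (defaults to the empty string):
S3_CLUSTERWIDE_ARCHIVING_PREFIX=cluster-archive/
```

The feature only takes effect once the S3ArchivingClusterWideRegexForRepoName dynamic configuration is set to a regex matching the repositories to archive.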
Ingestion
On the Code page accessible from the menu when writing a new parser, the following validation rules have been added globally:
Arrays must be contiguous and must have a field with index 0. For instance,
myArray[0] := "some value"
Fields that are prefixed with
#
must be configured to be tagged (to avoid falsely tagged fields).
An error is displayed on the parser Code page if the rules above are violated. This error will not appear during actual parsing.
For more information, see Creating a New Parser.
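A minimal parser sketch satisfying both rules — the field names are illustrative, and the #-prefixed field must additionally be configured as a tagged field in the parser's settings:

```
// Arrays must be contiguous and start at index 0:
myArray[0] := "first value"
myArray[1] := "second value"
// A field prefixed with # is only valid if configured to be tagged:
#category := "example"
```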
To avoid exporting redundant fields in the parsers, LogScale will now omit YAML fields with a
null
value when exporting YAML templates — even when such fields are contained inside a list. Omitting fields with anull
value previously only happened for fields outside a list.
Log Collector
RemoteUpdate
version dialog has been improved, with the ability to cancel pending and scheduled updates.
Functions
Matching on multiple rows with the
match()
query function is now supported. This functionality allowsmatch()
to emit multiple events, one for each matching row. Thenrows
parameter is used to specify the maximum number of rows to match on.For more information, see
match()
.The
match()
function now supports matching on multiple pairs of fields and columns.For more information, see
match()
.The new query function
text:contains()
is introduced. The function tests if a specific substring is present within a given string. It takes two arguments:string
andsubstring
, both of which can be provided as plain text, field values, or results of an expression.For more information, see
text:contains()
.The new query function
array:append()
is introduced, used to append one or more values to an existing array, or to create a new array.For more information, see
array:append()
.
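As a hedged sketch of the new array:append() function — the parameter names follow the LogScale function style, and the exact call syntax is an assumption based on the description above:

```
// Append two values to the array a[], creating it if it does not exist
// (values shown are illustrative):
array:append(array="a[]", values=["one", "two"])
```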
Fixed in this release
Falcon Data Replicator
Testing new FDR feeds using S3 aliasing would fail even for valid credentials. This issue has now been fixed.
UI Changes
The
Query Monitor
page would show queries running on @ingesttimestamp as running on a search interval over all time. This wrong behavior has been fixed to show the correct search interval.The event histogram would not adhere to the timezone selected for the query.
When managing sessions within an organization, it was not possible to sort active sessions by the Last active timestamp column. This issue has now been fixed.
In the Export to File dialog, when using the keyboard to switch between options, a different item than the one selected was highlighted. This issue has now been fixed.
A long list of large queries could break the list of queries under the Recent tab, preventing it from updating. The number of recent queries has now been limited to 30.
For more information, see Recalling Queries.
A race condition in LogScale Multi-Cluster Search has been fixed: a
done
query with an incomplete result could be overwritten, causing the query to never complete.The dialog to quickly switch to another repository would open when pressing the undo hotkey on Windows machines. This wrong behavior has now been fixed.
The dropdown menu for selecting fields used when exporting data to a CSV file was hidden behind the Export to file dialog. This issue has now been fixed.
The
Organizations
overview page has been fixed as the Volume column width within a specific organization could not be adjusted.The display of Lookup Files metadata in the file editor for very long user names has now been fixed.
The settings used to disable automatic searching would not be respected when creating a new alert. This issue has now been fixed.
When Creating a File, saving an invalid
.csv
file was possible in the file editor. This wrong behavior has now been fixed.The Export to file dialog used when Exporting Data has been fixed as CSV fields input would in some cases not be populated with all fields.
A visualization issue has been fixed where the values in a multi-select combo box could overlap with the number of selected items.
When clicking to sort the Sessions based on Last active, the sorting was wrongly based on Login time instead. This issue has now been fixed.
It was not possible to sort by columns other than ID in the Cluster nodes table under the UI menu. This issue has now been fixed.
Automation and Alerts
Actions would show up as scheduled searches and vice versa when viewing the contents of a package. This issue has now been fixed.
Fixed an issue where queries that were failing would never complete. This could cause Alerts and Scheduled Searches to hang.
Scheduled Searches would not always log if runs were skipped due to being behind. This issue has been fixed now.
The read-only alert page would wrongly report that actions were being throttled when a filter alert had disabled throttling. This issue has now been fixed.
GraphQL API
The getFileContent() GraphQL endpoint will now return an
UploadedFileSnapshot!
datatype with the field totalLinesCount: 0 when a file has no matches for a given filter string. Previously it would return the total number of lines in the file.The background processing underlying the redactEvents() mutation would fail if the filter included tags. This error has now been fixed.
Storage
Throttling for bucket uploads/downloads has been fixed as it could cause an unintentionally high number of concurrent uploads or downloads, to the point of exceeding the pool of connections.
Notifying the Global Database about file changes could be slow. This issue has now been fixed.
Segments could be considered under-replicated for a long time leading to events being retained in Kafka for extended periods. This wrong behavior has now been fixed.
Throttling for bucket uploads/downloads could cause unintentionally harsh throttling of downloads in favor of running more uploads concurrently. This issue has now been fixed.
Digest threads could fail to start digesting if
global
is very large, and if writing toglobal
is slow. This issue has now been fixed.The throttling for segment rebalancing has been reworked, which should help rebalancing keep up without overwhelming the cluster.
API
fields
object did not show up in Ingesting with HTTP Event Collector (HEC) events. This issue has now been fixed.
Configuration
A value of
1
for the
BucketStorageUploadInfrequentThresholdDays
dynamic configuration now results in all uploads to the bucket being subject to "S3 Intelligent-Tiering". Some installations want this because they apply versioning to their bucket: even though an object's life span as a non-deleted object is short, the actual data remains in the bucket for much longer, so tiering all objects saves on storage costs. Objects below 128KB are never tiered in any case.
Dashboards and Widgets
Arguments for parameters no longer used in a deleted query could be submitted anyway when invoking a saved query that uses the same arguments, thus generating an error. This issue has now been fixed.
The
Table
widget has been fixed as its header could appear transparent.
Ingestion
Event Forwarding would fail silently if an error occurred while executing the query. This issue has now been fixed.
A queryToRead field has been added to the filesUsed property of queryResult to read the data from a file used in a query.
For more information, see Polling a Query Job.
Event Forwarding using
match()
orlookup()
with a missing file would continue to fail after the file was uploaded. This issue has now been fixed.When shutting down a node, the process that loads files used by a parser would be stopped before the parser itself. This could lead to ingested events not being parsed. This issue has now been fixed.
A wrong ordering of the output events for parsers has been fixed — the output now returns the correct event order.
Log Collector
Queries that were nested too deeply would crash LogScale nodes. This issue has now been fixed.
Functions
parseXml()
would sometimes only partially extract text elements when the text contained newline characters. This issue has now been fixed.Parsing the empty string as a number could lead to errors causing the query to fail (in
formatTime()
function, for example). This issue has now been fixed.The query backtracking limit would wrongly apply to the total number of events, rather than how many times individual events are passed through the query pipeline. This issue has now been fixed.
Long running queries using
window()
could end up never completing. This issue has now been fixed.writeJson()
would write invalid JSON by not correctly quoting numbers starting with unary plus or ending with a trailing.
(dot).A regression issue has been fixed in the
match()
function incidr
mode, which made submission of the query significantly slower.
Known Issues
Queries
Improvement
UI Changes
The performance of the query editor has been improved, especially when working with large query results.
Automation and Alerts
The log field
previouslyPlannedForExecutionAt
has been renamed toearliestSkippedPlannedExecution
when skipping scheduled search executions.The field
useProxyOption
has been added to Webhooks action templates to be consistent with the other action templates.The severity of a number of alert and scheduled search logs has been changed to better reflect the severity for users.
Storage
The global topic throughput has been improved for particular updates to segments in datasources with many segments.
For more information, see Global Database.
The segment merge span now varies by +/- 10% of the configured value, to avoid all segments switching to new merge targets at the same point in time.
Ingestion
The input validation on Split by AWS records preprocessing when Set up a New Ingest Feed has been simplified: it will still validate that the incoming file is a single JSON object (and not, for example, multiple newline-delimited JSON objects), but the object may or may not contain a
Records
array. This resolves an ingest feed issue for CloudTrail with log file integrity enabled. In such cases, the emitted digest files (which do not have theRecords
array) would halt the ingest feed. These digest files are now ignored.For more background information, see this related release note.
The Split by AWS records preprocessing when Set up a New Ingest Feed now requires the
Records
array. This better protects against a situation where mistakenly using this preprocessing step with non-AWS records would interpret the files as empty batches of events, causing notifications in SQS to be deleted without ingesting any events.
Queries
Cache files, used by query functions such as
match()
andreadFile()
, are now written to disk for up to 24 hours after use. This can significantly improve the time it takes for a query to start; however, it naturally takes up disk space.The fraction of the disk used can be controlled using the configuration variables
TABLE_CACHE_MAX_STORAGE_FRACTION
andTABLE_CACHE_MAX_STORAGE_FRACTION_FOR_INGEST_AND_HTTP_ONLY
.
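A sketch of capping the cache's disk usage via the two variables named above — the values are illustrative, and the interpretation as fractions of available disk (with the second applying to ingest/HTTP-only nodes) is an assumption based on the variable names:

```shell
# Allow the table cache to use at most 5% of the disk (illustrative value):
TABLE_CACHE_MAX_STORAGE_FRACTION=0.05
# A tighter cap, assumed to apply to nodes serving only ingest and HTTP traffic:
TABLE_CACHE_MAX_STORAGE_FRACTION_FOR_INGEST_AND_HTTP_ONLY=0.01
```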
Falcon LogScale 1.153.3 LTS (2024-10-02)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.153.3 | LTS | 2024-10-02 | Cloud | 2025-09-30 | No | 1.112 | No |
TAR Checksum | Value |
---|---|
MD5 | 11d707c1e84f5de2386b6cef2e9a7309 |
SHA1 | 978a3bced7ad25a0c5d6cbe02975636be07ec829 |
SHA256 | 36a4ba68604c328b0efc5ed3dcde506f0a5664e77be1f3db0760b917aa35d11b |
SHA512 | 6c084b83bd6764e8a190bd5c4324da3e6394eb47b60a456cffcd539a20057b9003f2c2b00263d2af7db3d65ebd86eadd38380e70b7778d4fdba613e5e5eac70c |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 22 | a2672fa37f835074b5f9c75fc5806c116a8211f9e6f2cb2fd3623cf20bdc8e4b |
humio-core | 22 | febf83507daf65f91789b4b2a19e9d116f8572bce00d7e269c40c1112b1bbd7d |
kafka | 22 | 968b820261790d6ea9bb24427850168039d558e2a87c7e8fe59d14dee89390fb |
zookeeper | 22 | ec4b860137b8f152fd74b164a00ab28134436f42f6d7de1b18de9fce96a86b13 |
These notes include entries from the following previous releases: 1.153.1
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Removed
Items that have been removed as of this release.
Installation and Deployment
The previously deprecated
jar
distribution of LogScale (e.g.server-1.117.jar
) is no longer published starting from this version. For more information, see Falcon LogScale 1.130.0 GA (2024-03-19).The previously deprecated
humio/kafka
andhumio/zookeeper
Docker images are now removed and no longer published.API
The following previously deprecated KAFKA API endpoints have been removed:
POST
/api/v1/clusterconfig/kafka-queues/partition-assignment
GET
/api/v1/clusterconfig/kafka-queues/partition-assignment
POST
/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults
GET
/api/v1/clusterconfig/kafka-queues/partition-assignment/id
Configuration
The obsolete configuration parameters
AUTOSHARDING_TRIGGER_DELAY_MS
andAUTOSHARDING_CHECKINTERVAL_MS
have been removed because autosharding is now handled by rate monitoring rather than by ingest delay.Other
Unnecessary
digest-coordinator-changes
anddesired-digest-coordinator-changes
metrics have been removed. Instead, the logging in theIngestPartitionCoordinator
class has been improved, to allow monitoring of when reassignment of desired and current digesters happens — by searching forWrote changes to desired digest partitions
/Wrote changes to current digest partitions
.
Deprecation
Items that have been deprecated and may be removed in a future release.
The
server.tar.gz
release artifact has been deprecated. Users should switch to theOS/architecture-specific server-linux_x64.tar.gz
orserver-alpine_x64.tar.gz
, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release where
server.tar.gz artifact
is included will be 1.154.0.The lastScheduledSearch field from the
ScheduledSearch
datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to theScheduledSearch
datatype to replace lastScheduledSearch.
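The replacement fields can be requested together ahead of the removal; a hedged GraphQL sketch — the field names are from the note above, while the surrounding query shape and repository name are assumptions:

```
query {
  searchDomain(name: "example-repo") {
    scheduledSearches {
      lastExecuted    # replaces the deprecated lastScheduledSearch
      lastTriggered
    }
  }
}
```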
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Installation and Deployment
The default
cleanup.policy
for the transientChatter-events topic has been switched fromcompact
todelete,compact
. This change will not apply to existing clusters. Changing this setting todelete,compact
via Kafka's command line tools is particularly recommended iftransientChatter
is taking up excessive space on disk, whereas it is less relevant in production environments where Kafka's disks tend to be large.Automation and Alerts
Aggregate and filter alert types now both display an Error (red) status if starting the alert query times out after 1 minute.
For more information on alert statuses, see Monitoring Alerts.
Storage
Reduced the waiting time for redactEvents background jobs to complete.
The background job will not complete until all mini-segments affected by the redaction have been merged into full segments. The job was pessimistically waiting for
MAX_HOURS_SEGMENT_OPEN
(30 days) before attempting the rewrite. This has been changed to wait forFLUSH_BLOCK_SECONDS
(15 minutes) before attempting the rewrite. This means that, while some mini-segments may still not be rewritten for up to 30 days, this is uncommon. If a rewrite is attempted and encounters mini-segments, it is postponed and retried later.For more information, see Redact Events API.
Configuration
Previously, when a global publish to Kafka timed out from digester threads, the system would initiate a failure shutdown. From version 1.144, the system instead retries the publish to the Global Database indefinitely for those specific global transactions that originate in a digester thread. If retries occur, they are logged with an error executeTransactionRetryingOnTimeout: unable to execute transaction for global, retrying.
Autoshards no longer respond to ingest delay by default, and now support round-robin instead.
Functions
Prior to LogScale v1.147, the
array:length()
function accepted a value in thearray
argument that did not contain brackets[ ]
so thatarray:length("field")
would always produce the result0
(since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in thearray
argument. Therefore, the function now requires the given array name to have[ ]
brackets, since it only works on array fields.
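As an illustration of the change (the field name is illustrative):

```
// Pre-1.147: array:length("a") silently evaluated to 0 when no array field matched.
// From this release, the argument must name an array field, brackets included:
array:length("a[]")
```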
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The minimum version of Java compatible with LogScale is now 21. Docker users, and users installing the release artifacts that bundle the JDK, are not affected.
It is recommended to switch to the release artifacts that bundle a JDK, because LogScale no longer supports bringing your own JDK as of release 1.138; see Falcon LogScale 1.138.0 GA (2024-05-14).
New features and improvements
Security
When extending Retention span or size, any segments that were marked for deletion — but whose files remain in the system — are automatically resurrected. How much data you reclaim this way depends on the
backupAfterMillis
configuration on the repository.For more information, see Audit Logging.
Installation and Deployment
The Docker containers have been configured to use the following environment variable values internally:
DIRECTORY=/data/humio-data
HUMIO_AUDITLOG_DIR=/data/logs
HUMIO_DEBUGLOG_DIR=/data/logs
JVM_LOG_DIR=/data/logs
JVM_TMP_DIR=/data/humio-data/jvm-tmp
This configuration replaces the following chains of internal symlinks, which have been removed:/app/humio/humio/humio-data
to/app/humio/humio-data
/app/humio/humio-data
to/data/humio-data
/app/humio/humio/logs
to/app/humio/logs
/app/humio/logs
to/data/logs
This change is intended to allow the tool scripts in
/app/humio/humio/bin
to work correctly, as they were previously failing due to the presence of dangling symlinks when invoked via docker run if nothing was mounted at/data
.
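For example, mounting a host directory at /data makes the data directory and all log directories land outside the container — a sketch only, with the host path and image tag as illustrative placeholders:

```shell
# Mount a host directory at /data so humio-data and logs persist outside the container:
docker run -v /var/lib/logscale:/data humio/humio-core:1.153.3
```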
UI Changes
LogScale administrators can now set the default timezone for their users.
For more information, see Setting Time Zone.
When exporting data to CSV, the Export to File dialog now offers the ability to select field names that are suggested based on the query results, or to select all fields in one click.
For more information, see Exporting Data.
The Time Interval panel now displays the @ingesttimestamp/@timestamp options selected when querying events for Aggregate Alerts.
For more information, see Changing Time Interval.
A new timestamp column has been added in the Event list displaying the alert timestamp selected (@ingesttimestamp or @timestamp). This will show as the new default column along with the usual @rawstring field column.
For more information, see Alert Properties.
When a file is referenced in a query, the
Search
page now shows a new tab next to the Results and Events tabs, bearing the name of the uploaded file. Activating the file tab will fetch the contents of the file and will show them as aTable
widget. Alternatively, if the file cannot be queried, a download link will be presented instead.For more information, see Creating a File.
Sections can now be created inside dashboards, allowing for grouping relevant content together to maintain a clean and organized layout, making it easier for users to find and analyze related information. Sections can contain data visualizations as well as Parameter Panels. Additionally, they offer more flexibility when using the Time Selector, enabling users to apply a time setting across multiple widgets.
For more information, see Sections.
The
Users
page has been redesigned so that the Repository and view roles are displayed in a right-hand side panel which opens when a repository or view is selected. The repository and view roles panel shows the roles that give permissions to the user for the selected repository or view, together with groups that apply to them and the corresponding query prefixes.For more information, see Manage Users.
An organization administrator can now update a user's role on a repository or view from the
Users
page.For more information, see Manage User Roles.
The design of the file editor for Lookup Files has been improved. The editor is now also more responsive and has support for tab navigation.
The Client type item in the Query details tab has been removed. Previously, Dashboard was incorrectly displayed as the value for both live dashboard and alert query types.
For more information, see Query Monitor — Query Details.
In Organization settings, layout changes have been made to the
Groups
page for viewing and updating repository and view permissions on a group.UI workflow updates have been made in the
Groups
page for managing permissions and roles.For more information, see Manage Groups.
Automation and Alerts
A maximum limit of 1 week has been added on the throttle period for Filter Alerts and Standard Alerts. Any existing alert with a higher throttle time will continue to run, but when it is edited, the throttle time must be lowered to at most 1 week.
Standard Alerts have been renamed to Legacy Alerts. It is recommended to use Filter Alerts or Aggregate Alerts instead of legacy alerts.
For more information, see Alerts.
The
{action_invocation_id}
message template has been added: it contains a unique id for the invocation of the action that can be correlated with the activity logs.For more information, see Message Templates and Variables, Monitoring Alert Execution through the humio-activity Repository.
It is no longer possible to use @id as throttle field in filter alerts, as this has no effect. Any existing filter alerts with @id as throttle field will continue to run, but the next time the filter alert is updated, the throttle field must be changed or removed.
For more information, see Field-Based Throttling.
Audit logs for Alerts and Scheduled Searches now contain the package, if installed from a package.
The following UI changes have been introduced for alerts:
The Alerts overview page now presents a table with search and filtering options.
An alert-specific version of the
Search
page is now available for creating and refining your query before saving it as an alert.The alert's properties are opened in a side panel when creating or editing an alert.
In the side panel, the recommended alert type to choose is suggested based on the query.
For aggregate alerts, the side panel allows you to select the timestamp (@ingesttimestamp or @timestamp).
For more information, see Creating Alerts, Alert Properties.
A new Disabled actions status is added and can be visible from the
Alerts
overview table. This status will be displayed when there is an alert (or scheduled search) with only disabled actions attached.For more information, see Alerts Overview.
Audit logs for Filter Alerts now contain the language version of the alert query.
A new aggregate alert type is introduced. The aggregate alert is now the recommended alert type for any queries containing aggregate functions. Like filter alerts, aggregate alerts use ingest timestamps and run back-to-back searches, guaranteeing at least once delivery to the actions for more robust results, even in case of ingest delays of up to 24 hours.
For more information, see Aggregate Alerts.
The following adjustments have been made for Scheduled PDF Reports:
If the feature is disabled for the cluster, then the
menu item under will not show.If the feature is disabled or the render service is in an error state, users who are granted with the
ChangeScheduledReport
permission and try to access, will be presented with a banner on theScheduled reports
overview page.The permissions overview in the UI now informs that the feature must be enabled and configured correctly for the cluster, in order for the
ChangeScheduledReport
permission to have any effect.
Users can now see warnings and errors associated to alerts in the
Alerts
page opened in read-only mode.
GraphQL API
The new environmentVariableUsage() GraphQL API has been introduced for listing non-secret environment variables used by a node. This is intended as an aid to help do configuration discovery when managing a large number of LogScale clusters.
The getFileContent() and newFile() GraphQL endpoint responses will change for empty files. The return type is still
UploadedFileSnapshot!
, but the lines field will be changed to return[]
when the file is empty. Previously, the return value was a list containing an empty list[[]]
. This change applies both for empty files, and when the provided filter string doesn't match any rows in the file.The log line containing
Executed GraphQL query
in the humio repository, which is logged for every GraphQL call, now contains the names of the mutations and queries that are executed.The new concatenateQueries() GraphQL API has been introduced for programmatically concatenating multiple queries into one. This is intended to eliminate errors that might occur if queries are combined naively.
The preview tag has been removed from the following GraphQL mutations:
DeleteIngestFeed
resetQuota
testAwsS3SqsIngestFeed
The stopStreamingQueries() GraphQL mutation is no longer in preview.
The getFileContent() GraphQL query will now filter CSV file rows case-insensitively and allow partial text matches. This happens when the filterString input argument is provided. This makes it possible to search for rows without knowing the full column values, and while ignoring the case.
The defaultTimeZone GraphQL field on the
UserSettings
GraphQL type no longer defaults to the organization default time zone if the user has no default time zone set. To get the default organization time zone through the API, use the defaultTimeZone field on the
GraphQL type.The new startFromDateTime argument has been added to s3ConfigureArchiving GraphQL mutation. When set, S3Archiving does not consider segment files that have a start time that is before this point in time. This in particular allows enabling S3 archiving only from a point in time and going forward, without archiving all the older files too.
A new field named searchUsers has been added on the group() output type in graphql, which is used to search users in the group. The field also allows for pagination, ordering and sorting of the result set.
Storage
An alternative S3 client is now available and enabled by default. It handles file uploads more efficiently by setting the Content-MD5 header during upload, thus allowing S3 to perform file validation instead of having LogScale do it via post-upload validation steps. This form of validation should work for all uploads, including when server-side encryption is enabled. The new S3 client only supports this validation mode, so setting the following variables will have no effect:
In case of issues, the S3 client can be disabled by setting
USE_AWS_SDK=false
, which will set LogScale back to the previous default client. Should you need to do this, please reach out to Support to have the issue addressed, because the previous client will be deprecated and removed eventually.Support for bucket storage upload validation has changed. LogScale now supports the following three validation modes:
Checking the ETag HTTP response header on the upload response. This mode is the default, and can be opted out of via the
BUCKET_STORAGE_IGNORE_ETAG_UPLOAD
configuration parameter.Checking the ETag HTTP response header on a HEAD request done for the uploaded file. This is the second preferred mode, and can be opted out of via the
BUCKET_STORAGE_IGNORE_ETAG_AFTER_UPLOAD
configuration parameter.Downloading the file that was uploaded, in order to validate the
checksum
file. This mode is enabled if neither of the other modes are enabled.
Previous validation modes that did not compare checksums have been removed, as they were not reliable indicators of the uploaded file integrity.
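For illustration, opting out of both ETag checks leaves only the third mode (download-and-verify) active, while leaving both unset keeps the default first mode. The boolean values shown are assumed:

```
# Skip ETag validation on the upload response (mode 1)
BUCKET_STORAGE_IGNORE_ETAG_UPLOAD=true
# Skip ETag validation via a HEAD request after upload (mode 2)
BUCKET_STORAGE_IGNORE_ETAG_AFTER_UPLOAD=true
```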
The size of the queue for segments being uploaded to bucket storage has been increased. This reduces how often a scan of global for changes is needed.
For more information, see Bucket Storage.
For better efficiency, more than one object is now deleted from Bucket Storage per request, reducing the number of requests made to S3.
Support is implemented for returning a result over 1GB in size on the
queryjobs
endpoint. The returned result is now limited to 8 GB in size. The limits on state sizes for queries remain unaltered, so the effect of this change is that some queries that completed but previously failed to return their results because they exceeded 1 GB now work.
API
Support for array and object handling in the
fields
object has been added for Ingesting with HTTP Event Collector (HEC) events.
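As an illustration, a HEC payload whose fields object carries both an array and a nested object might look like the following — the event/time/fields envelope is the standard HEC shape, while the field contents are made up:

```json
{
  "event": "user login",
  "time": 1734393600,
  "fields": {
    "datacenter": "eu-west-1",
    "tags": ["prod", "auth"],
    "client": { "os": "linux", "version": "1.2.3" }
  }
}
```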
Configuration
A new dynamic configuration variable
GraphQlDirectivesAmountLimit
has been added to restrict how many GraphQL directives can be in a query. Valid values are integers from 5 to 1,000. The default value is 25.

The
QueryBacktrackingLimit
feature is now enabled by default. The default value for the maximum number of backtracks (the number of times a single event can be processed) a query can do has been reduced to 2,000
.

Adjusted launcher script handling of the
CORES
environment variable:

If CORES is set, the launcher will now pass -XX:ActiveProcessorCount=$CORES to the JVM. If CORES is not set, the launcher will pass -XX:ActiveProcessorCount to the JVM with a value determined by the launcher. This ensures that the core count configured for LogScale is always the same as the core count configured for internal JVM thread pools.

-XX:ActiveProcessorCount will be ignored if passed directly via other environment variables, such as HUMIO_OPTS. Administrators currently configuring their clusters this way should remove -XX:ActiveProcessorCount from their variables and set CORES instead.

The default retention.bytes for the global topic has been modified from 1 GB to 20 GB. This is applied only when the topic is initially created by LogScale. For existing clusters you should raise retention on the global topic so that it has room for at least a few hours of flow. This is only relevant for large clusters, as small clusters do not produce enough to exceed 1 GB in a few hours. Ideally the global topic should have room for at least 1 day, for better resilience against large spikes in traffic combined with losing global snapshot files.

Cluster-wide configuration of S3 Archiving is introduced, in addition to the existing repo-specific configurations. This feature allows the cluster admin to set up archiving to a (single) bucket for a subset of repositories on the cluster, fully independent of the S3 Archiving available to end users via the UI. This feature adds the following new configuration parameters:
S3_CLUSTERWIDE_ARCHIVING_ACCESSKEY (required)
S3_CLUSTERWIDE_ARCHIVING_SECRETKEY (required)
S3_CLUSTERWIDE_ARCHIVING_REGION (required)
S3_CLUSTERWIDE_ARCHIVING_BUCKET (required)
S3_CLUSTERWIDE_ARCHIVING_PREFIX (defaults to the empty string)
S3_CLUSTERWIDE_ARCHIVING_PATH_STYLE_ACCESS (default is false)
S3_CLUSTERWIDE_ARCHIVING_KMS_KEY_ARN
S3_CLUSTERWIDE_ARCHIVING_ENDPOINT_BASE
S3_CLUSTERWIDE_ARCHIVING_WORKERCOUNT (default is cores/4)
S3_CLUSTERWIDE_ARCHIVING_USE_HTTP_PROXY (default is false)
S3_CLUSTERWIDE_ARCHIVING_IBM_COMPAT (default is false)
Most of these configuration variables work like they do for S3 Archiving, except that the region/bucket is selected here via configuration rather than dynamically by end users, and that authentication is via an explicit access key and secret key, not via IAM roles or any other means.
The following dynamic configurations are added for this feature:
S3ArchivingClusterWideDisabled (defaults to false when not set) — allows temporarily pausing the archiving in case of issues triggered by, for example, the traffic this creates.

S3ArchivingClusterWideEndAt and S3ArchivingClusterWideStartFrom — timestamps in milliseconds of the "cut" that selects segment files and the events in them to include. When these configuration variables are unset (the default), the effect is to not filter by time.

S3ArchivingClusterWideRegexForRepoName (defaults to not matching if not set) — the repository name regex must be set in order to enable the feature. When set, all repositories whose name matches the regex (unanchored) will be archived using the cluster-wide configuration from this variable.
Ingestion
On the Code page accessible from the menu when writing a new parser, the following validation rules have been added globally:
Arrays must be contiguous and must have a field with index 0. For instance,
myArray[0] := "some value"
Fields that are prefixed with
#
must be configured to be tagged (to avoid falsely tagged fields).
An error is displayed on the parser Code page if the rules above are violated. This error will not appear during actual parsing.
For more information, see Creating a New Parser.
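To illustrate the contiguity rule using the assignment syntax from the entry above (field names are made up):

```logscale
// Valid: contiguous array indices starting at 0
myArray[0] := "first value"
myArray[1] := "second value"

// Flagged on the Code page: the array has no index 0
// otherArray[1] := "some value"
```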
To avoid exporting redundant fields in the parsers, LogScale will now omit YAML fields with a
null
value when exporting YAML templates — even when such fields are contained inside a list. Omitting fields with anull
value previously only happened for fields outside a list.
Log Collector
RemoteUpdate
version dialog has been improved, with the ability to cancel pending and scheduled updates.
Functions
Matching on multiple rows with the
match()
query function is now supported. This functionality allows match() to emit multiple events, one for each matching row. The nrows
parameter is used to specify the maximum number of rows to match on.

For more information, see match().

The match() function now supports matching on multiple pairs of fields and columns.

For more information, see match().

The new query function
text:contains()
is introduced. The function tests if a specific substring is present within a given string. It takes two arguments: string and substring
, both of which can be provided as plain text, field values, or results of an expression.

For more information, see text:contains().

The new query function
array:append()
is introduced, used to append one or more values to an existing array, or to create a new array.

For more information, see
array:append()
.
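Sketches of the three additions above, with made-up file, field, and array names; the parameter names follow the entries but should be checked against the function reference for your version:

```logscale
// match() emitting up to 5 events, one per matching row
match(file="users.csv", field=userid, column=userid, nrows=5)

// test for a substring and store the boolean result
isTimeout := text:contains(string=message, substring="timeout")

// append the value of the host field to an array, creating it if absent
array:append(array="seenHosts[]", values=[host])
```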
Fixed in this release
Falcon Data Replicator
Testing new FDR feeds using S3 aliasing would fail even for valid credentials. This issue has now been fixed.
UI Changes
The
Query Monitor
page would show queries running on @ingesttimestamp as running on a search interval over all time. This wrong behavior has been fixed to show the correct search interval.

The event histogram would not adhere to the timezone selected for the query. This issue has now been fixed.
When managing sessions within an organization, it was not possible to sort active sessions by the Last active timestamp column. This issue has now been fixed.
In the Export to File dialog, when using the keyboard to switch between options, a different item than the one selected was highlighted. This issue has now been fixed.
A long list of large queries could break the query list under the Recent tab, preventing it from updating. The number of recent queries is now limited to 30.
For more information, see Recalling Queries.
A race condition in LogScale Multi-Cluster Search has been fixed: a
done
query with an incomplete result could be overwritten, causing the query to never complete.

The dialog to quickly switch to another repository would open when pressing the undo hotkey on Windows machines. This wrong behavior has now been fixed.
The dropdown menu for selecting fields used when exporting data to a CSV file was hidden behind the Export to file dialog. This issue has now been fixed.
The
Organizations
overview page has been fixed: the Volume column width within a specific organization could not be adjusted.

The display of Lookup Files metadata in the file editor has been fixed for very long user names.
The settings used to disable automatic searching would not be respected when creating a new alert. This issue has now been fixed.
When Creating a File, saving an invalid
.csv
file was possible in the file editor. This wrong behavior has now been fixed.

The Export to file dialog used when Exporting Data has been fixed: the CSV fields input would in some cases not be populated with all fields.
Fixed a visualization issue where the values in a multi-select combo box could overlap with the number of selected items.
When clicking to sort the Sessions based on Last active, the sorting was wrongly based on Login time instead. This issue has now been fixed.
It was not possible to sort by columns other than ID in the Cluster nodes table under the UI menu. This issue has now been fixed.
Automation and Alerts
Actions would show up as scheduled searches and vice versa when viewing the contents of a package. This issue has now been fixed.
Fixed an issue where queries that were failing would never complete. This could cause Alerts and Scheduled Searches to hang.
Scheduled Searches would not always log if runs were skipped due to being behind. This issue has been fixed now.
The read-only alert page would wrongly report that actions were being throttled when a filter alert had disabled throttling. This issue has now been fixed.
GraphQL API
The getFileContent() GraphQL endpoint will now return an
UploadedFileSnapshot!
datatype with the field totalLinesCount: 0 when a file has no matches for a given filter string. Previously it would return the total number of lines in the file.

The background processing underlying the redactEvents() mutation would fail if the filter included tags. This error has now been fixed.
Storage
Throttling for bucket uploads/downloads has been fixed, as it could cause an unintentionally high number of concurrent uploads or downloads, to the point of exceeding the pool of connections.
Notifying the Global Database about file changes could be slow. This issue has now been fixed.
Segments could be considered under-replicated for a long time leading to events being retained in Kafka for extended periods. This wrong behavior has now been fixed.
Throttling for bucket uploads/downloads could cause unintentionally harsh throttling of downloads in favor of running more uploads concurrently. This issue has now been fixed.
Digest threads could fail to start digesting if
global
is very large, and if writing toglobal
is slow. This issue has now been fixed.

The throttling for segment rebalancing has been reworked, which should help rebalancing keep up without overwhelming the cluster.
API
fields
object did not show up in Ingesting with HTTP Event Collector (HEC) events. This issue has now been fixed.
Configuration
A value of 1 for the BucketStorageUploadInfrequentThresholdDays dynamic configuration now results in all uploads to the bucket being subject to "S3 Intelligent-Tiering". Some installations want this because they apply versioning to their bucket: even though an object's life span as a non-deleted object is short, the actual data remains in the bucket for much longer, so tiering all objects saves on storage costs. Objects below 128 KB are never tiered in any case.
Dashboards and Widgets
Arguments for parameters no longer used in a deleted query could be submitted anyway when invoking a saved query that uses the same arguments, thus generating an error. This issue has now been fixed.
The
Table
widget has been fixed, as its header could appear transparent.
Ingestion
Event Forwarding would fail silently if an error occurred while executing the query. This issue has now been fixed.
A queryToRead field has been added to the filesUsed property of queryResult to read the data from a file used in a query.
For more information, see Polling a Query Job.
Event Forwarding using
match()
orlookup()
with a missing file would continue to fail after the file was uploaded. This issue has now been fixed.

When shutting down a node, the process that loads files used by a parser would be stopped before the parser itself. This could lead to ingested events not being parsed. This issue has now been fixed.
A wrong order of the output events for parsers has been fixed — the output now returns events in the correct order.
Log Collector
Queries that were nested too deeply would crash LogScale nodes. This issue has now been fixed.
Functions
parseXml()
would sometimes only partially extract text elements when the text contained newline characters. This issue has now been fixed.Parsing the empty string as a number could lead to errors causing the query to fail (in
formatTime()
function, for example). This issue has now been fixed.

The query backtracking limit would wrongly apply to the total number of events, rather than to how many times individual events are passed through the query pipeline. This issue has now been fixed.
Long running queries using
window()
could end up never completing. This issue has now been fixed.

writeJson()
would write invalid JSON by not correctly quoting numbers starting with unary plus or ending with a trailing.
(dot). This issue has now been fixed.

A regression issue has been fixed in the
match()
function incidr
mode, which made submission of the query significantly slower.
Known Issues
Queries
Improvement
UI Changes
The performance of the query editor has been improved, especially when working with large query results.
Automation and Alerts
The log field
previouslyPlannedForExecutionAt
has been renamed to earliestSkippedPlannedExecution when skipping scheduled search executions.

The field
useProxyOption
has been added to Webhooks action templates to be consistent with the other action templates.

The severity of a number of alert and scheduled search logs has been changed to better reflect the severity for users.
Storage
The global topic throughput has been improved for particular updates to segments in datasources with many segments.
For more information, see Global Database.
The segment merge span now varies by +/- 10% of the configured value, to avoid all segment merge targets switching to new targets at the same point in time.
Ingestion
The input validation on Split by AWS records preprocessing when Set up a New Ingest Feed has been simplified: it will still validate that the incoming file is a single JSON object (and not, for example, multiple newline-delimited JSON objects), but the object may or may not contain a
Records
array. This resolves an ingest feed issue for CloudTrail with log file integrity enabled. In such cases, the emitted digest files (which do not have the
array) would halt the ingest feed. These digest files are now ignored.

For more background information, see this related release note.
The Split by AWS records preprocessing when Set up a New Ingest Feed now requires the
Records
array. This better protects against a situation where mistakenly using this preprocessing step with non-AWS records would interpret the files as empty batches of events, leading to notifications in SQS being deleted without any events being ingested.
Queries
Cache files, used by query functions such as
match()
andreadFile()
, are now written to disk for up to 24 hours after use. This can significantly improve the time it takes for a query to start; however, it naturally takes up disk space.

The fraction of disk used can be controlled using the configuration variables
TABLE_CACHE_MAX_STORAGE_FRACTION
andTABLE_CACHE_MAX_STORAGE_FRACTION_FOR_INGEST_AND_HTTP_ONLY
.
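For example, the cache budget could be capped like this — the fraction values are illustrative, with semantics as described above:

```
TABLE_CACHE_MAX_STORAGE_FRACTION=0.05
TABLE_CACHE_MAX_STORAGE_FRACTION_FOR_INGEST_AND_HTTP_ONLY=0.20
```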
Falcon LogScale 1.153.2 Internal (2024-09-18)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.153.2 | Internal | 2024-09-18 | Internal Only | 2025-09-30 | No | 1.112 | No |
Available for download two days after release.
Internal-only release.
Deprecation
Items that have been deprecated and may be removed in a future release.
The
server.tar.gz
release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz
orserver-alpine_x64.tar.gz
, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release where
server.tar.gz artifact
is included will be 1.154.0.

The lastScheduledSearch field from the
ScheduledSearch
datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to theScheduledSearch
datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Functions
Prior to LogScale v1.147, the
array:length()
function accepted a value in thearray
argument that did not contain brackets[ ]
so thatarray:length("field")
would always produce the result0
(since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in thearray
argument. Therefore, the function now requires the given array name to have[ ]
brackets, since it only works on array fields.
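In short, under the new behavior (the as output parameter is an assumption for illustration):

```logscale
// Works: the argument names an array field with [] brackets
array:length(array="values[]", as=count)

// Now throws an exception instead of silently returning 0:
// array:length("values")
```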
Known Issues
Falcon LogScale 1.153.1 LTS (2024-09-18)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.153.1 | LTS | 2024-09-18 | Cloud | 2025-09-30 | No | 1.112 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | 5780ffe21c92f8fa122d7eeb30136cb2 |
SHA1 | 38ae38917c6fbb3a5c9820e7f1dbc97a687c876c |
SHA256 | a28139539a3a9ee3c851fdeeed88167d2707d70a621813594ac0502cfc609a04 |
SHA512 | db367ed6483b118c34ebfc6c732e809e54a352266c8b1a8a23dc5cee41043048ecc3041d32b4f09c2987f5f702e49a3d512dcf2f16350c0adac6a4390845c1c5 |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 22 | 38801e6d339cfc288ccf58fb694e9e0e4882763773393e6c5940501f5c9987dc |
humio-core | 22 | 4b3a9fbe1d7de1e0e1048a73e82191984b74e33ac023dda3bec0ec5418b76a1a |
kafka | 22 | ffdb1580b5f5d17746757f8f8ff3f18d2286713a7d13da8ac21ed576677be826 |
zookeeper | 22 | 4126d016a2c432cb76278ee0e7368d93df7a9304cad08d19fa5ae3334872fc0a |
Download
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Removed
Items that have been removed as of this release.
Installation and Deployment
The previously deprecated
jar
distribution of LogScale (e.g.server-1.117.jar
) is no longer published starting from this version. For more information, see Falcon LogScale 1.130.0 GA (2024-03-19).

The previously deprecated
humio/kafka
andhumio/zookeeper
Docker images are now removed and no longer published.

API
The following previously deprecated KAFKA API endpoints have been removed:
POST
/api/v1/clusterconfig/kafka-queues/partition-assignment
GET
/api/v1/clusterconfig/kafka-queues/partition-assignment
POST
/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults
GET
/api/v1/clusterconfig/kafka-queues/partition-assignment/id
Configuration
The obsolete configuration parameters
AUTOSHARDING_TRIGGER_DELAY_MS
andAUTOSHARDING_CHECKINTERVAL_MS
have been removed, as autosharding is now handled by rate monitoring rather than by ingest delay.

Other
Unnecessary
digest-coordinator-changes
anddesired-digest-coordinator-changes
metrics have been removed. Instead, the logging in theIngestPartitionCoordinator
class has been improved, to allow monitoring of when reassignment of desired and current digesters happens — by searching forWrote changes to desired digest partitions
/Wrote changes to current digest partitions
.
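A simple free-text search along these lines (run against the repository holding LogScale's own logs) can surface the reassignments:

```logscale
"Wrote changes to desired digest partitions" or "Wrote changes to current digest partitions"
```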
Deprecation
Items that have been deprecated and may be removed in a future release.
The
server.tar.gz
release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz
orserver-alpine_x64.tar.gz
, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release where
server.tar.gz artifact
is included will be 1.154.0.

The lastScheduledSearch field from the
ScheduledSearch
datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to theScheduledSearch
datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Installation and Deployment
The default
cleanup.policy
for the transientChatter-events topic has been switched fromcompact
todelete,compact
. This change will not apply to existing clusters. Changing this setting todelete,compact
via Kafka's command line tools is particularly recommended iftransientChatter
is taking up excessive space on disk, whereas it is less relevant in production environments where Kafka's disks tend to be large.

Automation and Alerts
Aggregate and filter alert types now both display an Error (red) status if starting the alert query times out after 1 minute.
For more information on alert statuses, see Monitoring Alerts.
Storage
Reduced the waiting time for redactEvents background jobs to complete.
The background job will not complete until all mini-segments affected by the redaction have been merged into full segments. The job was pessimistically waiting for
MAX_HOURS_SEGMENT_OPEN
(30 days) before attempting the rewrite. This has been changed to wait for FLUSH_BLOCK_SECONDS (15 minutes) before attempting the rewrite. This means that while some mini-segments may still not be rewritten for up to 30 days, it is uncommon. If a rewrite is attempted and encounters mini-segments, it is postponed and retried later.

For more information, see Redact Events API.
Configuration
When global publish to Kafka times out from digester threads, the system would initiate a failure shutdown. Instead, from version 1.144 the system retries the publish to the Global Database indefinitely for those specific global transactions that originate in a digester thread. If retries occur, they are logged with the error executeTransactionRetryingOnTimeout: unable to execute transaction for global, retrying.
Autoshards no longer respond to ingest delay by default, and now support round-robin instead.
Functions
Prior to LogScale v1.147, the
array:length()
function accepted a value in thearray
argument that did not contain brackets[ ]
so thatarray:length("field")
would always produce the result0
(since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in thearray
argument. Therefore, the function now requires the given array name to have[ ]
brackets, since it only works on array fields.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The minimum version of Java compatible with LogScale is now 21. Docker users, and users installing the release artifacts that bundle the JDK, are not affected.
It is recommended to switch to the release artifacts that bundle a JDK, because LogScale no longer supports bringing your own JDK as of release 1.138; see Falcon LogScale 1.138.0 GA (2024-05-14).
New features and improvements
Security
When extending Retention span or size, any segments that were marked for deletion — but where the files remain in the system — are automatically resurrected. How much data you reclaim via this depends on the
backupAfterMillis
configuration on the repository.

For more information, see Audit Logging.
Installation and Deployment
The Docker containers have been configured to use the following environment variable values internally:
DIRECTORY=/data/humio-data
HUMIO_AUDITLOG_DIR=/data/logs
HUMIO_DEBUGLOG_DIR=/data/logs
JVM_LOG_DIR=/data/logs
JVM_TMP_DIR=/data/humio-data/jvm-tmp
This configuration replaces the following chains of internal symlinks, which have been removed:

/app/humio/humio/humio-data to /app/humio/humio-data
/app/humio/humio-data to /data/humio-data
/app/humio/humio/logs to /app/humio/logs
/app/humio/logs to /data/logs
This change is intended to allow the tool scripts in
/app/humio/humio/bin
to work correctly, as they were previously failing due to the presence of dangling symlinks when invoked via docker run if nothing was mounted at/data
.
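For example, mounting a host directory at /data (the host path is illustrative; humio-core is one of the published images listed above) gives the tool scripts real directories to resolve to:

```shell
docker run -v /var/lib/logscale:/data humio/humio-core
```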
UI Changes
LogScale administrators can now set the default timezone for their users.
For more information, see Setting Time Zone.
When exporting data to CSV, the Export to File dialog now offers the ability to select field names that are suggested based on the query results, or to select all fields in one click.
For more information, see Exporting Data.
The Time Interval panel now displays the @ingesttimestamp/@timestamp options selected when querying events for Aggregate Alerts.
For more information, see Changing Time Interval.
A new timestamp column has been added in the Event list displaying the alert timestamp selected (@ingesttimestamp or @timestamp). This will show as the new default column along with the usual @rawstring field column.
For more information, see Alert Properties.
When a file is referenced in a query, the
Search
page now shows a new tab next to the Results and Events tabs, bearing the name of the uploaded file. Activating the file tab will fetch the contents of the file and will show them as aTable
widget. Alternatively, if the file cannot be queried, a download link will be presented instead.

For more information, see Creating a File.
Sections can now be created inside dashboards, allowing for grouping relevant content together to maintain a clean and organized layout, making it easier for users to find and analyze related information. Sections can contain data visualizations as well as Parameter Panels. Additionally, they offer more flexibility when using the Time Selector, enabling users to apply a time setting across multiple widgets.
For more information, see Sections.
The
Users
page has been redesigned so that the repository and view roles are displayed in a right-hand side panel, which opens when a repository or view is selected. The panel shows the roles that give the user permissions for the selected repository or view, together with the groups that apply and the corresponding query prefixes.
An organization administrator can now update a user's role on a repository or view from the
Users
page.

For more information, see Manage User Roles.
The design of the file editor for Lookup Files has been improved. The editor is now also more responsive and has support for tab navigation.
The Client type item in the Query details tab has been removed. Previously, Dashboard was incorrectly displayed as the value for both live dashboard and alert query types.
For more information, see Query Monitor — Query Details.
In Organization settings, layout changes have been made to the
Groups
page for viewing and updating repository and view permissions on a group.UI workflow updates have been made in the
Groups
page for managing permissions and roles.

For more information, see Manage Groups.
Automation and Alerts
A maximum limit of 1 week has been added on the throttle period for Filter Alerts and Standard Alerts. Any existing alert with a higher throttle time will continue to run, but when edited, lowering the throttle time to 1 week at most will be required.
Standard Alerts have been renamed to Legacy Alerts. It is recommended to use Filter Alerts or Aggregate Alerts instead of legacy alerts.
For more information, see Alerts.
The
{action_invocation_id}
message template has been added: it contains a unique id for the invocation of the action that can be correlated with the activity logs.

For more information, see Message Templates and Variables, Monitoring Alert Execution through the humio-activity Repository.
It is no longer possible to use @id as throttle field in filter alerts, as this has no effect. Any existing filter alerts with @id as throttle field will continue to run, but the next time the filter alert is updated, the throttle field must be changed or removed.
For more information, see Field-Based Throttling.
Audit logs for Alerts and Scheduled Searches now contain the package, if installed from a package.
The following UI changes have been introduced for alerts:
The Alerts overview page now presents a table with search and filtering options.
An alert-specific version of the
Search
page is now available for creating and refining your query before saving it as an alert.The alert's properties are opened in a side panel when creating or editing an alert.
In the side panel, the recommended alert type to choose is suggested based on the query.
For aggregate alerts, the side panel allows you to select the timestamp (@ingesttimestamp or @timestamp).
For more information, see Creating Alerts, Alert Properties.
A new Disabled actions status is added and can be visible from the
Alerts
overview table. This status will be displayed when there is an alert (or scheduled search) with only disabled actions attached.

For more information, see Alerts Overview.
Audit logs for Filter Alerts now contain the language version of the alert query.
A new aggregate alert type is introduced. The aggregate alert is now the recommended alert type for any queries containing aggregate functions. Like filter alerts, aggregate alerts use ingest timestamps and run back-to-back searches, guaranteeing at least once delivery to the actions for more robust results, even in case of ingest delays of up to 24 hours.
For more information, see Aggregate Alerts.
The following adjustments have been made for Scheduled PDF Reports:
If the feature is disabled for the cluster, then the
menu item under will not show.

If the feature is disabled or the render service is in an error state, users who are granted the
ChangeScheduledReport
permission and try to access it will be presented with a banner on the
overview page.

The permissions overview in the UI now informs that the feature must be enabled and configured correctly for the cluster, in order for the
ChangeScheduledReport
permission to have any effect.
Users can now see warnings and errors associated with alerts in the
Alerts
page opened in read-only mode.
GraphQL API
The new environmentVariableUsage() GraphQL API has been introduced for listing non-secret environment variables used by a node. This is intended as an aid to help do configuration discovery when managing a large number of LogScale clusters.
The getFileContent() and newFile() GraphQL endpoint responses will change for empty files. The return type is still
UploadedFileSnapshot!
, but the lines field will be changed to return []
when the file is empty. Previously, the return value was a list containing an empty list, [[]]
. This change applies both to empty files and to cases where the provided filter string doesn't match any rows in the file. The log line containing
Executed GraphQL query
in the humio repository, logged for every GraphQL call, now contains the names of the mutations and queries that are executed. The new concatenateQueries() GraphQL API has been introduced for programmatically concatenating multiple queries into one. This is intended to eliminate errors that might occur if queries are combined naively.
The preview tag has been removed from the following GraphQL mutations:
DeleteIngestFeed
resetQuota
testAwsS3SqsIngestFeed
The stopStreamingQueries() GraphQL mutation is no longer in preview.
The getFileContent() GraphQL query now filters CSV file rows case-insensitively and allows partial text matches. This happens when the filterString input argument is provided, making it possible to search for rows without knowing the full column values, and while ignoring case.
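As a rough illustration of the new filter semantics (the function and row shape below are hypothetical, not the server implementation):

```python
def filter_rows(rows, filter_string):
    # Keep a row if any column value contains the filter string,
    # ignoring case: a partial, case-insensitive match.
    needle = filter_string.lower()
    return [r for r in rows if any(needle in str(v).lower() for v in r.values())]

rows = [{"host": "Web-01", "env": "Prod"}, {"host": "db-02", "env": "staging"}]
matches = filter_rows(rows, "WEB")  # matches the first row despite the case difference
```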
The defaultTimeZone GraphQL field on the
UserSettings
GraphQL type no longer defaults to the organization default time zone if the user has no default time zone set. To get the default organization time zone through the API, use the defaultTimeZone field on the
GraphQL type. The new startFromDateTime argument has been added to the s3ConfigureArchiving GraphQL mutation. When set, S3 Archiving does not consider segment files with a start time before this point in time. In particular, this allows enabling S3 archiving from a point in time going forward, without also archiving all older files.
A new field named searchUsers has been added to the group() output type in GraphQL, used to search for users in the group. The field also allows pagination, ordering, and sorting of the result set.
Storage
An alternative S3 client is now available and enabled by default. It handles file uploads more efficiently, by setting the Content-MD5 header during upload thus allowing S3 to perform file validation instead of having LogScale do it via post-upload validation steps. This form of validation should work for all uploads, including when server-side encryption is enabled. The new S3 client only supports this validation mode, so setting the following variables will have no effect:
In case of issues, the S3 client can be disabled by setting
USE_AWS_SDK=false
, which will set LogScale back to the previous default client. Should you need to do this, please reach out to Support to have the issue addressed, because the previous client will eventually be deprecated and removed. Support for bucket storage upload validation has changed. LogScale now supports the following three validation modes:
Checking the ETag HTTP response header on the upload response. This mode is the default, and can be opted out of via the
BUCKET_STORAGE_IGNORE_ETAG_UPLOAD
configuration parameter. Checking the ETag HTTP response header on a HEAD request for the uploaded file. This is the second preferred mode, and can be opted out of via the
BUCKET_STORAGE_IGNORE_ETAG_AFTER_UPLOAD
configuration parameter. Downloading the uploaded file in order to validate the
checksum
file. This mode is enabled if neither of the other modes are enabled.
Previous validation modes that did not compare checksums have been removed, as they were not reliable indicators of the uploaded file integrity.
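For background, the Content-MD5 value that S3 verifies is the base64-encoded (not hex) MD5 digest of the payload, per RFC 1864. A minimal sketch:

```python
import base64
import hashlib

def content_md5(data: bytes) -> str:
    # S3's Content-MD5 header carries the base64-encoded 128-bit MD5
    # digest of the payload (RFC 1864), which S3 verifies on upload.
    return base64.b64encode(hashlib.md5(data).digest()).decode("ascii")

header = content_md5(b"example payload")
```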
The size of the queue for segments being uploaded to bucket storage has been increased. This reduces how often a scan of global for changes is needed.
For more information, see Bucket Storage.
For better efficiency, more than one object is now deleted from Bucket Storage per request to S3 in order to reduce the number of requests to S3.
Support is implemented for returning a result over 1GB in size on the
queryjobs
endpoint. The returned result is now limited to 8GB in size. The limits on state sizes for queries remain unaltered, so some queries that completed but previously failed to return their results due to reaching 1GB now work.
API
Support for array and object handling in the
fields
object has been added for Ingesting with HTTP Event Collector (HEC) events.
Configuration
A new dynamic configuration variable
GraphQlDirectivesAmountLimit
has been added to restrict how many GraphQL directives can be in a query. Valid values are integers from 5 to 1,000. The default value is 25. The
QueryBacktrackingLimit
feature is now enabled by default. The default value for the max number of backtracks (number of times a single event can be processed) a query can do has been reduced to 2,000. Adjusted launcher script handling of the
CORES
environment variable: If CORES is set, the launcher will now pass -XX:ActiveProcessorCount=$CORES to the JVM. If CORES is not set, the launcher will pass -XX:ActiveProcessorCount to the JVM with a value determined by the launcher. This ensures that the core count configured for LogScale is always the same as the core count configured for internal JVM thread pools. -XX:ActiveProcessorCount will be ignored if passed directly via other environment variables, such as HUMIO_OPTS. Administrators currently configuring their clusters this way should remove -XX:ActiveProcessorCount from their variables and set CORES instead. The default
retention.bytes
has been increased for the global topic from 1 GB to 20 GB. This applies only when the topic is initially created by LogScale. For existing clusters, you should raise retention on the global topic so that it has room for at least a few hours of flow. This is only relevant for large clusters, as small clusters do not produce enough to exceed 1 GB in a few hours. Ideally, the global topic should have room for at least one day of flow, for better resilience against large traffic spikes combined with the loss of global snapshot files. Cluster-wide configuration of S3 Archiving is introduced, in addition to the existing repo-specific configurations. This feature allows the cluster admin to set up archiving to a (single) bucket for a subset of repositories on the cluster, fully independent of the S3 Archiving available to end users via the UI. This feature adds the following new configuration parameters:
S3_CLUSTERWIDE_ARCHIVING_ACCESSKEY (required)
S3_CLUSTERWIDE_ARCHIVING_SECRETKEY (required)
S3_CLUSTERWIDE_ARCHIVING_REGION (required)
S3_CLUSTERWIDE_ARCHIVING_BUCKET (required)
S3_CLUSTERWIDE_ARCHIVING_PREFIX (defaults to empty string)
S3_CLUSTERWIDE_ARCHIVING_PATH_STYLE_ACCESS (default is false)
S3_CLUSTERWIDE_ARCHIVING_KMS_KEY_ARN
S3_CLUSTERWIDE_ARCHIVING_ENDPOINT_BASE
S3_CLUSTERWIDE_ARCHIVING_WORKERCOUNT (default is cores/4)
S3_CLUSTERWIDE_ARCHIVING_USE_HTTP_PROXY (default is false)
S3_CLUSTERWIDE_ARCHIVING_IBM_COMPAT (default is false)
Most of these configuration variables work as they do for S3 Archiving, except that the region/bucket is selected here via configuration rather than dynamically by end users, and authentication is via an explicit access key and secret rather than IAM roles or other means.
The following dynamic configurations are added for this feature:
S3ArchivingClusterWideDisabled (defaults to false when not set) — allows temporarily pausing the archiving in case of issues triggered by, for example, the traffic this creates.
S3ArchivingClusterWideEndAt and S3ArchivingClusterWideStartFrom — timestamps in milliseconds of the "cut" that selects the segment files and events in them to include. When these configuration variables are unset (the default), the effect is not to filter by time.
S3ArchivingClusterWideRegexForRepoName (defaults to not match if not set) — the repository name regex must be set in order to enable the feature. When set, all repositories whose name matches the regex (unanchored) will be archived using the cluster-wide configuration from this variable.
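Since the repository name regex is applied unanchored, it behaves like a substring search unless you anchor it yourself. A small sketch of the matching semantics (the repository names and pattern are made up):

```python
import re

pattern = re.compile(r"prod-")  # hypothetical repo-name regex

# Unanchored: the pattern may match anywhere in the name.
hit = pattern.search("logs-prod-eu") is not None   # substring match counts
miss = pattern.search("staging-logs") is not None  # no match anywhere

# Anchor with ^ and $ yourself if only whole-name matches should archive.
anchored = re.compile(r"^prod-.*$")
whole = anchored.search("logs-prod-eu") is not None  # name doesn't start with prod-
```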
Ingestion
On the Code page accessible from the menu when writing a new parser, the following validation rules have been added globally:
Arrays must be contiguous and must have a field with index 0. For instance,
myArray[0] := "some value"
Fields that are prefixed with
#
must be configured to be tagged (to avoid falsely tagged fields).
An error is displayed on the parser Code page if the rules above are violated. This error will not appear during actual parsing.
For more information, see Creating a New Parser.
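The contiguity rule can be sketched as a small check; the helper below is hypothetical and not the actual parser validator:

```python
import re

def contiguous_from_zero(field_names):
    """Return, per array name, whether its indices form 0..n-1 with no gaps,
    as the validation rule above requires."""
    indices = {}
    for name in field_names:
        m = re.fullmatch(r"(\w+)\[(\d+)\]", name)
        if m:
            indices.setdefault(m.group(1), set()).add(int(m.group(2)))
    return {arr: idx == set(range(len(idx))) for arr, idx in indices.items()}

# myArray[0], myArray[1] is contiguous; other[0], other[2] skips index 1.
result = contiguous_from_zero(["myArray[0]", "myArray[1]", "other[0]", "other[2]"])
```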
To avoid exporting redundant fields in the parsers, LogScale will now omit YAML fields with a
null
value when exporting YAML templates — even when such fields are contained inside a list. Omitting fields with a null
value previously only happened for fields outside a list.
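The export rule amounts to recursively dropping null-valued mapping entries, including inside lists. A minimal sketch (the template shape is illustrative):

```python
def prune_nulls(value):
    # Drop None-valued keys from mappings, recursing into lists too
    # (previously only mappings outside lists were pruned).
    if isinstance(value, dict):
        return {k: prune_nulls(v) for k, v in value.items() if v is not None}
    if isinstance(value, list):
        return [prune_nulls(v) for v in value]
    return value

template = {"name": "p", "description": None, "tests": [{"input": "x", "note": None}]}
cleaned = prune_nulls(template)
```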
Log Collector
RemoteUpdate
version dialog has been improved, with the ability to cancel pending and scheduled updates.
Functions
Matching on multiple rows with the
match()
query function is now supported. This functionality allows match()
to emit multiple events, one for each matching row. The nrows
parameter is used to specify the maximum number of rows to match on. For more information, see
match()
. The
match()
function now supports matching on multiple pairs of fields and columns. For more information, see
match()
. The new query function
text:contains()
is introduced. The function tests if a specific substring is present within a given string. It takes two arguments: string
and substring
, both of which can be provided as plain text, field values, or results of an expression. For more information, see
text:contains()
. The new query function
array:append()
is introduced, used to append one or more values to an existing array, or to create a new array. For more information, see
array:append()
.
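As a rough Python analogue of multi-row matching (the data and helper below are illustrative; the actual behavior is implemented by the LogScale query engine), each event joins with every lookup row whose key matches, emitting one combined event per row, capped at nrows:

```python
lookup = [
    {"user": "alice", "role": "admin"},
    {"user": "alice", "role": "auditor"},
    {"user": "bob", "role": "dev"},
]

def match_rows(event, table, field="user", nrows=2):
    # Emit one merged event per matching row, up to nrows rows.
    rows = [r for r in table if r.get(field) == event.get(field)][:nrows]
    return [{**event, **r} for r in rows]

out = match_rows({"user": "alice", "bytes": 42}, lookup)
```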
Fixed in this release
Falcon Data Replicator
Testing new FDR feeds using S3 aliasing would fail for valid credentials. This issue has now been fixed.
UI Changes
The
Query Monitor
page would show queries running on @ingesttimestamp as running on a search interval over all time. This wrong behavior has been fixed to show the correct search interval. The event histogram would not adhere to the timezone selected for the query.
When managing sessions within an organization, it was not possible to sort active sessions by the Last active timestamp column. This issue has now been fixed.
In the Export to File dialog, when using the keyboard to switch between options, a different item than the one selected was highlighted. This issue has now been fixed.
A long list of large queries could break the query list under the Recent tab, preventing it from updating. The number of recent queries is now limited to 30.
For more information, see Recalling Queries.
A race condition in LogScale Multi-Cluster Search has been fixed: a
done
query with an incomplete result could be overwritten, causing the query to never complete. The dialog to quickly switch to another repository would open when pressing the undo hotkey on Windows machines. This wrong behavior has now been fixed.
The dropdown menu for selecting fields used when exporting data to a CSV file was hidden behind the Export to file dialog. This issue has now been fixed.
The
Organizations
overview page has been fixed, as the Volume column width within a specific organization could not be adjusted. The display of Lookup Files metadata in the file editor has been fixed for very long user names.
The settings used to disable automatic searching would not be respected when creating a new alert. This issue has now been fixed.
When Creating a File, saving an invalid
.csv
file was possible in the file editor. This wrong behavior has now been fixed. The Export to file dialog used when Exporting Data has been fixed, as the CSV fields input would in some cases not be populated with all fields.
Fixed a visualization issue where the values in a multi-select combo box could overlap with the number of selected items.
When clicking to sort the Sessions based on Last active, the sorting was wrongly based on Login time instead. This issue has now been fixed.
It was not possible to sort by columns other than ID in the Cluster nodes table under the UI menu. This issue has now been fixed.
Automation and Alerts
Actions would show up as scheduled searches and vice versa when viewing the contents of a package. This issue has now been fixed.
Fixed an issue where queries that were failing would never complete. This could cause Alerts and Scheduled Searches to hang.
Scheduled Searches would not always log if runs were skipped due to being behind. This issue has been fixed now.
The read-only alert page would wrongly report that actions were being throttled when a filter alert had disabled throttling. This issue has now been fixed.
GraphQL API
The getFileContent() GraphQL endpoint will now return an
UploadedFileSnapshot!
datatype with the field totalLinesCount: 0 when a file has no matches for a given filter string. Previously it would return the total number of lines in the file. The background processing underlying the redactEvents() mutation would fail if the filter included tags. This error has now been fixed.
Storage
Throttling for bucket uploads/downloads has been fixed, as it could cause an unintentionally high number of concurrent uploads or downloads, to the point of exceeding the pool of connections.
Notifying the Global Database about file changes could be slow. This issue has now been fixed.
Segments could be considered under-replicated for a long time leading to events being retained in Kafka for extended periods. This wrong behavior has now been fixed.
Throttling for bucket uploads/downloads could cause unintentionally harsh throttling of downloads in favor of running more uploads concurrently. This issue has now been fixed.
Digest threads could fail to start digesting if
global
is very large, and if writing toglobal
is slow. This issue has now been fixed.The throttling for segment rebalancing has been reworked, which should help rebalancing keep up without overwhelming the cluster.
API
fields
object did not show up in Ingesting with HTTP Event Collector (HEC) events. This issue has now been fixed.
Configuration
A value of
1
for the
BucketStorageUploadInfrequentThresholdDays
dynamic configuration now results in all uploads to the bucket being subject to "S3 Intelligent-Tiering". Some installations want this because they apply versioning to their bucket: even though an object's life span as a non-deleted object is short, the actual data remains in the bucket much longer, so tiering all objects saves on storage cost. Objects below 128KB are never tiered in any case.
Dashboards and Widgets
Arguments for parameters no longer used in a deleted query could still be submitted when invoking a saved query that uses the same arguments, generating an error. This issue has now been fixed.
The
Table
widget has been fixed, as its header appeared transparent.
Ingestion
Event Forwarding would fail silently if an error occurred while executing the query. This issue has now been fixed.
A queryToRead field has been added to the filesUsed property of queryResult to read the data from a file used in a query.
For more information, see Polling a Query Job.
Event Forwarding using
match()
orlookup()
with a missing file would continue to fail after the file was uploaded. This issue has now been fixed. When shutting down a node, the process that loads files used by a parser would be stopped before the parser itself. This could lead to ingested events not being parsed. This issue has now been fixed.
A wrong order of the output events for parsers has been fixed — the output now returns the correct event order.
Log Collector
Queries that were nested too deeply would crash LogScale nodes. This issue has now been fixed.
Functions
parseXml()
would sometimes only partially extract text elements when the text contained newline characters. This issue has now been fixed. Parsing the empty string as a number could lead to errors causing the query to fail (in the
formatTime()
function, for example). This issue has now been fixed. The query backtracking limit would wrongly apply to the total number of events, rather than to how many times individual events are passed through the query pipeline. This issue has now been fixed.
Long running queries using
window()
could end up never completing. This issue has now been fixed. writeJson()
would write invalid JSON by not correctly quoting numbers starting with unary plus or ending with a trailing.
(dot).
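For context, number tokens such as +5 (unary plus) or 7. (trailing dot) are not valid JSON, so a writer must quote them. A quick way to check such tokens (illustrative, not LogScale's implementation):

```python
import json

def as_json_token(s: str) -> str:
    # Emit s unchanged if it is already a valid JSON value;
    # otherwise quote it as a JSON string.
    try:
        json.loads(s)
        return s
    except ValueError:
        return json.dumps(s)
```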
Known Issues
Queries
Improvement
UI Changes
The performance of the query editor has been improved, especially when working with large query results.
Automation and Alerts
The log field
previouslyPlannedForExecutionAt
has been renamed toearliestSkippedPlannedExecution
when skipping scheduled search executions. The field
useProxyOption
has been added to Webhooks action templates, to be consistent with the other action templates. The severity of a number of alert and scheduled search logs has been changed to better reflect the severity for users.
Storage
The global topic throughput has been improved for particular updates to segments in datasources with many segments.
For more information, see Global Database.
The segment merge span now varies by +/- 10% of the configured value, to avoid all segments switching to new merge targets at the same point in time.
Ingestion
The input validation on Split by AWS records preprocessing when Set up a New Ingest Feed has been simplified: it will still validate that the incoming file is a single JSON object (and not, for example, multiple newline-delimited JSON objects), but the object may or may not contain a
Records
array. This resolves an ingest feed issue for CloudTrail with log file integrity enabled. In such cases, the emitted digest files (which do not have the Records
array) would halt the ingest feed. These digest files are now ignored. For more background information, see this related release note.
The Split by AWS records preprocessing when Set up a New Ingest Feed now requires the
Records
array. This better protects against a situation where mistakenly using this preprocessing step with non-AWS records would interpret the files as empty batches of events, causing notifications in SQS to be deleted without ingesting any events.
Queries
Cache files, used by query functions such as
match()
andreadFile()
, are now written to disk for up to 24 hours after use. This can significantly improve the time it takes for a query to start; however, it naturally takes up disk space. The fraction of disk used can be controlled using the configuration variables
TABLE_CACHE_MAX_STORAGE_FRACTION
andTABLE_CACHE_MAX_STORAGE_FRACTION_FOR_INGEST_AND_HTTP_ONLY
.
Falcon LogScale 1.153.0 GA (2024-08-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.153.0 | GA | 2024-08-27 | Cloud | 2025-09-30 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The
server.tar.gz
release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz
orserver-alpine_x64.tar.gz
, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead. We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release where
server.tar.gz artifact
is included will be 1.154.0. The
HUMIO_JVM_ARGS
environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The lastScheduledSearch field from the
ScheduledSearch
datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to theScheduledSearch
datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Automation and Alerts
Aggregate and filter alert types now both display an Error (red) status if starting the alert query times out after 1 minute.
For more information on alert statuses, see Monitoring Alerts.
Configuration
Autoshards no longer respond to ingest delay by default, and now support round-robin instead.
Functions
Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ], so that array:length("field") would always produce the result 0 (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the array argument. Therefore, the function now requires the given array name to have [ ] brackets, since it only works on array fields.
New features and improvements
UI Changes
UI workflow updates have been made in the
Groups
page for managing permissions and roles. For more information, see Manage Groups.
Automation and Alerts
The following adjustments have been made for Scheduled PDF Reports:
If the feature is disabled for the cluster, then the
menu item under will not show. If the feature is disabled or the render service is in an error state, users who are granted the
ChangeScheduledReport
permission and try to access, will be presented with a banner on theScheduled reports
overview page. The permissions overview in the UI now states that the feature must be enabled and configured correctly for the cluster in order for the
ChangeScheduledReport
permission to have any effect.
GraphQL API
The getFileContent() GraphQL query now filters CSV file rows case-insensitively and allows partial text matches. This happens when the filterString input argument is provided, making it possible to search for rows without knowing the full column values, and while ignoring case.
The defaultTimeZone GraphQL field on the
UserSettings
GraphQL type no longer defaults to the organization default time zone if the user has no default time zone set. To get the default organization time zone through the API, use the defaultTimeZone field on the
GraphQL type.
Storage
For better efficiency, more than one object is now deleted from Bucket Storage per request to S3 in order to reduce the number of requests to S3.
Configuration
Cluster-wide configuration of S3 Archiving is introduced, in addition to the existing repo-specific configurations. This feature allows the cluster admin to setup archiving to a (single) bucket for a subset of repositories on the cluster, fully independent of the S3 Archiving available to end users via the UI. This feature adds the following new configuration parameters:
S3_CLUSTERWIDE_ARCHIVING_ACCESSKEY (required)
S3_CLUSTERWIDE_ARCHIVING_SECRETKEY (required)
S3_CLUSTERWIDE_ARCHIVING_REGION (required)
S3_CLUSTERWIDE_ARCHIVING_BUCKET (required)
S3_CLUSTERWIDE_ARCHIVING_PREFIX (defaults to empty string)
S3_CLUSTERWIDE_ARCHIVING_PATH_STYLE_ACCESS (default is false)
S3_CLUSTERWIDE_ARCHIVING_KMS_KEY_ARN
S3_CLUSTERWIDE_ARCHIVING_ENDPOINT_BASE
S3_CLUSTERWIDE_ARCHIVING_WORKERCOUNT (default is cores/4)
S3_CLUSTERWIDE_ARCHIVING_USE_HTTP_PROXY (default is false)
S3_CLUSTERWIDE_ARCHIVING_IBM_COMPAT (default is false)
Most of these configuration variables work as they do for S3 Archiving, except that the region/bucket is selected here via configuration rather than dynamically by end users, and authentication is via an explicit access key and secret rather than IAM roles or other means.
The following dynamic configurations are added for this feature:
S3ArchivingClusterWideDisabled (defaults to false when not set) — allows temporarily pausing the archiving in case of issues triggered by, for example, the traffic this creates.
S3ArchivingClusterWideEndAt and S3ArchivingClusterWideStartFrom — timestamps in milliseconds of the "cut" that selects the segment files and events in them to include. When these configuration variables are unset (the default), the effect is not to filter by time.
S3ArchivingClusterWideRegexForRepoName (defaults to not match if not set) — the repository name regex must be set in order to enable the feature. When set, all repositories whose name matches the regex (unanchored) will be archived using the cluster-wide configuration from this variable.
Ingestion
On the Code page accessible from the menu when writing a new parser, the following validation rules have been added globally:
Arrays must be contiguous and must have a field with index 0. For instance,
myArray[0] := "some value"
Fields that are prefixed with
#
must be configured to be tagged (to avoid falsely tagged fields).
An error is displayed on the parser Code page if the rules above are violated. This error will not appear during actual parsing.
For more information, see Creating a New Parser.
Fixed in this release
UI Changes
A race condition in LogScale Multi-Cluster Search has been fixed: a
done
query with an incomplete result could be overwritten, causing the query to never complete. The Export to file dialog used when Exporting Data has been fixed, as the CSV fields input would in some cases not be populated with all fields.
Storage
Throttling for bucket uploads/downloads has been fixed, as it could cause an unintentionally high number of concurrent uploads or downloads, to the point of exceeding the pool of connections.
Segments could be considered under-replicated for a long time leading to events being retained in Kafka for extended periods. This wrong behavior has now been fixed.
Functions
The query backtracking limit would wrongly apply to the total number of events, rather than how many times individual events are passed through the query pipeline. This issue has now been fixed.
Known Issues
Queries
Improvement
UI Changes
The performance of the query editor has been improved, especially when working with large query results.
Ingestion
The input validation on Split by AWS records preprocessing when Set up a New Ingest Feed has been simplified: it will still validate that the incoming file is a single JSON object (and not, for example, multiple newline-delimited JSON objects), but the object may or may not contain a
Records
array. This resolves an ingest feed issue for CloudTrail with log file integrity enabled. In such cases, the emitted digest files (which do not have the Records
array) would halt the ingest feed. These digest files are now ignored. For more background information, see this related release note.
Falcon LogScale 1.152.0 GA (2024-08-20)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.152.0 | GA | 2024-08-20 | Cloud | 2025-09-30 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Removed
Items that have been removed as of this release.
Configuration
The obsolete configuration parameters
AUTOSHARDING_TRIGGER_DELAY_MS
and AUTOSHARDING_CHECKINTERVAL_MS
have been removed, as autosharding is now handled by rate monitoring rather than by ingest delay.
Deprecation
Items that have been deprecated and may be removed in a future release.
The
server.tar.gz
release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz
orserver-alpine_x64.tar.gz
, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead. We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release where
server.tar.gz artifact
is included will be 1.154.0. The
HUMIO_JVM_ARGS
environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The lastScheduledSearch field from the
ScheduledSearch
datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to theScheduledSearch
datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Functions
Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ], so that array:length("field") would always produce the result 0 (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the array argument. Therefore, the function now requires the given array name to have [ ] brackets, since it only works on array fields.
New features and improvements
UI Changes
In Organization settings, layout changes have been made to the
Groups
page for viewing and updating repository and view permissions on a group.
GraphQL API
The stopStreamingQueries() GraphQL mutation is no longer in preview.
Configuration
The default
retention.bytes
has been increased for the global topic from 1 GB to 20 GB. This applies only when the topic is initially created by LogScale. For existing clusters, you should raise retention on the global topic so that it has room for at least a few hours of flow. This is only relevant for large clusters, as small clusters do not produce enough to exceed 1 GB in a few hours. Ideally, the global topic should have room for at least one day of flow, for better resilience against large traffic spikes combined with the loss of global snapshot files.
Fixed in this release
UI Changes
The Query Monitor page would show queries running on @ingesttimestamp as running on a search interval over all time. This has been fixed so that the correct search interval is shown.
Automation and Alerts
Fixed an issue where queries that were failing would never complete. This could cause Alerts and Scheduled Searches to hang.
Scheduled Searches would not always log if runs were skipped due to being behind. This issue has now been fixed.
Dashboards and Widgets
The Table widget header could appear transparent. This issue has now been fixed.
Known Issues
Queries
Improvement
Automation and Alerts
The log field `previouslyPlannedForExecutionAt` has been renamed to `earliestSkippedPlannedExecution` when skipping scheduled search executions.

The field `useProxyOption` has been added to Webhooks action templates to be consistent with the other action templates.

The severity of a number of alert and scheduled search logs has been changed to better reflect the severity for users.
Ingestion
The Split by AWS records preprocessing option, used when Setting up a New Ingest Feed, now requires the `Records` array. This better protects against mistakenly using this preprocessing step with non-AWS records, which would interpret the files as empty batches of events and cause notifications in SQS to be deleted without any events being ingested.
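The failure mode being guarded against can be sketched in Python. The function name is hypothetical; the point is that a payload without a top-level `Records` array now fails loudly instead of being treated as an empty batch:

```python
import json

def split_aws_records(payload: str) -> list:
    """Sketch of the stricter preprocessing: require a top-level
    Records array instead of silently treating other shapes as
    empty batches (which led to SQS messages being deleted)."""
    doc = json.loads(payload)
    records = doc.get("Records")
    if not isinstance(records, list):
        raise ValueError("expected a top-level 'Records' array")
    return records

split_aws_records('{"Records": [{"eventName": "PutObject"}]}')  # one record
```

A non-AWS payload such as `'{"foo": 1}'` now raises rather than yielding an empty batch.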
Falcon LogScale 1.151.1 GA (2024-08-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.151.1 | GA | 2024-08-15 | Cloud | 2025-09-30 | No | 1.112 | No |
Available for download two days after release.
Bug fixes recommended for all customers.
Deprecation
Items that have been deprecated and may be removed in a future release.
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz`, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release including the `server.tar.gz` artifact will be 1.154.0.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Functions
Prior to LogScale v1.147, the `array:length()` function accepted a value in the `array` argument that did not contain brackets `[ ]`, so that `array:length("field")` would always produce the result `0` (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the `array` argument. The function therefore now requires the given array name to have `[ ]` brackets, since it only works on array fields.
Fixed in this release
Ingestion
Fixed an issue where queries with a large number of
OR
statements would crash the parser and cause a node to fail.
Known Issues
Falcon LogScale 1.151.0 GA (2024-08-13)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.151.0 | GA | 2024-08-13 | Cloud | 2025-09-30 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz`, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release including the `server.tar.gz` artifact will be 1.154.0.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Functions
Prior to LogScale v1.147, the `array:length()` function accepted a value in the `array` argument that did not contain brackets `[ ]`, so that `array:length("field")` would always produce the result `0` (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the `array` argument. The function therefore now requires the given array name to have `[ ]` brackets, since it only works on array fields.
New features and improvements
UI Changes
LogScale administrators can now set the default timezone for their users.
For more information, see Setting Time Zone.
The design of the file editor for Lookup Files has been improved. The editor is now also more responsive and has support for tab navigation.
The Client type item in the Query details tab has been removed. Previously, Dashboard was incorrectly displayed as the value for both live dashboard and alert query types.
For more information, see Query Monitor — Query Details.
Automation and Alerts
It is no longer possible to use @id as the throttle field in filter alerts, as this has no effect. Any existing filter alerts with @id as the throttle field will continue to run, but the next time the filter alert is updated, the throttle field must be changed or removed.
For more information, see Field-Based Throttling.
GraphQL API
A new field named searchUsers has been added to the group() output type in GraphQL, which is used to search users in the group. The field also allows for pagination, ordering, and sorting of the result set.
Configuration
The `QueryBacktrackingLimit` feature is now enabled by default. The default value for the maximum number of backtracks a query can do (the number of times a single event can be processed) has been reduced to `2,000`.
Ingestion
To avoid exporting redundant fields in parsers, LogScale will now omit YAML fields with a `null` value when exporting YAML templates, even when such fields are contained inside a list. Previously, fields with a `null` value were only omitted outside lists.
Fixed in this release
UI Changes
The settings used to disable automatic searching would not be respected when creating a new alert. This issue has now been fixed.
When Creating a File, it was possible to save an invalid `.csv` file in the file editor. This issue has now been fixed.
Dashboards and Widgets
Shared dashboards created on the special humio-search-all view wouldn't load correctly. This issue has now been fixed.
Ingestion
Event Forwarding would fail silently if an error occurred while executing the query. This issue has now been fixed.
Event Forwarding using `match()` or `lookup()` with a missing file would continue to fail after the file was uploaded. This issue has now been fixed.
Log Collector
Queries that were nested too deeply would crash LogScale nodes. This issue has now been fixed.
Functions
writeJson()
would write invalid JSON by not correctly quoting numbers starting with unary plus or ending with a trailing.
(dot).
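Both malformed shapes are easy to reproduce with any strict JSON parser; for example, Python's `json` module rejects a number with a unary plus or a bare trailing dot:

```python
import json

# RFC 8259 numbers may not start with a unary plus or end with a bare
# dot, so emitting them unquoted produced unparseable documents.
for bad in ('{"n": +5}', '{"n": 5.}'):
    try:
        json.loads(bad)
        print(f"accepted: {bad}")
    except json.JSONDecodeError:
        print(f"rejected: {bad}")  # both inputs take this branch
```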
Known Issues
Falcon LogScale 1.150.1 GA (2024-08-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.150.1 | GA | 2024-08-15 | Cloud | 2025-09-30 | No | 1.112 | No |
Available for download two days after release.
Bug fixes recommended for all customers.
Deprecation
Items that have been deprecated and may be removed in a future release.
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz`, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release including the `server.tar.gz` artifact will be 1.154.0.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Functions
Prior to LogScale v1.147, the `array:length()` function accepted a value in the `array` argument that did not contain brackets `[ ]`, so that `array:length("field")` would always produce the result `0` (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the `array` argument. The function therefore now requires the given array name to have `[ ]` brackets, since it only works on array fields.
Fixed in this release
Ingestion
Fixed an issue where queries with a large number of
OR
statements would crash the parser and cause a node to fail.
Known Issues
Falcon LogScale 1.150.0 GA (2024-08-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.150.0 | GA | 2024-08-06 | Cloud | 2025-09-30 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz`, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release including the `server.tar.gz` artifact will be 1.154.0.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Functions
Prior to LogScale v1.147, the `array:length()` function accepted a value in the `array` argument that did not contain brackets `[ ]`, so that `array:length("field")` would always produce the result `0` (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the `array` argument. The function therefore now requires the given array name to have `[ ]` brackets, since it only works on array fields.
New features and improvements
Installation and Deployment
The Docker containers have been configured to use the following environment variable values internally:
DIRECTORY=/data/humio-data
HUMIO_AUDITLOG_DIR=/data/logs
HUMIO_DEBUGLOG_DIR=/data/logs
JVM_LOG_DIR=/data/logs
JVM_TMP_DIR=/data/humio-data/jvm-tmp
This configuration replaces the following chains of internal symlinks, which have been removed:

`/app/humio/humio/humio-data` to `/app/humio/humio-data`

`/app/humio/humio-data` to `/data/humio-data`

`/app/humio/humio/logs` to `/app/humio/logs`

`/app/humio/logs` to `/data/logs`
This change allows the tool scripts in `/app/humio/humio/bin` to work correctly; they previously failed due to dangling symlinks when invoked via docker run with nothing mounted at `/data`.
UI Changes
Sections can now be created inside dashboards, allowing relevant content to be grouped together for a clean and organized layout and making it easier for users to find and analyze related information. Sections can contain data visualizations as well as Parameter Panels. They also offer more flexibility when using the Time Selector, enabling users to apply a time setting across multiple widgets.
For more information, see Sections.
An organization administrator can now update a user's role on a repository or view from the Users page.

For more information, see Manage User Roles.
Automation and Alerts
The `{action_invocation_id}` message template has been added: it contains a unique id for the invocation of the action that can be correlated with the activity logs.

For more information, see Message Templates and Variables, Monitoring Alert Execution through the humio-activity Repository.
Users can now see warnings and errors associated with alerts on the Alerts page opened in read-only mode.
Storage
Support has been implemented for returning a result over 1 GB in size from the `queryjobs` endpoint; the returned result is now limited to 8 GB. The limits on state sizes for queries remain unaltered, so some queries that completed but previously failed to return their results on reaching 1 GB now work.
Functions
Fixed in this release
Falcon Data Replicator
Testing new FDR feeds using S3 aliasing would fail for valid credentials. This issue has now been fixed.
UI Changes
The Organizations overview page has been fixed, as the Volume column width within a specific organization could not be adjusted.

The display of Lookup Files metadata in the file editor has been fixed for very long user names.
Storage
Throttling for bucket uploads/downloads could cause unintentionally harsh throttling of downloads in favor of running more uploads concurrently. This issue has now been fixed.
The throttling for segment rebalancing has been reworked, which should help rebalancing keep up without overwhelming the cluster.
Known Issues
Falcon LogScale 1.149.0 GA (2024-07-30)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.149.0 | GA | 2024-07-30 | Cloud | 2025-09-30 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Removed
Items that have been removed as of this release.
Installation and Deployment
The previously deprecated `jar` distribution of LogScale (e.g. `server-1.117.jar`) is no longer published starting from this version. For more information, see Falcon LogScale 1.130.0 GA (2024-03-19).

The previously deprecated `humio/kafka` and `humio/zookeeper` Docker images are now removed and no longer published.
Deprecation
Items that have been deprecated and may be removed in a future release.
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz`, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release including the `server.tar.gz` artifact will be 1.154.0.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The lastScheduledSearch field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the `ScheduledSearch` datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Functions
Prior to LogScale v1.147, the `array:length()` function accepted a value in the `array` argument that did not contain brackets `[ ]`, so that `array:length("field")` would always produce the result `0` (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the `array` argument. The function therefore now requires the given array name to have `[ ]` brackets, since it only works on array fields.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The bundled JDK is upgraded to 22.0.2.
Fixed in this release
UI Changes
Fixed a visualization issue where the values in a multi-select combo box could overlap with the number of selected items.
When clicking to sort the Sessions based on Last active, the sorting was wrongly based on Login time instead. This issue has now been fixed.
Configuration
A value of `1` for the `BucketStorageUploadInfrequentThresholdDays` dynamic configuration now results in all uploads to the bucket being subject to S3 Intelligent-Tiering. Some installs want this because they apply versioning to their bucket: even though an object's life span as a non-deleted object is short, the actual data remains in the bucket for much longer, so tiering all objects saves on storage costs. Objects below 128 KB are never tiered in any case.
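The resulting rule can be sketched as follows. The function name is hypothetical; it only models the two facts stated above (a threshold of 1 tiers every upload, and S3's own 128 KB floor):

```python
MIN_TIERABLE_BYTES = 128 * 1024  # S3 never tiers objects below 128 KB

def wants_intelligent_tiering(object_size: int, threshold_days: int) -> bool:
    """Sketch: with the dynamic config set to 1, every uploaded object
    is tagged for S3 Intelligent-Tiering, except those below 128 KB."""
    if object_size < MIN_TIERABLE_BYTES:
        return False
    return threshold_days == 1  # a value of 1 means: tier all uploads

wants_intelligent_tiering(1 << 20, threshold_days=1)     # a 1 MB object is tiered
wants_intelligent_tiering(64 * 1024, threshold_days=1)   # a 64 KB object is not
```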
Falcon LogScale 1.148.0 Internal (2024-07-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.148.0 | Internal | 2024-07-23 | Internal Only | 2025-07-31 | No | 1.112 | No |
Available for download two days after release.
Internal-only release.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users who need to configure the core count manually should set the `CORES=n` environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Removed
Items that have been removed as of this release.
API
The following previously deprecated Kafka API endpoints have been removed:
POST
/api/v1/clusterconfig/kafka-queues/partition-assignment
GET
/api/v1/clusterconfig/kafka-queues/partition-assignment
POST
/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults
GET
/api/v1/clusterconfig/kafka-queues/partition-assignment/id
Deprecation
Items that have been deprecated and may be removed in a future release.
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz`, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release including the `server.tar.gz` artifact will be 1.154.0.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The lastScheduledSearch field from the
ScheduledSearch
datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to theScheduledSearch
datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Storage
Reduced the waiting time for redactEvents background jobs to complete.
The background job will not complete until all mini-segments affected by the redaction have been merged into full segments. The job was pessimistically waiting for `MAX_HOURS_SEGMENT_OPEN` (30 days) before attempting the rewrite. It has been changed to wait for `FLUSH_BLOCK_SECONDS` (15 minutes) instead; while some mini-segments may still not be rewritten for 30 days, this is uncommon. If a rewrite is attempted and encounters mini-segments, it is postponed and retried later.

For more information, see Redact Events API.
Functions
Prior to LogScale v1.147, the `array:length()` function accepted a value in the `array` argument that did not contain brackets `[ ]`, so that `array:length("field")` would always produce the result `0` (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the `array` argument. The function therefore now requires the given array name to have `[ ]` brackets, since it only works on array fields.
New features and improvements
UI Changes
The Users page has been redesigned so that repository and view roles are displayed in a right-hand side panel, which opens when a repository or view is selected. The panel shows the roles that give the user permissions for the selected repository or view, together with the groups that apply to them and the corresponding query prefixes.

For more information, see Manage Users.
Storage
The size of the queue for segments being uploaded to bucket storage has been increased. This reduces how often a scan of global for changes is needed.
For more information, see Bucket Storage.
Configuration
Adjusted launcher script handling of the `CORES` environment variable:

If `CORES` is set, the launcher will now pass `-XX:ActiveProcessorCount=$CORES` to the JVM. If `CORES` is not set, the launcher will pass `-XX:ActiveProcessorCount` to the JVM with a value determined by the launcher. This ensures that the core count configured for LogScale is always the same as the core count configured for internal JVM thread pools.

`-XX:ActiveProcessorCount` will be ignored if passed directly via other environment variables, such as `HUMIO_OPTS`. Administrators currently configuring their clusters this way should remove `-XX:ActiveProcessorCount` from their variables and set `CORES` instead.
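The launcher behavior described above amounts to the following Python sketch (the real launcher is a shell script, and its core-detection logic may differ; `os.cpu_count()` is a stand-in):

```python
import os

def active_processor_count_flag(environ=os.environ) -> str:
    """Prefer an explicit CORES setting; otherwise fall back to a
    detected core count, so LogScale and the JVM always agree."""
    cores = environ.get("CORES") or str(os.cpu_count())
    return f"-XX:ActiveProcessorCount={cores}"

active_processor_count_flag({"CORES": "8"})  # -> '-XX:ActiveProcessorCount=8'
```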
Fixed in this release
UI Changes
The dropdown menu for selecting fields used when exporting data to a CSV file was hidden behind the Export to file dialog. This issue has now been fixed.
Ingestion
A queryToRead field has been added to the filesUsed property of queryResult to read the data from a file used in a query.
For more information, see Polling a Query Job.
Improvement
Queries
Cache files used by query functions such as `match()` and `readFile()` are now written to disk for up to 24 hours after use. This can significantly improve the time it takes for a query to start; however, it naturally takes up disk space.

The fraction of the disk used can be controlled using the configuration variables `TABLE_CACHE_MAX_STORAGE_FRACTION` and `TABLE_CACHE_MAX_STORAGE_FRACTION_FOR_INGEST_AND_HTTP_ONLY`.
Falcon LogScale 1.147.0 GA (2024-07-16)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.147.0 | GA | 2024-07-16 | Cloud | 2025-09-30 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users who need to configure the core count manually should set the `CORES=n` environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following API endpoints are deprecated and marked for removal in 1.148.0:
POST
/api/v1/clusterconfig/kafka-queues/partition-assignment
GET
/api/v1/clusterconfig/kafka-queues/partition-assignment
POST
/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
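As an illustration of the Kafka-native replacement, a reassignment plan is a small JSON file passed to kafka-reassign-partitions.sh. The topic name and broker ids below are hypothetical; check your cluster's actual ingest topic name first:

```python
import json

# Hypothetical plan: move partition 0 of the ingest topic to brokers 1 and 2.
plan = {
    "version": 1,
    "partitions": [
        {"topic": "humio-ingest", "partition": 0, "replicas": [1, 2]},
    ],
}
with open("reassign.json", "w") as fh:
    json.dump(plan, fh, indent=2)

# Then, against your Kafka cluster:
#   bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
#       --reassignment-json-file reassign.json --execute
```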
The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.
We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release where the server.tar.gz artifact is included will be 1.154.0.
We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.
Better alternatives are available going forward. We recommend the following:
If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.
The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Functions
Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ], so that array:length("field") would always produce the result 0 (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the array argument. Therefore, the function now requires the given array name to include [ ] brackets, since it only works on array fields.
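A minimal illustration of the new behavior (the field name values is invented for the example; treat the exact invocation syntax as a sketch and check the function reference):

```
// Works: "values[]" names an array field, so its length is computed.
array:length("values[]")

// Now throws an exception: "values" (without [ ] brackets) is not an array
// field name. Prior to 1.147 this silently yielded 0.
array:length("values")
```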
New features and improvements
UI Changes
The Time Interval panel now displays the @ingesttimestamp/@timestamp options selected when querying events for Aggregate Alerts.
For more information, see Changing Time Interval.
A new timestamp column has been added in the Event list displaying the alert timestamp selected (@ingesttimestamp or @timestamp). This will show as the new default column along with the usual @rawstring field column.
For more information, see Alert Properties.
Automation and Alerts
Standard Alerts have been renamed to Legacy Alerts. It is recommended to use Filter Alerts or Aggregate Alerts instead of legacy alerts.
For more information, see Alerts.
The following UI changes have been introduced for alerts:
The Alerts overview page now presents a table with search and filtering options.
An alert-specific version of the Search page is now available for creating and refining your query before saving it as an alert.
The alert's properties are opened in a side panel when creating or editing an alert.
In the side panel, the recommended alert type to choose is suggested based on the query.
For aggregate alerts, the side panel allows you to select the timestamp (@ingesttimestamp or @timestamp).
For more information, see Creating Alerts, Alert Properties.
A new Disabled actions status has been added and is visible from the Alerts overview table. This status will be displayed when there is an alert (or scheduled search) with only disabled actions attached.
For more information, see Alerts Overview.
A new aggregate alert type is introduced. The aggregate alert is now the recommended alert type for any queries containing aggregate functions. Like filter alerts, aggregate alerts use ingest timestamps and run back-to-back searches, guaranteeing at least once delivery to the actions for more robust results, even in case of ingest delays of up to 24 hours.
For more information, see Aggregate Alerts.
Log Collector
The RemoteUpdate version dialog has been improved, with the ability to cancel pending and scheduled updates.
Fixed in this release
Ingestion
When shutting down a node, the process that loads files used by a parser would be stopped before the parser itself. This could lead to ingested events not being parsed. This issue has now been fixed.
Functions
parseXml() would sometimes only partially extract text elements when the text contained newline characters. This issue has now been fixed.
Live queries using Field Aliasing on a repository with Tag Groupings enabled could fail. This issue has now been fixed.
Long-running queries using window() could end up never completing. This issue has now been fixed.
Falcon LogScale 1.146.0 GA (2024-07-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.146.0 | GA | 2024-07-09 | Cloud | 2025-09-30 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly. This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following API endpoints are deprecated and marked for removal in 1.148.0:
POST
/api/v1/clusterconfig/kafka-queues/partition-assignment
GET
/api/v1/clusterconfig/kafka-queues/partition-assignment
POST
/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.
We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release where the server.tar.gz artifact is included will be 1.154.0.
We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.
Better alternatives are available going forward. We recommend the following:
If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.
The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.
New features and improvements
Automation and Alerts
A maximum limit of 1 week has been added on the throttle period for Filter Alerts and Standard Alerts. Any existing alert with a higher throttle time will continue to run, but when edited, lowering the throttle time to 1 week at most will be required.
GraphQL API
The getFileContent() and newFile() GraphQL endpoint responses will change for empty files. The return type is still UploadedFileSnapshot!, but the lines field will be changed to return [] when the file is empty. Previously, the return value was a list containing an empty list, [[]]. This change applies both for empty files and when the provided filter string doesn't match any rows in the file.
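Client code that still expects the legacy shape can normalize both forms. The following is a hypothetical client-side shim, not part of the LogScale API:

```python
def normalize_lines(lines):
    """Map the legacy empty-file value [[]] to the new empty value []."""
    # After this change the API itself returns [] for empty files and
    # non-matching filter strings; this shim only smooths over responses
    # from older LogScale versions.
    if lines == [[]]:
        return []
    return lines
```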
Storage
An alternative S3 client is now available and enabled by default. It handles file uploads more efficiently, by setting the Content-MD5 header during upload thus allowing S3 to perform file validation instead of having LogScale do it via post-upload validation steps. This form of validation should work for all uploads, including when server-side encryption is enabled. The new S3 client only supports this validation mode, so setting the following variables will have no effect:
In case of issues, the S3 client can be disabled by setting USE_AWS_SDK=false, which will set LogScale back to the previous default client. Should you need to do this, please reach out to Support to have the issue addressed, because the previous client will eventually be deprecated and removed.
API
Support for array and object handling in the
fields
object has been added for Ingesting with HTTP Event Collector (HEC) events.
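As an illustration of the new capability, a HEC event payload whose fields object carries an array and a nested object might look like the following (all field names and values are invented for the example):

```json
{
  "event": "user login",
  "fields": {
    "tags": ["prod", "eu-west-1"],
    "context": { "host": "web-01", "tier": "frontend" }
  }
}
```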
Fixed in this release
UI Changes
The event histogram would not adhere to the timezone selected for the query.
GraphQL API
The getFileContent() GraphQL endpoint will now return an
UploadedFileSnapshot!
datatype with the field totalLinesCount: 0 when a file has no matches for a given filter string. Previously it would return the total number of lines in the file.
API
The fields object did not show up in Ingesting with HTTP Event Collector (HEC) events. This issue has now been fixed.
Functions
Parsing the empty string as a number could lead to errors causing the query to fail (in the formatTime() function, for example). This issue has now been fixed.
Falcon LogScale 1.145.0 GA (2024-07-02)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.145.0 | GA | 2024-07-02 | Cloud | 2025-09-30 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly. This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following API endpoints are deprecated and marked for removal in 1.148.0:
POST
/api/v1/clusterconfig/kafka-queues/partition-assignment
GET
/api/v1/clusterconfig/kafka-queues/partition-assignment
POST
/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.
We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release where the server.tar.gz artifact is included will be 1.154.0.
We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.
Better alternatives are available going forward. We recommend the following:
If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.
The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.
New features and improvements
UI Changes
When exporting data to CSV, the Export to File dialog now offers the ability to select field names that are suggested based on the query results, or to select all fields in one click.
For more information, see Exporting Data.
When a file is referenced in a query, the Search page now shows a new tab next to the Results and Events tabs, bearing the name of the uploaded file. Activating the file tab will fetch the contents of the file and show them as a Table widget. Alternatively, if the file cannot be queried, a download link will be presented instead.
For more information, see Creating a File.
Automation and Alerts
Audit logs for Alerts and Scheduled Searches now contain the package, if installed from a package.
Audit logs for Filter Alerts now contain the language version of the alert query.
GraphQL API
The new startFromDateTime argument has been added to the s3ConfigureArchiving GraphQL mutation. When set, S3 Archiving does not consider segment files with a start time before this point in time. In particular, this allows enabling S3 archiving from a point in time going forward, without also archiving all older files.
Configuration
A new dynamic configuration variable
GraphQlDirectivesAmountLimit
has been added to restrict how many GraphQL directives can be in a query. Valid values are integers from 5 to 1,000. The default value is 25.
Functions
The new query function text:contains() is introduced. The function tests if a specific substring is present within a given string. It takes two arguments: string and substring, both of which can be provided as plain text, field values, or results of an expression.
For more information, see text:contains().
The new query function array:append() is introduced; it is used to append one or more values to an existing array, or to create a new array.
For more information, see array:append().
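The following sketch shows both functions in a query. The field names are invented, and the exact parameter syntax should be checked against the function reference pages, so treat the snippet as illustrative:

```
// Keep only events whose message field contains "timeout"
// (argument names as given in the release note: string and substring).
text:contains(string=message, substring="timeout")

// Append a value to an array field; parameter names sketched from the note.
| array:append(array="errors[]", values=["timeout"])
```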
Fixed in this release
UI Changes
A long list of large queries would break the queries' list under the Recent tab, preventing it from updating. The limit on recent queries has now been set to 30.
For more information, see Recalling Queries.
The dialog to quickly switch to another repository would open when pressing the undo hotkey on Windows machines. This wrong behavior has now been fixed.
It was not possible to sort by columns other than ID in the Cluster nodes table under the UI menu. This issue has now been fixed.
Automation and Alerts
Actions would show up as scheduled searches and vice versa when viewing the contents of a package. This issue has now been fixed.
The read-only alert page would wrongly report that actions were being throttled when a filter alert had disabled throttling. This issue has now been fixed.
GraphQL API
The background processing underlying the redactEvents() mutation would fail if the filter included tags. This error has now been fixed.
Storage
Notifying to Global Database about file changes could be slow. This issue has now been fixed.
Dashboards and Widgets
Arguments for parameters no longer used in a deleted query could be submitted anyway when invoking a saved query that uses the same arguments, thus generating an error. This issue has now been fixed.
Ingestion
A wrong order of the output events for parsers has been fixed: the output now returns events in the correct order.
Improvement
Storage
The global topic throughput has been improved for particular updates to segments in datasources with many segments.
For more information, see Global Database.
The segment merge span now varies by +/- 10% of the configured value, to avoid all segment targets switching to a new merge target at the same point in time.
Falcon LogScale 1.144.0 GA (2024-06-25)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.144.0 | GA | 2024-06-25 | Cloud | 2025-09-30 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly. This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following API endpoints are deprecated and marked for removal in 1.148.0:
POST
/api/v1/clusterconfig/kafka-queues/partition-assignment
GET
/api/v1/clusterconfig/kafka-queues/partition-assignment
POST
/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.
We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release where the server.tar.gz artifact is included will be 1.154.0.
We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.
Better alternatives are available going forward. We recommend the following:
If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.
The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Installation and Deployment
The default cleanup.policy for the transientChatter-events topic has been switched from compact to delete,compact. This change will not apply to existing clusters. Changing this setting to delete,compact via Kafka's command line tools is particularly recommended if transientChatter is taking up excessive space on disk, whereas it is less relevant in production environments where Kafka's disks tend to be large.
Configuration
When global publish to Kafka timed out from digester threads, the system would previously initiate a failure shutdown. Instead, from version 1.144 the system retries the publish to the Global Database indefinitely for those specific global transactions that originate in a digester thread. If retries occur, they are logged with the error executeTransactionRetryingOnTimeout: unable to execute transaction for global, retrying.
New features and improvements
Automation and Alerts
Two new GraphQL fields have been added in the ScheduledSearch datatype:
lastExecuted will hold the timestamp of the end of the search interval on the last scheduled search run.
lastTriggered will hold the timestamp of the end of the search interval on the last scheduled search run that found results and triggered actions.
These two new fields are now also displayed in the Scheduled Searches user interface.
For more information, see Last Executed and Last Triggered Scheduled Search.
GraphQL API
The log line containing Executed GraphQL query in the humio repository, which is logged for every GraphQL call, now contains the names of the mutations and queries that are executed.
Storage
Support for bucket storage upload validation has changed. LogScale now supports the following three validation modes:
Checking the ETag HTTP response header on the upload response. This mode is the default, and can be opted out of via the BUCKET_STORAGE_IGNORE_ETAG_UPLOAD configuration parameter.
Checking the ETag HTTP response header on a HEAD request done for the uploaded file. This is the second preferred mode, and can be opted out of via the BUCKET_STORAGE_IGNORE_ETAG_AFTER_UPLOAD configuration parameter.
Downloading the file that was uploaded, in order to validate the checksum file. This mode is enabled if neither of the other modes is enabled.
Previous validation modes that did not compare checksums have been removed, as they were not reliable indicators of the uploaded file integrity.
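The preference order above can be sketched as a small decision function. The variable names mirror the configuration parameters, and the mode labels are invented; this is an illustration, not LogScale's implementation:

```python
def choose_validation_mode(ignore_etag_upload: bool,
                           ignore_etag_after_upload: bool) -> str:
    """Pick the bucket-upload validation mode in the documented preference order."""
    if not ignore_etag_upload:
        # Default: compare the ETag returned on the upload response.
        return "etag-on-upload"
    if not ignore_etag_after_upload:
        # Second preference: issue a HEAD request and compare its ETag.
        return "etag-via-head"
    # Fallback: download the uploaded file and verify its checksum file.
    return "download-and-verify"
```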
Fixed in this release
UI Changes
When managing sessions within an organization, it was not possible to sort active sessions by the Last active timestamp column. This issue has now been fixed.
Falcon LogScale 1.143.0 GA (2024-06-18)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.143.0 | GA | 2024-06-18 | Cloud | 2025-09-30 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly. This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Removed
Items that have been removed as of this release.
Other
The unnecessary digest-coordinator-changes and desired-digest-coordinator-changes metrics have been removed. Instead, the logging in the IngestPartitionCoordinator class has been improved to allow monitoring of when reassignment of desired and current digesters happens, by searching for Wrote changes to desired digest partitions / Wrote changes to current digest partitions.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following API endpoints are deprecated and marked for removal in 1.148.0:
POST
/api/v1/clusterconfig/kafka-queues/partition-assignment
GET
/api/v1/clusterconfig/kafka-queues/partition-assignment
POST
/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.
We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release where the server.tar.gz artifact is included will be 1.154.0.
We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.
Better alternatives are available going forward. We recommend the following:
If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.
The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The minimum version of Java compatible with LogScale is now 21. Docker users, and users installing the release artifacts that bundle the JDK, are not affected.
It is recommended to switch to the release artifacts that bundle a JDK, because LogScale no longer supports bringing your own JDK as of release 1.138. See Falcon LogScale 1.138.0 GA (2024-05-14).
New features and improvements
Security
When extending Retention span or size, any segments that were marked for deletion (but where the files remain in the system) are automatically resurrected. How much data you reclaim this way depends on the backupAfterMillis configuration on the repository.
For more information, see Audit Logging.
GraphQL API
The new environmentVariableUsage() GraphQL API has been introduced for listing non-secret environment variables used by a node. This is intended as an aid to help do configuration discovery when managing a large number of LogScale clusters.
The new concatenateQueries() GraphQL API has been introduced for programmatically concatenating multiple queries into one. This is intended to eliminate errors that might occur if queries are combined naively.
The preview tag has been removed from the following GraphQL mutations:
DeleteIngestFeed
resetQuota
testAwsS3SqsIngestFeed
Fixed in this release
UI Changes
In the Export to File dialog, when using the keyboard to switch between options, a different item than the one selected was highlighted. This issue has now been fixed.
Storage
Digest threads could fail to start digesting if global is very large, and if writing to global is slow. This issue has now been fixed.
Falcon LogScale 1.142.4 LTS (2024-12-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.142.4 | LTS | 2024-12-17 | Cloud | 2025-07-31 | No | 1.112 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | 46e03b9a1ede2060d3c8bf9a25b911f6 |
SHA1 | 886af9087b98b610c920f83febce4c63c8d88c5d |
SHA256 | b291f2475cddd3dc725c4ee3eb8de07358ed6ce419ae80a0d7be601f54af3b1f |
SHA512 | 5099f6aa1db5bd7b07fe4e9d4b9896066e79a107bf7137915b8366e1f3507e1023722dc726f64ab227df0d9a12870291ff8effa11ebc42787a4cebf545d09a70 |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 22 | 03744c0915c08858e830b97cd378ae4ff99aadbcf48a04577be980fc1566c199 |
humio-core | 22 | 56c3c63c56bc1326f98712d0e2ea989352dc555684d8e6ec55694c0c18ad6aa7 |
kafka | 22 | e42e0305c854d26a4adc09b26bbf77bb1383e56afe93005975efbf8756c09996 |
zookeeper | 22 | 32114da378502a98f093bf21dfb1d2e435916a654e86ac0e92ac1ea383757b3a |
Download
These notes include entries from the following previous releases: 1.142.1, 1.142.3
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Functions
The `limit` parameter has been added to the `rdns()` function. It is controlled by the dynamic configurations `RdnsMaxLimit` and `RdnsDefaultLimit`. This is a breaking change due to incidents caused by the large implicit limit used before. For more information, see `rdns()`.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage is configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users who need to configure the core count manually should set the `CORES=n` environment variable instead, which causes the launcher to configure both LogScale and the JVM properly. This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following API endpoints are deprecated and marked for removal in 1.148.0:
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment`
GET `/api/v1/clusterconfig/kafka-queues/partition-assignment`
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults`
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the `bin/kafka-reassign-partitions.sh` and `bin/kafka-topics.sh` scripts that ship with the Kafka installation.
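As a sketch of the replacement workflow, the stock Kafka scripts can describe and reassign partitions directly. The bootstrap address, the `humio-ingest` topic name, and the `reassign.json` file name are illustrative assumptions; adjust them to your cluster:

```shell
# Inspect the current partition layout of the ingest queue topic:
bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --describe --topic humio-ingest

# Apply a reassignment plan prepared in a JSON file:
bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
  --reassignment-json-file reassign.json --execute
```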
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz` artifacts, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead. We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release to include the `server.tar.gz` artifact will be 1.154.0.
We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:
If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.
The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need to manually set parameters in this variable, so it is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
API
It is no longer possible to revive a query by polling it after it has been stopped.
For more information, see Running Query Jobs.
Other
LogScale deletes `humiotmp` directories when gracefully shut down, but this could cause `tmp` directories to leak if LogScale crashed. LogScale now also deletes these directories on startup.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The bundled JDK is upgraded to 22.0.2.
The Kafka client has been upgraded to 3.7.0. The Kafka server version in the deprecated humio/kafka Docker image is also upgraded to 3.7.0.
Bundled JDK upgraded to 22.0.1.
The JDK has been upgraded to 23.0.1.
New features and improvements
Installation and Deployment
Changing the `NODE_ROLES` of a host is now forbidden. A host will crash if the role it is configured to have doesn't match what is listed in global for that host. To change the role of a host in a cluster, instead remove that host from the cluster by unregistering it, wipe the host's data directory, and boot the node back into the cluster as if it were a completely new node. The node will be assigned a new vhost identifier when doing this.
Unused modules have been removed from the JDK bundled with LogScale releases, reducing the size of release artifacts.
UI Changes
Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.
Layout changes have been made in the `Connections` UI page. For more information, see Connections.
The maximum limit for saved query names has been set to 200 characters.
The warnings for numbers out of the browser's safe number range have been slightly modified.
For more information, see Troubleshooting: UI Warning: The actual value is different from what is displayed.
A new Event List column type has been added. It formats all fields in the event as key-value pairs by grouping a field list by prefix. For more information, see Column Properties.
Automation and Alerts
Scheduled Reports can now be created. Scheduled Reports generate reports directly from dashboards and send them to the selected email addresses on a regular schedule.
For more information, see Scheduled PDF Reports.
Two new GraphQL fields have been added to the `ScheduledSearch` datatype:
lastExecuted will hold the timestamp of the end of the search interval on the last scheduled search run.
lastTriggered will hold the timestamp of the end of the search interval on the last scheduled search run that found results and triggered actions.
These two new fields are now also displayed in the `Scheduled Searches` user interface. For more information, see Last Executed and Last Triggered Scheduled Search.
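A GraphQL query reading the new fields might look like the following sketch. Only the two field names come from this release note; the surrounding `searchDomain`/`scheduledSearches` selection path is an assumption to verify against your cluster's schema:

```graphql
query {
  searchDomain(name: "my-repo") {
    scheduledSearches {
      name
      lastExecuted
      lastTriggered
    }
  }
}
```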
GraphQL API
A new unsetDynamicConfig GraphQL mutation is introduced to unset dynamic configurations.
Added a new GraphQL API generateParserFromTemplate() for decoding a parser YAML template without installing it.
API
Upgrade to the latest Jakarta Mail API to prevent a warning message from being logged about a missing mail configuration file.
Information about files used in a query is now added to the query result returned by the API.
Configuration
The `EXACT_MATCH_LIMIT` configuration has been removed. It is no longer needed, since files are limited by size instead of rows.
When `UNSAFE_RELAX_MULTI_CLUSTER_PROTOCOL_VERSION_CHECK` is set to ensure Multi-Cluster Compatibility Across Versions, attempting to search in clusters older than version 1.131.2 is not allowed and a UI message will now be displayed.
A new `QueryBacktrackingLimit` dynamic configuration is available through GraphQL as experimental. It allows limiting how many times a query iterates over individual events (which may happen with excessive use of the `copyEvent()`, `join()`, and `split()` functions, or `regex()` with repeat flags). The default for this limit is 3,000 and can be modified with the dynamic configuration. At present, the feature flag leaves this limit off by default.
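Assuming the standard dynamic-configuration mutation (a sketch; the mutation shape and enum value should be verified against your cluster's GraphQL schema), the limit could be adjusted like this:

```graphql
mutation {
  setDynamicConfig(input: {config: QueryBacktrackingLimit, value: "5000"})
}
```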
Dashboards and Widgets
A parameter panel widget type has been added to allow users to drag parameters from the top panel into these panels. Parameter `width` is now also adjustable in the settings. For more information, see Parameter Panel Widget.
Ingestion
Self-hosted only: derived tags (like `#repo`) are now included when executing Event Forwarding Rules. These fields will be included in the forwarded events unless filtered out by `select()` or `drop(#repo)` in the rule.
Audit logs related to Event Forwarders no longer include the properties of the event forwarder.
Event forwarder disablement is now audit logged with type disable instead of enable.
Parser assertions can now be written to and loaded from YAML files, using the V3 parser format.
Log Collector
Fleet Management now supports ephemeral hosts. If a collector is enrolled with the parameter `--ephemeralTimeout`, after being offline for the specified duration in hours it will disappear from the `Fleet Overview` interface and be unenrolled. The feature requires LogScale Collector version 1.7.0 or above.
Live and Historic options for `Fleet Overview` are introduced. When Live, the overview shows online collectors and is continuously updated with, for example, new CPU metrics or status changes. The Historic view displays all records of collectors for the last 30 days; in this case the overview is not updated with new information. For more information, see Switching between Live and Historic overview.
Functions
The `onlyTrue` parameter has been added to the `bitfield:extractFlags()` query function; it allows outputting only flags whose value is `true`. For more information, see `bitfield:extractFlags()`.
`array:filter()` has been fixed, as performing a filter test on an array field output from this function would sometimes lead to no results.
The query editor now gives warnings about certain regex constructs that are valid but suboptimal: specifically, quantified wildcards at the beginning or end of an unanchored regex.
Multi-valued arguments can now be passed to a saved query.
For more information, see User Functions (Saved Searches).
Other
A new metric `max_ingest_delay` is introduced to keep track of the current maximum ingest delay across all Kafka partitions.
Two new metrics have been introduced: `internal-throttled-poll-rate` keeps track of the number of times polling workers during query execution were throttled due to rate limiting; `internal-throttled-poll-wait-time` keeps track of maximum delays per poll round due to rate limiting.
Fixed in this release
Storage
Taking nodes offline in a cluster that does not use bucket storage could prevent cleanup of mini-segments associated with merge targets owned by the offline nodes, causing global to grow. To solve this, the cluster now moves merge targets that have not yet achieved full replication to follow digest nodes.
The Did not query segment error spuriously appearing when the cluster performs digest reassignment has now been fixed.
The file synchronization job would stop if upload to bucket storage fails. This issue has been fixed.
Dashboards and Widgets
Shared dashboards created on the special humio-search-all view wouldn't load correctly. This issue has now been fixed.
The execution of dashboard parameter queries has been changed to only run as live when the dashboard itself is live.
Dragging a parameter to an empty Parameter Panel Widget would sometimes not move the parameter. This issue has been fixed.
Functions
The query editor has been fixed as field auto-completions would sometimes not be suggested.
The query editor would mark the entire query as erroneous when `count()` was given the `distinct=true` parameter but was missing an argument for the `field` parameter. This issue has been fixed.
Live queries using Field Aliasing on a repository with Tag Groupings enabled could fail. This issue has now been fixed.
The `time:xxx()` functions have been fixed, as they did not correctly use the query's time zone as default. The offset was applied in the opposite direction, such that for example GMT+2 was applied as GMT-2. This has now been fixed.
Other
A regression introduced in version 1.132 has been fixed, where a file name starting with `shared/` would be recognized as a shared file instead of a regular file. However, a shared file should be referred to using exactly `/shared/` as a prefix.
Fixed a very rare edge case that could cause creation of malformed entities in global when a nested entity — such as a datasource — was deleted.
Improvement
UI Changes
When a saved query is used, the query editor will display the query string when hovering over it.
Storage
Logging improvements have been made around bucket uploads to assist with troubleshooting slow uploads, which are only seen in clusters with very large data sets.
Packages
Package installation now validates that there are no duplicate names for each package template type (for example, you cannot use the same name for multiple parsers that are part of the same package).
Falcon LogScale 1.142.3 LTS (2024-08-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.142.3 | LTS | 2024-08-23 | Cloud | 2025-07-31 | No | 1.112 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | 7285214a4a6c2f39a3228a27be561436 |
SHA1 | 1a31d2308e2711329e47b7491432415461bcdef8 |
SHA256 | d82c531d9eabaafb2aa8a03e543a816e330ce2e24382f15daa35e4ee2a7051b4 |
SHA512 | dd00d62eb3d5bbf35fdab9506e7849c0daa517dd9c4445d230439006551372011037d22d93c372aa277b604753a224e26f555e8d5423e2a466b3a425ed362da0 |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 22 | 25c1361f8b3bf3421541a1a9af6996f638853db50bea1235916f28a16987a2b7 |
humio-core | 22 | 34deeaba55a91180f34289b12016c2187f529613d7a8d17aa27f62760052c21e |
kafka | 22 | a15f53a1d94904b35828125449c4ed769eedaf66abb976c0c6dcf8c6a3038ac8 |
zookeeper | 22 | 54111cfa77e2fcc7c013c6da4a1ef0293e2c3d63a7edd642d74b060531016fab |
Download
These notes include entries from the following previous releases: 1.142.1
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Functions
The `limit` parameter has been added to the `rdns()` function. It is controlled by the dynamic configurations `RdnsMaxLimit` and `RdnsDefaultLimit`. This is a breaking change due to incidents caused by the large implicit limit used before. For more information, see `rdns()`.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage is configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users who need to configure the core count manually should set the `CORES=n` environment variable instead, which causes the launcher to configure both LogScale and the JVM properly. This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following API endpoints are deprecated and marked for removal in 1.148.0:
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment`
GET `/api/v1/clusterconfig/kafka-queues/partition-assignment`
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults`
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the `bin/kafka-reassign-partitions.sh` and `bin/kafka-topics.sh` scripts that ship with the Kafka installation.
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz` artifacts, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead. We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release to include the `server.tar.gz` artifact will be 1.154.0.
We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:
If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.
The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need to manually set parameters in this variable, so it is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
API
It is no longer possible to revive a query by polling it after it has been stopped.
For more information, see Running Query Jobs.
Other
LogScale deletes `humiotmp` directories when gracefully shut down, but this could cause `tmp` directories to leak if LogScale crashed. LogScale now also deletes these directories on startup.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The bundled JDK is upgraded to 22.0.2.
The Kafka client has been upgraded to 3.7.0. The Kafka server version in the deprecated humio/kafka Docker image is also upgraded to 3.7.0.
Bundled JDK upgraded to 22.0.1.
New features and improvements
Installation and Deployment
Changing the `NODE_ROLES` of a host is now forbidden. A host will crash if the role it is configured to have doesn't match what is listed in global for that host. To change the role of a host in a cluster, instead remove that host from the cluster by unregistering it, wipe the host's data directory, and boot the node back into the cluster as if it were a completely new node. The node will be assigned a new vhost identifier when doing this.
Unused modules have been removed from the JDK bundled with LogScale releases, reducing the size of release artifacts.
UI Changes
Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.
Layout changes have been made in the `Connections` UI page. For more information, see Connections.
The maximum limit for saved query names has been set to 200 characters.
The warnings for numbers out of the browser's safe number range have been slightly modified.
For more information, see Troubleshooting: UI Warning: The actual value is different from what is displayed.
A new Event List column type has been added. It formats all fields in the event as key-value pairs by grouping a field list by prefix. For more information, see Column Properties.
Automation and Alerts
Scheduled Reports can now be created. Scheduled Reports generate reports directly from dashboards and send them to the selected email addresses on a regular schedule.
For more information, see Scheduled PDF Reports.
Two new GraphQL fields have been added to the `ScheduledSearch` datatype:
lastExecuted will hold the timestamp of the end of the search interval on the last scheduled search run.
lastTriggered will hold the timestamp of the end of the search interval on the last scheduled search run that found results and triggered actions.
These two new fields are now also displayed in the `Scheduled Searches` user interface. For more information, see Last Executed and Last Triggered Scheduled Search.
GraphQL API
A new unsetDynamicConfig GraphQL mutation is introduced to unset dynamic configurations.
Added a new GraphQL API generateParserFromTemplate() for decoding a parser YAML template without installing it.
API
Upgrade to the latest Jakarta Mail API to prevent a warning message from being logged about a missing mail configuration file.
Information about files used in a query is now added to the query result returned by the API.
Configuration
The `EXACT_MATCH_LIMIT` configuration has been removed. It is no longer needed, since files are limited by size instead of rows.
When `UNSAFE_RELAX_MULTI_CLUSTER_PROTOCOL_VERSION_CHECK` is set to ensure Multi-Cluster Compatibility Across Versions, attempting to search in clusters older than version 1.131.2 is not allowed and a UI message will now be displayed.
A new `QueryBacktrackingLimit` dynamic configuration is available through GraphQL as experimental. It allows limiting how many times a query iterates over individual events (which may happen with excessive use of the `copyEvent()`, `join()`, and `split()` functions, or `regex()` with repeat flags). The default for this limit is 3,000 and can be modified with the dynamic configuration. At present, the feature flag leaves this limit off by default.
Dashboards and Widgets
A parameter panel widget type has been added to allow users to drag parameters from the top panel into these panels. Parameter `width` is now also adjustable in the settings. For more information, see Parameter Panel Widget.
Ingestion
Self-hosted only: derived tags (like `#repo`) are now included when executing Event Forwarding Rules. These fields will be included in the forwarded events unless filtered out by `select()` or `drop(#repo)` in the rule.
Audit logs related to Event Forwarders no longer include the properties of the event forwarder.
Event forwarder disablement is now audit logged with type disable instead of enable.
Parser assertions can now be written to and loaded from YAML files, using the V3 parser format.
Log Collector
Fleet Management now supports ephemeral hosts. If a collector is enrolled with the parameter `--ephemeralTimeout`, after being offline for the specified duration in hours it will disappear from the `Fleet Overview` interface and be unenrolled. The feature requires LogScale Collector version 1.7.0 or above.
Live and Historic options for `Fleet Overview` are introduced. When Live, the overview shows online collectors and is continuously updated with, for example, new CPU metrics or status changes. The Historic view displays all records of collectors for the last 30 days; in this case the overview is not updated with new information. For more information, see Switching between Live and Historic overview.
Functions
The `onlyTrue` parameter has been added to the `bitfield:extractFlags()` query function; it allows outputting only flags whose value is `true`. For more information, see `bitfield:extractFlags()`.
`array:filter()` has been fixed, as performing a filter test on an array field output from this function would sometimes lead to no results.
The query editor now gives warnings about certain regex constructs that are valid but suboptimal: specifically, quantified wildcards at the beginning or end of an unanchored regex.
Multi-valued arguments can now be passed to a saved query.
For more information, see User Functions (Saved Searches).
Other
A new metric `max_ingest_delay` is introduced to keep track of the current maximum ingest delay across all Kafka partitions.
Two new metrics have been introduced: `internal-throttled-poll-rate` keeps track of the number of times polling workers during query execution were throttled due to rate limiting; `internal-throttled-poll-wait-time` keeps track of maximum delays per poll round due to rate limiting.
Fixed in this release
Storage
Taking nodes offline in a cluster that does not use bucket storage could prevent cleanup of mini-segments associated with merge targets owned by the offline nodes, causing global to grow. To solve this, the cluster now moves merge targets that have not yet achieved full replication to follow digest nodes.
The Did not query segment error spuriously appearing when the cluster performs digest reassignment has now been fixed.
The file synchronization job would stop if upload to bucket storage fails. This issue has been fixed.
Dashboards and Widgets
Shared dashboards created on the special humio-search-all view wouldn't load correctly. This issue has now been fixed.
The execution of dashboard parameter queries has been changed to only run as live when the dashboard itself is live.
Dragging a parameter to an empty Parameter Panel Widget would sometimes not move the parameter. This issue has been fixed.
Functions
The query editor has been fixed as field auto-completions would sometimes not be suggested.
The query editor would mark the entire query as erroneous when `count()` was given the `distinct=true` parameter but was missing an argument for the `field` parameter. This issue has been fixed.
Live queries using Field Aliasing on a repository with Tag Groupings enabled could fail. This issue has now been fixed.
The `time:xxx()` functions have been fixed, as they did not correctly use the query's time zone as default. The offset was applied in the opposite direction, such that for example GMT+2 was applied as GMT-2. This has now been fixed.
Other
A regression introduced in version 1.132 has been fixed, where a file name starting with `shared/` would be recognized as a shared file instead of a regular file. However, a shared file should be referred to using exactly `/shared/` as a prefix.
Fixed a very rare edge case that could cause creation of malformed entities in global when a nested entity — such as a datasource — was deleted.
Improvement
UI Changes
When a saved query is used, the query editor will display the query string when hovering over it.
Storage
Logging improvements have been made around bucket uploads to assist with troubleshooting slow uploads, which are only seen in clusters with very large data sets.
Packages
Package installation now validates that there are no duplicate names for each package template type (for example, you cannot use the same name for multiple parsers that are part of the same package).
Falcon LogScale 1.142.2 Internal (2024-07-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.142.2 | Internal | 2024-07-09 | Internal Only | 2025-07-31 | No | 1.112 | No |
Available for download two days after release.
Internal-only release.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage is configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users who need to configure the core count manually should set the `CORES=n` environment variable instead, which causes the launcher to configure both LogScale and the JVM properly. This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following API endpoints are deprecated and marked for removal in 1.148.0:
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment`
GET `/api/v1/clusterconfig/kafka-queues/partition-assignment`
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults`
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the `bin/kafka-reassign-partitions.sh` and `bin/kafka-topics.sh` scripts that ship with the Kafka installation.
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz` artifacts, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead. We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release to include the `server.tar.gz` artifact will be 1.154.0.
We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:
If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.
The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need to manually set parameters in this variable, so it is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
Falcon LogScale 1.142.1 LTS (2024-07-03)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.142.1 | LTS | 2024-07-03 | Cloud | 2025-07-31 | No | 1.112 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | 1a5dd967685b998da46afaed3c0fe18c |
SHA1 | 4b87496f773a8ac0c51e5b27f35de15475fc34fd |
SHA256 | b2fc87e706d02f48694caaf422f2700f9d178f56afe06e35a006ae1b8524a844 |
SHA512 | 62ed51ae91d7e4c2c9276a1473ae26303ba89f36dece7f7ffbbb09d169c52b219ef7f79a3886c60cb9163823c8564feda3b58bfec23cc25b9abf107fbc7308a5 |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 22 | 04af3a13ac01a9278105b223bc61639b20c735439fc9a131d49ec240cd50bc26 |
humio-core | 22 | a3868201a659cccb6bf44e0aedc18de6789938ac1e500b49aebc9362ec106759 |
kafka | 22 | 30bff675f267171b99046d68419429f3b78e0e258282feade9bae1d726100b92 |
zookeeper | 22 | cc49c209a4de0de071e0be5bba530c6f39b012b0183f444daf2b73ea56cae646 |
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Functions
The `limit` parameter has been added to the `rdns()` function. It is controlled by the dynamic configurations `RdnsMaxLimit` and `RdnsDefaultLimit`. This addition is a breaking change, made due to incidents caused by the large implicit limit used previously.

For more information, see `rdns()`.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users who need to configure the core count manually should set the `CORES=n` environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following API endpoints are deprecated and marked for removal in 1.148.0:
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment`

GET `/api/v1/clusterconfig/kafka-queues/partition-assignment`

POST `/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults`
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
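As a sketch of the migration path, Kafka's own reassignment tool consumes a JSON plan file. The snippet below builds such a plan; the topic name, partition numbers, and broker IDs are illustrative assumptions, not LogScale defaults.

```python
import json

# Hedged sketch: build the JSON plan file that bin/kafka-reassign-partitions.sh
# consumes. "humio-ingest" and the broker IDs are hypothetical examples.
def build_reassignment(topic, assignments):
    """assignments maps partition number -> list of broker IDs (replicas)."""
    return {
        "version": 1,
        "partitions": [
            {"topic": topic, "partition": p, "replicas": replicas}
            for p, replicas in sorted(assignments.items())
        ],
    }

plan = build_reassignment("humio-ingest", {0: [1, 2, 3], 1: [2, 3, 1]})

# The command you would then run (shown as a list; not executed here):
cmd = [
    "bin/kafka-reassign-partitions.sh",
    "--bootstrap-server", "localhost:9092",
    "--reassignment-json-file", "reassign.json",
    "--execute",
]

print(json.dumps(plan, indent=2))
```

Writing the plan to a file and passing it with `--execute` replaces what the deprecated endpoints did through the LogScale API.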
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz` artifacts, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

We are making this change for the following reasons:

By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.

By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

The last release to include the `server.tar.gz` artifact will be 1.154.0.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:
API
It is no longer possible to revive a query by polling it after it has been stopped.
For more information, see Running Query Jobs.
Other
LogScale deletes its temporary (`tmp`) directories when gracefully shut down, but a crash can cause these directories to leak. LogScale now also deletes these directories on startup.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The Kafka client has been upgraded to 3.7.0. The Kafka server version in the deprecated humio/kafka Docker image is also upgraded to 3.7.0.
Bundled JDK upgraded to 22.0.1.
New features and improvements
Installation and Deployment
Changing the `NODE_ROLES` of a host is now forbidden. A host will now crash if its configured role doesn't match what is listed in global for that host. To change the role of a host in a cluster, instead remove that host from the cluster by unregistering it, wipe the host's data directory, and boot the node back into the cluster as if it were a completely new node. The node will be assigned a new vhost identifier when doing this.

Unused modules have been removed from the JDK bundled with LogScale releases, reducing the size of release artifacts.
UI Changes
Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.
Layout changes have been made to the Connections UI page.

For more information, see Connections.
The maximum limit for saved query names has been set to 200 characters.
The warnings for numbers out of the browser's safe number range have been slightly modified.
For more information, see Troubleshooting: UI Warning: The actual value is different from what is displayed.
A new Event List column type has been added. It formats all fields in the event as key-value pairs by grouping a field list by prefix.

For more information, see Column Properties.
Automation and Alerts
Scheduled Reports can now be created. Scheduled Reports generate reports directly from dashboards and send them to the selected email addresses on a regular schedule.
For more information, see Scheduled PDF Reports.
GraphQL API
A new unsetDynamicConfig GraphQL mutation is introduced to unset dynamic configurations.
Added a new GraphQL API generateParserFromTemplate() for decoding a parser YAML template without installing it.
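A mutation such as unsetDynamicConfig above is invoked as a standard GraphQL POST body. The sketch below is a hedged illustration of building that payload; the argument name (`name`) and the input shape are assumptions, not the documented schema.

```python
import json

# Hedged sketch: construct a GraphQL request body for a mutation like
# unsetDynamicConfig. The input field name ("name") is an assumption for
# illustration; consult the GraphQL schema for the real signature.
def graphql_body(config_name):
    return json.dumps({
        "query": "mutation($name: String!) { unsetDynamicConfig(input: {name: $name}) }",
        "variables": {"name": config_name},
    })

body = json.loads(graphql_body("QueryBacktrackingLimit"))
```

The serialized body would be POSTed to the cluster's GraphQL endpoint with a bearer token in the usual way.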
API
Upgrade to the latest Jakarta Mail API to prevent a warning message from being logged about a missing mail configuration file.
Information about files used in a query is now added to the query result returned by the API.
Configuration
The `EXACT_MATCH_LIMIT` configuration has been removed. It is no longer needed, since files are limited by size instead of rows.

When `UNSAFE_RELAX_MULTI_CLUSTER_PROTOCOL_VERSION_CHECK` is set to ensure Multi-Cluster Compatibility Across Versions, attempting to search in clusters older than version 1.131.2 is not allowed and a UI message will now be displayed.

A new `QueryBacktrackingLimit` dynamic configuration is available through GraphQL as experimental. It limits how many times a query may iterate over individual events (which may happen with excessive use of the `copyEvent()`, `join()` and `split()` functions, or `regex()` with repeat flags). The default for this limit is 3,000 and can be modified with the dynamic configuration. At present, the feature flag sets this limit off by default.
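The per-event iteration budget described above can be modeled in a few lines. This is an illustrative sketch of the concept, not LogScale internals: execution aborts once any single event has been visited more than the configured limit.

```python
# Illustrative model of a QueryBacktrackingLimit-style budget (assumption:
# "iterations over individual events" counted per event, aborting past the limit).
class BacktrackingLimitExceeded(Exception):
    pass

def run_passes(events, passes, limit=3000):
    visits = [0] * len(events)
    for _ in range(passes):
        for i, _event in enumerate(events):
            visits[i] += 1
            if visits[i] > limit:
                raise BacktrackingLimitExceeded(
                    f"event {i} visited {visits[i]} times (limit {limit})"
                )
    return visits

# Within the limit: every event is visited once per pass.
counts = run_passes(["a", "b"], passes=5)
```

A query that revisits the same event more than `limit` times, as an excessive `join()` or `split()` might, would be stopped rather than allowed to run away.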
Dashboards and Widgets
A parameter panel widget type has been added to allow users to drag parameters from the top panel into these panels. A parameter's `width` is now also adjustable in the settings.

For more information, see Parameter Panel Widget.
Ingestion
Self-hosted only: derived tags (like `#repo`) are now included when executing Event Forwarding Rules. These fields will be included in the forwarded events unless filtered by `select()` or `drop(#repo)` in the rule.

Audit logs related to Event Forwarders no longer include the properties of the event forwarder.
Event forwarder disablement is now audit logged with type disable instead of enable.
The parser assertions can now be written and loaded to YAML files, using the V3 parser format.
Log Collector
Fleet Management now supports ephemeral hosts. If a collector is enrolled with the `--ephemeralTimeout` parameter, after being offline for the specified duration in hours it will disappear from the Fleet Overview interface and be unenrolled. This feature requires LogScale Collector version 1.7.0 or above.

Live and Historic options for Fleet Overview are introduced. When Live, the overview shows online collectors and is continuously updated with, for example, new CPU metrics or status changes. The Historic view displays all records of collectors for the last 30 days; in this case the overview is not updated with new information.

For more information, see Switching between Live and Historic overview.
Functions
The `onlyTrue` parameter has been added to the `bitfield:extractFlags()` query function. It allows outputting only flags whose value is `true`.

For more information, see `bitfield:extractFlags()`.

`array:filter()` has been fixed, as performing a filter test on an array field output from this function would sometimes lead to no results.

The query editor now gives warnings about certain regex constructs that are valid but suboptimal; specifically, quantified wildcards at the beginning or end of an (unanchored) regex.
Multi-valued arguments can now be passed to a saved query.
For more information, see User Functions (Saved Searches).
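The `onlyTrue` behavior of `bitfield:extractFlags()` mentioned earlier in this section can be sketched as follows. This is a hypothetical model, not the LogScale implementation; the flag names are invented for illustration.

```python
# Hedged sketch: decode a bitfield into named flags and, when only_true is set,
# emit just the flags whose value is true. Flag names are illustrative.
def extract_flags(value, names, only_true=False):
    flags = {name: bool(value >> bit & 1) for bit, name in enumerate(names)}
    if only_true:
        flags = {k: v for k, v in flags.items() if v}
    return flags

print(extract_flags(0b101, ["READ", "WRITE", "EXEC"], only_true=True))
# → {'READ': True, 'EXEC': True}
```

Without `only_true`, all three flags would be emitted, including `WRITE: False`.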
Other
A new metric `max_ingest_delay` is introduced to keep track of the current maximum ingest delay across all Kafka partitions.

Two new metrics have been introduced: `internal-throttled-poll-rate` tracks the number of times polling workers were throttled during query execution due to rate limiting, and `internal-throttled-poll-wait-time` tracks maximum delays per poll round due to rate limiting.
Fixed in this release
Storage
Taking nodes offline in a cluster that does not use bucket storage could prevent cleanup of mini-segments associated with merge targets owned by the offline nodes, causing global to grow. To solve this, the cluster now moves merge targets that have not yet achieved full replication to follow digest nodes.
The Did not query segment error spuriously appearing when the cluster performs digest reassignment has now been fixed.
The file synchronization job would stop if upload to bucket storage fails. This issue has been fixed.
Dashboards and Widgets
The execution of dashboard parameter queries has been changed to only run as live when the dashboard itself is live.
Dragging a parameter to an empty Parameter Panel Widget would sometimes not move the parameter. This issue has been fixed.
Functions
An issue in the query editor has been fixed where field auto-completions would sometimes not be suggested.

The query editor would mark the entire query as erroneous when `count()` was given the `distinct=true` parameter but was missing an argument for the `field` parameter. This issue has been fixed.

The `time:xxx()` functions did not correctly use the query's time zone as the default: the offset was applied in the opposite direction, such that for example GMT+2 was applied as GMT-2. This has now been fixed.
Other
A regression introduced in version 1.132 has been fixed, where a file name starting with `shared/` would be recognized as a shared file instead of a regular file. A shared file should be referred to using exactly `/shared/` as a prefix.

Fixed a very rare edge case that could cause creation of malformed entities in global when a nested entity, such as a datasource, was deleted.
Improvement
UI Changes
When a saved query is used, the query editor will display the query string when hovering over it.
Storage
Logging improvements have been made around bucket uploads to assist with troubleshooting slow uploads, which are only seen in clusters with very large data sets.
Packages
Validation has been added to ensure there are no duplicate names for each package template type during package installation (for example, you cannot use the same name for multiple parsers that are part of the same package).
Falcon LogScale 1.142.0 GA (2024-06-11)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.142.0 | GA | 2024-06-11 | Cloud | 2025-07-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Functions
The `any` argument in `sort()` has been removed. Queries where `any` is explicitly set will be rejected. Please change the argument to either `number`, `hex` or `string`, depending on which option best fits the data your query operates on.

The following changes have been made to `sort()`:

It will no longer try to guess the type of the field values, and instead defaults to `number`.

The `number` and `hex` options have been redefined to be total orders: values of the given type are sorted according to their natural order, and those that could not be understood as the given type are sorted lexicographically. For instance, sorting the values `10`, `100`, `20`, `bcd`, `cde`, `abc` in ascending order with `number` will render: `10, 20, 100, abc, bcd, cde`
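The redefined total order can be sketched as follows. This is an illustrative model of the semantics described above, not LogScale internals: numeric values sort by numeric value first, and values that do not parse as numbers fall back to lexicographic order after them.

```python
# Illustrative model of sort() with type=number as a total order:
# parseable numbers first (natural order), then everything else lexicographically.
def number_sort_key(value):
    try:
        return (0, float(value), "")
    except ValueError:
        return (1, 0.0, value)

values = ["10", "100", "20", "bcd", "cde", "abc"]
result = sorted(values, key=number_sort_key)
print(result)
# → ['10', '20', '100', 'abc', 'bcd', 'cde']
```

This reproduces the example output given in the notes, with `10, 20, 100` ordered numerically rather than lexicographically.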
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users who need to configure the core count manually should set the `CORES=n` environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following API endpoints are deprecated and marked for removal in 1.148.0:
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment`

GET `/api/v1/clusterconfig/kafka-queues/partition-assignment`

POST `/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults`
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz` artifacts, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

We are making this change for the following reasons:

By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.

By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

The last release to include the `server.tar.gz` artifact will be 1.154.0.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Storage
When a digest leader exceeds the `PRIMARY_STORAGE_MAX_FILL_PERCENTAGE`, instead of pausing by releasing leadership of all partitions, it will pause while holding on to leadership.
New features and improvements
Security
The new `ManageViewConnections` Organization Administration permission has been added. It grants access to:

List all views and repositories
Create views linked to any repository
Update Connections of any existing view.
Installation and Deployment
NUMA support for the Docker images is now enabled:
The launcher script has been updated to set `-XX:+UseNUMA` in the default `HUMIO_JVM_PERFORMANCE_OPTS`.

The Docker images have been updated to include libnuma.so.1, which allows the JDK to optimize for NUMA hardware.
Dashboards and Widgets
Widget-level time selection can now be adjusted when a dashboard is used in view mode. This change adds flexibility in working with time on the dashboard and allows for easy comparative analysis on the fly.
For more information, see Widget Time Selector.
Fixed in this release
Storage
A fix has been made to reduce contention on loading `decompressMeta` in segment files, resulting in a performance improvement.

Pending merges of segments would contend with the verification of segments being transferred between nodes/buckets. This resulted in spuriously long transfer times, due to queueing of the verification step for the segment file. This issue has now been fixed.
Improvement
Storage
The amount of work required for the local segment verifier at boot of nodes has been reduced.
Falcon LogScale 1.141.0 GA (2024-06-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.141.0 | GA | 2024-06-04 | Cloud | 2025-07-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users who need to configure the core count manually should set the `CORES=n` environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Deprecation
Items that have been deprecated and may be removed in a future release.
The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown for queries that fall into either of these two cases:

If you are explicitly supplying an `any` argument, either simply remove both the parameter and the argument, for example change `sort(..., type=any)` to `sort(...)`, or supply the argument for `type` that corresponds to your data.

If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the `type` parameter to `hex`, e.g. `sort(..., type=hex)`.

In all other cases, no action is needed.

The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.

The following API endpoints are deprecated and marked for removal in 1.148.0:
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment`

GET `/api/v1/clusterconfig/kafka-queues/partition-assignment`

POST `/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults`
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz` artifacts, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

We are making this change for the following reasons:

By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.

By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

The last release to include the `server.tar.gz` artifact will be 1.154.0.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The mutation fails when a parser has more than 2,000 test cases, or when the test input in a single test case exceeds 40,000 characters.
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:
The mutation returns a boolean to represent success or failure, instead of a `Parser` wrapped in an object.

The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:

The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

sourceCode field, used to update the parser script, is changed to the script field, which takes an `UpdateParserScriptInput` object. This updates the parser script and the language version together.

tagFields field is renamed to fieldsToTag.

The languageVersion is located inside the `UpdateParserScriptInput` object, and is no longer an enum, but a LanguageVersionInputType instead.

The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the `Parser` type:

testData field is deprecated and replaced by testCases.

sourceCode field is deprecated and replaced by script.

tagFields field is deprecated and replaced by fieldsToTag.

For more information, see `Parser`, `DeleteParserInput`, `LanguageVersionInputType`, createParserV2(), testParserV2(), updateParserV2().
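The migration of old-style test data described above can be sketched as a small transformation: each old testData string becomes a test-case object wrapping the event input. The exact field names used below (`event`, `rawString`, `outputAssertions`) are assumptions for illustration, not the GraphQL schema; consult the createParserV2()/updateParserV2() documentation for the real input types.

```python
# Hypothetical sketch: wrap each old testData string in a structure mirroring
# "put the test string in the ParserTestEventInput inside the
# ParserTestCaseInput". Field names are illustrative assumptions.
def migrate_test_data(test_data):
    return [
        {"event": {"rawString": raw}, "outputAssertions": []}
        for raw in test_data
    ]

old_tests = ["2024-06-04T12:00:00Z level=info msg=hello", "level=warn msg=disk"]
test_cases = migrate_test_data(old_tests)
```

Leaving the assertions list empty reproduces the old behavior; the new API additionally allows asserting on the parser's output per test case.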
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Bundled JDK upgraded to 22.0.1.
New features and improvements
API
Upgrade to the latest Jakarta Mail API to prevent a warning message from being logged about a missing mail configuration file.
Configuration
When `UNSAFE_RELAX_MULTI_CLUSTER_PROTOCOL_VERSION_CHECK` is set to ensure Multi-Cluster Compatibility Across Versions, attempting to search in clusters older than version 1.131.2 is not allowed and a UI message will now be displayed.
Fixed in this release
Storage
The Did not query segment error spuriously appearing when the cluster performs digest reassignment has now been fixed.
Dashboards and Widgets
Dragging a parameter to an empty Parameter Panel Widget would sometimes not move the parameter. This issue has been fixed.
Functions
The `time:xxx()` functions did not correctly use the query's time zone as the default: the offset was applied in the opposite direction, such that for example GMT+2 was applied as GMT-2. This has now been fixed.
Other
A regression introduced in version 1.132 has been fixed, where a file name starting with `shared/` would be recognized as a shared file instead of a regular file. A shared file should be referred to using exactly `/shared/` as a prefix.
Improvement
Packages
Validation has been added to ensure there are no duplicate names for each package template type during package installation (for example, you cannot use the same name for multiple parsers that are part of the same package).
Falcon LogScale 1.140.0 GA (2024-05-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.140.0 | GA | 2024-05-28 | Cloud | 2025-07-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users who need to configure the core count manually should set the `CORES=n` environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Deprecation
Items that have been deprecated and may be removed in a future release.
The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown for queries that fall into either of these two cases:

If you are explicitly supplying an `any` argument, either simply remove both the parameter and the argument, for example change `sort(..., type=any)` to `sort(...)`, or supply the argument for `type` that corresponds to your data.

If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the `type` parameter to `hex`, e.g. `sort(..., type=hex)`.

In all other cases, no action is needed.

The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.

The following API endpoints are deprecated and marked for removal in 1.148.0:
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment`

GET `/api/v1/clusterconfig/kafka-queues/partition-assignment`

POST `/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults`
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
The
server.tar.gz
release artifact has been deprecated. Users should switch to theOS/architecture-specific server-linux_x64.tar.gz
orserver-alpine_x64.tar.gz
, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK, we will bundle one with releases instead.We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release where the `server.tar.gz` artifact is included will be 1.154.0.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:
If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0.

The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
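A sketch of a call to the new mutation under the migration described above. The top-level input field names are taken from this note; the exact set of required fields and the field names inside `ParserTestCaseInput`/`ParserTestEventInput` are assumptions, so consult the GraphQL schema before use.

```graphql
mutation {
  createParserV2(input: {
    name: "my-parser"                  # example values throughout
    repositoryName: "my-repo"
    script: "kvParse()"
    fieldsToTag: []
    fieldsToBeRemovedBeforeParsing: []
    # The old testData string goes into a ParserTestEventInput
    # inside a ParserTestCaseInput:
    testCases: [{ event: { rawString: "ts=2024-05-21 level=info" } }]
  }) {
    id
  }
}
```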
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:
The mutation returns a boolean to represent success or failure, instead of a `Parser` wrapped in an object.

The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:
The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and a list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

sourceCode field, used to update the parser script, is changed to the script field, which takes an `UpdateParserScriptInput` object. This updates the parser script and the language version together.

tagFields field is renamed to fieldsToTag.

The languageVersion is located inside the `UpdateParserScriptInput` object, and is no longer an enum, but a LanguageVersionInputType instead.

The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases; this is fixed in the new one.

The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the `Parser` type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
For more information, see `Parser`, `DeleteParserInput`, `LanguageVersionInputType`, createParserV2(), testParserV2(), updateParserV2().
New features and improvements
UI Changes
A new Event List column type has been added. It formats all fields in the event as key-value pairs by grouping a field list by prefix.

For more information, see Column Properties.
GraphQL API
Added a new GraphQL API generateParserFromTemplate() for decoding a parser YAML template without installing it.
API
Information about files used in a query is now added to the query result returned by the API.
Configuration
The `EXACT_MATCH_LIMIT` configuration has been removed. It is no longer needed, since files are limited by size instead of rows.
Functions
Multi-valued arguments can now be passed to a saved query.
For more information, see User Functions (Saved Searches).
Falcon LogScale 1.139.0 GA (2024-05-21)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.139.0 | GA | 2024-05-21 | Cloud | 2025-07-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users that need to configure the core count manually should set the `CORES=n` environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
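Once the change lands, configuring the core count via the launcher could look like the following sketch; `CORES` is the variable named above, while the launcher invocation path is an assumption.

```shell
# Set the core count for both LogScale and the JVM via the launcher,
# instead of passing -XX:ActiveProcessorCount=16 yourself.
export CORES=16
# bin/humio-server-start.sh   # launcher script path is an assumption
echo "CORES=$CORES"
```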
Deprecation
Items that have been deprecated and may be removed in a future release.
The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:
If you are explicitly supplying an `any` argument, either remove both the parameter and the argument (for example, change `sort(..., type=any)` to `sort(...)`), or supply the argument for `type` that corresponds to your data.

If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the `type` parameter to `hex`, for example `sort(..., type=hex)`.

In all other cases, no action is needed.

The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.

The following API endpoints are deprecated and marked for removal in 1.148.0:
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment`
GET `/api/v1/clusterconfig/kafka-queues/partition-assignment`
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults`
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should instead use Kafka's own tools for editing partition assignments, such as the `bin/kafka-reassign-partitions.sh` and `bin/kafka-topics.sh` scripts that ship with the Kafka installation.
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz`, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely that a customer install will perform similarly to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release where the `server.tar.gz` artifact is included will be 1.154.0.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:
If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0.

The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:
The mutation returns a boolean to represent success or failure, instead of a `Parser` wrapped in an object.

The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:
The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and a list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

sourceCode field, used to update the parser script, is changed to the script field, which takes an `UpdateParserScriptInput` object. This updates the parser script and the language version together.

tagFields field is renamed to fieldsToTag.

The languageVersion is located inside the `UpdateParserScriptInput` object, and is no longer an enum, but a LanguageVersionInputType instead.

The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases; this is fixed in the new one.

The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the `Parser` type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
For more information, see `Parser`, `DeleteParserInput`, `LanguageVersionInputType`, createParserV2(), testParserV2(), updateParserV2().
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
API
It is no longer possible to revive a query by polling it after it has been stopped.
For more information, see Running Query Jobs.
Other
LogScale deletes its temporary (`tmp`) directories when gracefully shut down, but `tmp` directories can leak if LogScale crashes. LogScale now also deletes these directories on startup.
New features and improvements
UI Changes
The maximum limit for saved query names has been set to 200 characters.
The warnings for numbers out of the browser's safe number range have been slightly modified.
For more information, see Troubleshooting: UI Warning: The actual value is different from what is displayed.
Configuration
A new `QueryBacktrackingLimit` dynamic configuration is available through GraphQL as experimental. It limits how many times a query may iterate over an individual event, which can happen with excessive use of the `copyEvent()`, `join()`, and `split()` functions, or of `regex()` with repeat flags. The default for this limit is 3,000 and can be modified with the dynamic configuration. At present, the feature flag keeps this limit off by default.
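Assuming the generic dynamic-configuration mutation applies to this setting (the mutation name and input shape are assumptions; the configuration name is taken from this note), raising the limit might look like:

```graphql
mutation {
  setDynamicConfig(input: { config: QueryBacktrackingLimit, value: "5000" })
}
```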
Ingestion
Audit logs related to Event Forwarders no longer include the properties of the event forwarder.
Event forwarder disablement is now audit logged with type disable instead of enable.
Parser assertions can now be written to and loaded from YAML files, using the V3 parser format.
Functions
The `onlyTrue` parameter has been added to the `bitfield:extractFlags()` query function; it outputs only flags whose value is `true`. For more information, see `bitfield:extractFlags()`.

The query editor now warns about certain regex constructs that are valid but suboptimal, specifically quantified wildcards at the beginning or end of an (unanchored) regex.
Other
Two new metrics have been introduced:

`internal-throttled-poll-rate` keeps track of the number of times polling workers were throttled due to rate limiting during query execution.

`internal-throttled-poll-wait-time` keeps track of the maximum delay per poll round due to rate limiting.
Improvement
UI Changes
When a saved query is used, the query editor will display the query string when hovering over it.
Falcon LogScale 1.138.0 GA (2024-05-14)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.138.0 | GA | 2024-05-14 | Cloud | 2025-07-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users that need to configure the core count manually should set the `CORES=n` environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Deprecation
Items that have been deprecated and may be removed in a future release.
The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:
If you are explicitly supplying an `any` argument, either remove both the parameter and the argument (for example, change `sort(..., type=any)` to `sort(...)`), or supply the argument for `type` that corresponds to your data.

If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the `type` parameter to `hex`, for example `sort(..., type=hex)`.

In all other cases, no action is needed.

The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.

The following API endpoints are deprecated and marked for removal in 1.148.0:
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment`
GET `/api/v1/clusterconfig/kafka-queues/partition-assignment`
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults`
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should instead use Kafka's own tools for editing partition assignments, such as the `bin/kafka-reassign-partitions.sh` and `bin/kafka-topics.sh` scripts that ship with the Kafka installation.
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz`, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely that a customer install will perform similarly to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release where the `server.tar.gz` artifact is included will be 1.154.0.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:
If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0.

The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:
The mutation returns a boolean to represent success or failure, instead of a `Parser` wrapped in an object.

The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:
The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and a list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

sourceCode field, used to update the parser script, is changed to the script field, which takes an `UpdateParserScriptInput` object. This updates the parser script and the language version together.

tagFields field is renamed to fieldsToTag.

The languageVersion is located inside the `UpdateParserScriptInput` object, and is no longer an enum, but a LanguageVersionInputType instead.

The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases; this is fixed in the new one.

The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the `Parser` type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
For more information, see `Parser`, `DeleteParserInput`, `LanguageVersionInputType`, createParserV2(), testParserV2(), updateParserV2().
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The Kafka client has been upgraded to 3.7.0. The Kafka server version in the deprecated humio/kafka Docker image is also upgraded to 3.7.0.
New features and improvements
Installation and Deployment
Changing the `NODE_ROLES` of a host is now forbidden. A host will now crash if the role it is configured to have doesn't match what is listed in global for that host. To change the role of a host in a cluster, instead remove that host from the cluster by unregistering it, wipe the data directory of the host, and boot the node back into the cluster as if it were a completely new node. The node will be assigned a new vhost identifier when doing this.

Unused modules have been removed from the JDK bundled with LogScale releases, thus reducing the size of release artifacts.
UI Changes
Layout changes have been made in the Connections UI page.

For more information, see Connections.
GraphQL API
A new unsetDynamicConfig GraphQL mutation is introduced to unset dynamic configurations.
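As a sketch, reverting a dynamic configuration such as the experimental QueryBacktrackingLimit to its default could look like this; the input shape and the enum value are assumptions, so check the GraphQL schema:

```graphql
mutation {
  unsetDynamicConfig(input: { config: QueryBacktrackingLimit })
}
```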
Ingestion
Self-hosted only: derived tags (like `#repo`) are now included when executing Event Forwarding Rules. These fields will be included in the forwarded events unless filtered by `select()` or `drop(#repo)` in the rule.
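A minimal sketch of a forwarding rule that preserves the previous output by filtering the derived tag; `drop()` and `#repo` are named in this note:

```
// Keep forwarding everything else, but exclude the derived #repo tag
drop(#repo)
```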
Functions
`array:filter` has been fixed: performing a filter test on an array field output by this function would sometimes lead to no results.
Other
A new metric, `max_ingest_delay`, is introduced to keep track of the current maximum ingest delay across all Kafka partitions.
Fixed in this release
Storage
Taking nodes offline in a cluster that does not use bucket storage could prevent cleanup of mini-segments associated with merge targets owned by the offline nodes, causing global to grow. To solve this, the cluster now moves merge targets that have not yet achieved full replication to follow digest nodes.
The file synchronization job would stop if an upload to bucket storage failed. This issue has been fixed.
Dashboards and Widgets
The execution of dashboard parameter queries has been changed to only run as live when the dashboard itself is live.
Other
Fixed a very rare edge case that could cause creation of malformed entities in global when a nested entity, such as a datasource, was deleted.
Improvement
Storage
Logging improvements have been made around bucket uploads to assist with troubleshooting slow uploads, which are only seen in clusters with very large data sets.
Falcon LogScale 1.137.0 GA (2024-05-07)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.137.0 | GA | 2024-05-07 | Cloud | 2025-07-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Functions
The `limit` parameter has been added to the `rdns()` function. It is controlled by the dynamic configurations `RdnsMaxLimit` and `RdnsDefaultLimit`. This addition is a breaking change due to incidents caused by the large implicit limit used before.

For more information, see `rdns()`.
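A minimal query sketch using the new parameter; `limit` is introduced by this note, while the field name `client_ip` and the value 1000 are illustrative assumptions:

```
// Resolve reverse DNS with an explicit cap on lookups;
// RdnsMaxLimit bounds the value that may be supplied here.
rdns(client_ip, limit=1000)
```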
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users that need to configure the core count manually should set the `CORES=n` environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Deprecation
Items that have been deprecated and may be removed in a future release.
The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:
If you are explicitly supplying an `any` argument, either remove both the parameter and the argument (for example, change `sort(..., type=any)` to `sort(...)`), or supply the argument for `type` that corresponds to your data.

If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the `type` parameter to `hex`, for example `sort(..., type=hex)`.

In all other cases, no action is needed.

The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.

The following API endpoints are deprecated and marked for removal in 1.148.0:
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment`
GET `/api/v1/clusterconfig/kafka-queues/partition-assignment`
POST `/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults`
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should instead use Kafka's own tools for editing partition assignments, such as the `bin/kafka-reassign-partitions.sh` and `bin/kafka-topics.sh` scripts that ship with the Kafka installation.
We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:
If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0.

The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:
The mutation returns a boolean to represent success or failure, instead of a `Parser` wrapped in an object.

The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:
The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

sourceCode field, used to update the parser script, is changed to the script field, which takes an `UpdateParserScriptInput` object. This updates the parser script and the language version together.

tagFields field is renamed to fieldsToTag.

The languageVersion is located inside the `UpdateParserScriptInput` object, and is no longer an enum, but a LanguageVersionInputType instead.

The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the `Parser` type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
For more information, see `Parser`, `DeleteParserInput`, `LanguageVersionInputType`, createParserV2(), testParserV2(), updateParserV2().
New features and improvements
UI Changes
Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.
Automation and Alerts
Scheduled Reports can now be created. Scheduled Reports generate reports directly from dashboards and send them to the selected email addresses on a regular schedule.
For more information, see Scheduled PDF Reports.
Dashboards and Widgets
A parameter panel widget type has been added to allow users to drag parameters from the top panel into these panels. Also, a parameter's `width` is now adjustable in the settings.

For more information, see Parameter Panel Widget.
Log Collector
Fleet Management now supports ephemeral hosts. If a collector is enrolled with the parameter `--ephemeralTimeout`, after being offline for the specified duration in hours it will disappear from the `Fleet Overview` interface and be unenrolled. The feature requires LogScale Collector version 1.7.0 or above.

Live and Historic options for `Fleet Overview` are introduced. When Live, the overview shows online collectors and is continuously updated with e.g. new CPU metrics or status changes. The Historic view displays all records of collectors for the last 30 days; in this case the overview is not updated with new information.

For more information, see Switching between Live and Historic overview.
Fixed in this release
Functions
An issue in the query editor where field auto-completions would sometimes not be suggested has been fixed.
The query editor would mark the entire query as erroneous when `count()` was given the `distinct=true` parameter but was missing an argument for the `field` parameter. This issue has been fixed.
Falcon LogScale 1.136.2 LTS (2024-06-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.136.2 | LTS | 2024-06-12 | Cloud | 2025-05-31 | No | 1.112 | No |
TAR Checksum | Value |
---|---|
MD5 | e9ff17d2c3f763bbe282fc4055aa3ea4 |
SHA1 | 6d215f73a3f0794f5d25293dab541bb2172d525c |
SHA256 | 6682216a929202b826c7a3b2bbf504cee03c1c2c0ead20e87324b92c7f3e84cf |
SHA512 | e0a5092cce05067186ef90bb001092b880107a3e53bace59cb83f0f56bd919381f5bfb7cc4794a382e4756faa34777996b477dc58074dbc61ee0a4e2d2b8b9d5 |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 21 | 2d23d1ac912f2521ea2f6df58d1eb71809a37aab906e4af1833ad6515d71aa39 |
humio-core | 21 | 3045f568bf56c831aa2d068de4e21921ab1a58730a17f6b72d0f20fc34467315 |
kafka | 21 | 5e7bcafde7f97247d39436debf051eed74d392d3c8108814deba44c1e5201532 |
zookeeper | 21 | 41f009aaf13990b57fefbc7c53718251d66e805412e9cb7afb970c55bb189304 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.136.2/server-1.136.2.tar.gz
These notes include entries from the following previous releases: 1.136.1
Bug fixes and updates.
Important
Due to a known memory issue in this release, customers are advised to upgrade to 1.137.0 or later.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Functions
The `limit` parameter has been added to the `rdns()` function. It is controlled by the dynamic configurations `RdnsMaxLimit` and `RdnsDefaultLimit`. This is a breaking change addition due to incidents caused by the large implicit limit used before.

For more information, see `rdns()`.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users that need to configure the core count manually should set the `CORES=n` environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly. This change is scheduled for 1.148.0.

For more information, see Configuring Available CPU Cores.
Removed
Items that have been removed as of this release.
Storage
The full JDK has been removed from the Docker images, leaving only the bundled JDK that is part of LogScale release tarballs.
Deprecation
Items that have been deprecated and may be removed in a future release.
The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:

If you are explicitly supplying an `any` argument, please either simply remove both the parameter and the argument, for example change `sort(..., type=any)` to `sort(...)`, or supply the argument for `type` that corresponds to your data.

If you are sorting hexadecimal values by their equivalent numerical values, please change the argument of the `type` parameter to `hex`, e.g. `sort(..., type=hex)`.

In all other cases, no action is needed.
The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.

The following API endpoints are deprecated and marked for removal in 1.148.0:
`POST /api/v1/clusterconfig/kafka-queues/partition-assignment`
`GET /api/v1/clusterconfig/kafka-queues/partition-assignment`
`POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults`
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the `bin/kafka-reassign-partitions.sh` and `bin/kafka-topics.sh` scripts that ship with the Kafka installation.
We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0.

The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:
The mutation returns a boolean to represent success or failure, instead of a `Parser` wrapped in an object.

The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:
The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

sourceCode field, used to update the parser script, is changed to the script field, which takes an `UpdateParserScriptInput` object. This updates the parser script and the language version together.

tagFields field is renamed to fieldsToTag.

The languageVersion is located inside the `UpdateParserScriptInput` object, and is no longer an enum, but a LanguageVersionInputType instead.

The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the `Parser` type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
For more information, see `Parser`, `DeleteParserInput`, `LanguageVersionInputType`, createParserV2(), testParserV2(), updateParserV2().
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Queries
Hitting the query count quota no longer cancels existing queries, but only disallows starting new ones.
For more information, see Query Count.
Upgrades
Changes that may occur or be required during an upgrade.
Storage
Docker images have been upgraded to Java 22.
Added new deployment artifacts. The published tarballs (e.g. `server.tar.gz`) are now available with a bundled JDK. The platforms currently supported are linux_x64 for 64-bit Linux, and alpine_x64 for 64-bit Alpine Linux and other musl-based Linux distributions. The Docker images have been updated to use this bundled JDK internally. We encourage users to migrate to using the tarballs with bundled JDKs.
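When scripting deployments against the new artifacts, the choice between the two bundled-JDK tarballs follows the platform note above. A hedged sketch; the detection heuristic (treating a non-glibc libc as musl) is an assumption, not LogScale guidance:

```python
def tarball_variant(system: str, libc: str) -> str:
    """Pick the bundled-JDK tarball flavor: linux_x64 for glibc-based
    64-bit Linux, alpine_x64 for Alpine and other musl-based distros."""
    if system != "Linux":
        raise ValueError("bundled-JDK tarballs target Linux only")
    # Assumption: anything that is not glibc is treated as musl-based.
    return "linux_x64" if libc == "glibc" else "alpine_x64"

# platform.libc_ver() reports ("glibc", ...) on glibc systems and
# typically ("", "") on musl, hence the fallback to alpine_x64.
```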
New features and improvements
Installation and Deployment
The LogScale Launcher Script now sets -XX:+UseTransparentHugePages as part of the mandatory flags. THP is already enabled for all processes on many Linux distributions by default. This flag enables THP on systems where processes must opt into THP via madvise. We strongly recommend enabling THP for LogScale.
UI Changes
Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.
The query editor now shows completions for known field values that have previously been observed in results. For instance, `#repo = m` may show completions for repositories starting with `m` seen in previous results.

Sign-up for LogScale Community Edition is no longer available for new users. Links, pages, and UI flows to access it have been removed.
The number of events in the current window has been added to Metric Types as window_count.
Automation and Alerts
Added logging when Alerts with Field-Based Throttling discard values and thus potentially trigger again before the throttle period expires.
For more information, see Field-Based Throttling.
The limit of 50 characters when naming a scheduled search is now removed.
GraphQL API
The querySearchDomains() query has been extended with the option to filter results by limit name as well as ordering results by limit name.
For more information, see querySearchDomains() .
Storage
The bucket transfer prioritization has been adjusted. When behind on both uploads and downloads, 75% of the `S3_STORAGE_CONCURRENCY` capacity is reserved for uploads, and 25% for downloads, rather than using all slots for downloads.

We reverted a change introduced in 1.131.0 intended to cause fewer mini-segments to move in the cluster when digest reassignment occurs. The change could cause mini-segments to not be balanced across cluster nodes in the expected way.
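The 75/25 reservation described above can be illustrated with a small sketch; the integer rounding and the minimum of one slot per direction are assumptions for illustration, not LogScale's documented accounting:

```python
def transfer_slots(s3_storage_concurrency: int) -> tuple:
    """Split S3_STORAGE_CONCURRENCY when behind on both directions:
    roughly 75% of slots for uploads, the remainder for downloads."""
    # Rounding and the 1-slot floor are assumptions, for illustration.
    uploads = max(1, (s3_storage_concurrency * 3) // 4)
    downloads = max(1, s3_storage_concurrency - uploads)
    return uploads, downloads

print(transfer_slots(16))  # (12, 4)
```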
Configuration
The following configuration parameters have been introduced:
The amount of global meta data required for retention spans of over 30 days has been reduced. The amount of global meta data required in clusters with a high number of active datasources has also been reduced, as has the global size of mini segments, by combining them into larger mini segments.

Pre-merging mini segments now reduces the number of segment files on disk (and in the bucket) and reduces the amount of meta data for segment targets in progress. This allows larger target segment files and reduces the amount of "undersized" merging of "completed" segments. It also allows a smaller flush interval for mini segments without incurring a larger number of mini segments.

This feature is only supported from v1.112.0. To safely enable it by default, we are now raising the minimum version to upgrade from to v1.112.0, disallowing rollback to versions older than this.

The feature is on by default. It can be disabled using the feature flag PreMergeMiniSegments. Disabling the feature stops future merges of mini segments into larger mini segment files, but does not alter the defaults below, nor modify how already merged mini-segments behave.
For more information, see Global Database, Ingestion: Digest Phase.
The default values for the following configuration parameters have changed:
`FLUSH_BLOCK_SECONDS = 900` (was 1,800)
`MAX_HOURS_SEGMENT_OPEN = 720` (was 24; the maximum is now 24,000)
Dashboards and Widgets
The automatic rendering of URLs as links has been disabled for the `Table` widget. Only URLs appearing in queries with the markdown style, e.g. `[CrowdStrike](https://crowdstrike.com)`, will be automatically rendered as links in the `Table` widget columns. Content, including plain URLs e.g. `https://crowdstrike.com`, can still be rendered as links, but this should now be explicitly configured using the Show as → widget property.

For more information, see Table Widget Properties.
Dashboard parameters have gotten the following updates:
The name of the parameter is on top of the input field, so more space is available for both parts.
A button has been added to multi-value parameters so that all values can be removed in one click.

The parameter configuration form has been moved to the side panel.
Multiple values can be added at once to a multi-value parameter by inputting a comma separated list of values, which can be used as individual values.
For more information, see Multi-value Parameters.
Ingestion
Ingest feed scheduling has been changed to be more gradual in ramping up concurrency and will also reduce concurrency in response to failures. This will make high-pressure failing ingest feeds fall back to periodic retries instead of constantly retrying.
For more information, see Ingest Data from AWS S3.
Parser test cases can now include assertions. This allows you to specify that you expect certain fields to have certain values in a test case after parsing, or that you expect certain fields to not be present at all. Note that the assertions are not exported as part of the YAML template yet.
For more information, see Writing a Parser.
Log Collector
Introducing Fleet Management Remote Updates, which allows users to install the LogScale Collector via curl / PowerShell, and to manage upgrades and downgrades centrally from Fleet Management.
For more information, see Managing Falcon Log Collector Versions - Instances, Manage Versions - Groups, Install Falcon Log Collector.
Queries
Queries are now allowed to be queued for start by the query coordinator for a maximum of 10 minutes.
For more information, see Query Coordination.
Functions
The optional `limit` parameter has been added to the `readFile()` function to limit the number of rows of the file returned.

The `geography:distance()` function is now generally available. The default value for the `as` parameter has been changed to `_distance`.

For more information, see `geography:distance()`.

The `onDuplicate` parameter has been added to `kvParse()` to specify how to handle duplicate fields.

For Cloud customers: the maximum value of the `limit` parameter for the `tail()` and `head()` functions has been increased to `20,000`.

For Self-Hosted solutions: the maximum value of the `limit` parameter for the `tail()` and `head()` functions has been aligned with the `StateRowLimit` dynamic configuration. This means that the upper value of `limit` is now adjustable for these two functions.

The `readFile()` function will show a warning when results are truncated due to reaching the global result row limit. This behaviour was previously silent.
Other
New metrics `ingest-queue-write-offset` and `ingest-queue-read-offset` have been added, reporting the Kafka offsets of the most recently written and read events on the ingest queue.

The `ConfigLoggerJob` now also logs `digestReplicationFactor`, `segmentReplicationFactor`, `minHostAlivePercentageToEnableClusterRebalancing`, `allowUpdateDesiredDigesters` and `allowRebalanceExistingSegments`.

New metric `events-parsed` has been added, serving as an indicator for how many input events a parser has been applied to.
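The two new offset metrics can be combined into a Kafka-style lag figure when monitoring the ingest queue; a trivial sketch (how the metric values are scraped is deployment-specific and not shown):

```python
def ingest_queue_lag(write_offset: int, read_offset: int) -> int:
    """Events written to the ingest queue but not yet read by digest,
    clamped at zero in case of metric sampling skew."""
    return max(0, write_offset - read_offset)

print(ingest_queue_lag(1_000_500, 1_000_000))  # 500
```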
Fixed in this release
Security
Various OIDC caching issues have been fixed including ensuring refresh of the JWKS cache once per hour by default.
UI Changes
The formatting of @timestamp has been improved to make time-based visualizations fully compatible with time zones when selecting time zones other than the browser default.
The error Failed to fetch data for aliased fields would sometimes appear on the `Search` page of the sandbox repository. This issue has been fixed.

Data statistics in the `Organizations` overview page could not be populated in some cases.

Fixed an issue that prevented users from copying the query string from the flyout in the Recent / Saved queries panel.
Remaining occurrences of Humio have been replaced with LogScale in many places, primarily in GraphQL documentation and error messages.
Storage
`redactEvents` segment rewriting has been fixed for several issues that could cause either failure to complete the rewrite, or events to be missed in rare cases. Users should be aware that redaction jobs submitted prior to upgrading to a fixed version may fail to complete correctly, or may miss events. Therefore, you are encouraged to resubmit redactions you have recently submitted, to ensure the events are actually gone.

Pending merges of segments would contend with the verification of segments being transferred between nodes/buckets. This resulted in spuriously long transfer times, due to queueing of the verification step for the segment file. This issue has now been fixed.
Dashboards and Widgets
A visualization issue has been fixed as the dropdown menu for saving a dashboard widget was showing a wrong title in dashboards not belonging to a package.
Parameters appearing between a string containing `\\` and any other string would not be correctly detected. This issue has been fixed.

Options other than exporting to a CSV file were not possible on the `Dashboard` page for a widget and on the `Search` page for a query result. This issue is now fixed.
Queries
Multiple clients might trigger concurrent computation of the result step for a shared query. This issue has been fixed: now only one pending computation is allowed at a time.
Functions
The error message shown when providing a non-existing query function in an anonymous query, e.g. `bucket(function=[{_noFunction()}])`, has been fixed.

The `table()` function has been fixed, as it would wrongly accept a limit of 0, causing serialisation to break between cluster nodes.
Other
A regression introduced in version 1.132 has been fixed, where a file name starting with `shared/` would be recognized as a shared file instead of a regular file. However, a shared file should be referred to using exactly `/shared/` as a prefix.

DNS lookup was blocked by heavy disk IO when using an HTTP proxy, causing timeouts. This issue has been fixed.
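The corrected file-reference rule from the `shared/` regression fix can be sketched as follows (the helper name is hypothetical; only the prefix rule comes from the note above):

```python
def is_shared_file(reference: str) -> bool:
    """A file is shared only when referenced with exactly the /shared/
    prefix; a name merely starting with shared/ is a regular file."""
    return reference.startswith("/shared/")

print(is_shared_file("/shared/iplist.csv"), is_shared_file("shared/iplist.csv"))
```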
Packages
Uploading a package zip would fail on Windows devices. This issue has been fixed.
Known Issues
Other
An issue has been identified where a memory leak could cause a node to exhaust the available memory. Customers are advised to upgrade to 1.137.0 or higher.
Improvement
Installation and Deployment
An error log is displayed if the latency on global-events exceeds 150 seconds, to prevent nodes from crashing.
Storage
Removed some work from the thread scheduling bucket transfers that could be slightly expensive in cases where the cluster had fallen behind on uploads.
Configuration
Whenever a SAML or OIDC IdP is created or updated, any leading or trailing whitespace will be trimmed from its fields. This is to avoid configuration errors.
Falcon LogScale 1.136.1 LTS (2024-05-29)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.136.1 | LTS | 2024-05-29 | Cloud | 2025-05-31 | No | 1.112 | No |
TAR Checksum | Value |
---|---|
MD5 | 5d85313bbb4ca534e2dbdad64cb93cf4 |
SHA1 | 22258d1028543b021397b016ab8ec46ca1ba157a |
SHA256 | d7bba11fa29c730476fefe1a90176dd5c38ed7f5da0569beab0f0f60e6f2b1fa |
SHA512 | 0dd5b99de53bdc3c48dc3977b2a86793d440f1aab6941715f0ec9d46b640e3bb20d9c11930d6db05e538e83f50edb336eefae590f2428e718bd3aee517806e4a |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 21 | 1274cc9fbdcee71a206ea9d3c874a331387359e0df8797874360b4b28abb6e28 |
humio-core | 21 | c29cef0886b22d41c346624dd1109b977e8efe7873f1f68a306f5135d7f55a6c |
kafka | 21 | 1c9f0ce03d3877e78893061b2ef67cb2e7cb0b534a858d01e9616be88a5cbd40 |
zookeeper | 21 | fe64ed2195c1df1fa3a668a4d010973e8a602b9a37862bf41de06fc698db7226 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.136.1/server-1.136.1.tar.gz
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Functions
The `limit` parameter has been added to the `rdns()` function. It is controlled by the dynamic configurations `RdnsMaxLimit` and `RdnsDefaultLimit`. This is a breaking change addition due to incidents caused by the large implicit limit used before.

For more information, see `rdns()`.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users that need to configure the core count manually should set the `CORES=n` environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly. This change is scheduled for 1.148.0.

For more information, see Configuring Available CPU Cores.
Removed
Items that have been removed as of this release.
Storage
The full JDK has been removed from the Docker images, leaving only the bundled JDK that is part of LogScale release tarballs.
Deprecation
Items that have been deprecated and may be removed in a future release.
The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:

If you are explicitly supplying an `any` argument, please either simply remove both the parameter and the argument, for example change `sort(..., type=any)` to `sort(...)`, or supply the argument for `type` that corresponds to your data.

If you are sorting hexadecimal values by their equivalent numerical values, please change the argument of the `type` parameter to `hex`, e.g. `sort(..., type=hex)`.

In all other cases, no action is needed.
The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.

The following API endpoints are deprecated and marked for removal in 1.148.0:
`POST /api/v1/clusterconfig/kafka-queues/partition-assignment`
`GET /api/v1/clusterconfig/kafka-queues/partition-assignment`
`POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults`
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the `bin/kafka-reassign-partitions.sh` and `bin/kafka-topics.sh` scripts that ship with the Kafka installation.
We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0.

The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:
The mutation returns a boolean to represent success or failure, instead of a `Parser` wrapped in an object.

The deprecated testParser mutation is replaced by testParserV2() . The differences between the old and new mutation are:
The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

sourceCode field, used to update the parser script, is changed to the script field, which takes an `UpdateParserScriptInput` object. This updates the parser script and the language version together.

tagFields field is renamed to fieldsToTag.
The languageVersion is located inside the `UpdateParserScriptInput` object, and is no longer an enum, but a LanguageVersionInputType instead.

The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the `Parser` type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
For more information, see `Parser`, `DeleteParserInput`, `LanguageVersionInputType`, createParserV2(), testParserV2(), updateParserV2().
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Queries
Hitting the query count quota no longer cancels existing queries, but only disallows starting new ones.
For more information, see Query Count.
Upgrades
Changes that may occur or be required during an upgrade.
Storage
Docker images have been upgraded to Java 22.
Added new deployment artifacts. The published tarballs (e.g. `server.tar.gz`) are now available with a bundled JDK. The platforms currently supported are linux_x64 for 64-bit Linux, and alpine_x64 for 64-bit Alpine Linux and other musl-based Linux distributions. The Docker images have been updated to use this bundled JDK internally. We encourage users to migrate to using the tarballs with bundled JDKs.
New features and improvements
Installation and Deployment
The LogScale Launcher Script now sets -XX:+UseTransparentHugePages as part of the mandatory flags. THP is already enabled for all processes on many Linux distributions by default. This flag enables THP on systems where processes must opt into THP via madvise. We strongly recommend enabling THP for LogScale.
UI Changes
Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.
The query editor now shows completions for known field values that have previously been observed in results. For instance, `#repo = m` may show completions for repositories starting with `m` seen in previous results.

Sign-up for LogScale Community Edition is no longer available for new users. Links, pages, and UI flows to access it have been removed.
The number of events in the current window has been added to Metric Types as window_count.
Automation and Alerts
Added logging when Alerts with Field-Based Throttling discard values and thus potentially trigger again before the throttle period expires.
For more information, see Field-Based Throttling.
The 50-character limit when naming a scheduled search has been removed.
GraphQL API
The querySearchDomains() query has been extended with the option to filter results by limit name as well as ordering results by limit name.
For more information, see querySearchDomains() .
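A sketch of how the extended query might be called; the argument names used here for filtering and ordering are assumptions, so check the querySearchDomains() schema for the exact names:

```graphql
query {
  querySearchDomains(
    limitName: "gold-limit"   # assumed argument: filter results by limit name
    sortBy: LimitName         # assumed enum value: order results by limit name
  ) {
    results { name }
  }
}
```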
Storage
The bucket transfer prioritization has been adjusted. When behind on both uploads and downloads, 75% of the `S3_STORAGE_CONCURRENCY` capacity is reserved for uploads, and 25% for downloads, rather than using all slots for downloads.

We reverted a change introduced in 1.131.0 intended to cause fewer mini-segments to move in the cluster when digest reassignment occurs. The change could cause mini-segments to not be balanced across cluster nodes in the expected way.
Configuration
The following configuration parameters have been introduced:
The amount of global metadata required for retention spans of over 30 days has been reduced. The amount of global metadata required in clusters with a high number of active datasources has also been reduced, as has the global size of mini-segments, by combining them into larger mini-segments.

Pre-merging mini-segments reduces the number of segment files on disk (and in the bucket) and reduces the amount of metadata for segment targets in progress. This allows larger target segment files and reduces the amount of "undersized" merging of "completed" segments. It also allows a smaller flush interval for mini-segments without incurring a larger number of mini-segments.

This feature is only supported from v1.112.0. To enable it safely by default, the minimum version to upgrade from is now raised to v1.112.0, disallowing rollback to versions older than this.

The feature is on by default. It can be disabled using the feature flag PreMergeMiniSegments. Disabling the feature stops future merges of mini-segments into larger mini-segment files, but does not alter the defaults below, nor modify how already merged mini-segments behave.
For more information, see Global Database, Ingestion: Digest Phase.
The default values for the following configuration parameters have changed:
`FLUSH_BLOCK_SECONDS = 900` (was 1,800)

`MAX_HOURS_SEGMENT_OPEN = 720` (was 24; the maximum is now 24,000)
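Clusters that prefer the previous behaviour can pin the old defaults explicitly in the LogScale environment configuration; this is a sketch showing the pre-change values:

```ini
# Opt out of the new defaults by restoring the old values explicitly
FLUSH_BLOCK_SECONDS=1800
MAX_HOURS_SEGMENT_OPEN=24
```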
Dashboards and Widgets
The automatic rendering of URLs as links has been disabled for the `Table` widget. Only URLs appearing in queries in markdown style, e.g. `[CrowdStrike](https://crowdstrike.com)`, will be automatically rendered as links in `Table` widget columns. Content, including plain URLs e.g. `https://crowdstrike.com`, can still be rendered as links, but this should now be explicitly configured using the Show as → widget property.

For more information, see Table Widget Properties.
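With the automatic rendering disabled, a query can still produce clickable links by emitting markdown explicitly, for instance with `format()`; this sketch assumes the events carry a hypothetical `url` field:

```logscale
// Build a markdown-style link from the (hypothetical) url field
format("[Open](%s)", field=[url], as=link)
| table([host, link])
```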
Dashboard parameters have received the following updates:
The parameter name now appears above the input field, so more space is available for both parts.
A button has been added to multi-value parameters so that all values can be removed in one click.

The parameter configuration form has been moved to the side panel.
Multiple values can be added at once to a multi-value parameter by inputting a comma-separated list of values, which are then used as individual values.
For more information, see Multi-value Parameters.
Ingestion
Ingest feed scheduling has been changed to be more gradual in ramping up concurrency and will also reduce concurrency in response to failures. This will make high-pressure failing ingest feeds fall back to periodic retries instead of constantly retrying.
For more information, see Ingest Data from AWS S3.
Parser test cases can now include assertions. This allows you to specify that you expect certain fields to have certain values in a test case after parsing, or that you expect certain fields to not be present at all. Note that the assertions are not exported as part of the YAML template yet.
For more information, see Writing a Parser.
Log Collector
Introducing Fleet Management Remote Updates, allowing users to install the LogScale Collector via curl/PowerShell and manage upgrades and downgrades centrally from Fleet Management.
For more information, see Managing Falcon Log Collector Versions - Instances, Manage Versions - Groups, Install Falcon Log Collector.
Queries
Queries are now allowed to be queued for start by the query coordinator for a maximum of 10 minutes.
For more information, see Query Coordination.
Functions
The optional `limit` parameter has been added to the `readFile()` function to limit the number of rows of the file returned.

The `geography:distance()` function is now generally available. The default value for the `as` parameter has been changed to `_distance`.

For more information, see `geography:distance()`.

The `onDuplicate` parameter has been added to `kvParse()` to specify how to handle duplicate fields.

For Cloud customers: the maximum value of the `limit` parameter for the `tail()` and `head()` functions has been increased to `20,000`.

For Self-Hosted solutions: the maximum value of the `limit` parameter for the `tail()` and `head()` functions has been aligned with the `StateRowLimit` dynamic configuration. This means that the upper value of `limit` is now adjustable for these two functions.

The `readFile()` function now shows a warning when the results are truncated due to reaching the global result row limit. This behaviour was previously silent.
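For example, the new `limit` parameter on `readFile()` caps how many rows of a file are returned; the file name here is hypothetical:

```logscale
// Return only the first 5 rows of an uploaded lookup file
readFile("example-lookup.csv", limit=5)
```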
Other
New metrics `ingest-queue-write-offset` and `ingest-queue-read-offset` have been added, reporting the Kafka offsets of the most recently written and read events on the ingest queue.

The `ConfigLoggerJob` now also logs `digestReplicationFactor`, `segmentReplicationFactor`, `minHostAlivePercentageToEnableClusterRebalancing`, `allowUpdateDesiredDigesters` and `allowRebalanceExistingSegments`.

The new metric `events-parsed` has been added, serving as an indicator of how many input events a parser has been applied to.
Fixed in this release
Security
Various OIDC caching issues have been fixed, including ensuring the JWKS cache is refreshed once per hour by default.
UI Changes
The formatting of @timestamp has been improved to make time-based visualizations fully compatible with time zones when selecting time zones other than the browser default.
The error Failed to fetch data for aliased fields would sometimes appear on the `Search` page of the sandbox repository. This issue has been fixed.

Data statistics on the `Organizations` overview page could not be populated in some cases.

Fixed an issue that prevented users from copying the query string from the flyout in the Recent / Saved queries panel.
Remaining occurrences of Humio have been replaced with LogScale in many places, primarily in GraphQL documentation and error messages.
Storage
`redactEvents` segment rewriting has been fixed for several issues that could cause either failure to complete the rewrite, or events to be missed in rare cases. Redaction jobs submitted prior to upgrading to a fixed version may fail to complete correctly, or may miss events; you are therefore encouraged to resubmit recently submitted redactions to ensure the events are actually gone.
Dashboards and Widgets
A visualization issue has been fixed: the dropdown menu for saving a dashboard widget showed a wrong title in dashboards not belonging to a package.
Parameters appearing between a string containing `\\` and any other string would not be correctly detected. This issue has been fixed.

Options other than exporting to a CSV file were not possible on the `Dashboard` page for a widget and on the `Search` page for a query result. This issue is now fixed.
Queries
Multiple clients might trigger concurrent computation of the result step for a shared query. This issue has been fixed: now only one pending computation is allowed at a time.
Functions
The error message shown when providing a non-existing query function in an anonymous query, e.g. `bucket(function=[{_noFunction()}])`, has been fixed.

The `table()` function has been fixed as it would wrongly accept a limit of 0, causing serialisation to break between cluster nodes.
Other
DNS lookup was blocked by heavy disk IO when using an HTTP proxy, causing timeouts. This issue has been fixed.
Packages
Uploading a package zip would fail on Windows devices. This issue has been fixed.
Improvement
Installation and Deployment
An error log is displayed if the latency on global-events exceeds 150 seconds, to prevent nodes from crashing.
Storage
Removed some work from the thread scheduling bucket transfers that could be slightly expensive in cases where the cluster had fallen behind on uploads.
Configuration
Whenever a SAML or OIDC IdP is created or updated, any leading or trailing whitespace will be trimmed from its fields. This is to avoid configuration errors.
Falcon LogScale 1.136.0 GA (2024-04-30)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.136.0 | GA | 2024-04-30 | Cloud | 2025-05-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users that need to configure the core count manually should set the `CORES=n` environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
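Once that change ships, manual core configuration would move from the JVM flag to the launcher's environment, roughly as follows (16 is only an example value):

```ini
# Before (will be ignored after the change): -XX:ActiveProcessorCount=16
# After: the launcher configures both LogScale and the JVM from this
CORES=16
```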
Removed
Items that have been removed as of this release.
Storage
The full JDK has been removed from the Docker images, leaving only the bundled JDK that is part of LogScale release tarballs.
Deprecation
Items that have been deprecated and may be removed in a future release.
The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142.

Warning prompts will be shown in queries that fall into either of these two cases:
If you are explicitly supplying an `any` argument, either remove both the parameter and the argument, for example change `sort(..., type=any)` to `sort(...)`, or supply the argument for `type` that corresponds to your data.

If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the `type` parameter to `hex`, e.g. `sort(..., type=hex)`.

In all other cases, no action is needed.
The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.

The following API endpoints are deprecated and marked for removal in 1.148.0:
- POST `/api/v1/clusterconfig/kafka-queues/partition-assignment`
- GET `/api/v1/clusterconfig/kafka-queues/partition-assignment`
- POST `/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults`
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
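As a sketch of the Kafka-native replacement, a reassignment is described in a JSON file and applied with the stock script; the topic name and broker IDs below are placeholders for your cluster's values:

```json
{
  "version": 1,
  "partitions": [
    { "topic": "humio-ingest", "partition": 0, "replicas": [1, 2, 3] }
  ]
}
```

Applied with something like `bin/kafka-reassign-partitions.sh --bootstrap-server <broker> --reassignment-json-file reassignment.json --execute`.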
We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need to set these parameters manually, so this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:
The mutation returns a boolean to represent success or failure, instead of a `Parser` wrapped in an object.

The deprecated testParser mutation is replaced by testParserV2() . The differences between the old and new mutation are:
The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

sourceCode field, used to update the parser script, is changed to the script field, which takes an `UpdateParserScriptInput` object. This updates the parser script and the language version together.

tagFields field is renamed to fieldsToTag.
The languageVersion is located inside the `UpdateParserScriptInput` object, and is no longer an enum, but a LanguageVersionInputType instead.

The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the `Parser` type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
For more information, see `Parser`, `DeleteParserInput`, `LanguageVersionInputType`, createParserV2(), testParserV2(), updateParserV2().
New features and improvements
GraphQL API
The querySearchDomains() query has been extended with the option to filter results by limit name as well as ordering results by limit name.
For more information, see querySearchDomains() .
Ingestion
Parser test cases can now include assertions. This allows you to specify that you expect certain fields to have certain values in a test case after parsing, or that you expect certain fields to not be present at all. Note that the assertions are not exported as part of the YAML template yet.
For more information, see Writing a Parser.
Log Collector
Introducing Fleet Management Remote Updates, allowing users to install the LogScale Collector via curl/PowerShell and manage upgrades and downgrades centrally from Fleet Management.
For more information, see Managing Falcon Log Collector Versions - Instances, Manage Versions - Groups, Install Falcon Log Collector.
Fixed in this release
UI Changes
Remaining occurrences of Humio have been replaced with LogScale in many places, primarily in GraphQL documentation and error messages.
Functions
The `table()` function has been fixed as it would wrongly accept a limit of 0, causing serialisation to break between cluster nodes.
Other
DNS lookup was blocked by heavy disk IO when using an HTTP proxy, causing timeouts. This issue has been fixed.
Falcon LogScale 1.135.0 GA (2024-04-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.135.0 | GA | 2024-04-23 | Cloud | 2025-05-31 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users that need to configure the core count manually should set the `CORES=n` environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142.

Warning prompts will be shown in queries that fall into either of these two cases:
If you are explicitly supplying an `any` argument, either remove both the parameter and the argument, for example change `sort(..., type=any)` to `sort(...)`, or supply the argument for `type` that corresponds to your data.

If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the `type` parameter to `hex`, e.g. `sort(..., type=hex)`.

In all other cases, no action is needed.
The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.

The following API endpoints are deprecated and marked for removal in 1.148.0:
- POST `/api/v1/clusterconfig/kafka-queues/partition-assignment`
- GET `/api/v1/clusterconfig/kafka-queues/partition-assignment`
- POST `/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults`
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:
If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need to set these parameters manually, so this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:
The mutation returns a boolean to represent success or failure, instead of a `Parser` wrapped in an object.

The deprecated testParser mutation is replaced by testParserV2() . The differences between the old and new mutation are:
The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

sourceCode field, used to update the parser script, is changed to the script field, which takes an `UpdateParserScriptInput` object. This updates the parser script and the language version together.

tagFields field is renamed to fieldsToTag.
The languageVersion is located inside the `UpdateParserScriptInput` object, and is no longer an enum, but a LanguageVersionInputType instead.

The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the `Parser` type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
For more information, see `Parser`, `DeleteParserInput`, `LanguageVersionInputType`, createParserV2(), testParserV2(), updateParserV2().

In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
Upgrades
Changes that may occur or be required during an upgrade.
Storage
Docker images have been upgraded to Java 22.
Added new deployment artifacts. The published tarballs (e.g. `server.tar.gz`) are now available with a bundled JDK. The platforms currently supported are linux_x64 for 64-bit Linux, and alpine_x64 for 64-bit Alpine Linux and other musl-based Linux distributions. The Docker images have been updated to use this bundled JDK internally. We encourage users to migrate to using the tarballs with bundled JDKs.
New features and improvements
UI Changes
The query editor now shows completions for known field values that have previously been observed in results. For instance, `#repo = m` may show completions for repositories starting with `m` seen in previous results.
Automation and Alerts
Added logging when Alerts with Field-Based Throttling discard values and thus potentially trigger again before the throttle period expires.
For more information, see Field-Based Throttling.
Storage
We reverted a change introduced in 1.131.0 intended to cause fewer mini-segments to move in the cluster when digest reassignment occurs. The change could cause mini-segments to not be balanced across cluster nodes in the expected way.
Configuration
The following configuration parameters have been introduced:
The amount of global meta data required for retention spans of over 30 days has been reduced. The amount of global meta data required in clusters with a high number of active datasources has also been reduced, as has the global size of mini segments, by combining them into larger mini segments.
Pre-merging mini segments now reduces the number of segment files on disk (and in buckets) and reduces the amount of meta data for segment targets in progress. This allows for larger target segment files and reduces the amount of "undersized" merging of "completed" segments. It also allows a smaller flush interval for mini segments without incurring a larger number of mini segments.
This feature is only supported from v1.112.0. To enable it safely by default, the minimum version to upgrade from is now raised to v1.112.0, which disallows rollback to versions older than this.
The feature is on by default. It can be disabled using the feature flag PreMergeMiniSegments. Disabling the feature stops future merges of mini segments into larger mini segment files, but does not alter the defaults below, nor modify how already merged mini-segments behave.
For more information, see Global Database, Ingestion: Digest Phase.
The default values for the following configuration parameters have changed:
FLUSH_BLOCK_SECONDS = 900 (was 1,800)
MAX_HOURS_SEGMENT_OPEN = 720 (was 24; the maximum is now 24,000)
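The changed defaults can also be pinned explicitly in the environment configuration to make the choice visible; a minimal sketch, assuming the standard environment-variable configuration mechanism:

```shell
# New defaults as of this release (previous values in comments)
FLUSH_BLOCK_SECONDS=900       # was 1800
MAX_HOURS_SEGMENT_OPEN=720    # was 24; values up to 24000 are now accepted
```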
Dashboards and Widgets
The automatic rendering of URLs as links has been disabled for the Table widget. Only URLs appearing in queries with the markdown style, e.g. [CrowdStrike](https://crowdstrike.com), will be automatically rendered as links in the Table widget columns. Content, including plain URLs e.g. https://crowdstrike.com, can still be rendered as links, but this should now be explicitly configured using the Show as → widget property.

For more information, see Table Widget Properties.
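Queries that relied on plain URLs being auto-linked can instead emit the markdown form themselves; a hypothetical sketch using format() (the name and url fields are placeholders):

```
format("[%s](%s)", field=[name, url], as=link)
| table([link])
```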
Dashboard parameters have gotten the following updates:
The name of the parameter is on top of the input field, so more space is available for both parts.
A button has been added to multi-value parameters so that all values can be removed in one click.
The parameter configuration form has been moved to the side panel.
Multiple values can be added at once to a multi-value parameter by inputting a comma separated list of values, which can be used as individual values.
For more information, see Multi-value Parameters.
Functions
The onDuplicate parameter has been added to kvParse() to specify how to handle duplicate fields.
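As an illustration, a sketch of the new parameter; the argument value shown (keepFirst) is an assumption based on this note, so verify the accepted values in the kvParse() reference:

```
// Keep the first value seen when the same key occurs twice in the input
kvParse(onDuplicate=keepFirst)
```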
Fixed in this release
Dashboards and Widgets
Options other than exporting to a CSV file were not available on the Dashboard page for a widget and on the Search page for a query result. This issue is now fixed.
Functions
The error message shown when providing a non-existing query function in an anonymous query, e.g. bucket(function=[{_noFunction()}]), has been fixed.
Falcon LogScale 1.134.0 GA (2024-04-16)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.134.0 | GA | 2024-04-16 | Cloud | 2025-05-31 | No | 1.106 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly. This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
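For clusters that currently pin the core count via JVM flags, the migration is a one-line environment change; a minimal sketch, assuming the launcher reads its configuration from an environment file (the path varies by deployment):

```shell
# Before (will be ignored once this change ships in 1.148.0):
#   HUMIO_JVM_ARGS="-XX:ActiveProcessorCount=16"
# After: set CORES and let the launcher configure both LogScale and the JVM
CORES=16
```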
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:
If you are explicitly supplying an any argument, either remove both the parameter and the argument, for example change sort(..., type=any) to sort(...), or supply the argument for type that corresponds to your data.
If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, e.g. sort(..., type=hex).
In all other cases, no action is needed.

The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

The following API endpoints are deprecated and marked for removal in 1.148.0:
POST /api/v1/clusterconfig/kafka-queues/partition-assignment
GET /api/v1/clusterconfig/kafka-queues/partition-assignment
POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
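As a sketch of the replacement workflow using Kafka's stock tooling (the topic name humio-ingest, broker IDs, and bootstrap address are assumptions; substitute your cluster's values):

```shell
# Generate a candidate reassignment plan for the ingest queue topic
cat > topics.json <<'EOF'
{"version": 1, "topics": [{"topic": "humio-ingest"}]}
EOF
bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
  --topics-to-move-json-file topics.json --broker-list "1,2,3" --generate

# Review the proposal, save it as reassign.json, then apply it
bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
  --reassignment-json-file reassign.json --execute

# Verify the resulting assignment
bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic humio-ingest
```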
In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enums is now deprecated and will be removed in version 1.136 of LogScale.

We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a Parser, instead of a Parser wrapped in an object.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:
The mutation returns a boolean to represent success or failure, instead of a Parser wrapped in an object.

The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:

The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
sourceCode field, used for updating the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.
tagFields field is renamed to fieldsToTag.
The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.
The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.
The mutation returns a Parser, instead of a Parser wrapped in an object.
The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the Parser type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().

In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.
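To make the migration concrete, a hedged sketch of the new mutation shape, assembled only from the renames listed above (the exact input structure, e.g. the rawString and languageVersion sub-fields, should be verified against your schema version):

```graphql
mutation CreateAccessLogParser {
  createParserV2(input: {
    repositoryName: "my-repo"            # now mandatory in the schema
    name: "access-log"
    script: "parseJson()"                # was: sourceCode
    fieldsToTag: ["host"]                # was: tagFields
    fieldsToBeRemovedBeforeParsing: []
    allowOverwritingExistingParser: true # was: force
    testCases: [                         # was: testData (plain strings)
      { event: { rawString: "{\"host\":\"web-1\"}" } }
    ]
    languageVersion: { name: "legacy" }  # LanguageVersionInputType, not an enum
  }) {
    id                                   # returns a Parser directly, not a wrapper
    name
  }
}
```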
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Queries
Hitting the query count quota no longer cancels existing queries, but only disallows starting new ones.
For more information, see Query Count.
New features and improvements
UI Changes
Sign up to LogScale Community Edition is no longer available for new users. Links, pages and UI flows to access it have been removed.
The number of events in the current window has been added to Metric Types as window_count.
Functions
The geography:distance() function is now generally available. The default value for the as parameter has been changed to _distance.

For more information, see geography:distance().
The readFile() function will show a warning when the results are truncated due to reaching the global result row limit. This behaviour was previously silent.
Other
The ConfigLoggerJob now also logs digestReplicationFactor, segmentReplicationFactor, minHostAlivePercentageToEnableClusterRebalancing, allowUpdateDesiredDigesters and allowRebalanceExistingSegments.
Fixed in this release
UI Changes
The formatting of @timestamp has been improved to make time-based visualizations fully compatible with time zones when selecting time zones other than the browser default.
Data statistics in the Organizations overview page could not be populated in some cases.
Dashboards and Widgets
A visualization issue has been fixed: the dropdown menu for saving a dashboard widget was showing a wrong title in dashboards not belonging to a package.
Queries
Multiple clients might trigger concurrent computation of the result step for a shared query. This issue has been fixed: now only one pending computation is allowed at a time.
Falcon LogScale 1.133.0 GA (2024-04-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.133.0 | GA | 2024-04-09 | Cloud | 2025-05-31 | No | 1.106 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly. This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:
If you are explicitly supplying an any argument, either remove both the parameter and the argument, for example change sort(..., type=any) to sort(...), or supply the argument for type that corresponds to your data.
If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, e.g. sort(..., type=hex).
In all other cases, no action is needed.

The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

The following API endpoints are deprecated and marked for removal in 1.148.0:
POST /api/v1/clusterconfig/kafka-queues/partition-assignment
GET /api/v1/clusterconfig/kafka-queues/partition-assignment
POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enums is now deprecated and will be removed in version 1.136 of LogScale.

We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a Parser, instead of a Parser wrapped in an object.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:
The mutation returns a boolean to represent success or failure, instead of a Parser wrapped in an object.

The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:

The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
sourceCode field, used for updating the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.
tagFields field is renamed to fieldsToTag.
The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.
The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.
The mutation returns a Parser, instead of a Parser wrapped in an object.
The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the Parser type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().

In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.
New features and improvements
Installation and Deployment
The LogScale Launcher Script now sets -XX:+UseTransparentHugePages as part of the mandatory flags. THP is already enabled for all processes on many Linux distributions by default. This flag enables THP on systems where processes must opt into THP via madvise. We strongly recommend enabling THP for LogScale.
Automation and Alerts
The limit of 50 characters when naming a scheduled search is now removed.
Storage
The bucket transfer prioritization has been adjusted. When behind on both uploads and downloads, 75% of the S3_STORAGE_CONCURRENCY capacity is reserved for uploads, and 25% for downloads, rather than using all slots for downloads.
Queries
Queries are now allowed to be queued for start by the query coordinator for a maximum of 10 minutes.
For more information, see Query Coordination.
Functions
The optional limit parameter has been added to the readFile() function to limit the number of rows of the file returned.
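For instance (the file name is a placeholder):

```
// Return at most the first five rows of the uploaded file
readFile("example.csv", limit=5)
```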
Fixed in this release
UI Changes
The error Failed to fetch data for aliased fields would sometimes appear on the Search page of the sandbox repository. This issue has been fixed.

Fixed an issue that prevented users from copying the query string from the flyout in the Recent / Saved queries panel.
Storage
redactEvents segment rewriting has been fixed for several issues that could cause either failure to complete the rewrite, or events to be missed in rare cases. Users should be aware that redaction jobs submitted prior to upgrading to a fixed version may fail to complete correctly, or may miss events. Therefore, you are encouraged to resubmit redactions you have recently submitted, to ensure the events are actually gone.
Dashboards and Widgets
Parameters appearing between a string containing \\ and any other string would not be correctly detected. This issue has been fixed.
Packages
Uploading a package zip would fail on Windows devices. This issue has been fixed.
Improvement
Storage
Removed some potentially expensive work from the thread that schedules bucket transfers, for cases where the cluster had fallen behind on uploads.
Configuration
Whenever a SAML or OIDC IdP is created or updated, any leading or trailing whitespace will be trimmed from its fields. This is to avoid configuration errors.
Falcon LogScale 1.132.0 GA (2024-04-02)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.132.0 | GA | 2024-04-02 | Cloud | 2025-05-31 | No | 1.106 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly. This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:
If you are explicitly supplying an any argument, either remove both the parameter and the argument, for example change sort(..., type=any) to sort(...), or supply the argument for type that corresponds to your data.
If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, e.g. sort(..., type=hex).
In all other cases, no action is needed.

The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

The following API endpoints are deprecated and marked for removal in 1.148.0:
POST /api/v1/clusterconfig/kafka-queues/partition-assignment
GET /api/v1/clusterconfig/kafka-queues/partition-assignment
POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults
The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enums is now deprecated and will be removed in version 1.136 of LogScale.

We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a Parser, instead of a Parser wrapped in an object.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:
The mutation returns a boolean to represent success or failure, instead of a Parser wrapped in an object.

The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:

The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
sourceCode field, used for updating the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.
tagFields field is renamed to fieldsToTag.
The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.
The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.
The mutation returns a Parser, instead of a Parser wrapped in an object.
The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the Parser type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().

In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.
New features and improvements
Ingestion
Ingest feed scheduling has been changed to be more gradual in ramping up concurrency and will also reduce concurrency in response to failures. This will make high-pressure failing ingest feeds fall back to periodic retries instead of constantly retrying.
For more information, see Ingest Data from AWS S3.
Functions
For Cloud customers: the maximum value of the limit parameter for the tail() and head() functions has been increased to 20,000.

For Self-Hosted solutions: the maximum value of the limit parameter for the tail() and head() functions has been aligned with the StateRowLimit dynamic configuration. This means that the upper value of limit is now adjustable for these two functions.
Other
New metrics ingest-queue-write-offset and ingest-queue-read-offset have been added, reporting the Kafka offsets of the most recently written and read events on the ingest queue.

New metric events-parsed has been added, serving as an indicator of how many input events a parser has been applied to.
Fixed in this release
Security
Various OIDC caching issues have been fixed including ensuring refresh of the JWKS cache once per hour by default.
Improvement
Installation and Deployment
An error log is displayed if the latency on global-events exceeds 150 seconds, to prevent nodes from crashing.
Falcon LogScale 1.131.3 LTS (2024-09-24)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.131.3 | LTS | 2024-09-24 | Cloud | 2025-04-30 | No | 1.106 | No |
TAR Checksum | Value |
---|---|
MD5 | 210df32b488b5a75d9a1ea779dc40e04 |
SHA1 | 0b918f757936770508e55e5f479cad6dc4f72c6b |
SHA256 | 58b9c5744497f965a524aba0f8549b6eee6d6c003b6bd9c24509d2cf70a27382 |
SHA512 | 8af1dbb4a5bfe06cf4f619e1d69176c0bef74aa7d6cbf34b59fc8b85bdaa6e6c0ea236280c8c9a6b4beb109ad131bb526900d35a88c1a984a66d339d09c67493 |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 21 | a1a61becd5ff90e86da3550ec313f6d74d32b77f76b8e354d43cc0f99da882c3 |
humio-core | 21 | 72c441c0a6e8a662ee978764a4922f994cdad462865818dc1688135c65637156 |
kafka | 21 | 483a01f8c46b359456ef99af08687613ff5fe65ab33d98409ff44c2951bdc6ec |
zookeeper | 21 | f811b7648ca568ddf6b7fde65a91664017f08aad7edb7fcca32bef51eda1de69 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.131.3/server-1.131.3.tar.gz
These notes include entries from the following previous releases: 1.131.1, 1.131.2
Bug fixes and updates.
Removed
Items that have been removed as of this release.
GraphQL API
The enabledFeatures() query has been removed from GraphQL schema. Use featureFlags() query instead.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.
The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:
If you are explicitly supplying an `any` argument, either remove both the parameter and the argument (for example, change `sort(..., type=any)` to `sort(...)`) or supply the argument for `type` that corresponds to your data.
If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the `type` parameter to `hex`, e.g. `sort(..., type=hex)`.
In all other cases, no action is needed.
The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.
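The migration above can be sketched as a hedged LogScale query example (the field name `mac` is illustrative):

```logscale
// Before (deprecated): sorting with an explicit any type
// sort(field=mac, type=any)
// After: either drop the type parameter entirely
// sort(field=mac)
// or, when sorting hexadecimal values by their numerical value:
sort(field=mac, type=hex)
```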
In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.
We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:
If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.
The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
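As a hedged sketch of the new mutation's shape, based on the differences listed above (input field names not mentioned in these notes, such as `name`, `event`, and `rawString`, are assumptions; check the GraphQL schema of your LogScale version):

```graphql
mutation {
  createParserV2(input: {
    name: "my-parser"                      # assumed field name
    repositoryName: "my-repo"
    script: "parseJson()"                  # was: sourceCode
    fieldsToTag: ["host"]                  # was: tagFields
    fieldsToBeRemovedBeforeParsing: []
    allowOverwritingExistingParser: false  # was: force
    testCases: [                           # was: testData
      # emulating an old-style string test:
      { event: { rawString: "old test string" } }
    ]
  }) {
    id  # the mutation returns a Parser directly, not wrapped in an object
  }
}
```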
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:
The mutation returns a boolean to represent success or failure, instead of a `Parser` wrapped in an object.
The deprecated testParser mutation is replaced by testParserV2() . The differences between the old and new mutation are:
The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.
The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.
sourceCode field, used to update the parser script, is changed to the script field, which takes an `UpdateParserScriptInput` object. This updates the parser script and the language version together.
tagFields field is renamed to fieldsToTag.
The languageVersion is located inside the `UpdateParserScriptInput` object, and is no longer an enum, but a `LanguageVersionInputType` instead.
The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.
The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the `Parser` type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
For more information, see `Parser`, `DeleteParserInput`, `LanguageVersionInputType`, createParserV2(), testParserV2(), updateParserV2().
In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Security
DNS caches are now invalidated after 60 seconds instead of never. To override this behavior, set the security policy `networkaddress.cache.ttl` in the security manager of the JRE (see Java Networking Properties).
Ingestion
It is no longer possible to delete a parser that is being used in an ingest feed. The parser must first be removed from the ingest feed.
For more information, see Delete an Ingest Feed.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The minimum required LogScale version to upgrade from has been raised to 1.106, in order to remove some workarounds for compatibility with old versions.
New features and improvements
Security
Added support for authorizing with an external JWT from an IdP setup in our cloud environment.
The audience for dynamic OIDC IdPs in our cloud environments is now `logscale-$orgId`, where `$orgId` is the ID of your organization.
Added support for the Okta federated IdP OIDC extension for identity providers set up in cloud.
UI Changes
Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.
Automation and Alerts
Throttling and field-based throttling have been introduced as optional functionalities in Filter Alerts. The minimum throttling period is 1 minute.
The customizable trigger limit for Filter Alerts is removed. The trigger limit is now automatically determined based on the associated actions. If one or more email actions are associated, the trigger limit will be 15, otherwise, the trigger limit will be 100. Any existing customizable trigger limit of 1 will be treated as a throttling period of 1 minute, all other custom trigger limits will be ignored. This is a non-backwards compatible change to the GraphQL APIs for Filter Alerts, so any automation for these alerts must be updated.
GraphQL API
The new scopeClaim input argument has been added to `OidcConfigurationInput` and `UpdateOidcConfigurationInput` for dynamic OIDC configurations in our clouds. If the IdP is dynamic, we will try to grab the scope claim based on the value given as an input to either the newOIDCIdentityProvider() or updateOIDCIdentityProvider() mutations. It will fall back to the cluster configuration.
Configuration
The new dynamic configuration `MaxOpenSegmentsOnWorker` is implemented to control a hard cap on open segment files for the scheduler. The scheduler should in most cases not reach this limit, and it only acts as a backstop. Therefore, we recommend that administrators do not modify this setting unless advised to do so by CrowdStrike Support.
Authorization attempted via JWT tokens will now only try to grab user information from the user info endpoint if the scope in the access token contains any of the following: `profile`, `email`, `openid`. If no such scope is located in the token, LogScale will try to extract the username from the token and no other user details will be added. We will extract the scope claim based on the new environment variable `OIDC_SCOPE_CLAIM`, whose default is `scope`.
Ingestion
New parser APIs have been introduced for more extensive parser testing. In the API, parser test cases now have a new structure.
For more information, see createParserV2(), `DeleteParserInput`, testParserV2(), updateParserV2(), and `Parser`.
Ingest feeds can read from an AWS SQS queue that has been populated with AWS SNS subscription events.
For more information, see Ingest Data from AWS S3.
Queries
Queries are now allowed to be queued for start by the query coordinator for a maximum of 10 minutes.
For more information, see Query Coordination.
Functions
The `parseTimestamp()` function is now able to parse timestamps with nanosecond precision.
The `setField()` query function is introduced. It takes two expressions, `target` and `value`, and sets the field named by the result of the `target` expression to the result of the `value` expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime. For more information, see `setField()`.
The `getField()` query function is introduced. It takes an expression, `source`, and sets the field defined by `as` to the result of the `source` expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime. For more information, see `getField()`.
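A hedged illustration of the two new functions based on the descriptions above (field names are illustrative, and exact syntax may vary by version):

```logscale
// Set a field whose name is computed at runtime:
keyName := "status_code"
| setField(target=keyName, value="404")
// Read a field whose name is only known at runtime:
| code := getField(keyName)
```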
Other
The split by AWS record setting within ingest feeds will now accept numbers with leading zeros.
The `missing-cluster-nodes` metric will now track the nodes that are missing heartbeat data, in addition to the nodes that have outdated heartbeat data. The new `missing-cluster-nodes-stateful` metric will track the registered nodes with outdated or missing heartbeat data that can write to global. For more information, see Node-Level Metrics.
The default IP filter for IdP and RDNS operations is now more restrictive: RDNS now defaults to denying lookups of reserved IP ranges, and the filter has been updated to deny additional reserved IP ranges, as specified by the IANA. Self-hosted administrators can specify their own filters by using the environment variables `IP_FILTER_IDP`, `IP_FILTER_RDNS`, and `IP_FILTER_RDNS_SERVER`, respectively.
Fixed in this release
UI Changes
Field aliases could not be read on the sandbox repository. This issue is now fixed.
CSV files produced by LogScale for sending as attachments from email actions or uploaded through a LogScale Repository action could contain values where part of the text was duplicated. This would only happen for values that needed to be quoted. This issue is now fixed.
Automation and Alerts
Filter Alerts with field-based throttling could trigger on two events with the same value for the throttle field, if actions were slow. This issue is now fixed.
Dashboards and Widgets
Shared dashboards created on the special humio-search-all view wouldn't load correctly. This issue has now been fixed.
A dashboard with fixed shared time as default would not update correctly when selecting a new relative time. This issue is now fixed.
Ingestion
Fixed an issue that prevented the creation of Netflow/UDP protocol ingest listeners.
Cloning a parser from the UI would not clone the fields to be removed before parsing. This issue is now fixed.
Queries
Multiple clients might trigger concurrent computation of the result step for a shared query. This issue has been fixed: now only one pending computation is allowed at a time.
Functions
Live queries using Field Aliasing on a repository with Tag Groupings enabled could fail. This issue has now been fixed.
Other
An issue with the IOC Configuration causing the local database to update too often has now been fixed.
Packages
Uploading a package zip would fail on Windows devices. This issue has been fixed.
Updating a package could fail, if one of the assets from the package had been deleted from the view where the package was installed. This issue has been fixed.
When attempting to upload a package disguised as a folder, some browsers would show a generic error message. To fix this issue, only zip files are now accepted.
Early Access
Functions
A new query function `readFile()` is released in Early Access. It allows using a CSV Lookup File as data input for a query. For more information, see `readFile()`.
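A hedged example of using the new function as a query's data input (the file name is illustrative):

```logscale
// Read a CSV Lookup File as events, then aggregate over them:
readFile("hosts.csv")
| count()
```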
Improvement
Storage
Moved the work of creating a global snapshot for upload to bucket storage from the thread coordinating segment uploads/downloads to a separate thread. This improves the reliability of uploading and downloading the global snapshot to/from bucket storage.
`SegmentChangesJobTrigger` has been disabled on nodes configured to not be able to store segments, thus saving some CPU time.
Configuration
The default value for `AUTOSHARDING_MAX` has changed from 128 to 1,024.
The default maximum limit for `groupBy()` has been increased from 200,000 to 1,000,000, meaning that this function can now be asked to collect up to a million groups. However, due to stability concerns it will not allow `groupBy()` to return the full million rows as a result when this function is the last aggregator: this is governed by the `QueryResultRowCountLimit` dynamic configuration, which remains unchanged. Therefore, this new limit is best utilized when `groupBy()` is used as a computational tool for creating groups that are then aggressively filtered and/or aggregated down in size. If you experience resource strain or starvation on your cluster, you can reduce the maximum limit via the `GroupMaxLimit` dynamic configuration.
The default value for `AUTOSHARDING_TRIGGER_DELAY_MS` has changed from 1 hour to 4 hours.
The default memory limit for the query coordinator node has been increased from 400 MB to 4 GB. This new limit allows each query to use up to 1 GB of memory and thus produce more results, at the cost of taking up more resources. This in turn indirectly limits the number of concurrent queries, as the query scheduler may choose not to run a given query before existing queries have completed. If you experience resource strain or starvation on your cluster, you can reduce the memory limit by setting the `QueryCoordinatorMemoryLimit` dynamic configuration to 400,000,000.
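The recommended pattern above — using `groupBy()` to create many groups and then aggressively reducing them before the final aggregator — can be sketched as (field names are illustrative):

```logscale
// Collect potentially many groups as an intermediate step...
groupBy(field=client_ip, function=count())
// ...then filter and aggregate the groups down before the final result:
| _count > 100
| count()
```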
Functions
Live queries now restart and run with the updated version of a saved query when the saved query changes.
For more information, see User Functions (Saved Searches).
Memory requirements have been reduced when processing empty arrays in functions that accept them.
Other
Improved handling of segments being replaced due to either merging or event redaction, to address rare cases of event duplication when segments are replaced multiple times shortly after each other.
Falcon LogScale 1.131.2 LTS (2024-05-14)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.131.2 | LTS | 2024-05-14 | Cloud | 2025-04-30 | No | 1.106 | No |
TAR Checksum | Value |
---|---|
MD5 | e35c49087dcc810d4b2a636f116e3aff |
SHA1 | cb77fc4309aa57d22903eb33647e4b909eb428fe |
SHA256 | f38d06a4ad1a3cffb65af1f935c737d8b9e4568debf8df1624bf86365f5c31ee |
SHA512 | 79ff9cf7e16753537e551eb486003c45f88e1a20adf99ee6e977f1dfd534bf889c8be74e9f4ef6f9624733b82f19b7772b123f454668871601756c7aa1c0fe35 |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 21 | 183d1bb7618a662c06febe4257d320123e8a916f18c0db9806d37317093af540 |
humio-core | 21 | 452d98fd6cf99a0404ee7f7d81898a67276afa63312aba9cf2a53f5bea36a383 |
kafka | 21 | 87c87d012ec4f461ec64764b4c01e07bb1b17357f74fc8fcdb3839792b421605 |
zookeeper | 21 | 4ec1a15874dc1b4065cd18359c3f419b337d4796b0718a6b4827af329a72a719 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.131.2/server-1.131.2.tar.gz
These notes include entries from the following previous releases: 1.131.1
Bug fixes and updates.
Removed
Items that have been removed as of this release.
GraphQL API
The enabledFeatures() query has been removed from GraphQL schema. Use featureFlags() query instead.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.
The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:
If you are explicitly supplying an `any` argument, either remove both the parameter and the argument (for example, change `sort(..., type=any)` to `sort(...)`) or supply the argument for `type` that corresponds to your data.
If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the `type` parameter to `hex`, e.g. `sort(..., type=hex)`.
In all other cases, no action is needed.
The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.
In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.
We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:
If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.
The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:
The mutation returns a boolean to represent success or failure, instead of a `Parser` wrapped in an object.
The deprecated testParser mutation is replaced by testParserV2() . The differences between the old and new mutation are:
The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.
The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.
sourceCode field, used to update the parser script, is changed to the script field, which takes an `UpdateParserScriptInput` object. This updates the parser script and the language version together.
tagFields field is renamed to fieldsToTag.
The languageVersion is located inside the `UpdateParserScriptInput` object, and is no longer an enum, but a `LanguageVersionInputType` instead.
The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.
The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the `Parser` type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
For more information, see `Parser`, `DeleteParserInput`, `LanguageVersionInputType`, createParserV2(), testParserV2(), updateParserV2().
In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Security
DNS caches are now invalidated after 60 seconds instead of never. To override this behavior, set the security policy `networkaddress.cache.ttl` in the security manager of the JRE (see Java Networking Properties).
Ingestion
It is no longer possible to delete a parser that is being used in an ingest feed. The parser must first be removed from the ingest feed.
For more information, see Delete an Ingest Feed.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The minimum required LogScale version to upgrade from has been raised to 1.106, in order to remove some workarounds for compatibility with old versions.
New features and improvements
Security
Added support for authorizing with an external JWT from an IdP setup in our cloud environment.
The audience for dynamic OIDC IdPs in our cloud environments is now `logscale-$orgId`, where `$orgId` is the ID of your organization.
Added support for the Okta federated IdP OIDC extension for identity providers set up in cloud.
UI Changes
Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.
Automation and Alerts
Throttling and field-based throttling have been introduced as optional functionalities in Filter Alerts. The minimum throttling period is 1 minute.
The customizable trigger limit for Filter Alerts is removed. The trigger limit is now automatically determined based on the associated actions. If one or more email actions are associated, the trigger limit will be 15, otherwise, the trigger limit will be 100. Any existing customizable trigger limit of 1 will be treated as a throttling period of 1 minute, all other custom trigger limits will be ignored. This is a non-backwards compatible change to the GraphQL APIs for Filter Alerts, so any automation for these alerts must be updated.
GraphQL API
The new scopeClaim input argument has been added to `OidcConfigurationInput` and `UpdateOidcConfigurationInput` for dynamic OIDC configurations in our clouds. If the IdP is dynamic, we will try to grab the scope claim based on the value given as an input to either the newOIDCIdentityProvider() or updateOIDCIdentityProvider() mutations. It will fall back to the cluster configuration.
Configuration
The new dynamic configuration `MaxOpenSegmentsOnWorker` is implemented to control a hard cap on open segment files for the scheduler. The scheduler should in most cases not reach this limit, and it only acts as a backstop. Therefore, we recommend that administrators do not modify this setting unless advised to do so by CrowdStrike Support.
Authorization attempted via JWT tokens will now only try to grab user information from the user info endpoint if the scope in the access token contains any of the following: `profile`, `email`, `openid`. If no such scope is located in the token, LogScale will try to extract the username from the token and no other user details will be added. We will extract the scope claim based on the new environment variable `OIDC_SCOPE_CLAIM`, whose default is `scope`.
Ingestion
New parser APIs have been introduced for more extensive parser testing. In the API, parser test cases now have a new structure.
For more information, see createParserV2(), `DeleteParserInput`, testParserV2(), updateParserV2(), and `Parser`.
Ingest feeds can read from an AWS SQS queue that has been populated with AWS SNS subscription events.
For more information, see Ingest Data from AWS S3.
Queries
Queries are now allowed to be queued for start by the query coordinator for a maximum of 10 minutes.
For more information, see Query Coordination.
Functions
The `parseTimestamp()` function is now able to parse timestamps with nanosecond precision.
The `setField()` query function is introduced. It takes two expressions, `target` and `value`, and sets the field named by the result of the `target` expression to the result of the `value` expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime. For more information, see `setField()`.
The `getField()` query function is introduced. It takes an expression, `source`, and sets the field defined by `as` to the result of the `source` expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime. For more information, see `getField()`.
Other
The split by AWS record setting within ingest feeds will now accept numbers with leading zeros.
The `missing-cluster-nodes` metric will now track the nodes that are missing heartbeat data, in addition to the nodes that have outdated heartbeat data. The new `missing-cluster-nodes-stateful` metric will track the registered nodes with outdated or missing heartbeat data that can write to global. For more information, see Node-Level Metrics.
The default IP filter for IdP and RDNS operations is now more restrictive: RDNS now defaults to denying lookups of reserved IP ranges, and the filter has been updated to deny additional reserved IP ranges, as specified by the IANA. Self-hosted administrators can specify their own filters by using the environment variables `IP_FILTER_IDP`, `IP_FILTER_RDNS`, and `IP_FILTER_RDNS_SERVER`, respectively.
Fixed in this release
UI Changes
Field aliases could not be read on the sandbox repository. This issue is now fixed.
CSV files produced by LogScale for sending as attachments from email actions or uploaded through a LogScale Repository action could contain values where part of the text was duplicated. This would only happen for values that needed to be quoted. This issue is now fixed.
Automation and Alerts
Filter Alerts with field-based throttling could trigger on two events with the same value for the throttle field, if actions were slow. This issue is now fixed.
Dashboards and Widgets
A dashboard with fixed shared time as default would not update correctly when selecting a new relative time. This issue is now fixed.
Ingestion
Fixed an issue that prevented the creation of Netflow/UDP protocol ingest listeners.
Cloning a parser from the UI would not clone the fields to be removed before parsing. This issue is now fixed.
Queries
Multiple clients might trigger concurrent computation of the result step for a shared query. This issue has been fixed: now only one pending computation is allowed at a time.
Other
An issue with the IOC Configuration causing the local database to update too often has now been fixed.
Packages
Uploading a package zip would fail on Windows devices. This issue has been fixed.
Updating a package could fail, if one of the assets from the package had been deleted from the view where the package was installed. This issue has been fixed.
When attempting to upload a package disguised as a folder, some browsers would show a generic error message. To fix this issue, only zip files are now accepted.
Early Access
Functions
A new query function `readFile()` is released in Early Access. It allows using a CSV Lookup File as data input for a query. For more information, see `readFile()`.
Improvement
Storage
Moved the work of creating a global snapshot for upload to bucket storage from the thread coordinating segment uploads/downloads to a separate thread. This improves the reliability of uploading and downloading the global snapshot to/from bucket storage.
`SegmentChangesJobTrigger` has been disabled on nodes configured to not be able to store segments, thus saving some CPU time.
Configuration
The default value for `AUTOSHARDING_MAX` has changed from 128 to 1,024.

The default maximum limit for `groupBy()` has been increased from 200,000 to 1,000,000, meaning that this function can now be asked to collect up to a million groups. However, due to stability concerns it will not allow `groupBy()` to return the full million rows as a result when this function is the last aggregator: this is governed by the `QueryResultRowCountLimit` dynamic configuration, which remains unchanged. Therefore, this new limit is best utilized when `groupBy()` is used as a computational tool for creating groups that are then later aggressively filtered and/or aggregated down in size. If you experience resource strain or starvation on your cluster, you can reduce the maximum limit via the `GroupMaxLimit` dynamic configuration.

The default value for `AUTOSHARDING_TRIGGER_DELAY_MS` has changed from 1 hour to 4 hours.

The default memory limit for the query coordinator node has been increased from 400 MB to 4 GB. This new limit allows each query to use up to 1 GB of memory and thus produce more results, at the cost of taking up more resources. This in turn indirectly limits the number of concurrent queries, as the query scheduler may choose not to run a given query before existing queries have completed. If you experience resource strain or starvation on your cluster, you can reduce the memory limit by setting the `QueryCoordinatorMemoryLimit` dynamic configuration to 400,000,000.
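As a sketch, dynamic configuration values such as these can be changed through the GraphQL API. The `setDynamicConfig` mutation shown here is an assumption based on LogScale's dynamic configuration API; verify the exact input shape against your cluster's schema before use:

```graphql
# Reduce the query coordinator memory limit back to the previous
# default of 400 MB, as described above.
mutation {
  setDynamicConfig(input: {
    config: QueryCoordinatorMemoryLimit,
    value: "400000000"
  })
}
```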
Functions
Live queries now restart and run with the updated version of a saved query when the saved query changes.
For more information, see User Functions (Saved Searches).
Memory requirements have been reduced when processing empty arrays in functions that accept them.
Other
Improved handling of segments being replaced due to either merging or event redaction, to address rare cases of event duplication when segments are replaced multiple times shortly after each other.
Falcon LogScale 1.131.1 LTS (2024-04-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.131.1 | LTS | 2024-04-17 | Cloud | 2025-04-30 | No | 1.106 | No |
TAR Checksum | Value |
---|---|
MD5 | 4a9223ff7d628a52257783b70b084726 |
SHA1 | 3666c2ac1eea45e07ea9a89f0c16eafffebc1e01 |
SHA256 | 5eb83a4ee2c9a8792f1ac1ec9ddad9282a5e9e98d523a77556762eded9fd50ad |
SHA512 | 86000582f6b4134f85943ae2385b0b17113f241f988864c9113f2df639f4a2f97a6eba69edb305ec57e2e0db53578a79fb7f54aa15b9acd909092d8cc88f1438 |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 21 | adcf2fea3d8f9c10b764a73577959eeb5c58cdb2955e69846b26effc5758e0b9 |
humio-core | 21 | 2985c7ec6bde2f3c8904f71d238e7fdd70547c9d71488aea997acb89cf2d15ec |
kafka | 21 | 262c7e74062a32cecee9119836752ee6310662d570f80926e7dd36dcb785d380 |
zookeeper | 21 | b9b0349704cc996701c65cf713c1584c0b5db7f70cb00d53bf1051c50e0e99ab |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.131.1/server-1.131.1.tar.gz
Bug fixes and updates.
Removed
Items that have been removed as of this release.
GraphQL API
The enabledFeatures() query has been removed from the GraphQL schema. Use the featureFlags() query instead.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:

If you are explicitly supplying an `any` argument, either simply remove both the parameter and the argument (for example, change `sort(..., type=any)` to `sort(...)`), or supply the argument for `type` that corresponds to your data.

If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the `type` parameter to `hex`, for example `sort(..., type=hex)`.

In all other cases, no action is needed. The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.

In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

The mutation returns a boolean to represent success or failure, instead of a `Parser` wrapped in an object.

The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:
The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
The testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

The sourceCode field, used to update the parser script, is changed to the script field, which takes an `UpdateParserScriptInput` object. This updates the parser script and the language version together.

The tagFields field is renamed to fieldsToTag.

The languageVersion is located inside the `UpdateParserScriptInput` object, and is no longer an enum, but a LanguageVersionInputType instead.

The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the `Parser` type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
For more information, see `Parser`, `DeleteParserInput`, `LanguageVersionInputType`, createParserV2(), testParserV2(), updateParserV2().

In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
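As a hedged sketch of the replacement API, a createParserV2() call using the renamed fields described above might look as follows. The `name` and `repositoryName` values are hypothetical, and the exact input shape should be verified against the published GraphQL schema:

```graphql
mutation {
  createParserV2(input: {
    name: "example-parser"                 # hypothetical parser name
    repositoryName: "example-repo"         # hypothetical repository
    script: "parseJson()"                  # formerly the sourceCode field
    fieldsToTag: ["host"]                  # formerly the tagFields field
    allowOverwritingExistingParser: false  # formerly the force field
    fieldsToBeRemovedBeforeParsing: []
    testCases: [                           # replaces the old testData field
      { event: { rawString: "{\"host\":\"web-1\"}" } }
    ]
  }) {
    id
  }
}
```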
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Security
DNS caches are now invalidated after 60 seconds instead of never. To override this behavior, set the security policy `networkaddress.cache.ttl` in the security manager of the JRE (see Java Networking Properties).

Ingestion
It is no longer possible to delete a parser that is being used in an ingest feed. The parser must first be removed from the ingest feed.
For more information, see Delete an Ingest Feed.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The minimum required LogScale version to upgrade from has been raised to 1.106, in order to remove some workarounds for compatibility with old versions.
New features and improvements
Security
Added support for authorizing with an external JWT from an IdP set up in our cloud environment.
The audience for dynamic OIDC IdPs in our cloud environments is now `logscale-$orgId`, where `$orgId` is the ID of your organization.

Added support for the Okta federated IdP OIDC extension for identity providers set up in cloud.
Automation and Alerts
Throttling and field-based throttling have been introduced as optional functionalities in Filter Alerts. The minimum throttling period is 1 minute.
The customizable trigger limit for Filter Alerts is removed. The trigger limit is now automatically determined based on the associated actions. If one or more email actions are associated, the trigger limit will be 15, otherwise, the trigger limit will be 100. Any existing customizable trigger limit of 1 will be treated as a throttling period of 1 minute, all other custom trigger limits will be ignored. This is a non-backwards compatible change to the GraphQL APIs for Filter Alerts, so any automation for these alerts must be updated.
GraphQL API
The new scopeClaim input argument has been added to `OidcConfigurationInput` and `UpdateOidcConfigurationInput` for dynamic OIDC configurations in our clouds. If the IdP is dynamic, the scope claim is read based on the value given as input to either the newOIDCIdentityProvider() or updateOIDCIdentityProvider() mutations, falling back to the cluster configuration.
Configuration
The new dynamic configuration `MaxOpenSegmentsOnWorker` is implemented to control a hard cap on open segment files for the scheduler. The scheduler should in most cases not reach this limit; it only acts as a backstop. Therefore, we recommend that administrators do not modify this setting unless advised to do so by CrowdStrike Support.

Authorization attempted via JWT tokens will now only try to grab user information from the user info endpoint if the scope in the access token contains any of the following: `profile`, `email`, `openid`. If no such scope is found in the token, LogScale will try to extract the username from the token and no other user details will be added. The scope claim is extracted based on the new environment variable `OIDC_SCOPE_CLAIM`, whose default is `scope`.
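For example, a node's environment could override the claim name like this (a sketch; `scp` is a hypothetical claim name used by some IdPs):

```ini
# Read the OIDC scope from the "scp" claim instead of the default "scope".
OIDC_SCOPE_CLAIM=scp
```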
Ingestion
New parser APIs have been introduced for more extensive parser testing. In the API, parser test cases now have a new structure.
For more information, see createParserV2(), `DeleteParserInput`, testParserV2(), updateParserV2(), and `Parser`.

Ingest feeds can read from an AWS SQS queue that has been populated with AWS SNS subscription events.
For more information, see Ingest Data from AWS S3.
Queries
Queries are now allowed to be queued for start by the query coordinator for a maximum of 10 minutes.
For more information, see Query Coordination.
Functions
The `parseTimestamp()` function is now able to parse timestamps with nanosecond precision.

The `setField()` query function is introduced. It takes two expressions, `target` and `value`, and sets the field named by the result of the `target` expression to the result of the `value` expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime. For more information, see `setField()`.

The `getField()` query function is introduced. It takes an expression, `source`, and sets the field defined by `as` to the result of the `source` expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime. For more information, see `getField()`.
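A brief sketch of the two functions in a query (the field names are hypothetical; see the function reference pages for exact parameter details):

```logscale
// Set the field named by the value of `nameField` to the value of `valueField`:
| setField(target=nameField, value=valueField)
// Read the field whose name is held in `nameField` into `result`:
| getField(nameField, as=result)
```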
Other
The split by AWS record setting within ingest feeds will now accept numbers with leading zeros.
The `missing-cluster-nodes` metric will now track the nodes that are missing heartbeat data, in addition to the nodes that have outdated heartbeat data. The new `missing-cluster-nodes-stateful` metric will track the registered nodes with outdated or missing heartbeat data that can write to global.

For more information, see Node-Level Metrics.
The default IP filter for IdP and RDNS operations is now more restrictive: RDNS now defaults to denying lookups of reserved IP ranges, and the filter has been updated to deny additional reserved IP ranges, as specified by the IANA. Self-hosted administrators can specify their own filters by using the environment variables `IP_FILTER_IDP`, `IP_FILTER_RDNS`, and `IP_FILTER_RDNS_SERVER`, respectively.
Fixed in this release
UI Changes
Field aliases could not be read on the sandbox repository. This issue is now fixed.
CSV files produced by LogScale for sending as attachments from email actions or uploaded through a LogScale Repository action could contain values where part of the text was duplicated. This would only happen for values that needed to be quoted. This issue is now fixed.
Automation and Alerts
Filter Alerts with field-based throttling could trigger on two events with the same value for the throttle field, if actions were slow. This issue is now fixed.
Dashboards and Widgets
A dashboard with fixed shared time as default would not update correctly when selecting a new relative time. This issue is now fixed.
Ingestion
Fixed an issue that prevented the creation of Netflow/UDP protocol ingest listeners.
Cloning a parser from the UI would not clone the fields to be removed before parsing. This issue is now fixed.
Queries
Multiple clients might trigger concurrent computation of the result step for a shared query. This issue has been fixed: now only one pending computation is allowed at a time.
Other
An issue with the IOC Configuration causing the local database to update too often has now been fixed.
Packages
Updating a package could fail, if one of the assets from the package had been deleted from the view where the package was installed. This issue has been fixed.
When attempting to upload a package disguised as a folder, some browsers would show a generic error message. To fix this issue, only zip files are now accepted.
Early Access
Functions
A new query function, `readFile()`, is released in Early Access. It allows using a CSV Lookup File as data input for a query.

For more information, see `readFile()`.
Improvement
Storage
Moved the work of creating a global snapshot for upload to bucket storage from the thread coordinating segment uploads/downloads to a separate thread. This improves the reliability of uploading and downloading the global snapshot to/from bucket storage.
`SegmentChangesJobTrigger` has been disabled on nodes configured to not be able to store segments, thus saving some CPU time.
Configuration
The default value for `AUTOSHARDING_MAX` has changed from 128 to 1,024.

The default maximum limit for `groupBy()` has been increased from 200,000 to 1,000,000, meaning that this function can now be asked to collect up to a million groups. However, due to stability concerns it will not allow `groupBy()` to return the full million rows as a result when this function is the last aggregator: this is governed by the `QueryResultRowCountLimit` dynamic configuration, which remains unchanged. Therefore, this new limit is best utilized when `groupBy()` is used as a computational tool for creating groups that are then later aggressively filtered and/or aggregated down in size. If you experience resource strain or starvation on your cluster, you can reduce the maximum limit via the `GroupMaxLimit` dynamic configuration.

The default value for `AUTOSHARDING_TRIGGER_DELAY_MS` has changed from 1 hour to 4 hours.

The default memory limit for the query coordinator node has been increased from 400 MB to 4 GB. This new limit allows each query to use up to 1 GB of memory and thus produce more results, at the cost of taking up more resources. This in turn indirectly limits the number of concurrent queries, as the query scheduler may choose not to run a given query before existing queries have completed. If you experience resource strain or starvation on your cluster, you can reduce the memory limit by setting the `QueryCoordinatorMemoryLimit` dynamic configuration to 400,000,000.
Functions
Live queries now restart and run with the updated version of a saved query when the saved query changes.
For more information, see User Functions (Saved Searches).
Memory requirements have been reduced when processing empty arrays in functions that accept them.
Other
Improved handling of segments being replaced due to either merging or event redaction, to address rare cases of event duplication when segments are replaced multiple times shortly after each other.
Falcon LogScale 1.131.0 GA (2024-03-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.131.0 | GA | 2024-03-26 | Cloud | 2025-04-30 | No | 1.106 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:

If you are explicitly supplying an `any` argument, either simply remove both the parameter and the argument (for example, change `sort(..., type=any)` to `sort(...)`), or supply the argument for `type` that corresponds to your data.

If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the `type` parameter to `hex`, for example `sort(..., type=hex)`.

In all other cases, no action is needed. The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.

In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

The mutation returns a boolean to represent success or failure, instead of a `Parser` wrapped in an object.

The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:
The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
The testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

The sourceCode field, used to update the parser script, is changed to the script field, which takes an `UpdateParserScriptInput` object. This updates the parser script and the language version together.

The tagFields field is renamed to fieldsToTag.

The languageVersion is located inside the `UpdateParserScriptInput` object, and is no longer an enum, but a LanguageVersionInputType instead.

The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the `Parser` type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
For more information, see `Parser`, `DeleteParserInput`, `LanguageVersionInputType`, createParserV2(), testParserV2(), updateParserV2().

In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Storage
We've removed a throttling behavior that prevented background merges of mini-segments from running when digest load is high. Such throttling can cause `global` in the LogScale cluster to grow over time if the digest load isn't transient, which is undesirable.

Moving mini-segments to the digest leader in cases where it is not necessary is now avoided. This new behavior reduces `global` traffic on digest reassignment.

Registering local segment files is skipped on nodes that are configured to not have storage via their node role.

When booting a node, LogScale now waits until it has caught up to the top of `global` before publishing the start message. This should help avoid global publish timeouts on boot when `global` has a lot of traffic.
New features and improvements
UI Changes
The parser test window width can now be resized.
Other
The `metrics` endpoint for the scheduled report render node has been updated to output the Prometheus text-based format.
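For reference, Prometheus text-based exposition output has this general shape (the metric name below is purely illustrative, not an actual LogScale metric):

```
# HELP example_render_jobs_total Total report render jobs handled (illustrative).
# TYPE example_render_jobs_total counter
example_render_jobs_total 42
```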
Fixed in this release
UI Changes
Duplicate HTML escaping has been removed to prevent recursive field references from having double-escaped formatting in emails.
Storage
We've fixed a rarely hit error in the query scheduler causing a `ClassCastException` for `scala.runtime.Nothing.`.
Functions
The `join()` function has been fixed, as warnings from the sub-query would not propagate to the main-query result.

Serialization of very large query states would crash nodes by requesting an array larger than what the JVM can allocate. This issue has been fixed.
Early Access
Functions
The `readFile()` function no longer supports JSON files. The function is currently labeled as Early Access.
Improvement
Storage
Concurrency for segment merging is improved, thus avoiding some unnecessary and inefficient pauses in execution.
We've switched to running the `RetentionJob` in a separate thread from `DataSyncJob`. This should enable more consistent merging.

The `RetentionJob` work is now divided among nodes such that there's no overlap. This reduces traffic in `global`.

An internal limit on use of off-heap memory has been adjusted to allow more threads to perform segment merging in parallel.
Functions
Some performance improvements have been made to the `join()` function, allowing it to skip blocks that do not contain the specified fields of the main query and sub-query.
Falcon LogScale 1.130.0 GA (2024-03-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.130.0 | GA | 2024-03-19 | Cloud | 2025-04-30 | No | 1.106 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We aim to stop publishing the `jar` distribution of LogScale (e.g. `server-1.117.jar`) as of LogScale version 1.130.0. Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the `tar` artifact, and not the `jar` artifact.

A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:

If you are explicitly supplying an `any` argument, either simply remove both the parameter and the argument (for example, change `sort(..., type=any)` to `sort(...)`), or supply the argument for `type` that corresponds to your data.

If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the `type` parameter to `hex`, for example `sort(..., type=hex)`.

In all other cases, no action is needed. The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.

The `humio` Docker image is deprecated in favor of `humio-core`. `humio` is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the `humio` Docker image will be in version 1.130.0. The new `humio-single-node-demo` image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

For more information, see Installing Using Containers.

In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.

The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

The mutation returns a boolean to represent success or failure, instead of a `Parser` wrapped in an object.

The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:
The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and they will behave the same as before.

The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2(), where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

- The testData input field is replaced by testCases, which can contain more data than the old tests could, including assertions on the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and it will behave the same as before.
- The sourceCode field, used to update the parser script, is changed to the script field, which takes an `UpdateParserScriptInput` object. This updates the parser script and the language version together.
- The tagFields field is renamed to fieldsToTag.
- The languageVersion is located inside the `UpdateParserScriptInput` object, and is no longer an enum, but a LanguageVersionInputType instead.
- The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.
- The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.
- The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases; this is fixed in the new one.
- The mutation fails when a parser has more than 2,000 test cases, or when the test input in a single test case exceeds 40,000 characters.
On the `Parser` type:

- The testData field is deprecated and replaced by testCases.
- The sourceCode field is deprecated and replaced by script.
- The tagFields field is deprecated and replaced by fieldsToTag.

For more information, see `Parser`, `DeleteParserInput`, `LanguageVersionInputType`, createParserV2(), testParserV2(), updateParserV2().

In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
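As a hedged illustration of the renames above, a createParserV2() call might look as follows. The input shape shown here is an assumption pieced together from the field names listed in these notes (in particular, the name, repositoryName, and test-case sub-fields are guesses); consult the GraphQL schema for the authoritative input types.

```graphql
mutation {
  createParserV2(input: {
    name: "my-parser"                      # assumed field name
    repositoryName: "my-repo"              # assumed field name
    script: "parseJson()"                  # formerly sourceCode
    fieldsToTag: ["host"]                  # formerly tagFields
    allowOverwritingExistingParser: true   # formerly force
    fieldsToBeRemovedBeforeParsing: []
    testCases: [                           # formerly testData
      # each old test string now lives in a ParserTestEventInput
      # inside a ParserTestCaseInput
      { event: { rawString: "{\"host\":\"web-1\"}" } }
    ]
  }) {
    # the mutation now returns a Parser directly, not wrapped in an object
    id
    name
  }
}
```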
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Security
DNS caches are now invalidated after 60 seconds instead of never. To override this behavior, set the security policy `networkaddress.cache.ttl` in the security manager of the JRE (see Java Networking Properties).
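For example, to pin a different TTL, the property can be set in the JRE's `java.security` file (a sketch; the exact file path varies by Java distribution):

```
# conf/security/java.security
# cache successful DNS lookups for 5 minutes; a value of -1 caches forever
networkaddress.cache.ttl=300
```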
New features and improvements
Functions
The `parseTimestamp()` function is now able to parse timestamps with nanosecond precision.
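As an illustrative sketch (the field name `ts` is assumed), a nanosecond-precision timestamp can be parsed with a format string carrying a nine-digit fractional second:

```
// parse e.g. 2024-12-17T12:00:00.123456789Z from a field named "ts"
parseTimestamp("yyyy-MM-dd'T'HH:mm:ss.SSSSSSSSSXXX", field=ts)
```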
Fixed in this release
Automation and Alerts
Filter Alerts with field-based throttling could trigger on two events with the same value for the throttle field, if actions were slow. This issue is now fixed.
Dashboards and Widgets
A dashboard with fixed shared time as default would not update correctly when selecting a new relative time. This issue is now fixed.
Early Access
Functions
A new query function `readFile()` is released in Early Access. It allows using a CSV Lookup File as data input for a query.

For more information, see `readFile()`.
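A minimal sketch, assuming a lookup file named `hosts.csv` has been uploaded to the repository:

```
// use the lookup file itself as the query's input, then aggregate over it
readFile("hosts.csv")
| count()
```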
Improvement
Storage
Moved the work of creating a global snapshot for upload to bucket storage from the thread coordinating segment uploads/downloads to a separate thread. This improves the reliability of uploading and downloading the global snapshot to/from bucket storage.
Functions
Reduced the memory required by functions that accept empty arrays when processing them.
Falcon LogScale 1.129.0 GA (2024-03-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.129.0 | GA | 2024-03-12 | Cloud | 2025-04-30 | No | 1.106 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We aim to stop publishing the `jar` distribution of LogScale (e.g. `server-1.117.jar`) as of LogScale version 1.130.0. Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the `tar` artifact, and not the `jar` artifact.

A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Removed
Items that have been removed as of this release.
GraphQL API
The enabledFeatures() query has been removed from the GraphQL schema. Use the featureFlags() query instead.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:

- If you are explicitly supplying an `any` argument, please either remove both the parameter and the argument, for example change `sort(..., type=any)` to `sort(...)`, or supply the argument for `type` that corresponds to your data.
- If you are sorting hexadecimal values by their equivalent numerical values, please change the argument of the `type` parameter to `hex`, e.g. `sort(..., type=hex)`.

In all other cases, no action is needed. The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.
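The migration above can be sketched as follows (the field name `code` is illustrative):

```
// Deprecated:
sort(code, type=any)
// If you were relying on hexadecimal values sorting by numeric value:
sort(code, type=hex)
// Otherwise, simply drop the argument:
sort(code)
```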
The `humio` Docker image is deprecated in favor of `humio-core`. `humio` is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the `humio` Docker image will be in version 1.130.0. The new `humio-single-node-demo` image is an all-in-one container suitable for quick and easy demonstration setups, but it is entirely unsupported for production use.

For more information, see Installing Using Containers.

In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:

- The testData input field is replaced by testCases, which can contain more data than the old tests could, including assertions on the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and it will behave the same as before.
- fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
- The force field is renamed to allowOverwritingExistingParser.
- The sourceCode field is renamed to script.
- The tagFields field is renamed to fieldsToTag.
- languageVersion is no longer an enum, but a LanguageVersionInputType instead.
- The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.
- The mutation fails when a parser has more than 2,000 test cases, or when the test input in a single test case exceeds 40,000 characters.
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

- The mutation returns a boolean to represent success or failure, instead of a `Parser` wrapped in an object.

The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:

- The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and it will behave the same as before.
- The new test cases can contain assertions about the contents of the output.
- The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
- The mutation now accepts both a language version and a list of fields to be removed before parsing.
- The parserScript field is renamed to script.
- The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2(), where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

- The testData input field is replaced by testCases, which can contain more data than the old tests could, including assertions on the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the `ParserTestEventInput` inside the `ParserTestCaseInput`, and it will behave the same as before.
- The sourceCode field, used to update the parser script, is changed to the script field, which takes an `UpdateParserScriptInput` object. This updates the parser script and the language version together.
- The tagFields field is renamed to fieldsToTag.
- The languageVersion is located inside the `UpdateParserScriptInput` object, and is no longer an enum, but a LanguageVersionInputType instead.
- The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.
- The mutation returns a `Parser`, instead of a `Parser` wrapped in an object.
- The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases; this is fixed in the new one.
- The mutation fails when a parser has more than 2,000 test cases, or when the test input in a single test case exceeds 40,000 characters.
On the `Parser` type:

- The testData field is deprecated and replaced by testCases.
- The sourceCode field is deprecated and replaced by script.
- The tagFields field is deprecated and replaced by fieldsToTag.

For more information, see `Parser`, `DeleteParserInput`, `LanguageVersionInputType`, createParserV2(), testParserV2(), updateParserV2().

In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Ingestion
We have reverted the behavior of blocking heavy queries during high ingest, and returned to the behavior of only stopping the query, due to issues caused by the blockage. Heavy queries causing ingest delay will be handled differently in a future release.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The minimum required LogScale version to upgrade from has been raised to 1.106, in order to remove some workarounds for compatibility with old versions.
New features and improvements
Security
Added support for authorizing with an external JWT from an IdP setup in our cloud environment.
The audience for dynamic OIDC IdPs in our cloud environments is now `logscale-$orgId`, where `$orgId` is the ID of your organization.

Added support for the Okta federated IdP OIDC extension for identity providers set up in cloud.
Automation and Alerts
Throttling and field-based throttling have been introduced as optional functionalities in Filter Alerts. The minimum throttling period is 1 minute.
The customizable trigger limit for Filter Alerts is removed. The trigger limit is now automatically determined based on the associated actions. If one or more email actions are associated, the trigger limit will be 15, otherwise, the trigger limit will be 100. Any existing customizable trigger limit of 1 will be treated as a throttling period of 1 minute, all other custom trigger limits will be ignored. This is a non-backwards compatible change to the GraphQL APIs for Filter Alerts, so any automation for these alerts must be updated.
GraphQL API
The new scopeClaim input argument has been added to `OidcConfigurationInput` and `UpdateOidcConfigurationInput` for dynamic OIDC configurations in our clouds. If the IdP is dynamic, we will try to grab the scope claim based on the value given as an input to either the newOIDCIdentityProvider() or updateOIDCIdentityProvider() mutations. It will fall back to the cluster configuration.
Configuration
Authorization attempted via JWT tokens will now only try to grab user information from the user info endpoint if the scope in the access token contains any of the following: `profile`, `email`, `openid`. If no such scope is located in the token, LogScale will try to extract the username from the token and no other user details will be added. We will extract the scope claim based on the new environment variable `OIDC_SCOPE_CLAIM`, whose default is `scope`.
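For example, if your IdP emits the scope in a claim named `scp` (a hypothetical value; the claim name depends on your IdP), the variable could be set as:

```
OIDC_SCOPE_CLAIM=scp
```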
Ingestion
New parser APIs have been introduced for more extensive parser testing. In the API, parser test cases now have a new structure.
For more information, see createParserV2(), `DeleteParserInput`, testParserV2(), updateParserV2(), and `Parser`.
Other
The split by AWS record setting within ingest feeds will now accept numbers with leading zeros.
The default IP filter for IdP and RDNS operations is now more restrictive: RDNS now defaults to denying lookups of reserved IP ranges, and the filter has been updated to deny additional reserved IP ranges, as specified by the IANA. Self-hosted administrators can specify their own filters by using the environment variables `IP_FILTER_IDP`, `IP_FILTER_RDNS`, and `IP_FILTER_RDNS_SERVER` respectively.
Fixed in this release
Ingestion
Cloning a parser from the UI would not clone the fields to be removed before parsing. This issue is now fixed.
Improvement
Other
Improved handling of segments being replaced due to either merging or event redaction, to address rare cases of event duplication when segments are replaced multiple times shortly after each other.
Falcon LogScale 1.128.0 GA (2024-03-05)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.128.0 | GA | 2024-03-05 | Cloud | 2025-04-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We aim to stop publishing the `jar` distribution of LogScale (e.g. `server-1.117.jar`) as of LogScale version 1.130.0. Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the `tar` artifact, and not the `jar` artifact.

A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:

- If you are explicitly supplying an `any` argument, please either remove both the parameter and the argument, for example change `sort(..., type=any)` to `sort(...)`, or supply the argument for `type` that corresponds to your data.
- If you are sorting hexadecimal values by their equivalent numerical values, please change the argument of the `type` parameter to `hex`, e.g. `sort(..., type=hex)`.

In all other cases, no action is needed. The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.

The `humio` Docker image is deprecated in favor of `humio-core`. `humio` is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the `humio` Docker image will be in version 1.130.0. The new `humio-single-node-demo` image is an all-in-one container suitable for quick and easy demonstration setups, but it is entirely unsupported for production use.

For more information, see Installing Using Containers.

In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
New features and improvements
Configuration
The new dynamic configuration `MaxOpenSegmentsOnWorker` is implemented to control the hard cap on open segment files for the scheduler. The scheduler should in most cases not reach this limit; it only acts as a backstop. Therefore, we recommend that administrators do not modify this setting unless advised to do so by CrowdStrike Support.
Fixed in this release
UI Changes
CSV files produced by LogScale for sending as attachments from email actions or uploaded through a LogScale Repository action could contain values where part of the text was duplicated. This would only happen for values that needed to be quoted. This issue is now fixed.
Packages
When attempting to upload a package disguised as a folder, some browsers would show a generic error message. To fix this issue, only zip files are now accepted.
Improvement
Configuration
The default value for `AUTOSHARDING_MAX` has changed from 128 to 1,024.

The default value for `AUTOSHARDING_TRIGGER_DELAY_MS` has changed from 1 hour to 4 hours.
Falcon LogScale 1.127.0 GA (2024-02-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.127.0 | GA | 2024-02-27 | Cloud | 2025-04-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We aim to stop publishing the `jar` distribution of LogScale (e.g. `server-1.117.jar`) as of LogScale version 1.130.0. Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the `tar` artifact, and not the `jar` artifact.

A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:

- If you are explicitly supplying an `any` argument, please either remove both the parameter and the argument, for example change `sort(..., type=any)` to `sort(...)`, or supply the argument for `type` that corresponds to your data.
- If you are sorting hexadecimal values by their equivalent numerical values, please change the argument of the `type` parameter to `hex`, e.g. `sort(..., type=hex)`.

In all other cases, no action is needed. The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.

The `humio` Docker image is deprecated in favor of `humio-core`. `humio` is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the `humio` Docker image will be in version 1.130.0. The new `humio-single-node-demo` image is an all-in-one container suitable for quick and easy demonstration setups, but it is entirely unsupported for production use.

For more information, see Installing Using Containers.

In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
New features and improvements
Functions
The `setField()` query function is introduced. It takes two expressions, `target` and `value`, and sets the field named by the result of the `target` expression to the result of the `value` expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime.

For more information, see `setField()`.

The `getField()` query function is introduced. It takes an expression, `source`, and sets the field defined by `as` to the result of the `source` expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime.

For more information, see `getField()`.
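A brief sketch of both functions, assuming an event carries a field named `item` (the field names here are illustrative):

```
// setField(): write to a field whose name is computed at runtime
setField(target=format("bucket%s", field=[item]), value="true")
// getField(): read a dynamically named field into a known field "result"
getField(format("bucket%s", field=[item]), as=result)
```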
Fixed in this release
Ingestion
Fixed an issue that prevented the creation of Netflow/UDP protocol ingest listeners.
Improvement
Configuration
The default maximum limit for `groupBy()` has been increased from 200,000 to 1,000,000, meaning that this function can now be asked to collect up to a million groups. However, due to stability concerns it will not allow `groupBy()` to return the full million rows as a result when this function is the last aggregator: this is governed by the `QueryResultRowCountLimit` dynamic configuration, which remains unchanged. Therefore, this new limit is best utilized when `groupBy()` is used as a computational tool for creating groups that are then later aggressively filtered and/or aggregated down in size. If you experience resource strain or starvation on your cluster, you can reduce the maximum limit via the `GroupMaxLimit` dynamic configuration.

The default memory limit for the query coordinator node has been increased from 400 MB to 4 GB. This new limit allows each query to use up to 1 GB of memory and thus produce more results, at the cost of taking up more resources. This in turn indirectly limits the number of concurrent queries, as the query scheduler may choose not to run a given query before existing queries have completed. If you experience resource strain or starvation on your cluster, you can reduce the memory limit by setting the `QueryCoordinatorMemoryLimit` dynamic configuration to 400,000,000.
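The intended pattern of using `groupBy()` as an intermediate step and reducing the result before it is returned can be sketched as follows (the field name `url` is illustrative):

```
// collect many groups, keep only the heavy hitters, then aggregate down
groupBy(url, function=count(as=hits))
| test(hits > 1000)
| count()
```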
Functions
Live queries now restart and run with the updated version of a saved query when the saved query changes.
For more information, see User Functions (Saved Searches).
Falcon LogScale 1.126.0 GA (2024-02-20)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.126.0 | GA | 2024-02-20 | Cloud | 2025-04-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We aim to stop publishing the `jar` distribution of LogScale (e.g. `server-1.117.jar`) as of LogScale version 1.130.0. Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the `tar` artifact, and not the `jar` artifact.

A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:

- If you are explicitly supplying an `any` argument, please either remove both the parameter and the argument, for example change `sort(..., type=any)` to `sort(...)`, or supply the argument for `type` that corresponds to your data.
- If you are sorting hexadecimal values by their equivalent numerical values, please change the argument of the `type` parameter to `hex`, e.g. `sort(..., type=hex)`.

In all other cases, no action is needed. The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.

The `humio` Docker image is deprecated in favor of `humio-core`. `humio` is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the `humio` Docker image will be in version 1.130.0. The new `humio-single-node-demo` image is an all-in-one container suitable for quick and easy demonstration setups, but it is entirely unsupported for production use.

For more information, see Installing Using Containers.

In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
New features and improvements
Configuration
Ingest rate monitoring for autosharding has been improved. For clusters with more than 10 nodes, only a subset of the nodes will report their ingest rate for any given datasource, and the total rate for each datasource is estimated based on that. The dynamic configuration `TargetMaxRateForDatasource` still sets the threshold for sharding; however, once the rate is exceeded, it no longer needs to reach twice the `TargetMaxRateForDatasource` configuration before shards are added.
Ingestion
Ingest feeds can now read from an AWS SQS queue that has been populated with AWS SNS subscription events.
For more information, see Ingest Data from AWS S3.
Fixed in this release
UI Changes
Field aliases could not be read on the sandbox repository. This issue is now fixed.
Other
An issue with the IOC Configuration causing the local database to update too often has now been fixed.
Packages
Updating a package could fail, if one of the assets from the package had been deleted from the view where the package was installed. This issue has been fixed.
Falcon LogScale 1.125.0 GA (2024-02-13)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.125.0 | GA | 2024-02-13 | Cloud | 2025-04-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We aim to stop publishing the `jar` distribution of LogScale (e.g. `server-1.117.jar`) as of LogScale version 1.130.0. Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the `tar` artifact, and not the `jar` artifact.

A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:

- If you are explicitly supplying an `any` argument, please either remove both the parameter and the argument, for example change `sort(..., type=any)` to `sort(...)`, or supply the argument for `type` that corresponds to your data.
- If you are sorting hexadecimal values by their equivalent numerical values, please change the argument of the `type` parameter to `hex`, e.g. `sort(..., type=hex)`.

In all other cases, no action is needed. The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.

The `humio` Docker image is deprecated in favor of `humio-core`. `humio` is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the `humio` Docker image will be in version 1.130.0. The new `humio-single-node-demo` image is an all-in-one container suitable for quick and easy demonstration setups, but it is entirely unsupported for production use.

For more information, see Installing Using Containers.

In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Ingestion
It is no longer possible to delete a parser that is being used in an ingest feed. The parser must first be removed from the ingest feed.
For more information, see Delete an Ingest Feed.
New features and improvements
Other
The `missing-cluster-nodes` metric will now track the nodes that are missing heartbeat data in addition to the nodes that have outdated heartbeat data. The new `missing-cluster-nodes-stateful` metric will track the registered nodes with outdated or missing heartbeat data that can write to global.
For more information, see Node-Level Metrics.
Improvement
Storage
Digest reassignments that assign partitions unevenly to hosts are now allowed. This supports clusters where hosts are not evenly sized, so an even partition assignment is not expected.
`SegmentChangesJobTrigger` has been disabled on nodes configured to not store segments, saving some CPU time.
Falcon LogScale 1.124.3 LTS (2024-05-14)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.124.3 | LTS | 2024-05-14 | Cloud | 2025-03-01 | No | 1.70.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | 730dd2226aa23d325b7fecc7f7ee4138 |
SHA1 | 3fdde9ee0728eace9808f845504c9f584d0da81f |
SHA256 | fd5d88be54cc487db542d4c3dad3072913edad50540e0fe2e14487ee26525d0b |
SHA512 | 22377e496e6cd0abd4aac0c6fba41d35596703246d98d0e2dcd8ddb026127a6fce4a2fa4bad6e46682f0686508b58c5c13ad23db8ad3554356b157aeb6c95e0e |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 21 | 84fb05447e3776cace395a8799a17476aca125bc7b4ee50d515a4e3aa89d3282 |
humio-core | 21 | 81b999f222d55ca6c36a05fbb1237444a6de91997eef7520cd208d16a7d29618 |
kafka | 21 | b375c5ce0bbfbc3dea1fa2fbaa2be5cc66b2c59dd38710c1b09acdbce176a40f |
zookeeper | 21 | e054e07d2ba316b0c9e59699b420f37bac2057e39516f5a7263d46c13628dc26 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.124.3/server-1.124.3.tar.gz
These notes include entries from the following previous releases: 1.124.1, 1.124.2
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Functions
The default accuracy of the `percentile()` function has been adjusted. This means that any query that does not explicitly set the accuracy may see a change in the reported percentile. Specifically, the `percentile()` function may now deviate by up to one 100th of the true percentile, meaning that if a given percentile has a true value of 1000, `percentile()` may report a percentile in the range `[990; 1010]`.
On the flip side, `percentile()` now uses less memory by default, which should allow for additional series or groups when this function is used with either `timeChart()` or `groupBy()` and the default accuracy is used.
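Queries that depend on a tighter error bound can pin the accuracy explicitly instead of relying on the new default. A minimal sketch (the field name and accuracy value are illustrative):

```logscale
// Compute selected percentiles of a response-time field,
// setting accuracy explicitly rather than using the default.
percentile(field=responsetime, percentiles=[50, 90, 99], accuracy=0.0001)
```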
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We aim to stop publishing the `jar` distribution of LogScale (e.g. `server-1.117.jar`) as of LogScale version 1.130.0. Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the `tar` artifact, and not the `jar` artifact. A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Removed
Items that have been removed as of this release.
GraphQL API
Removed the `Asset` interface type in GraphQL, which the `Alert`, `Dashboard`, `Parser`, `SavedQuery`, and `ViewInteraction` datatypes implemented. It was not used as a type for any field. All fields from the `Asset` interface type are still present in the implementing types.
Configuration
The `DEFAULT_PARTITION_COUNT` configuration parameter has been removed, as it was unused by the system due to earlier changes to partition handling.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery`, and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.
The `humio` Docker image is deprecated in favor of `humio-core`. `humio` is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the `humio` Docker image will be version 1.130.0.
The new `humio-single-node-demo` image is an all-in-one container suitable for quick and easy demonstration setups, but it is entirely unsupported for production use.
For more information, see Installing Using Containers.
In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.
The `QUERY_COORDINATOR` environment variable is deprecated. To control whether a node should be allowed to be a query coordinator, use the `query` node task instead. Node tasks can be assigned and unassigned at runtime using the assignTasks() and unassignTasks() GraphQL mutations respectively, or controlled using the `INITIAL_DISABLED_NODE_TASKS` environment variable.
For more information, see `INITIAL_DISABLED_NODE_TASKS`.
We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:
If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.
In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
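As a sketch of the replacement for the deprecated `QUERY_COORDINATOR` variable, the `query` node task can be disabled via the environment instead (assuming the variable takes a comma-separated list of task names):

```ini
# Deprecated approach:
# QUERY_COORDINATOR=false
# Replacement: keep this node out of query coordination at first boot
INITIAL_DISABLED_NODE_TASKS=query
```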
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Storage
We have adjusted the code that calculates where to start reading from the ingest queue to be more conservative. It will no longer allow skipping past segments that are not fully replicated when later segments on the same datasource are fully replicated. This fixes a very rare edge case that could cause data loss on clusters using ephemeral disks. Due to the changed behavior, any segment failing to properly replicate will now cause LogScale to stop deleting data from the affected Kafka partition. Cluster administrators are strongly encouraged to monitor this case by keeping Kafka's disk usage under observation.
Ingestion
We have reverted the behavior of blocking heavy queries in case of high ingest, and returned to the behavior of only stopping the query, due to issues caused by the blockage. Heavy queries causing ingest delay will be handled differently in a future release.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Kafka client library has been upgraded to 3.6.1. Some minor changes have been made to serializers used by LogScale to reduce memory copying.
New features and improvements
UI Changes
Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.
Time zone data has been updated to IANA 2023d.
Deletion of a file that is actively used by live queries will now stop those queries.
For more information, see Exporting or Deleting a File.
Multi-Cluster Search — early adopter release for Self-hosted LogScale.
Keep the data close to the source, search from single UI
Search across multiple LogScale clusters in a single view
Support key functionalities like alerts & dashboards
The functionality is limited to LogScale self-hosted versions at this point.
For more information, see LogScale Multi-Cluster Search.
When managing users, it is now possible to filter users based also on their assigned roles (for example, type `admin` in the Users search field).
The Field Aliasing feature is introduced. Implementing Field Aliasing in your workflow simplifies data correlation from various sources. With this feature, users can give alternative names (aliases) to fields created at parse time, across a view, or across the entire organization. It makes data interpretation more intuitive and provides analysts with a smoother search experience.
For more information, see Field Aliasing.
Automation and Alerts
The following changes affect the UI for Standard Alerts:
A minimum time window of 1 minute is introduced, since anything smaller will not produce reliable results. Any existing standard alert with a time window smaller than 1 minute will not run; instead, an error notification will be shown.
It is no longer possible to specify the time window and the throttle period in milliseconds. Any existing standard alerts with a time window or throttle period specified in milliseconds will have it rounded to the nearest second.
When saving the alert, the query window is automatically changed to the largest unit in the Relative Time Syntax that can represent it. For example, `24h` is changed to `1d` and `60s` is changed to `1m`.
The `ChangeTriggersAndActions` permission is now replaced by two new permissions:
The `ChangeTriggers` permission is needed to edit alerts or scheduled searches.
The `ChangeActions` permission is needed to edit actions as well as to view them. Viewing the name and type of actions when editing triggers is still possible without this permission.
Any user with the legacy `ChangeTriggersAndActions` permission will by default have both. It is possible to remove one of them for more granular access control.
Slow-query logging has been added for when an alert is slow to start due to the query not having finished the historical part.
GraphQL API
Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are `1000` for authenticated and `150` for unauthenticated users.
Cluster administrators can adjust these limits with the `GraphQLSelectionSizeLimit` and `UnauthenticatedGraphQLSelectionSizeLimit` dynamic configurations.
Storage
The following validation constraints are added on boot:
`LOCAL_STORAGE_PERCENTAGE` is less than `SECONDARY_STORAGE_MAX_FILL_PERCENTAGE` on nodes with secondary storage configured.
`LOCAL_STORAGE_PERCENTAGE` is less than `PRIMARY_STORAGE_MAX_FILL_PERCENTAGE` on nodes without secondary storage configured.
Nodes will crash on boot if these constraints are violated.
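For example, a configuration that satisfies the constraint on a node with secondary storage might look like this (the values are illustrative, not recommendations):

```ini
# Must hold on nodes with secondary storage configured:
# LOCAL_STORAGE_PERCENTAGE < SECONDARY_STORAGE_MAX_FILL_PERCENTAGE
LOCAL_STORAGE_PERCENTAGE=80
SECONDARY_STORAGE_MAX_FILL_PERCENTAGE=90
```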
We have changed how LogScale handles being temporarily bottlenecked by bucket storage. Uploads are now prioritized ahead of downloads, which reduces the impact on ingest work.
Configuration
The meaning of the `S3_STORAGE_CONCURRENCY` and `GCP_STORAGE_CONCURRENCY` configuration variables has slightly changed. The settings are used for throttling downloads and uploads for bucket storage. Previously, a setting of `S3_STORAGE_CONCURRENCY=10`, for example, meant that LogScale would allow 10 concurrent uploads and 10 concurrent downloads. Now, it means that LogScale will allow a total of 10 transfers at a time, disregarding the transfer direction.
New dynamic configurations have been added:
The `defaultDigestReplicationFactor` dynamic configuration defaults to `2` if the value is not explicitly set and there is more than 1 node in the cluster performing digest. If necessary, a different default can be explicitly set using the `DEFAULT_DIGEST_REPLICATION_FACTOR` environment variable.
The `defaultSegmentReplicationFactor` dynamic configuration defaults to `2` if the value is not explicitly set, unless there is only 1 node in the cluster storing segments, or if the `USING_EPHEMERAL_DISKS` environment variable is set to `true`. If necessary, a different default can be explicitly set using the `DEFAULT_SEGMENT_REPLICATION_FACTOR` environment variable.
Ingest rate monitoring for autosharding has been improved. For clusters with more than 10 nodes, only a subset of the nodes will report their ingest rate for any given datasource, and the total rate for each datasource is estimated based on that. The dynamic configuration `TargetMaxRateForDatasource` still sets the threshold for sharding; however, once the rate is exceeded, it no longer needs to be twice the `TargetMaxRateForDatasource` configuration before shards are added.
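A sketch of the environment overrides described above (values are illustrative):

```ini
# Now a combined total of concurrent bucket storage transfers,
# regardless of direction (previously 10 uploads plus 10 downloads)
S3_STORAGE_CONCURRENCY=10
# Defaults used when the corresponding dynamic configurations are unset
DEFAULT_DIGEST_REPLICATION_FACTOR=2
DEFAULT_SEGMENT_REPLICATION_FACTOR=2
```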
Dashboards and Widgets
A series of improvements has been added to the dashboard layout experience:
New widgets will be added in the topmost available space
When you drag widgets up, all widgets in the same column will move together
Improved experience when swapping the order of widgets (horizontally or vertically)
Ingestion
Introducing Ingest Feeds, a new pull-based ingest source that ingests logs stored in AWS S3. The files within the AWS S3 bucket can be Gzip compressed, and we currently support newline-delimited files and the JSON object format in which CloudTrail logs are stored. Ingest Feeds require some configuration setup on the AWS side to get started.
This feature is part of a gradual rollout process and may not be available on your cloud instance, but will be available to all customers in the following weeks.
For more information, see Ingest Data from AWS S3.
The limits on the size of parser test cases when exporting as templates or packages have been increased.
The amount of logging produced by `DigestLeadershipLoggerJob` has been reduced in clusters with many ingest queue partitions.
Log Collector
Groups have been added to Fleet Management for the LogScale Collector. This feature makes it possible to define dynamic groups using a filter based upon a subset of the LogScale Query Language Syntax. New Collectors enrolled into the fleet will automatically be configured based upon the groups filters they match, eliminating the need for manually assigning a configuration to every new LogScale Collector. Groups also allow you to combine multiple reusable configuration snippets.
Additionally, the management of instances has been simplified and merged into this new feature; therefore, the Assigned Instances page has been removed in favor of the Group functions.
For more information, see Manage Groups.
Queries
The worker-level prioritization of queries has been changed. The new prioritization will attempt to divide time evenly between all users, and divide the time given to each user evenly among that user's queries.
Live query cost metrics corrections:
The `livequeries-rate` metric has changed from long to double.
The `livequeries-rate-canceled-due-to-digest-delay` metric has changed from long to double.
For more information, see Node-Level Metrics.
Functions
The new `array:length()` function has been introduced. It finds the length of an array by counting the number of array entries.
For more information, see `array:length()`.
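A minimal sketch of the new function (the array field name is illustrative):

```logscale
// Counts the entries in the endpoints[] array field;
// the count is emitted as a new field on each event.
array:length("endpoints[]")
```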
Fixed in this release
UI Changes
When hovering over a query function in the query editor, the link to the function documentation now always points to the latest version of the page.
Automation and Alerts
After updating Scheduled searches where the action was failing, they would constantly fail with a None.get error until they were disabled and enabled again, or the LogScale cluster was restarted. This issue is now fixed.
Storage
Fixed an issue that could cause repositories undeleted using the mechanism described at Restoring a Repository or View to be only partially restored. Some deleted datasources within the repositories could erroneously be skipped during restoration.
For more information, see Restoring a Repository or View.
Dashboards and Widgets
Users were prevented from exporting results of queries containing multi value parameters. This issue is now fixed.
Queries
Queries in some cases would be killed as if they were blocked even though they did not match the criteria of the block. This issue is now fixed.
Fixed a bug in which the second poll inside the cluster could be delayed by upwards of 10 seconds. The fix ensures that the time between polls will never be later than the start time of the query; this means that early polls will not be delayed too much, enabling faster query responses.
Functions
`selectLast()` has been fixed for an issue that could cause this query function to miss events in certain cases.
Other
It was not possible to create a new repository with a time retention greater than 365 days. Now, the UI limit is the one that is set on the customer organization.
Input validation on fields when creating new repositories is now also improved.
Improvement
Storage
Digest reassignments that assign partitions unevenly to hosts are now allowed. This supports clusters where hosts are not evenly sized, so an even partition assignment is not expected.
Configuration
The default limit for uploading CSV Lookup Files set by the `MaxCsvFileUploadSizeBytes` dynamic configuration has been increased from `100MB` to `200MB`. If `MAX_FILEUPLOAD_SIZE` is set, its value will be the default for both `MaxCsvFileUploadSizeBytes` and `MaxJsonFileUploadSizeBytes`.
Ingestion
The cancelling mechanism for specific costly queries has been improved to solve cases where those queries were restarted anyway: a query that exactly matches the blocked query string is now blocked for 5 minutes. This frees enough CPU for ingest to catch up while avoiding blocking queries for too long.
Falcon LogScale 1.124.2 LTS (2024-03-20)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.124.2 | LTS | 2024-03-20 | Cloud | 2025-03-01 | No | 1.70.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | b66d180a49887e3cbb65b5315e835579 |
SHA1 | 3be945d1557a751eb690f5c0a6dca940095aa1f3 |
SHA256 | 588f89b0de65413dad67c96653d4debdeb2f9012299a1c237040f64c8c48bf5c |
SHA512 | a5504f59b9d42a6c28d9ad6cc3e98c0ef1db16286412cc15d4f9114f01c0249ba152b92ab5c8c94d3a1155f80209042891a23a1780feaa2928fab9c2a5c14390 |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 21 | b75ee983542ee41303143cd18e2b7dab0ac0e9a06a667a027f1eb4cfb3e40c23 |
humio-core | 21 | 89a299ad54f71b0dce43e3149362c336739bc3002e5f45decd1512282bcf4ef9 |
kafka | 21 | 9e170eff22a95031a763af235b1cb11ff560a07167136ceaa2e31bb127bc9779 |
zookeeper | 21 | 5809022af39312ef3352365ab63aeaa81844f74717411a9338e32516b0be91d7 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.124.2/server-1.124.2.tar.gz
These notes include entries from the following previous releases: 1.124.1
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Functions
The default accuracy of the `percentile()` function has been adjusted. This means that any query that does not explicitly set the accuracy may see a change in the reported percentile. Specifically, the `percentile()` function may now deviate by up to one 100th of the true percentile, meaning that if a given percentile has a true value of 1000, `percentile()` may report a percentile in the range `[990; 1010]`.
On the flip side, `percentile()` now uses less memory by default, which should allow for additional series or groups when this function is used with either `timeChart()` or `groupBy()` and the default accuracy is used.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We aim to stop publishing the `jar` distribution of LogScale (e.g. `server-1.117.jar`) as of LogScale version 1.130.0. Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the `tar` artifact, and not the `jar` artifact. A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Removed
Items that have been removed as of this release.
GraphQL API
Removed the `Asset` interface type in GraphQL, which the `Alert`, `Dashboard`, `Parser`, `SavedQuery`, and `ViewInteraction` datatypes implemented. It was not used as a type for any field. All fields from the `Asset` interface type are still present in the implementing types.
Configuration
The `DEFAULT_PARTITION_COUNT` configuration parameter has been removed, as it was unused by the system due to earlier changes to partition handling.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery`, and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.
The `humio` Docker image is deprecated in favor of `humio-core`. `humio` is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the `humio` Docker image will be version 1.130.0.
The new `humio-single-node-demo` image is an all-in-one container suitable for quick and easy demonstration setups, but it is entirely unsupported for production use.
For more information, see Installing Using Containers.
In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.
The `QUERY_COORDINATOR` environment variable is deprecated. To control whether a node should be allowed to be a query coordinator, use the `query` node task instead. Node tasks can be assigned and unassigned at runtime using the assignTasks() and unassignTasks() GraphQL mutations respectively, or controlled using the `INITIAL_DISABLED_NODE_TASKS` environment variable.
For more information, see `INITIAL_DISABLED_NODE_TASKS`.
We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:
If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.
In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Storage
We have adjusted the code that calculates where to start reading from the ingest queue to be more conservative. It will no longer allow skipping past segments that are not fully replicated when later segments on the same datasource are fully replicated. This fixes a very rare edge case that could cause data loss on clusters using ephemeral disks. Due to the changed behavior, any segment failing to properly replicate will now cause LogScale to stop deleting data from the affected Kafka partition. Cluster administrators are strongly encouraged to monitor this case by keeping Kafka's disk usage under observation.
Ingestion
We have reverted the behavior of blocking heavy queries in case of high ingest, and returned to the behavior of only stopping the query, due to issues caused by the blockage. Heavy queries causing ingest delay will be handled differently in a future release.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Kafka client library has been upgraded to 3.6.1. Some minor changes have been made to serializers used by LogScale to reduce memory copying.
New features and improvements
UI Changes
Time zone data has been updated to IANA 2023d.
Deletion of a file that is actively used by live queries will now stop those queries.
For more information, see Exporting or Deleting a File.
Multi-Cluster Search — early adopter release for Self-hosted LogScale.
Keep the data close to the source, search from single UI
Search across multiple LogScale clusters in a single view
Support key functionalities like alerts & dashboards
The functionality is limited to LogScale self-hosted versions at this point.
For more information, see LogScale Multi-Cluster Search.
When managing users, it is now possible to filter users based also on their assigned roles (for example, type `admin` in the Users search field).
The Field Aliasing feature is introduced. Implementing Field Aliasing in your workflow simplifies data correlation from various sources. With this feature, users can give alternative names (aliases) to fields created at parse time, across a view, or across the entire organization. It makes data interpretation more intuitive and provides analysts with a smoother search experience.
For more information, see Field Aliasing.
Automation and Alerts
The following changes affect the UI for Standard Alerts:
A minimum time window of 1 minute is introduced, since anything smaller will not produce reliable results. Any existing standard alert with a time window smaller than 1 minute will not run; instead, an error notification will be shown.
It is no longer possible to specify the time window and the throttle period in milliseconds. Any existing standard alerts with a time window or throttle period specified in milliseconds will have it rounded to the nearest second.
When saving the alert, the query window is automatically changed to the largest unit in the Relative Time Syntax that can represent it. For example, `24h` is changed to `1d` and `60s` is changed to `1m`.
The `ChangeTriggersAndActions` permission is now replaced by two new permissions:
The `ChangeTriggers` permission is needed to edit alerts or scheduled searches.
The `ChangeActions` permission is needed to edit actions as well as to view them. Viewing the name and type of actions when editing triggers is still possible without this permission.
Any user with the legacy `ChangeTriggersAndActions` permission will by default have both. It is possible to remove one of them for more granular access control.
Slow-query logging has been added for when an alert is slow to start due to the query not having finished the historical part.
GraphQL API
Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are `1000` for authenticated and `150` for unauthenticated users.
Cluster administrators can adjust these limits with the `GraphQLSelectionSizeLimit` and `UnauthenticatedGraphQLSelectionSizeLimit` dynamic configurations.
Storage
The following validation constraints are added on boot:
`LOCAL_STORAGE_PERCENTAGE` is less than `SECONDARY_STORAGE_MAX_FILL_PERCENTAGE` on nodes with secondary storage configured.
`LOCAL_STORAGE_PERCENTAGE` is less than `PRIMARY_STORAGE_MAX_FILL_PERCENTAGE` on nodes without secondary storage configured.
Nodes will crash on boot if these constraints are violated.
We have changed how LogScale handles being temporarily bottlenecked by bucket storage. Uploads are now prioritized ahead of downloads, which reduces the impact on ingest work.
Configuration
The meaning of the `S3_STORAGE_CONCURRENCY` and `GCP_STORAGE_CONCURRENCY` configuration variables has slightly changed. The settings are used for throttling downloads and uploads for bucket storage. Previously, a setting of `S3_STORAGE_CONCURRENCY=10`, for example, meant that LogScale would allow 10 concurrent uploads and 10 concurrent downloads. Now, it means that LogScale will allow a total of 10 transfers at a time, disregarding the transfer direction.
New dynamic configurations have been added:
The `defaultDigestReplicationFactor` dynamic configuration defaults to `2` if the value is not explicitly set and there is more than 1 node in the cluster performing digest. If necessary, a different default can be explicitly set using the `DEFAULT_DIGEST_REPLICATION_FACTOR` environment variable.
The `defaultSegmentReplicationFactor` dynamic configuration defaults to `2` if the value is not explicitly set, unless there is only 1 node in the cluster storing segments, or if the `USING_EPHEMERAL_DISKS` environment variable is set to `true`. If necessary, a different default can be explicitly set using the `DEFAULT_SEGMENT_REPLICATION_FACTOR` environment variable.
Ingest rate monitoring for autosharding has been improved. For clusters with more than 10 nodes, only a subset of the nodes will report their ingest rate for any given datasource, and the total rate for each datasource is estimated based on that. The dynamic configuration `TargetMaxRateForDatasource` still sets the threshold for sharding; however, once the rate is exceeded, it no longer needs to be twice the `TargetMaxRateForDatasource` configuration before shards are added.
Dashboards and Widgets
A series of improvements has been added to the dashboard layout experience:
New widgets will be added in the topmost available space
When you drag widgets up, all widgets in the same column will move together
Improved experience when swapping the order of widgets (horizontally or vertically)
Ingestion
Introducing Ingest Feeds, a new pull-based ingest source that ingests logs stored in AWS S3. The files within the AWS S3 bucket can be Gzip compressed, and we currently support newline-delimited files and the JSON object format in which CloudTrail logs are stored. Ingest Feeds require some configuration setup on the AWS side to get started.
This feature is part of a gradual rollout process and may not be available on your cloud instance, but will be available to all customers in the following weeks.
For more information, see Ingest Data from AWS S3.
The limits on the size of parser test cases when exporting as templates or packages have been increased.
The amount of logging produced by `DigestLeadershipLoggerJob` has been reduced in clusters with many ingest queue partitions.
Log Collector
Groups have been added to Fleet Management for the LogScale Collector. This feature makes it possible to define dynamic groups using a filter based upon a subset of the LogScale Query Language Syntax. New Collectors enrolled into the fleet will automatically be configured based upon the groups filters they match, eliminating the need for manually assigning a configuration to every new LogScale Collector. Groups also allow you to combine multiple reusable configuration snippets.
Additionally, the management of instances has been simplified and merged into this new feature; therefore, the Assigned Instances page has been removed in favor of the Group functions.
For more information, see Manage Groups.
Queries
The worker-level prioritization of queries has been changed. The new prioritization will attempt to divide time evenly between all users, and divide the time given to each user evenly among that user's queries.
Live query cost metrics corrections:
The `livequeries-rate` metric has changed from long to double.
The `livequeries-rate-canceled-due-to-digest-delay` metric has changed from long to double.
For more information, see Node-Level Metrics.
Functions
The new `array:length()` function has been introduced. It finds the length of an array by counting the number of array entries.
For more information, see `array:length()`.
Fixed in this release
UI Changes
When hovering over a query function in the query editor, the link to the function documentation now always points to the latest version of the page.
Automation and Alerts
After updating Scheduled searches where the action was failing, they would constantly fail with a None.get error until they were disabled and enabled again, or the LogScale cluster was restarted. This issue is now fixed.
Storage
Fixed an issue that could cause repositories undeleted using the mechanism described at Restoring a Repository or View to be only partially restored. Some deleted datasources within the repositories could erroneously be skipped during restoration.
For more information, see Restoring a Repository or View.
Dashboards and Widgets
Users were prevented from exporting the results of queries containing multi-value parameters. This issue is now fixed.
Queries
Queries in some cases would be killed as if they were blocked even though they did not match the criteria of the block. This issue is now fixed.
Fixed a bug in which the second poll inside the cluster could be delayed by upwards of 10 seconds. This fix ensures that the time between polls never exceeds the time elapsed since the start of the query, so early polls are not delayed excessively, enabling faster query responses.
Functions
The `selectLast()` function has been fixed for an issue that could cause it to miss events in certain cases.
Other
It was not possible to create a new repository with a retention period greater than 365 days. The UI limit now matches the limit set on the customer's organization.

Input validation on fields when creating new repositories has also been improved.
Improvement
Storage
Digest reassignment is now allowed to assign partitions unevenly to hosts. This supports clusters where hosts are not evenly sized, so an even partition assignment is not expected.
Configuration
The default limit for uploading CSV Lookup Files, set by the `MaxCsvFileUploadSizeBytes` dynamic configuration, has been increased from `100MB` to `200MB`. If `MAX_FILEUPLOAD_SIZE` is set, its value will be the default for both `MaxCsvFileUploadSizeBytes` and `MaxJsonFileUploadSizeBytes`.
Ingestion
The cancellation mechanism for specific costly queries has been improved to handle cases where those queries were restarted anyway: a query that exactly matches the blocked query string is now blocked for 5 minutes. This frees enough CPU for ingest to catch up while avoiding blocking queries for too long.
Falcon LogScale 1.124.1 LTS (2024-02-29)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.124.1 | LTS | 2024-02-29 | Cloud | 2025-03-01 | No | 1.70.0 | No |
TAR Checksum | Value |
---|---|
MD5 | 7cb9867805e79bf88d9a8824b94bd717 |
SHA1 | 45e7db8dc5e8371351d8abea91b0d0b8c3560cf0 |
SHA256 | 4dd76bb4abd36b13350a509d57d7d3867f317cd1910a50e677c52db86e377b79 |
SHA512 | bc02e36be4a077b4cd3c210f2fe8eda63674334c9c47847fa1a0df0cdc21d24973daab9ab2078187f35093238a0932fcb19fd06b89aa4238d5a70b88066174b4 |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 21 | fef66d15aed2ee6f2b5e48a332b7d08119ed6901450a165a8f78de0f452a9ac6 |
humio-core | 21 | afafad52ac55c06fbd841a025a917b4445fd458b501797ed02886808773d2fa1 |
kafka | 21 | 9390b66e795e152a6bc70055603e14a41ae2d5064e4942de16c8ec3eb65d9dd4 |
zookeeper | 21 | c7fec5d72136886e971acf69a1faa8bfd81b126c426d77b1e6a992b82fc3b64b |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.124.1/server-1.124.1.tar.gz
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Functions
The default accuracy of the `percentile()` function has been adjusted. This means that any query that does not explicitly set the accuracy may see a change in the reported percentile. Specifically, the `percentile()` function may now deviate by up to one hundredth (1%) of the true percentile, meaning that if a given percentile has a true value of 1000, `percentile()` may report a value in the range `[990; 1010]`.

On the flip side, `percentile()` now uses less memory by default, which should allow for additional series or groups when this function is used with either `timeChart()` or `groupBy()` and the default accuracy is used.
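Queries that need tighter bounds can set the accuracy explicitly instead of relying on the new default. A sketch, under the assumption that percentile() accepts an accuracy parameter as described in its documentation (the responsetime field is illustrative):

```logscale
// Pin the estimation accuracy rather than using the default
percentile(field=responsetime, percentiles=[50, 90, 99], accuracy=0.001)
```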
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We aim to stop publishing the `jar` distribution of LogScale (e.g. `server-1.117.jar`) as of LogScale version 1.130.0.

Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the `tar` artifact, and not the `jar` artifact.

A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Removed
Items that have been removed as of this release.
GraphQL API
Removed the `Asset` interface type in GraphQL, which the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes implemented. It was not used as a type for any field. All fields from the `Asset` interface type are still present in the implementing types.

Configuration

The `DEFAULT_PARTITION_COUNT` configuration parameter has been removed, as it was unused by the system due to earlier changes to partition handling.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The `humio` Docker image is deprecated in favor of `humio-core`. `humio` is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the `humio` Docker image will be in version 1.130.0.

The new `humio-single-node-demo` image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

For more information, see Installing Using Containers.

In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.

The `QUERY_COORDINATOR` environment variable is deprecated. To control whether a node should be allowed to be a query coordinator, use the `query` node task instead. Node tasks can be assigned and unassigned at runtime using the assignTasks() and unassignTasks() GraphQL mutations respectively, or controlled using the `INITIAL_DISABLED_NODE_TASKS` environment variable.

For more information, see `INITIAL_DISABLED_NODE_TASKS`.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Storage
We have adjusted the code that calculates where to start reading from the ingest queue to be more conservative. It will no longer allow skipping past segments that are not fully replicated when later segments on the same datasource are fully replicated. This fixes a very rare edge case that could cause data loss on clusters using ephemeral disks. Due to the changed behavior, any segment failing to properly replicate will now cause LogScale to stop deleting data from the affected Kafka partition. Cluster administrators are strongly encouraged to monitor for this case by keeping Kafka's disk usage under observation.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Kafka client library has been upgraded to 3.6.1. Some minor changes have been made to serializers used by LogScale to reduce memory copying.
New features and improvements
UI Changes
Time zone data has been updated to IANA 2023d.
Deletion of a file that is actively used by live queries will now stop those queries.
For more information, see Exporting or Deleting a File.
Multi-Cluster Search — early adopter release for Self-hosted LogScale.
Keep the data close to the source, search from single UI
Search across multiple LogScale clusters in a single view
Support key functionalities like alerts & dashboards
The functionality is limited to LogScale self-hosted versions at this point.
For more information, see LogScale Multi-Cluster Search.
On the Manage Users page, it is now possible to filter users also by their assigned roles (for example, type `admin` in the Users search field).

The Field Aliasing feature is introduced. Implementing Field Aliasing in your workflow simplifies data correlation from various sources. With this feature, users can give alternative names (aliases) to fields created at parse time, across a view, or the entire organization. It makes data interpretation more intuitive and provides analysts with a smoother search experience.
For more information, see Field Aliasing.
Automation and Alerts
The following changes affect the UI for Standard Alerts:

A minimum time window of 1 minute is introduced, since anything smaller will not produce reliable results. Any existing standard alert with a time window smaller than 1 minute will not run; instead, an error notification will be shown.

It is no longer possible to specify the time window and the throttle period in milliseconds. Any existing standard alerts with a time window or throttle period specified in milliseconds will have it rounded to the nearest second.
When saving the alert, the query window is automatically changed to the largest unit in the Relative Time Syntax that can represent it. For example, `24h` is changed to `1d` and `60s` is changed to `1m`.
The `ChangeTriggersAndActions` permission is now replaced by two new permissions:

The `ChangeTriggers` permission is needed to edit alerts or scheduled searches.

The `ChangeActions` permission is needed to edit actions as well as to view them. Viewing the name and type of actions when editing triggers is still possible without this permission.

Any user with the legacy `ChangeTriggersAndActions` permission will by default have both. It is possible to remove one of them for more granular access control.

Slow-query logging has been added for when an alert is slow to start because the query has not finished its historical part.
GraphQL API
Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are `1000` for authenticated and `150` for unauthenticated users. Cluster administrators can adjust these limits with the `GraphQLSelectionSizeLimit` and `UnauthenticatedGraphQLSelectionSizeLimit` dynamic configurations.
Storage
The following validation constraints are added on boot:
`LOCAL_STORAGE_PERCENTAGE` is less than `SECONDARY_STORAGE_MAX_FILL_PERCENTAGE` on nodes with secondary storage configured.

`LOCAL_STORAGE_PERCENTAGE` is less than `PRIMARY_STORAGE_MAX_FILL_PERCENTAGE` on nodes without secondary storage configured.
Nodes will crash on boot if these constraints are violated.
We have changed how LogScale handles being temporarily bottlenecked by bucket storage. Uploads are now prioritized ahead of downloads, which reduces the impact on ingest work.
Configuration
The meaning of the `S3_STORAGE_CONCURRENCY` and `GCP_STORAGE_CONCURRENCY` configuration variables has changed slightly. The settings are used for throttling downloads and uploads for bucket storage. Previously, a setting of `S3_STORAGE_CONCURRENCY=10`, for example, meant that LogScale would allow 10 concurrent uploads and 10 concurrent downloads. Now, it means that LogScale will allow a total of 10 transfers at a time, regardless of transfer direction.

New dynamic configurations have been added:

The `defaultDigestReplicationFactor` dynamic configuration defaults to `2` if the value is not explicitly set and there is more than 1 node in the cluster performing digest. If necessary, a different default can be explicitly set using the `DEFAULT_DIGEST_REPLICATION_FACTOR` environment variable.

The `defaultSegmentReplicationFactor` dynamic configuration defaults to `2` if the value is not explicitly set, unless there is only 1 node in the cluster storing segments, or if the `USING_EPHEMERAL_DISKS` environment variable is set to `true`. If necessary, a different default can be explicitly set using the `DEFAULT_SEGMENT_REPLICATION_FACTOR` environment variable.
Ingest rate monitoring for autosharding has been improved. For clusters with more than 10 nodes, only a subset of the nodes will report their ingest rate for any given datasource, and the total rate for each datasource is estimated based on that. The dynamic configuration `TargetMaxRateForDatasource` still sets the threshold for sharding; however, once the rate is exceeded, it no longer needs to reach twice the `TargetMaxRateForDatasource` configuration before shards are added.
Dashboards and Widgets
A series of improvements has been added to the dashboard layout experience:
New widgets will be added in the topmost available space
When you drag widgets up, all widgets in the same column will move together
Improved experience when swapping the order of widgets (horizontally or vertically)
Ingestion
Introducing Ingest Feeds, a new pull-based ingest source that ingests logs stored in AWS S3. The files within the AWS S3 bucket can be Gzip compressed, and we currently support newline-delimited files and the JSON object format in which CloudTrail logs are stored. Ingest Feeds require some configuration on the AWS side to get started.
This feature is part of a gradual rollout process and may not be available on your cloud instance, but will be available to all customers in the following weeks.
For more information, see Ingest Data from AWS S3.
The limits on the size of parser test cases when exporting as templates or packages have been increased.
The amount of logging produced by `DigestLeadershipLoggerJob` has been reduced in clusters with many ingest queue partitions.
Log Collector
Groups have been added to Fleet Management for the LogScale Collector. This feature makes it possible to define dynamic groups using a filter based upon a subset of the LogScale Query Language Syntax. New Collectors enrolled into the fleet will automatically be configured based upon the groups filters they match, eliminating the need for manually assigning a configuration to every new LogScale Collector. Groups also allow you to combine multiple reusable configuration snippets.
Additionally, the management of instances has been simplified and merged into this new feature; the Assigned Instances page has therefore been removed in favor of the Group functions.
For more information, see Manage Groups.
Queries
The worker-level prioritization of queries has been changed. The new prioritization will attempt to divide time evenly between all users, and divide the time given to each user evenly among that user's queries.
Live query cost metrics corrections:
The `livequeries-rate` metric has changed from long to double.

The `livequeries-rate-canceled-due-to-digest-delay` metric has changed from long to double.
For more information, see Node-Level Metrics.
Functions
The new `array:length()` function has been introduced. It finds the length of an array by counting the number of array entries.

For more information, see `array:length()`.
Fixed in this release
UI Changes
When hovering over a query function in the query editor, the link to the function documentation now always points to the latest version of the page.
Automation and Alerts
After a scheduled search whose action was failing was updated, it would fail repeatedly with a None.get error until it was disabled and re-enabled, or the LogScale cluster was restarted. This issue is now fixed.
Storage
Fixed an issue that could cause repositories undeleted using the mechanism described at Restoring a Repository or View to be only partially restored. Some deleted datasources within the repositories could erroneously be skipped during restoration.
For more information, see Restoring a Repository or View.
Dashboards and Widgets
Users were prevented from exporting the results of queries containing multi-value parameters. This issue is now fixed.
Queries
Queries in some cases would be killed as if they were blocked even though they did not match the criteria of the block. This issue is now fixed.
Functions
The `selectLast()` function has been fixed for an issue that could cause it to miss events in certain cases.
Other
It was not possible to create a new repository with a retention period greater than 365 days. The UI limit now matches the limit set on the customer's organization.

Input validation on fields when creating new repositories has also been improved.
Improvement
Storage
Digest reassignment is now allowed to assign partitions unevenly to hosts. This supports clusters where hosts are not evenly sized, so an even partition assignment is not expected.
Configuration
The default limit for uploading CSV Lookup Files, set by the `MaxCsvFileUploadSizeBytes` dynamic configuration, has been increased from `100MB` to `200MB`. If `MAX_FILEUPLOAD_SIZE` is set, its value will be the default for both `MaxCsvFileUploadSizeBytes` and `MaxJsonFileUploadSizeBytes`.
Ingestion
The cancellation mechanism for specific costly queries has been improved to handle cases where those queries were restarted anyway: a query that exactly matches the blocked query string is now blocked for 5 minutes. This frees enough CPU for ingest to catch up while avoiding blocking queries for too long.
Falcon LogScale 1.124.0 GA (2024-02-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.124.0 | GA | 2024-02-06 | Cloud | 2025-03-01 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We aim to stop publishing the `jar` distribution of LogScale (e.g. `server-1.117.jar`) as of LogScale version 1.130.0.

Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the `tar` artifact, and not the `jar` artifact.

A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The `humio` Docker image is deprecated in favor of `humio-core`. `humio` is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the `humio` Docker image will be in version 1.130.0.

The new `humio-single-node-demo` image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

For more information, see Installing Using Containers.

In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Storage
We have adjusted the code that calculates where to start reading from the ingest queue to be more conservative. It will no longer allow skipping past segments that are not fully replicated when later segments on the same datasource are fully replicated. This fixes a very rare edge case that could cause data loss on clusters using ephemeral disks. Due to the changed behavior, any segment failing to properly replicate will now cause LogScale to stop deleting data from the affected Kafka partition. Cluster administrators are strongly encouraged to monitor for this case by keeping Kafka's disk usage under observation.
New features and improvements
UI Changes
Multi-Cluster Search — early adopter release for Self-hosted LogScale.
Keep the data close to the source, search from single UI
Search across multiple LogScale clusters in a single view
Support key functionalities like alerts & dashboards
The functionality is limited to LogScale self-hosted versions at this point.
For more information, see LogScale Multi-Cluster Search.
The Field Aliasing feature is introduced. Implementing Field Aliasing in your workflow simplifies data correlation from various sources. With this feature, users can give alternative names — aliases — to fields created at parse time, across a view, or the entire organization. It makes data interpretation more intuitive and provides analysts with a smoother search experience.
For more information, see Field Aliasing.
Fixed in this release
Storage
Fixed an issue that could cause repositories undeleted using the mechanism described at Restoring a Repository or View to be only partially restored. Some deleted datasources within the repositories could erroneously be skipped during restoration.
For more information, see Restoring a Repository or View.
Improvement
Storage
Digest reassignment is now allowed to assign partitions unevenly to hosts. This supports clusters where hosts are not evenly sized, so an even partition assignment is not expected.
Configuration
The default limit for uploading CSV Lookup Files, set by the `MaxCsvFileUploadSizeBytes` dynamic configuration, has been increased from `100MB` to `200MB`. If `MAX_FILEUPLOAD_SIZE` is set, its value will be the default for both `MaxCsvFileUploadSizeBytes` and `MaxJsonFileUploadSizeBytes`.
Falcon LogScale 1.123.0 GA (2024-01-30)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.123.0 | GA | 2024-01-30 | Cloud | 2025-03-01 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We aim to stop publishing the `jar` distribution of LogScale (e.g. `server-1.117.jar`) as of LogScale version 1.130.0.

Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the `tar` artifact, and not the `jar` artifact.

A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The `humio` Docker image is deprecated in favor of `humio-core`. `humio` is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the `humio` Docker image will be in version 1.130.0.

The new `humio-single-node-demo` image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

For more information, see Installing Using Containers.

In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.

We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:

If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
New features and improvements
UI Changes
On the Manage Users page, it is now possible to filter users also by their assigned roles (for example, type `admin` in the Users search field).
Automation and Alerts
Slow-query logging has been added for when an alert is slow to start because the query has not finished its historical part.
Storage
We have changed how LogScale handles being temporarily bottlenecked by bucket storage. Uploads are now prioritized ahead of downloads, which reduces the impact on ingest work.
Configuration
The meaning of the `S3_STORAGE_CONCURRENCY` and `GCP_STORAGE_CONCURRENCY` configuration variables has changed slightly. The settings are used for throttling downloads and uploads for bucket storage. Previously, a setting of `S3_STORAGE_CONCURRENCY=10`, for example, meant that LogScale would allow 10 concurrent uploads and 10 concurrent downloads. Now, it means that LogScale will allow a total of 10 transfers at a time, regardless of transfer direction.
Log Collector
Groups have been added to Fleet Management for the LogScale Collector. This feature makes it possible to define dynamic groups using a filter based upon a subset of the LogScale Query Language Syntax. New Collectors enrolled into the fleet will automatically be configured based upon the groups filters they match, eliminating the need for manually assigning a configuration to every new LogScale Collector. Groups also allow you to combine multiple reusable configuration snippets.
Additionally, the management of instances has been simplified and merged into this new feature; the Assigned Instances page has therefore been removed in favor of the Group functions.
For more information, see Manage Groups.
Fixed in this release
Automation and Alerts
After a scheduled search whose action was failing was updated, it would fail repeatedly with a None.get error until it was disabled and re-enabled, or the LogScale cluster was restarted. This issue is now fixed.
Queries
Queries in some cases would be killed as if they were blocked even though they did not match the criteria of the block. This issue is now fixed.
Other
It was not possible to create a new repository with a retention period greater than 365 days. The UI limit now matches the limit set on the customer's organization.

Input validation on fields when creating new repositories has also been improved.
Improvement
Ingestion
The cancellation mechanism for specific costly queries has been improved to handle cases where those queries were restarted anyway: a query that exactly matches the blocked query string is now blocked for 5 minutes. This frees enough CPU for ingest to catch up while avoiding blocking queries for too long.
Falcon LogScale 1.122.0 GA (2024-01-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.122.0 | GA | 2024-01-23 | Cloud | 2025-03-01 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We aim to stop publishing the `jar` distribution of LogScale (e.g. `server-1.117.jar`) as of LogScale version 1.130.0.

Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the `tar` artifact, and not the `jar` artifact.

A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The `humio` Docker image is deprecated in favor of `humio-core`. `humio` is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the `humio` Docker image will be in version 1.130.0.

The new `humio-single-node-demo` image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

For more information, see Installing Using Containers.

In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.

In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
New features and improvements
UI Changes
Time zone data has been updated to IANA 2023d.
Deletion of a file that is actively used by live queries will now stop those queries.
For more information, see Exporting or Deleting a File.
Automation and Alerts
The following changes affect the UI for Standard Alerts:

A minimum time window of 1 minute is introduced, since anything smaller will not produce reliable results. Any existing standard alert with a time window smaller than 1 minute will not run; instead, an error notification will be shown.

It is no longer possible to specify the time window and the throttle period in milliseconds. Any existing standard alerts with a time window or throttle period specified in milliseconds will have it rounded to the nearest second.
When saving the alert, the query window is automatically changed to the largest unit in the Relative Time Syntax that can represent it. For example, `24h` is changed to `1d` and `60s` is changed to `1m`.
Configuration
New dynamic configurations have been added:

The `defaultDigestReplicationFactor` dynamic configuration defaults to `2` if the value is not explicitly set and there is more than 1 node in the cluster performing digest. If necessary, a different default can be explicitly set using the `DEFAULT_DIGEST_REPLICATION_FACTOR` environment variable.

The `defaultSegmentReplicationFactor` dynamic configuration defaults to `2` if the value is not explicitly set, unless there is only 1 node in the cluster storing segments, or if the `USING_EPHEMERAL_DISKS` environment variable is set to `true`. If necessary, a different default can be explicitly set using the `DEFAULT_SEGMENT_REPLICATION_FACTOR` environment variable.
Dashboards and Widgets
A series of improvements has been added to the dashboard layout experience:
New widgets will be added in the topmost available space
When you drag widgets up, all widgets in the same column will move together
Improved experience when swapping the order of widgets (horizontally or vertically)
Queries
Live query cost metrics corrections:
The `livequeries-rate` metric has changed from long to double.

The `livequeries-rate-canceled-due-to-digest-delay` metric has changed from long to double.
For more information, see Node-Level Metrics.
Falcon LogScale 1.121.0 GA (2024-01-16)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.121.0 | GA | 2024-01-16 | Cloud | 2025-03-01 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We aim to stop publishing the `jar` distribution of LogScale (e.g. `server-1.117.jar`) as of LogScale version 1.130.0.

Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the `tar` artifact, and not the `jar` artifact.

A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Removed
Items that have been removed as of this release.
Configuration
The `DEFAULT_PARTITION_COUNT` configuration parameter has been removed, as it was unused by the system due to earlier changes to partition handling.
Deprecation
Items that have been deprecated and may be removed in a future release.
- The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.
- In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.
- In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
New features and improvements
GraphQL API
Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are `1000` for authenticated users and `150` for unauthenticated users. Cluster administrators can adjust these limits with the `GraphQLSelectionSizeLimit` and `UnauthenticatedGraphQLSelectionSizeLimit` dynamic configurations.
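The enforcement side of such a limit can be sketched as follows; the function name and shape are illustrative only, not LogScale's implementation:

```python
def enforce_selection_limit(selection_count, authenticated,
                            auth_limit=1000, unauth_limit=150):
    # selection_count is the total number of selected fields and
    # fragments in the GraphQL document; the default limits are the
    # values quoted in the release note.
    limit = auth_limit if authenticated else unauth_limit
    if selection_count > limit:
        raise ValueError(f"query exceeds selection limit of {limit}")
    return True
```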
Ingestion
The amount of logging produced by `DigestLeadershipLoggerJob` has been reduced in clusters with many ingest queue partitions.
Functions
The new `array:length()` function has been introduced. It finds the length of an array by counting the number of array entries. For more information, see `array:length()`.
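LogScale flattens arrays into fields named `field[0]`, `field[1]`, and so on; the counting that `array:length()` performs can be illustrated in Python. This is a sketch of the semantics, not the implementation:

```python
import re

def array_length(event, array):
    # Count entries of a LogScale-style flattened array, i.e. fields
    # named array[0], array[1], ...
    pattern = re.compile(re.escape(array) + r"\[\d+\]$")
    return sum(1 for field in event if pattern.match(field))

event = {"hosts[0]": "a", "hosts[1]": "b", "hosts[2]": "c", "other": 1}
print(array_length(event, "hosts"))  # 3
```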
Fixed in this release
UI Changes
When hovering over a query function in the query editor, the link to the function documentation now always points to the latest version of the page.
Falcon LogScale 1.120.0 GA (2024-01-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.120.0 | GA | 2024-01-09 | Cloud | 2025-03-01 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Functions
The default accuracy of the `percentile()` function has been adjusted. This means that any query that does not explicitly set the accuracy may see a change in reported percentile. Specifically, the `percentile()` function may now deviate by up to one 100th of the true percentile, meaning that if a given percentile has a true value of 1000, `percentile()` may report a percentile in the range of `[990; 1010]`.

On the flip side, `percentile()` now uses less memory by default, which should allow for additional series or groups when this function is used with either `timeChart()` or `groupBy()` and the default accuracy is used.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We aim to stop publishing the `jar` distribution of LogScale (e.g. `server-1.117.jar`) as of LogScale version 1.130.0. Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the `tar` artifact, not the `jar` artifact. A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Deprecation
Items that have been deprecated and may be removed in a future release.
- The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.
- In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.
- In the GraphQL API, the name argument to the parser field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Kafka client library has been upgraded to 3.6.1. Some minor changes have been made to serializers used by LogScale to reduce memory copying.
New features and improvements
Automation and Alerts
The `ChangeTriggersAndActions` permission has been replaced by two new permissions:

- The `ChangeTriggers` permission is needed to edit alerts or scheduled searches.
- The `ChangeActions` permission is needed to edit actions as well as to view them. Viewing the name and type of actions when editing triggers is still possible without this permission.

Any user with the legacy `ChangeTriggersAndActions` permission will by default have both. It is possible to remove one of them for more granular access control.
Storage
The following validation constraints are added on boot:

- `LOCAL_STORAGE_PERCENTAGE` is less than `SECONDARY_STORAGE_MAX_FILL_PERCENTAGE` on nodes with secondary storage configured.
- `LOCAL_STORAGE_PERCENTAGE` is less than `PRIMARY_STORAGE_MAX_FILL_PERCENTAGE` on nodes without secondary storage configured.
Nodes will crash on boot if these constraints are violated.
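The boot-time check can be sketched as follows (illustrative only; the real validation and error handling live inside LogScale, and a real node crashes on boot rather than raising an exception):

```python
def validate_storage_config(local_pct, primary_max_pct,
                            secondary_max_pct=None):
    # LOCAL_STORAGE_PERCENTAGE must be below
    # SECONDARY_STORAGE_MAX_FILL_PERCENTAGE when secondary storage is
    # configured, and below PRIMARY_STORAGE_MAX_FILL_PERCENTAGE otherwise.
    limit = secondary_max_pct if secondary_max_pct is not None else primary_max_pct
    if local_pct >= limit:
        raise ValueError("invalid storage configuration; node would refuse to boot")
```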
Ingestion
Introducing Ingest Feeds, a new pull-based ingest source that ingests logs stored in AWS S3. The files within the AWS S3 bucket can be Gzip compressed; we currently support newline-delimited files and the JSON object format in which CloudTrail logs are stored. Ingest Feeds require some configuration setup on the AWS side to get started.
This feature is part of a gradual rollout process and may not be available on your cloud instance, but will be available to all customers in the following weeks.
For more information, see Ingest Data from AWS S3.
Fixed in this release
Dashboards and Widgets
Users were prevented from exporting results of queries containing multi-value parameters. This issue is now fixed.
Functions
The `selectLast()` query function has been fixed for an issue that could cause it to miss events in certain cases.
Falcon LogScale 1.119.0 GA (2023-12-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.119.0 | GA | 2023-12-19 | Cloud | 2025-03-01 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Removed
Items that have been removed as of this release.
GraphQL API
Removed the `Asset` interface type in GraphQL that the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes implemented. It was not used as a type for any field. All fields from the `Asset` interface type are still present in the implementing types.
Deprecation
Items that have been deprecated and may be removed in a future release.
- The assetType GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.
- The `QUERY_COORDINATOR` environment variable is deprecated. To control whether a node should be allowed to be a query coordinator, use the `query` node task instead. Node tasks can be assigned and unassigned at runtime using the assignTasks() and unassignTasks() GraphQL mutations respectively, or controlled using the `INITIAL_DISABLED_NODE_TASK` environment variable. For more information, see `INITIAL_DISABLED_NODE_TASK`.
New features and improvements
Ingestion
The limits on the size of parser test cases when exporting as templates or packages have been increased.
Queries
The worker-level prioritization of queries has been changed. The new prioritization will attempt to divide time evenly between all users, and divide the time given to each user evenly among that user's queries.
Falcon LogScale 1.118.4 LTS (2024-02-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.118.4 | LTS | 2024-02-23 | Cloud | 2025-01-31 | No | 1.70.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | c5be7194bc05d24ab75bf25713489193 |
SHA1 | 2897fbe6a80dcc312d7fc554691b7ce66f8f8530 |
SHA256 | 3a841235ddecd9371e25f5982bc1928c4531f5e1a97c852c0e72236b42c229af |
SHA512 | cf26d6782eae95f177a343f46442408201b5335006227f50f6c1a2224e9036cd12f7fdd6fb6e3bc54c08b829fb90339fe400e1599f9b36fe6ff6a4fdbf93a726 |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 21 | 33877a390533ca48e8e4e6d116bb2dfb71d60832686ab02eae74b557872f0c1e |
humio-core | 21 | 041e91dd7869fadd60878c5610ad33a8942890f7e7014b4c90230b4661a4d362 |
kafka | 21 | 49a9fd9441382170e71f5720a64e2841e69a77abfd7667dfe43e6cac4f38186a |
zookeeper | 21 | 220fe6e43ed70478b737ce98e2738d9065399b4a053771fcab66666f27a4bc13 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.118.4/server-1.118.4.tar.gz
These notes include entries from the following previous releases: 1.118.2, 1.118.3
Bug fixes and performance improvements.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Functions
The new parameter `unit` is added to `formatTime()` to specify whether the input field is in seconds or milliseconds, or if it should be auto-detected by the system.

This is a breaking change: if you want to ensure fully backward-compatible behavior, set `unit=milliseconds`.

For more information, see `formatTime()`.
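Why an explicit `unit` matters can be illustrated with a small sketch. The auto-detection heuristic below (treating values under roughly 10^11 as seconds) is our assumption for illustration, not necessarily what LogScale does internally:

```python
from datetime import datetime, timezone

def normalize_timestamp(value, unit="auto"):
    # Interpret a numeric timestamp as seconds or milliseconds and
    # return the corresponding UTC datetime.
    if unit == "seconds":
        ms = value * 1000
    elif unit == "milliseconds":
        ms = value
    else:
        # Heuristic auto-detection: second-resolution epoch values are
        # far below 1e11, millisecond-resolution values far above it.
        ms = value * 1000 if value < 100_000_000_000 else value
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)
```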
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Removed
Items that have been removed as of this release.
API
The deprecated REST endpoints `api/v1/dataspaces/(id)/deleteevents` and `/api/v1/repositories/(id)/deleteevents` have been removed. You can use the redactEvents GraphQL mutation and query instead. For more information, see redactEvents().
Deprecation
Items that have been deprecated and may be removed in a future release.
GraphQL mutation updateOrganizationMutability is deprecated in favor of the new setBlockIngest mutation.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Automation and Alerts
We have changed how Scheduled Searches handle query warnings, similar to what was done for Standard Alerts (see Falcon LogScale 1.112.0 GA (2023-10-24)). Previously, LogScale only triggered Scheduled Searches if there were no query warnings. Now, scheduled searches will trigger despite most query warnings, and the scheduled search status will show a warning instead of an error.
For query warnings about missing data, either due to ingest delay or some existing data that is currently unavailable, the scheduled search will retry for up to 10 minutes by default. This waiting time is configurable; see `SCHEDULED_SEARCH_MAX_WAIT_FOR_MISSING_DATA` for more information.

Up until now, all query warnings were treated as errors: the scheduled search did not trigger even though it produced results, and the scheduled search was shown with an error in LogScale. Most query warnings meant that not all data was queried. The previous behavior prevented the scheduled search from triggering in cases where it would not have triggered if all data had been available, for instance a scheduled search that triggers if a count of events drops below a threshold. On the other hand, it made some scheduled searches not trigger, even though they would have if all data was available. That meant that previously you would almost never have a scheduled search trigger when it should not, but you would sometimes have a scheduled search not trigger when it should have. We have reverted this behavior.

With this change, we no longer recommend setting the configuration option `SCHEDULED_SEARCH_DESPITE_WARNINGS` to `true`, since it treats all query warnings as non-errors, and there are a few query warnings that should make the scheduled search fail.
Upgrades
Changes that may occur or be required during an upgrade.
Configuration
We have migrated from the Akka dependency to Apache Pekko. This means that all internal logs referencing Akka will be substituted with the Pekko counterpart. Users will need to update any triggers or dashboards that rely on such logs.
On-prem only: be aware that the Akka to Pekko migration also affects configuration field names in `application.conf`. Clusters that are using a custom `application.conf` will need to update their configuration to use the Pekko configuration names instead of the Akka configuration names.
New features and improvements
UI Changes
The `Files` page has a new layout and changes:

- It has been split into two pages: one containing a list of files and one with details of each file.
- A view limit of 100 MB has been added; you will get an error in the UI if you try to view files larger than this size.
- It displays information on the size limits and the step needed for syncing the imported files.

For more information, see Files.
Parser test cases now automatically expand to the height of their content when the parser page loads.
When selecting a parser test case, there is now a button to scroll to that test case again if you scroll away from it.
We have improved the navigation on the page for Alerts, Scheduled Searches and Actions, and the page is now called `Automation`. For more information, see Automation.
Lookup Files require unique column headers to work as expected, which was previously validated only when attempting to use the file: an invalid file could still be installed into LogScale. Now lookup files with duplicate header names are also blocked from being installed.
Automation and Alerts
LogScale now creates notifications for alerts and scheduled searches with warnings in addition to notifications for errors. The notifications for warnings will have a severity of warning.
When Filter Alerts encounter a query warning that could potentially affect the result of the alert, the warning is now saved with the alert, so that it is visible in the alerts overview, same as for Standard Alerts.
When clearing errors on alerts or scheduled searches, all notifications about the problem are now automatically deleted as soon as the error is cleared. Previously, notifications were only updated every 15 minutes. Note that if the error returns, a new notification will be created.
GraphQL API
The redactEvents() mutation will no longer be allowed for users who have a limiting query prefix.
Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are `1000` for authenticated users and `150` for unauthenticated users. Cluster administrators can adjust these limits with the `GraphQLSelectionSizeLimit` and `UnauthenticatedGraphQLSelectionSizeLimit` dynamic configurations.

The new setBlockIngest GraphQL mutation is introduced to block ingest for the organization and set ingest to `paused` in the dataspaces owned by the organization.
Storage
Handling of IOExceptions in part of the segment reading code has been improved. Such exceptions will cause the segment to be excluded from the query (and potentially refetched from bucket storage) and a warning to be shown to the user, rather than cancelling the query.
Configuration
Added validation of the `LOCAL_STORAGE_PERCENTAGE` configuration against `targetDiskUsagePercentage`, which may be set at runtime, to enforce that the `LOCAL_STORAGE_PERCENTAGE` variable is at least 5 percentage points larger than `targetDiskUsagePercentage`. Nodes that violate this constraint will not be able to start. In addition, the setTargetDiskUsagePercentage mutation will not allow violating the constraint.

The `QueryMemoryLimit` and `LiveQueryMemoryLimit` dynamic configurations have been replaced with `QueryCoordinatorMemoryLimit`, which controls the maximum memory usage of the coordinating node. This memory limit will, in turn, determine the limits of the static query state size and the live query state size. `QueryCoordinatorMemoryLimit` defaults to 400MB; `QueryMemoryLimit` and `LiveQueryMemoryLimit` default to 100MB regardless of their previous configuration. For more information, see General Limits & Parameters.
The new `INITIAL_DISABLED_NODE_TASK` environment variable is introduced. For more information, see `INITIAL_DISABLED_NODE_TASK`.
Dashboards and Widgets
Small multiples functionality is introduced for the `Single Value`, `Gauge`, and `Pie Chart` widgets. This feature allows you to partition your query result on a single dimension into multiple visuals of the same widget type for easy comparison. For more information, see Widgets.
We have added the new width option Fit to content for columns in the `Event List` and the `Table` widget. With this option selected, the width of the column depends on the content in the column.
Ingestion
When navigating between parser test cases, the table showing the outputs for the test case will now scroll to the top when you select a new test case.
A new mechanism is introduced that delays the response to an HTTP ingest request from nodes that also do digest, when the digest node locally experiences digest lag. The following new dynamic configurations control this mechanism:

- `DelayIngestResponseDueToIngestLagMaxFactor` limits how much longer than the actual execution the response may take, measured as a factor on top of the actual time spent (default is `2`).
- `DelayIngestResponseDueToIngestLagThreshold` sets the number of milliseconds of digest lag at which the feature starts to kick in (default is `20,000`).
- `DelayIngestResponseDueToIngestLagScale` sets the number of milliseconds of lag that adds `1` to the factor applied (default is `300,000`).
The amount of logging produced by `DigestLeadershipLoggerJob` has been reduced in clusters with many ingest queue partitions.
Functions
- The new query function `duration()` is introduced: it can be helpful in computations involving timestamps.
- Live queries that use files in either `match()`, `cidr()`, or `lookup()` functions are no longer restarted when the file is updated. Instead, the files are swapped while the queries are still running. For more information, see Lookup Files Operations.
- The new query function `parseUri()` is introduced to support parsing of URIs without a scheme.
- The new query function `if()` is introduced to compute one of two expressions depending on the outcome of a test.
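The kind of computation `duration()` enables, converting a relative-time string into milliseconds for timestamp arithmetic, can be sketched in Python. The supported units and parsing rules here are our assumptions for illustration:

```python
import re

_UNIT_MS = {"ms": 1, "s": 1_000, "m": 60_000, "h": 3_600_000, "d": 86_400_000}

def duration_ms(text):
    # Convert a relative-time string such as "14d" or "90s" into
    # milliseconds.
    match = re.fullmatch(r"(\d+)(ms|s|m|h|d)", text)
    if not match:
        raise ValueError(f"unsupported duration: {text!r}")
    value, unit = match.groups()
    return int(value) * _UNIT_MS[unit]

print(duration_ms("14d"))  # 1209600000
```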
Fixed in this release
UI Changes
The dropdown menu in the TablePage now opens upwards and is rendered in front of other elements, fixing a bug where the menu could be hidden.
The page for creating repository or view tokens would fail to load if the user did not have the `Change IP filters` organization settings permission.
Automation and Alerts
If a filter alert, standard alert or scheduled search was reassigned to run on another node in the cluster due to changes in the available cluster nodes, it would be wrongly marked as failing with an error like The alert is broken. Save the alert again to fix it, along with an error log. This issue is now fixed.
If an error occurred with a very large error message, the error would not be stored on the failing alert or scheduled search. This issue has been fixed.
GraphQL API
Swapped parameters in GraphQL mutation updateOrganizationMutability have been fixed.
Storage
A case where LogScale might ignore `LOCAL_STORAGE_PREFILL_PERCENTAGE` and prefetch bucketed segments even when above the limit has been fixed.

Fixed an issue that could cause repositories undeleted using the mechanism described at Restoring a Repository or View to be only partially restored. Some deleted datasources within the repositories could erroneously be skipped during restoration. For more information, see Restoring a Repository or View.

The setTargetDiskUsagePercentage mutation has been removed. The functionality that used this value has been updated to instead base decision-making on `PRIMARY_STORAGE_MAX_FILL_PERCENTAGE`, and on `SECONDARY_STORAGE_MAX_FILL_PERCENTAGE` for nodes with secondary storage configured.
Dashboards and Widgets
The `Gauge` widget has been fixed, as the Styling panel would not display configured thresholds.

The hovered series in the `Time Chart` widget have been fixed, as they would not be highlighted in the tooltip.

Users were prevented from exporting results of queries containing multi-value parameters. This issue is now fixed.

The options for precision and thousands separators in the `Table` widget have been fixed, as they would not be saved correctly when editing other widgets on the `Search` page.

The legend title in widget charts has been fixed, as it would offset the content when positioned to the right.

The Styling panel in the `Table` widget has been fixed, as threshold coloring could be assigned unintentionally.
Ingestion
Parser timeout errors on ingested events that would occur at shutdown have now been fixed.
A gap in the statistics of ingest per day experienced by some organizations on the Usage Page and in humio-usage repository, causing the graph to drop to zero, has now been fixed. As a consequence of this fix, the first measurement performed with version 1.114 will result in the graph showing a peak, since it would include statistics from the period where calculations were skipped.
A parser that failed to construct would sometimes result in events receiving a null error. This issue has been fixed.
A digest coordination issue has been fixed: it could cause mini-segments to stay behind on old digest leaders when leadership changes.
Queries
Occasional error logging from `QueryScheduler.reduceAndSetSnapshot` has been fixed.
Functions
The `cidr()` query function would fail to find some events when the parameter `negate=true` was set. This incorrect behavior has now been fixed.

The `cidr()` function would handle a validation error incorrectly. This issue has been fixed.

The `count()` function with the `distinct` parameter would give an incorrect count for `utf8` strings. This issue has been fixed.

The `timeChart()` and `bucket()` functions have been fixed, as they would give slightly different results depending on whether their `limit` argument was left out or explicitly set to the default value.
Improvement
Storage
Reassignment of digest may now assign partitions unevenly to hosts. This supports clusters where hosts are not evenly sized, and so an even partition assignment is not expected.
Falcon LogScale 1.118.3 LTS (2024-02-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.118.3 | LTS | 2024-02-06 | Cloud | 2025-01-31 | No | 1.70.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | bd20b91e48cfe6dddc10b7554d6a3ad1 |
SHA1 | bba9db6f0ed1dbe63991004479e140fd88162151 |
SHA256 | 05868147bf56827b465b5acb8a264e8e0ec3db29c58449002b01f2a7cdc483c4 |
SHA512 | 53961c1fafa99f0075915582d221c41f8281d50050b5bb11193a4e315053c9bc0267d3dd040837245b2309f481a17807a9bacca03ce9ad3e8f5368c7be21191a |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.118.3/server-1.118.3.tar.gz
These notes include entries from the following previous releases: 1.118.2
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Functions
The new parameter `unit` is added to `formatTime()` to specify whether the input field is in seconds or milliseconds, or if it should be auto-detected by the system.

This is a breaking change: if you want to ensure fully backward-compatible behavior, set `unit=milliseconds`.

For more information, see `formatTime()`.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Removed
Items that have been removed as of this release.
API
The deprecated REST endpoints `api/v1/dataspaces/(id)/deleteevents` and `/api/v1/repositories/(id)/deleteevents` have been removed. You can use the redactEvents GraphQL mutation and query instead. For more information, see redactEvents().
Deprecation
Items that have been deprecated and may be removed in a future release.
GraphQL mutation updateOrganizationMutability is deprecated in favor of the new setBlockIngest mutation.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Automation and Alerts
We have changed how Scheduled Searches handle query warnings, similar to what was done for Standard Alerts (see Falcon LogScale 1.112.0 GA (2023-10-24)). Previously, LogScale only triggered Scheduled Searches if there were no query warnings. Now, scheduled searches will trigger despite most query warnings, and the scheduled search status will show a warning instead of an error.
For query warnings about missing data, either due to ingest delay or some existing data that is currently unavailable, the scheduled search will retry for up to 10 minutes by default. This waiting time is configurable; see `SCHEDULED_SEARCH_MAX_WAIT_FOR_MISSING_DATA` for more information.

Up until now, all query warnings were treated as errors: the scheduled search did not trigger even though it produced results, and the scheduled search was shown with an error in LogScale. Most query warnings meant that not all data was queried. The previous behavior prevented the scheduled search from triggering in cases where it would not have triggered if all data had been available, for instance a scheduled search that triggers if a count of events drops below a threshold. On the other hand, it made some scheduled searches not trigger, even though they would have if all data was available. That meant that previously you would almost never have a scheduled search trigger when it should not, but you would sometimes have a scheduled search not trigger when it should have. We have reverted this behavior.

With this change, we no longer recommend setting the configuration option `SCHEDULED_SEARCH_DESPITE_WARNINGS` to `true`, since it treats all query warnings as non-errors, and there are a few query warnings that should make the scheduled search fail.
Upgrades
Changes that may occur or be required during an upgrade.
Configuration
We have migrated from the Akka dependency to Apache Pekko. This means that all internal logs referencing Akka will be substituted with the Pekko counterpart. Users will need to update any triggers or dashboards that rely on such logs.
On-prem only: be aware that the Akka to Pekko migration also affects configuration field names in `application.conf`. Clusters that are using a custom `application.conf` will need to update their configuration to use the Pekko configuration names instead of the Akka configuration names.
New features and improvements
UI Changes
The `Files` page has a new layout and changes:

- It has been split into two pages: one containing a list of files and one with details of each file.
- A view limit of 100 MB has been added; you will get an error in the UI if you try to view files larger than this size.
- It displays information on the size limits and the step needed for syncing the imported files.

For more information, see Files.
Parser test cases now automatically expand to the height of their content when the parser page loads.
When selecting a parser test case, there is now a button to scroll to that test case again if you scroll away from it.
We have improved the navigation on the page for Alerts, Scheduled Searches and Actions, and the page is now called `Automation`. For more information, see Automation.
Lookup Files require unique column headers to work as expected, which was previously validated only when attempting to use the file: an invalid file could still be installed into LogScale. Now lookup files with duplicate header names are also blocked from being installed.
Automation and Alerts
LogScale now creates notifications for alerts and scheduled searches with warnings in addition to notifications for errors. The notifications for warnings will have a severity of warning.
When Filter Alerts encounter a query warning that could potentially affect the result of the alert, the warning is now saved with the alert, so that it is visible in the alerts overview, same as for Standard Alerts.
When clearing errors on alerts or scheduled searches, all notifications about the problem are now automatically deleted as soon as the error is cleared. Previously, notifications were only updated every 15 minutes. Note that if the error returns, a new notification will be created.
GraphQL API
The redactEvents() mutation will no longer be allowed for users who have a limiting query prefix.
Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are `1000` for authenticated users and `150` for unauthenticated users. Cluster administrators can adjust these limits with the `GraphQLSelectionSizeLimit` and `UnauthenticatedGraphQLSelectionSizeLimit` dynamic configurations.

The new setBlockIngest GraphQL mutation is introduced to block ingest for the organization and set ingest to `paused` in the dataspaces owned by the organization.
Storage
Handling of IOExceptions in part of the segment reading code has been improved. Such exceptions will cause the segment to be excluded from the query (and potentially refetched from bucket storage) and a warning to be shown to the user, rather than cancelling the query.
Configuration
Added validation of the `LOCAL_STORAGE_PERCENTAGE` configuration against `targetDiskUsagePercentage`, which may be set at runtime, to enforce that the `LOCAL_STORAGE_PERCENTAGE` variable is at least 5 percentage points larger than `targetDiskUsagePercentage`. Nodes that violate this constraint will not be able to start. In addition, the setTargetDiskUsagePercentage mutation will not allow violating the constraint.

The `QueryMemoryLimit` and `LiveQueryMemoryLimit` dynamic configurations have been replaced with `QueryCoordinatorMemoryLimit`, which controls the maximum memory usage of the coordinating node. This memory limit will, in turn, determine the limits of the static query state size and the live query state size. `QueryCoordinatorMemoryLimit` defaults to 400MB; `QueryMemoryLimit` and `LiveQueryMemoryLimit` default to 100MB regardless of their previous configuration. For more information, see General Limits & Parameters.
The new `INITIAL_DISABLED_NODE_TASK` environment variable is introduced. For more information, see `INITIAL_DISABLED_NODE_TASK`.
Dashboards and Widgets
Small multiples functionality is introduced for the `Single Value`, `Gauge`, and `Pie Chart` widgets. This feature allows you to partition your query result on a single dimension into multiple visuals of the same widget type for easy comparison. For more information, see Widgets.
We have added the new width option Fit to content for columns in the `Event List` and the `Table` widget. With this option selected, the width of the column depends on the content in the column.
Ingestion
When navigating between parser test cases, the table showing the outputs for the test case will now scroll to the top when you select a new test case.
A new mechanism is introduced that delays the response to an HTTP ingest request from nodes that also do digest, when the digest node locally experiences digest lag. The following new dynamic configurations control this mechanism:

- `DelayIngestResponseDueToIngestLagMaxFactor` limits how much longer than the actual execution the response may take, measured as a factor on top of the actual time spent (default is `2`).
- `DelayIngestResponseDueToIngestLagThreshold` sets the number of milliseconds of digest lag at which the feature starts to kick in (default is `20,000`).
- `DelayIngestResponseDueToIngestLagScale` sets the number of milliseconds of lag that adds `1` to the factor applied (default is `300,000`).
The amount of logging produced by `DigestLeadershipLoggerJob` has been reduced in clusters with many ingest queue partitions.
Functions
The new query function `duration()` is introduced: it can be helpful in computations involving timestamps.

Live queries that use files in either `match()`, `cidr()`, or `lookup()` functions are no longer restarted when the file is updated. Instead, the files are swapped while the queries are still running.

For more information, see Lookup Files Operations.

The new query function `parseUri()` is introduced to support parsing of URIs without a scheme.

The new query function `if()` is introduced to compute one of two expressions depending on the outcome of a test.
Fixed in this release
UI Changes
The dropdown menu on the TablePage now opens upwards and is rendered in front of other elements, fixing a bug where the menu could be hidden.
The page for creating repository or view tokens would fail to load if the user didn't have a Change IP filters Organization settings permission.
Automation and Alerts
If a filter alert, standard alert, or scheduled search was reassigned to run on another node in the cluster due to changes in the available cluster nodes, it would wrongly be marked as failing with an error like The alert is broken. Save the alert again to fix it, along with an error log. This issue is now fixed.
If an error occurred where the error message was huge, the error would not be stored on the failing alert or scheduled search. This issue has been fixed.
GraphQL API
The GraphQL mutation updateOrganizationMutability had two of its parameters swapped; this has been fixed.
Storage
A case where we might ignore `LOCAL_STORAGE_PREFILL_PERCENTAGE` and prefetch bucketed segments even if above the limit has been fixed.

Fixed an issue that could cause repositories undeleted using the mechanism described at Restoring a Repository or View to be only partially restored. Some deleted datasources within the repositories could erroneously be skipped during restoration.

For more information, see Restoring a Repository or View.
The setTargetDiskUsagePercentage mutation has been removed. The functionality that used this value has been updated to instead base decision-making on `PRIMARY_STORAGE_MAX_FILL_PERCENTAGE`, and on `SECONDARY_STORAGE_MAX_FILL_PERCENTAGE` for nodes with secondary storage configured.
Dashboards and Widgets
The `Gauge` widget has been fixed as the Styling panel would not display configured thresholds.

The hovered series in `Time Chart` widget have been fixed as they would not be highlighted in the tooltip.

Users were prevented from exporting results of queries containing multi-value parameters. This issue is now fixed.

The options for precision and thousands separators in `Table` widget have been fixed as they would not be saved correctly when editing other widgets on the `Search` page.

The legend title in widget charts has been fixed as it would offset the content when positioned to the right.

The Styling panel in the `Table` widget has been fixed as threshold coloring could be assigned unintentionally.
Ingestion
Parser timeout errors on ingested events that would occur at shutdown have now been fixed.
A gap in the statistics of ingest per day experienced by some organizations on the Usage Page and in humio-usage repository, causing the graph to drop to zero, has now been fixed. As a consequence of this fix, the first measurement performed with version 1.114 will result in the graph showing a peak, since it would include statistics from the period where calculations were skipped.
A parser that failed to construct would sometimes result in events receiving a null error. This issue has been fixed.
A digest coordination issue has been fixed: it could cause mini-segments to stay behind on old digest leaders when leadership changes.
Queries
Occasional error logging from `QueryScheduler.reduceAndSetSnapshot` has been fixed.
Functions
The `cidr()` query function would fail to find some events when parameter `negate=true` was set. This incorrect behavior has now been fixed.

The `cidr()` function would handle a validation error incorrectly. This issue has been fixed.

The `count()` function with `distinct` parameter would give an incorrect count for `utf8` strings. This issue has been fixed.

`timeChart()` and `bucket()` functions have been fixed as they would give slightly different results depending on whether their `limit` argument was left out or explicitly set to the default value.
Falcon LogScale 1.118.2 LTS (2024-01-17)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.118.2 | LTS | 2024-01-17 | Cloud | 2025-01-31 | No | 1.70.0 | No |
TAR Checksum | Value |
---|---|
MD5 | 3017ca48346bd2f8144d5a5d5f2b36e9 |
SHA1 | e58c6996a2c689441c40b83ca0ac79e3d145de8b |
SHA256 | 2253c36be65768e920270970189c78c8c290e416526cf596bf9f26f057506d74 |
SHA512 | 7d54033d4d3cbc830b33c94275f4296f0d7d8457042bb6358a91b5770f7a295b96c16bb52b34f75fab4cf0c64f7512a83881298d8411f49aa1ba10002a68c8e9 |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 21 | ea03778fc8ac15a2a8f8a457faf6c2172534b2b56d402bda84b0c7b5aa9404f9 |
humio-core | 21 | 91bbcdb27c5f21d6e693716a248376efe28938ee670b6906a2552a74c744bf3e |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.118.2/server-1.118.2.tar.gz
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Functions
The new parameter `unit` is added to `formatTime()` to specify whether the input field is in seconds or milliseconds, or if it should be auto-detected by the system.

This is a breaking change: if you want to ensure fully backward-compatible behavior, set `unit=milliseconds`.

For more information, see `formatTime()`.
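The ambiguity this parameter resolves is easy to demonstrate outside LogScale: the same epoch number denotes very different instants depending on its unit. A Python sketch follows; the auto-detection cutoff below is our guess for illustration, not LogScale's documented rule.

```python
from datetime import datetime, timezone

def parse_epoch(value: float, unit: str = "auto") -> datetime:
    """Interpret an epoch timestamp as seconds or milliseconds.

    With unit="auto", apply a heuristic cutoff: values below 1e11 are
    treated as seconds (1e11 seconds lands around the year 5100, while
    1e11 milliseconds is only March 1973, so real-world timestamps
    rarely straddle the boundary)."""
    if unit == "auto":
        unit = "seconds" if value < 1e11 else "milliseconds"
    if unit == "milliseconds":
        value /= 1000.0
    return datetime.fromtimestamp(value, tz=timezone.utc)
```

For example, `1700000000` read as seconds is November 2023, but the same number read as milliseconds is January 1970, which is why pinning `unit=milliseconds` preserves the legacy behavior exactly.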
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Removed
Items that have been removed as of this release.
API
The deprecated REST endpoints `api/v1/dataspaces/(id)/deleteevents` and `/api/v1/repositories/(id)/deleteevents` have been removed. You can use the redactEvents GraphQL mutation and query instead.

For more information, see redactEvents().
Deprecation
Items that have been deprecated and may be removed in a future release.
GraphQL mutation updateOrganizationMutability is deprecated in favor of the new setBlockIngest mutation.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Automation and Alerts
We have changed how Scheduled Searches handle query warnings, similar to what was done for Standard Alerts (see Falcon LogScale 1.112.0 GA (2023-10-24)). Previously, LogScale only triggered Scheduled Searches if there were no query warnings. Now, scheduled searches will trigger despite most query warnings, and the scheduled search status will show a warning instead of an error.
For query warnings about missing data, either due to ingest delay or some existing data that is currently unavailable, the scheduled search will retry for up to 10 minutes by default. This waiting time is configurable; see `SCHEDULED_SEARCH_MAX_WAIT_FOR_MISSING_DATA` for more information.

Up until now, all query warnings were treated as errors: the scheduled search did not trigger even though it produced results, and it was shown with an error in LogScale. Most query warnings mean that not all data was queried. The previous behavior prevented the scheduled search from triggering in cases where it would not have triggered had all data been available: for instance, a scheduled search that triggers if a count of events drops below a threshold. On the other hand, it also made some scheduled searches not trigger, even though they would have if all data had been available. In other words, previously a scheduled search would almost never trigger when it should not, but would sometimes fail to trigger when it should have. We have reverted this behavior.
With this change, we no longer recommend setting the configuration option `SCHEDULED_SEARCH_DESPITE_WARNINGS` to `true`, since it treats all query warnings as non-errors, and there are a few query warnings that should make the scheduled search fail.
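The retry-then-trigger behavior for missing-data warnings can be pictured as a bounded retry loop. The sketch below uses names of our own invention; LogScale's internal scheduling is not documented here.

```python
import time

def run_scheduled_search(execute, max_wait_s=600.0, poll_interval_s=1.0):
    """Re-run a search while it reports missing data, for up to max_wait_s
    (10 minutes by default, mirroring the default of
    SCHEDULED_SEARCH_MAX_WAIT_FOR_MISSING_DATA); then trigger anyway,
    carrying a warning instead of failing.

    `execute` is assumed to return (result, missing_data: bool)."""
    deadline = time.monotonic() + max_wait_s
    while True:
        result, missing_data = execute()
        if not missing_data:
            return result, None            # trigger normally
        if time.monotonic() >= deadline:
            return result, "missing data"  # trigger, but attach a warning
        time.sleep(poll_interval_s)
```

The key point from the passage above: after the wait budget is exhausted, the search still triggers and surfaces a warning, rather than being marked as failed.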
Upgrades
Changes that may occur or be required during an upgrade.
Configuration
We've migrated from the Akka dependency to Apache Pekko. This means that all internal logs referencing Akka will be substituted with the Pekko counterpart. Users will need to update any triggers or dashboards that rely on such logs.
On Prem only: be aware that the Akka to Pekko migration also affects configuration field names in `application.conf`. Clusters that are using a custom `application.conf` will need to update their configuration to use the Pekko configuration names instead of the Akka configuration names.
New features and improvements
UI Changes
The `Files` page has a new layout and changes:

It has been split into two pages: one containing a list of files and one with details of each file.
A view limit of 100 MB has been added and you'll get an error in the UI if you try to view files larger than this size.
It displays information on the size limits and the step needed for syncing the imported files.
For more information, see Files.
Parser test cases now automatically expand to the height of their content when the parser page loads.
When selecting a parser test case, there is now a button to scroll to that test case again if you scroll away from it.
We have improved the navigation on the page for Alerts, Scheduled Searches and Actions, and the page is now called `Automation`.

For more information, see Automation.
Lookup Files require unique column headers to work as expected, which was previously only validated when attempting to use the file; an invalid file could still be installed into LogScale. Lookup files with duplicate header names are now also blocked from being installed.
Automation and Alerts
LogScale now creates notifications for alerts and scheduled searches with warnings in addition to notifications for errors. The notifications for warnings will have a severity of warning.
When Filter Alerts encounter a query warning that could potentially affect the result of the alert, the warning is now saved with the alert, so that it is visible in the alerts overview, same as for Standard Alerts.
When clearing errors on alerts or scheduled searches, all notifications about the problem are now automatically deleted as soon as the error is cleared. Previously, notifications were only updated every 15 minutes. Note that if the error returns, a new notification will be created.
GraphQL API
The redactEvents() mutation will no longer be allowed for users who have a limiting query prefix.
Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are `1000` for authenticated and `150` for unauthenticated users. Cluster administrators can adjust these limits with the `GraphQLSelectionSizeLimit` and `UnauthenticatedGraphQLSelectionSizeLimit` dynamic configurations.

The new setBlockIngest GraphQL mutation is introduced to block ingest for the organization and set ingest to `paused` in the dataspaces owned by the organization.
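The selection-size limit counts selected fields recursively across the whole query. A toy model follows, representing a parsed selection set as nested dicts; this is an illustration of the counting idea, not the actual server implementation.

```python
def selection_size(selection_set: dict) -> int:
    """Count every selected field in a nested selection set, recursively."""
    total = 0
    for sub_selection in selection_set.values():
        total += 1
        if isinstance(sub_selection, dict):
            total += selection_size(sub_selection)
    return total

def within_limit(selection_set: dict, authenticated: bool) -> bool:
    # Defaults from this release: GraphQLSelectionSizeLimit = 1000,
    # UnauthenticatedGraphQLSelectionSizeLimit = 150.
    limit = 1000 if authenticated else 150
    return selection_size(selection_set) <= limit
```

A query selecting `repository { name datasources { id } }` counts as 4 fields, well inside either limit; an unauthenticated query selecting 200 fields would be rejected.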
Storage
Handling of IOExceptions in part of the segment reading code has been improved. Such exceptions now cause the segment to be excluded from the query (and potentially refetched from bucket storage) and a warning to be shown to the user, rather than cancelling the query.
Configuration
Added validation for the `LOCAL_STORAGE_PERCENTAGE` configuration against `targetDiskUsagePercentage`, which might be set at runtime, to enforce that the `LOCAL_STORAGE_PERCENTAGE` variable is at least 5 percentage points larger than `targetDiskUsagePercentage`. Nodes that are violating this constraint will not be able to start. In addition, the setTargetDiskUsagePercentage mutation will not allow violating the constraint.

`QueryMemoryLimit` and `LiveQueryMemoryLimit` dynamic configurations have been replaced with `QueryCoordinatorMemoryLimit`, which controls the maximum memory usage of the coordinating node. This memory limit will, in turn, determine the limits of the static query state size and the live query state size. `QueryCoordinatorMemoryLimit` defaults to 400 MB; `QueryMemoryLimit` and `LiveQueryMemoryLimit` default to 100 MB regardless of their previous configuration.

For more information, see General Limits & Parameters.

The new `INITIAL_DISABLED_NODE_TASK` environment variable is introduced.

For more information, see `INITIAL_DISABLED_NODE_TASK`.
Dashboards and Widgets
Small multiples functionality is introduced for the `Single Value`, `Gauge`, and `Pie Chart` widgets. This feature allows you to partition your query result on a single dimension into multiple visuals of the same widget type for easy comparison.

For more information, see Widgets.
We have added the new width option Fit to content for `Event List` columns. With this option selected, the width of the column depends on the content in the column.

`Event List` and `Table` widgets now support custom date time formats.
Ingestion
When navigating between parser test cases, the table showing the outputs for the test case will now scroll to the top when you select a new test case.
A new mechanism is introduced that delays the response to an HTTP ingest request from nodes that also perform digest, when the digest node locally experiences digest lag. The following new dynamic configurations control this mechanism:

`DelayIngestResponseDueToIngestLagMaxFactor` limits how much longer than the actual execution the response may take, measured as a factor on top of the actual time spent (default is `2`).

`DelayIngestResponseDueToIngestLagThreshold` sets the number of milliseconds of digest lag at which the feature starts to kick in (default is `20,000`).

`DelayIngestResponseDueToIngestLagScale` sets the number of milliseconds of lag that adds `1` to the factor applied (default is `300,000`).
Functions
The new query function `duration()` is introduced: it can be helpful in computations involving timestamps.

Live queries that use files in either `match()`, `cidr()`, or `lookup()` functions are no longer restarted when the file is updated. Instead, the files are swapped while the queries are still running.

For more information, see Lookup Files Operations.

The new query function `parseUri()` is introduced to support parsing of URIs without a scheme.

The new query function `if()` is introduced to compute one of two expressions depending on the outcome of a test.
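Scheme-less URIs are a classic pitfall for generic URI parsers, which is presumably the gap `parseUri()` addresses. Python's standard library shows the problem:

```python
from urllib.parse import urlparse

# A URI without a scheme is parsed strictly by RFC 3986 rules:
# the host ends up in the path component, and netloc stays empty.
bare = urlparse("example.com/search?q=logscale")
assert bare.netloc == "" and bare.path == "example.com/search"

# Prepending a scheme restores the split most users expect.
full = urlparse("https://example.com/search?q=logscale")
assert full.netloc == "example.com" and full.path == "/search"
```

A parser that tolerates missing schemes has to special-case inputs like the first one so that `example.com` is still recognized as the host.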
Fixed in this release
UI Changes
The dropdown menu on the TablePage now opens upwards and is rendered in front of other elements, fixing a bug where the menu could be hidden.
The page for creating repository or view tokens would fail to load if the user didn't have a Change IP filters Organization settings permission.
Automation and Alerts
If a filter alert, standard alert, or scheduled search was reassigned to run on another node in the cluster due to changes in the available cluster nodes, it would wrongly be marked as failing with an error like The alert is broken. Save the alert again to fix it, along with an error log. This issue is now fixed.
If an error occurred where the error message was huge, the error would not be stored on the failing alert or scheduled search. This issue has been fixed.
GraphQL API
The GraphQL mutation updateOrganizationMutability had two of its parameters swapped; this has been fixed.
Storage
A case where we might ignore `LOCAL_STORAGE_PREFILL_PERCENTAGE` and prefetch bucketed segments even if above the limit has been fixed.

The setTargetDiskUsagePercentage mutation has been removed. The functionality that used this value has been updated to instead base decision-making on `PRIMARY_STORAGE_MAX_FILL_PERCENTAGE`, and on `SECONDARY_STORAGE_MAX_FILL_PERCENTAGE` for nodes with secondary storage configured.
Dashboards and Widgets
The `Gauge` widget has been fixed as the Styling panel would not display configured thresholds.

The hovered series in `Time Chart` widget have been fixed as they would not be highlighted in the tooltip.

The options for precision and thousands separators in `Table` widget have been fixed as they would not be saved correctly when editing other widgets on the `Search` page.

The legend title in widget charts has been fixed as it would offset the content when positioned to the right.

The Styling panel in the `Table` widget has been fixed as threshold coloring could be assigned unintentionally.
Ingestion
Parser timeout errors on ingested events that would occur at shutdown have now been fixed.
A gap in the statistics of ingest per day experienced by some organizations on the Usage Page and in humio-usage repository, causing the graph to drop to zero, has now been fixed. As a consequence of this fix, the first measurement performed with version 1.114 will result in the graph showing a peak, since it would include statistics from the period where calculations were skipped.
A parser that failed to construct would sometimes result in events receiving a null error. This issue has been fixed.
A digest coordination issue has been fixed: it could cause mini-segments to stay behind on old digest leaders when leadership changes.
Queries
Occasional error logging from `QueryScheduler.reduceAndSetSnapshot` has been fixed.
Functions
The `cidr()` query function would fail to find some events when parameter `negate=true` was set. This incorrect behavior has now been fixed.

The `cidr()` function would handle a validation error incorrectly. This issue has been fixed.

The `count()` function with `distinct` parameter would give an incorrect count for `utf8` strings. This issue has been fixed.

`timeChart()` and `bucket()` functions have been fixed as they would give slightly different results depending on whether their `limit` argument was left out or explicitly set to the default value.
Falcon LogScale 1.118.1 Internal (2023-12-20)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.118.1 | Internal | 2023-12-20 | Internal Only | 2024-12-31 | No | 1.70.0 | No |
Available for download two days after release.
Internal-only release.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Falcon LogScale 1.118.0 GA (2023-12-12)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.118.0 | GA | 2023-12-12 | Cloud | 2025-01-31 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Removed
Items that have been removed as of this release.
API
The deprecated REST endpoints `api/v1/dataspaces/(id)/deleteevents` and `/api/v1/repositories/(id)/deleteevents` have been removed. You can use the redactEvents GraphQL mutation and query instead.

For more information, see redactEvents().
New features and improvements
UI Changes
We have improved the navigation on the page for Alerts, Scheduled Searches and Actions, and the page is now called `Automation`.

For more information, see Automation.
Dashboards and Widgets
Small multiples functionality is introduced for the `Single Value`, `Gauge`, and `Pie Chart` widgets. This feature allows you to partition your query result on a single dimension into multiple visuals of the same widget type for easy comparison.

For more information, see Widgets.
We have added the new width option Fit to content for `Event List` columns. With this option selected, the width of the column depends on the content in the column.

`Event List` and `Table` widgets now support custom date time formats.
Falcon LogScale 1.117.0 GA (2023-12-05)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.117.0 | GA | 2023-12-05 | Cloud | 2025-01-31 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Deprecation
Items that have been deprecated and may be removed in a future release.
GraphQL mutation updateOrganizationMutability is deprecated in favor of the new setBlockIngest mutation.
New features and improvements
GraphQL API
The new setBlockIngest GraphQL mutation is introduced to block ingest for the organization and set ingest to `paused` in the dataspaces owned by the organization.
Configuration
The new `INITIAL_DISABLED_NODE_TASK` environment variable is introduced.

For more information, see `INITIAL_DISABLED_NODE_TASK`.
Functions
Live queries that use files in either `match()`, `cidr()`, or `lookup()` functions are no longer restarted when the file is updated. Instead, the files are swapped while the queries are still running.

For more information, see Lookup Files Operations.
Fixed in this release
GraphQL API
The GraphQL mutation updateOrganizationMutability had two of its parameters swapped; this has been fixed.
Dashboards and Widgets
The Styling panel in the `Table` widget has been fixed as threshold coloring could be assigned unintentionally.
Functions
`timeChart()` and `bucket()` functions have been fixed as they would give slightly different results depending on whether their `limit` argument was left out or explicitly set to the default value.
Falcon LogScale 1.116.0 GA (2023-11-28)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.116.0 | GA | 2023-11-28 | Cloud | 2025-01-31 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Upgrades
Changes that may occur or be required during an upgrade.
Configuration
We've migrated from the Akka dependency to Apache Pekko. This means that all internal logs referencing Akka will be substituted with the Pekko counterpart. Users will need to update any triggers or dashboards that rely on such logs.
On Prem only: be aware that the Akka to Pekko migration also affects configuration field names in `application.conf`. Clusters that are using a custom `application.conf` will need to update their configuration to use the Pekko configuration names instead of the Akka configuration names.
New features and improvements
Storage
Handling of IOExceptions in part of the segment reading code has been improved. Such exceptions now cause the segment to be excluded from the query (and potentially refetched from bucket storage) and a warning to be shown to the user, rather than cancelling the query.
Configuration
`QueryMemoryLimit` and `LiveQueryMemoryLimit` dynamic configurations have been replaced with `QueryCoordinatorMemoryLimit`, which controls the maximum memory usage of the coordinating node. This memory limit will, in turn, determine the limits of the static query state size and the live query state size. `QueryCoordinatorMemoryLimit` defaults to 400 MB; `QueryMemoryLimit` and `LiveQueryMemoryLimit` default to 100 MB regardless of their previous configuration.

For more information, see General Limits & Parameters.
Fixed in this release
Dashboards and Widgets
The hovered series in `Time Chart` widget have been fixed as they would not be highlighted in the tooltip.

The options for precision and thousands separators in `Table` widget have been fixed as they would not be saved correctly when editing other widgets on the `Search` page.

The legend title in widget charts has been fixed as it would offset the content when positioned to the right.
Falcon LogScale 1.115.0 GA (2023-11-21)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.115.0 | GA | 2023-11-21 | Cloud | 2025-01-31 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
New features and improvements
UI Changes
The `Files` page has a new layout and changes:

It has been split into two pages: one containing a list of files and one with details of each file.
A view limit of 100 MB has been added and you'll get an error in the UI if you try to view files larger than this size.
It displays information on the size limits and the step needed for syncing the imported files.
For more information, see Files.
Parser test cases now automatically expand to the height of their content when the parser page loads.
When selecting a parser test case, there is now a button to scroll to that test case again if you scroll away from it.
Automation and Alerts
LogScale now creates notifications for alerts and scheduled searches with warnings in addition to notifications for errors. The notifications for warnings will have a severity of warning.
Ingestion
A new mechanism is introduced that delays the response to an HTTP ingest request from nodes that also perform digest, when the digest node locally experiences digest lag. The following new dynamic configurations control this mechanism:

`DelayIngestResponseDueToIngestLagMaxFactor` limits how much longer than the actual execution the response may take, measured as a factor on top of the actual time spent (default is `2`).

`DelayIngestResponseDueToIngestLagThreshold` sets the number of milliseconds of digest lag at which the feature starts to kick in (default is `20,000`).

`DelayIngestResponseDueToIngestLagScale` sets the number of milliseconds of lag that adds `1` to the factor applied (default is `300,000`).
Fixed in this release
UI Changes
The dropdown menu on the TablePage now opens upwards and is rendered in front of other elements, fixing a bug where the menu could be hidden.
Storage
The setTargetDiskUsagePercentage mutation has been removed. The functionality that used this value has been updated to instead base decision-making on `PRIMARY_STORAGE_MAX_FILL_PERCENTAGE`, and on `SECONDARY_STORAGE_MAX_FILL_PERCENTAGE` for nodes with secondary storage configured.
Dashboards and Widgets
The `Gauge` widget has been fixed as the Styling panel would not display configured thresholds.
Ingestion
A parser that failed to construct would sometimes result in events receiving a null error. This issue has been fixed.
A digest coordination issue has been fixed: it could cause mini-segments to stay behind on old digest leaders when leadership changes.
Queries
Occasional error logging from `QueryScheduler.reduceAndSetSnapshot` has been fixed.
Falcon LogScale 1.114.0 GA (2023-11-14)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.114.0 | GA | 2023-11-14 | Cloud | 2025-01-31 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
New features and improvements
Automation and Alerts
When Filter Alerts encounter a query warning that could potentially affect the result of the alert, the warning is now saved with the alert, so that it is visible in the alerts overview, same as for Standard Alerts.
Fixed in this release
Automation and Alerts
If an error occurred where the error message was huge, the error would not be stored on the failing alert or scheduled search. This issue has been fixed.
Storage
A case where we might ignore `LOCAL_STORAGE_PREFILL_PERCENTAGE` and prefetch bucketed segments even if above the limit has been fixed.
Ingestion
A gap in the statistics of ingest per day experienced by some organizations on the Usage Page and in humio-usage repository, causing the graph to drop to zero, has now been fixed. As a consequence of this fix, the first measurement performed with version 1.114 will result in the graph showing a peak, since it would include statistics from the period where calculations were skipped.
Falcon LogScale 1.113.0 GA (2023-11-09)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.113.0 | GA | 2023-11-09 | Cloud | 2025-01-31 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Functions
The new parameter `unit` is added to `formatTime()` to specify whether the input field is in seconds or milliseconds, or if it should be auto-detected by the system.

This is a breaking change: if you want to ensure fully backward-compatible behavior, set `unit=milliseconds`.

For more information, see `formatTime()`.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Automation and Alerts
We have changed how Scheduled Searches handle query warnings, similar to what was done for Standard Alerts (see Falcon LogScale 1.112.0 GA (2023-10-24)). Previously, LogScale only triggered Scheduled Searches if there were no query warnings. Now, scheduled searches will trigger despite most query warnings, and the scheduled search status will show a warning instead of an error.
For query warnings about missing data, either due to ingest delay or some existing data that is currently unavailable, the scheduled search will retry for up to 10 minutes by default. This waiting time is configurable; see `SCHEDULED_SEARCH_MAX_WAIT_FOR_MISSING_DATA` for more information.

Up until now, all query warnings were treated as errors: the scheduled search did not trigger even though it produced results, and it was shown with an error in LogScale. Most query warnings mean that not all data was queried. The previous behavior prevented the scheduled search from triggering in cases where it would not have triggered had all data been available: for instance, a scheduled search that triggers if a count of events drops below a threshold. On the other hand, it also made some scheduled searches not trigger, even though they would have if all data had been available. In other words, previously a scheduled search would almost never trigger when it should not, but would sometimes fail to trigger when it should have. We have reverted this behavior.
With this change, we no longer recommend setting the configuration option `SCHEDULED_SEARCH_DESPITE_WARNINGS` to `true`, since it treats all query warnings as non-errors, and there are a few query warnings that should make the scheduled search fail.
New features and improvements
UI Changes
Lookup Files require unique column headers to work as expected, which was previously only validated when attempting to use the file; an invalid file could still be installed into LogScale. Lookup files with duplicate header names are now also blocked from being installed.
Automation and Alerts
When clearing errors on alerts or scheduled searches, all notifications about the problem are now automatically deleted as soon as the error is cleared. Previously, notifications were only updated every 15 minutes. Note that if the error returns, a new notification will be created.
GraphQL API
The redactEvents() mutation will no longer be allowed for users who have a limiting query prefix.
Configuration
Added validation of the
LOCAL_STORAGE_PERCENTAGE
configuration against targetDiskUsagePercentage
, which may be set at runtime, to enforce that the LOCAL_STORAGE_PERCENTAGE
variable is at least 5 percentage points larger than targetDiskUsagePercentage
. Nodes violating this constraint will not be able to start. In addition, the setTargetDiskUsagePercentage mutation will not allow violating the constraint.
Dashboards and Widgets
Table widget.
Ingestion
When navigating between parser test cases, the table showing the outputs for the test case will now scroll to the top when you select a new test case.
Functions
The new query function
duration()
is introduced: it can be helpful in computations involving timestamps.

The new query function
parseUri()
is introduced to support parsing of URIs without a scheme.

The new query function
if()
is introduced to compute one of two expressions depending on the outcome of a test.
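As a sketch of how the new functions might appear in a query (the parameter names are assumptions; consult each function's reference documentation):

```
// Hypothetical usage of the three new query functions:
| elapsed := duration("15m")                              // interval as a number of milliseconds
| parseUri(field=uri, defaultBase="https://example.com")  // parse a URI that lacks a scheme
| level := if(status >= 500, then="error", else="ok")     // choose one of two expressions
```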
Fixed in this release
UI Changes
The page for creating repository or view tokens would fail to load if the user didn't have the
Change IP filters
organization settings permission.
Automation and Alerts
If a filter alert, standard alert, or scheduled search was reassigned to run on another node in the cluster due to changes in the available cluster nodes, it would wrongly be marked as failing with an error log and an error like The alert is broken. Save the alert again to fix it. This issue is now fixed.
Ingestion
An issue causing parser timeout errors on ingested events at shutdown has now been fixed.
Functions
The
cidr()
query function would fail to find some events when the parameter negate=true
was set. This incorrect behavior has now been fixed.
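For illustration, a query of roughly this shape was affected (the field name and subnet are assumptions):

```
// Match events whose source address is outside 10.0.0.0/8.
// Before the fix, some such events could be missed when
// negate=true was set.
| cidr(field=src_ip, subnet="10.0.0.0/8", negate=true)
```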
Falcon LogScale 1.112.4 LTS (2024-02-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.112.4 | LTS | 2024-02-23 | Cloud | 2024-11-30 | No | 1.70.0 | No |
TAR Checksum | Value |
---|---|
MD5 | cbc7e33e67707121ad4dfb1fa126eb32 |
SHA1 | 5ba9b1cd3980020460310ad8bbb774e62ac6448c |
SHA256 | ee470562fe07a4fa1d418da3cdf691b7524e2892c21d9ca70157704810d19087 |
SHA512 | 2a332c85c34bdaa07d366284e88b7c94dd6bd8507a36590d350a102555a7266a597a323e36a51bc58ba16c16b642588b0a6dfe8f85ace8cbce10b889276921a2 |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 21 | a02bf037577746f41bac7b384311d314fd4bb578a7103b49f738c91d3966fae7 |
humio-core | 21 | f21a0bce3e07878bef68cf60d63123301e46c57ac9a3d9b6005fff0f12a221dd |
kafka | 21 | b03f209bcb2c4ea31c4beedd41f0395ddd5e3b6b183c965ab98f0025d235fbde |
zookeeper | 21 | 779c0bddd23417d07b45d68b829e58ec64b2c484e95012f0d6924a9afdbda3b9 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.112.4/server-1.112.4.tar.gz
These notes include entries from the following previous releases: 1.112.1, 1.112.2, 1.112.3
Bug fixes and performance improvements.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Removed
Items that have been removed as of this release.
Installation and Deployment
All ZooKeeper-related functionality for LogScale was deprecated in December 2022, and is now removed:
Removed the ZooKeeper status page from the User Interface
Removed the ZooKeeper related GraphQL mutations
Removed the migration support for node IDs created by ZooKeeper, as we no longer support upgrading from versions prior to 1.70.
Depending on your chosen Kafka deployment, ZooKeeper may still be required to support Kafka.
Running on Java 11, 12, 13, 14, 15 and 16 is no longer supported. The minimum supported Java version is 17 starting from this LogScale release.
GraphQL API
The deprecated client mutation ID concept has now been removed from the GraphQL API:
Removed the clientMutationId argument from many mutations.
Removed the clientMutationId field from the returned type of many mutations.
Renamed the ClientMutationID datatype returned from some mutations to
BooleanResultType
. Removed the clientMutationId field on the returned type and replaced it with a boolean field named result
.

Most deprecated queries, mutations, and fields have now been removed from the GraphQL API.
Storage
The unused
humio-backup
symlink inside Docker containers has been removed.

Configuration
Some deprecated configuration variables have now been removed:
GCP_STORAGE_UPLOAD_CONCURRENCY
GCP_STORAGE_DOWNLOAD_CONCURRENCY
They have been replaced by
S3_STORAGE_CONCURRENCY
and
GCP_STORAGE_CONCURRENCY
settings that internally handle rate-limiting responses from the bucket provider.
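A migration sketch for node configuration (the concurrency value is illustrative):

```ini
# Removed settings, no longer recognized:
# GCP_STORAGE_UPLOAD_CONCURRENCY=8
# GCP_STORAGE_DOWNLOAD_CONCURRENCY=8

# Replacement: a single concurrency setting per provider;
# rate-limiting responses from the bucket provider are now
# handled internally.
GCP_STORAGE_CONCURRENCY=8
```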
Deprecation
Items that have been deprecated and may be removed in a future release.
The following
REST
endpoints for deleting events have been deprecated:

/api/v1/dataspaces/(id)/deleteevents
/api/v1/repositories/(id)/deleteevents

The new GraphQL mutation redactEvents should be used instead.
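A sketch of the replacement call (the input field names below are assumptions, not the confirmed schema; consult the GraphQL API reference for redactEvents):

```graphql
# Hypothetical redactEvents invocation deleting events that
# match a query within a time window.
mutation {
  redactEvents(input: {
    repositoryName: "my-repo"
    start: "2024-01-01T00:00:00Z"
    end: "2024-01-02T00:00:00Z"
    query: "password=*"
  })
}
```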
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Automation and Alerts
We have changed how Standard Alerts handle query warnings. Previously, LogScale only triggered alerts if there were no query warnings. Now, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error. Until now, all query warnings were treated as errors: the alert did not trigger even though it produced results, and the alert was shown with an error in LogScale. Most query warnings mean that not all data was queried. The previous behaviour prevented the alert from triggering in cases where it would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it also stopped some alerts from triggering even though they would still have triggered with all data available. In short, you would almost never get an alert that you should not have gotten, but you would sometimes miss an alert that you should have gotten. This behavior has now been reverted. With this change, we no longer recommend setting the configuration option
ALERT_DESPITE_WARNINGS
to true
, since it treats all query warnings as non-errors, and there are a few query warnings that should make the alert fail.

For more information, see Diagnosing Alerts.
Upgrades
Changes that may occur or be required during an upgrade.
Security
xmlsec has been upgraded to 2.3.4 to address CVE-2023-44483.
Configuration
Docker containers have been upgraded to Java 21.
New features and improvements
Installation and Deployment
LogScale is now configured to write fatal JVM error logs in the JVM logging directory, which is specified using the
JVM_LOG_DIR
variable. The default directory is
/logs/humio
.
UI Changes
Most tables inside the LogScale UI now support resizing columns, except the
Table
widget used during search.

The behavior of the ComboBox has changed: the drop-down is not filtered until the text in the filter field has been edited, allowing you to easily copy, alter, or clear the text.
The list of permissions now has a specific custom order in the UI, as follows.
Organization:
Organization settings
Repository and view management
Permissions and user management
Fleet management
Query monitoring
Other
Cluster management:
Cluster management
Organization management
Subdomains
Others
A combined view of permissions is now available to show all roles listed together when there is more than one role under each repository, organization, or system.
For more information, see Aggregate Permissions.
It is now possible to highlight results based on the filters applied in queries. This helps significantly when trying to understand why a query matches the results or when looking for a specific part of the events text.
For more information, see Filter Match Highlighting.
Automation and Alerts
A new button has been added to the Scheduled Searches form allowing import of a Scheduled Search from a template or package.

When creating or updating Scheduled Searches using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of
"packagescope/packagename:actionname"
. Actions in packages will no longer be found if using an unqualified name.

When generating CSV files for attaching to emails or uploading to LogScale in actions, or when using the message template {events_html}, the field @ingesttimestamp is now formatted similarly to @timestamp.
The UI flow for Scheduled Searches has been updated: creating one now takes you directly to the New Scheduled Search form.
The Alert forms no longer show any errors when the alert is disabled.
GraphQL API
Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are
1000
for authenticated and
150
for unauthenticated users. Cluster administrators can adjust these limits with the
GraphQLSelectionSizeLimit
and
UnauthenticatedGraphQLSelectionSizeLimit
dynamic configurations.

The contentHash field on the
File
output type has been reintroduced.
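If these dynamic configurations are set through GraphQL, the call might look like the following (the mutation name and input shape are assumptions based on LogScale's dynamic configuration API):

```graphql
# Hypothetical: raise the selection-size limit for
# authenticated GraphQL queries from 1000 to 2000.
mutation {
  setDynamicConfig(input: {
    config: GraphQLSelectionSizeLimit
    value: "2000"
  })
}
```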
Storage
JVM_TMP_DIR
has been added to the launcher script. This option is used for configuring
java.io.tmpdir
and
jna.tmpdir
for the JVM. The directory will default to
jvm-tmp
inside the directory specified by the
DIRECTORY
setting. This default should alleviate issues starting LogScale on some systems due to the
/tmp
directory being marked as
noexec
.
For more information, see Troubleshooting: Error Starting LogScale due to Exec permissions on /tmp.
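A configuration sketch (the path is illustrative):

```ini
# Point the JVM temp directory at a location that permits
# executable files, instead of a noexec /tmp. When unset,
# it defaults to jvm-tmp inside the DIRECTORY path.
JVM_TMP_DIR=/var/lib/humio/jvm-tmp
```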
Bucket storage cleaning of
tmp
files now only runs on a few nodes in the cluster rather than on all nodes.
Configuration
The new configuration option
LOCAL_STORAGE_PREFILL_PERCENTAGE
has been added. For more information, see
LOCAL_STORAGE_PREFILL_PERCENTAGE
.

Query queueing based on the available memory in the query coordinator is enabled by default by treating the dynamic configuration
QueryCoordinatorMaxHeapFraction
as 0.5 if it has not been set. To disable queueing, set
QueryCoordinatorMaxHeapFraction
to 1000.

Set the default value of
LOCAL_STORAGE_PERCENTAGE
to
85
, and the minimum value to
0
. The default was previously to leave this unset, which is not safe in clusters where bucket storage contains more data than will fit on local drives.

The new environment variable
DISABLE_BUCKET_CLEANING_TMP_FILES
has been introduced. It allows reducing the amount of listing of
tmp
files in the bucket.
Dashboards and Widgets
You can enable the export of Dashboards to a PDF file, with many options available to control the output layout and formatting.
The feature is available to all users who already have access to dashboard data. This is the first of two feature releases, aiming to provide full schedulable PDF reporting capabilities to LogScale.
For more information, see Export Dashboards as PDF.
The new
Gauge
widget is introduced: it allows you to represent values on a fixed scale, offering a visual and intuitive way to monitor key performance metrics.
For more information, see Gauge Widget.
A parameter configuration option has been added to support invalidation of parameter inputs. The format is a comma-separated list of invalid input patterns (regexes).
Introduced a new style option Show 'Others' to the Time Chart Widget: it allows you to show/hide other series when there are more series than the maximum allowed in the chart.
A parameter configuration option has been added to allow setting a custom message when a parameter input is invalid.
New formatting options have been introduced for the
Table
widget, to get actionable insights from your data faster:
Conditional formatting of table cells
Text wrapping and column resizing
Row numbering
Number formatting
Link formatting
Columns hiding
For more information, see Table Widget.
Ingestion
When writing parsers, the fields produced by a test case are now available for autocompletion in the editor.
For more information, see Using the Parser Code Editor.
Log Collector
The Fleet Management tab on the
Fleet Overview
page has been renamed to Data Ingest.
Functions
parseCEF()
and
parseLEEF()
functions now have an option to change the prefix of the header fields.

Field names with special characters are now supported in Array Query Functions using backtick quoting.
For more information, see Using Array Query Functions.
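As a sketch of the new header-prefix option (the parameter name below is an assumption; see the parseCEF() and parseLEEF() reference documentation):

```
// Hypothetical: parse a CEF message, emitting header fields
// with a custom prefix instead of the default.
| parseCEF(field=@rawstring, prefix="cef_hdr.")
```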
Packages
Filter alerts and Standard alerts are now shown in the same tab Alerts under Assets when installing or viewing installed Packages.
It is now possible to see the type of action in Packages (Marketplace, Installed and Create a package).
Fixed in this release
UI Changes
Queries could "flicker" for a short period causing "negative alerts" to trigger for no reason (negative alerts are alerts that check for the absence of events). This issue has been fixed.
The following issue has been fixed on the
Search
page: if regular expressions contained named groups with special characters (underscore
_
for example), a recent change with the introduction of Filter Match Highlighting would cause a server error and hang the UI.

The following items about Saving Queries have been fixed:
The Search... field for saved queries did not return what would be expected.
Upon reopening the Search... dropdown after having filled out the Search... field, the text would still be present in the Search... field but would not filter the queries.
Added focus on the Search... field when reopening the dropdown.
Automation and Alerts
Notifications on problems with Filter Alerts were not automatically removed when the problem was solved. This issue is now fixed.
An issue causing filter alerts to fail right after a cluster restart has now been fixed.
When used with Filter Alerts, the {events_html} message template would not keep the order of the fields from the Alert query.
GraphQL API
When trying to delete an Alert, Scheduled Search or Dashboard using a mutation for one of the other types, it would end up in a state where it was not deleted, but could not run either. This issue is now fixed.
Storage
A workaround has been identified for cases where segment files on local disk no longer pass their internal checksum test and are detected as "broken" by the background merge process:
Ensure a copy of the file is present in the bucket storage backing up the cluster
Delete the local copy
As a result, any merge attempt involving that file will succeed after the next restart of LogScale.
Fixed an issue that could cause repositories undeleted using the mechanism described at Restoring a Repository or View to be only partially restored. Some deleted datasources within the repositories could erroneously be skipped during restoration.
For more information, see Restoring a Repository or View.
Dashboards and Widgets
Field values containing
%
would not be resolved correctly in interactions. This issue has been fixed.
Ingestion
The buttons used for editing and deleting an ingest listener were overlapping in Safari on the Ingest Listeners page under a repository. This issue has been fixed.
Functions
Results for empty buckets didn't include the steps after the first aggregator of the subquery. This issue has now been fixed.

The
match()
function, when using a JSON file containing an object with a missing field, could lead to an internal error.

The
regex()
function has been fixed for cases where
\Q...\E
could cause problems for named capturing groups.

The
array:filter()
function has been fixed for an issue that caused incorrect output element values in certain circumstances.
Other
A cluster with very little disk space left could result in excessive logging from
com.humio.distribution.RendezvousSegmentDistribution
.

Fixed a race condition that could leave a query in a state where it would cause an excessive amount of 404 HTTP requests, adding unnecessary noise and a bit of extra load to the system.
A minor logging issue has been fixed:
ClusterHostAliveStats
would log that hosts were "changed from being considered dead to alive" on hosts that had just rebooted, when such hosts actually consider all other nodes alive for a little while, to allow the booting node some time to hear heartbeats from others.

A boot-time version checking issue could cause LogScale to crash on boot when joining a fresh cluster; the first node to join such a cluster would crash.
Packages
Updating a Package failed when using anything other than a personal user token. This issue has been fixed.

Updating a package with a lookup file and a parser/scheduled search/filter alert/alert containing match() would fail if the new
column
parameter did not exist in the old lookup file. This issue has now been fixed.

Aligned the requirements to allow all tokens (with the correct permissions) to install and update Packages.

Fixed a broken link from the saved query asset in
Packages
to the
Search
page.

The alert types in Package Marketplace were showing twice. This is now fixed so one type is properly shown as expected.
Improvement
Storage
Digest may now be reassigned in a way that assigns partitions unevenly to hosts. This supports clusters where hosts are not evenly sized, so an even partition assignment is not expected.
Falcon LogScale 1.112.3 LTS (2024-01-30)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.112.3 | LTS | 2024-01-30 | Cloud | 2024-11-30 | No | 1.70.0 | No |
TAR Checksum | Value |
---|---|
MD5 | 88000e0267f6ceb570e47647d6bd9fe4 |
SHA1 | fc20e83976d56a7ed16822798afe4ef0f2b62982 |
SHA256 | beeb00d0ca09227f97bde119bbbc55b2cd48a2a35756de28c2a12f4ece7b906d |
SHA512 | 723b19908f9147efc9d6c4ab0a6974c0af2b5e78f2575416d004100929e21b566f39cc80f7ede449640ef2dfc75706c56d047c3e5b06ad31028edf9e0f3ad8c9 |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 21 | b2118a56091ac892ccadba636845cf524dbe6033b128b6ae0f59aacad4fdecd8 |
humio-core | 21 | d127e5606cd70b6ba851faa22e0f8a9ba964d48d7c72d1e1e294df138dc4748e |
kafka | 21 | 3da6f48ddd0335f6d29ee0215d07a6e199a93acd528061ca59226243ebd8031e |
zookeeper | 21 | 503cf46cb2e18df07459f0a2bfa911cf150cd3753393c5a051ed6ec44a0152ea |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.112.3/server-1.112.3.tar.gz
These notes include entries from the following previous releases: 1.112.1, 1.112.2
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Removed
Items that have been removed as of this release.
Installation and Deployment
All ZooKeeper-related functionality for LogScale was deprecated in December 2022, and is now removed:
Removed the ZooKeeper status page from the User Interface
Removed the ZooKeeper related GraphQL mutations
Removed the migration support for node IDs created by ZooKeeper, as we no longer support upgrading from versions prior to 1.70.
Depending on your chosen Kafka deployment, ZooKeeper may still be required to support Kafka.
Running on Java 11, 12, 13, 14, 15 and 16 is no longer supported. The minimum supported Java version is 17 starting from this LogScale release.
GraphQL API
The deprecated client mutation ID concept has now been removed from the GraphQL API:
Removed the clientMutationId argument from many mutations.
Removed the clientMutationId field from the returned type of many mutations.
Renamed the ClientMutationID datatype returned from some mutations to
BooleanResultType
. Removed the clientMutationId field on the returned type and replaced it with a boolean field named result
.

Most deprecated queries, mutations, and fields have now been removed from the GraphQL API.
Storage
The unused
humio-backup
symlink inside Docker containers has been removed.

Configuration
Some deprecated configuration variables have now been removed:
GCP_STORAGE_UPLOAD_CONCURRENCY
GCP_STORAGE_DOWNLOAD_CONCURRENCY
They have been replaced by
S3_STORAGE_CONCURRENCY
and
GCP_STORAGE_CONCURRENCY
settings that internally handle rate-limiting responses from the bucket provider.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following
REST
endpoints for deleting events have been deprecated:

/api/v1/dataspaces/(id)/deleteevents
/api/v1/repositories/(id)/deleteevents

The new GraphQL mutation redactEvents should be used instead.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Automation and Alerts
We have changed how Standard Alerts handle query warnings. Previously, LogScale only triggered alerts if there were no query warnings. Now, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error. Until now, all query warnings were treated as errors: the alert did not trigger even though it produced results, and the alert was shown with an error in LogScale. Most query warnings mean that not all data was queried. The previous behaviour prevented the alert from triggering in cases where it would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it also stopped some alerts from triggering even though they would still have triggered with all data available. In short, you would almost never get an alert that you should not have gotten, but you would sometimes miss an alert that you should have gotten. This behavior has now been reverted. With this change, we no longer recommend setting the configuration option
ALERT_DESPITE_WARNINGS
to true
, since it treats all query warnings as non-errors, and there are a few query warnings that should make the alert fail.

For more information, see Diagnosing Alerts.
Upgrades
Changes that may occur or be required during an upgrade.
Security
xmlsec has been upgraded to 2.3.4 to address CVE-2023-44483.
Configuration
Docker containers have been upgraded to Java 21.
New features and improvements
Installation and Deployment
LogScale is now configured to write fatal JVM error logs in the JVM logging directory, which is specified using the
JVM_LOG_DIR
variable. The default directory is
/logs/humio
.
UI Changes
Most tables inside the LogScale UI now support resizing columns, except the
Table
widget used during search.

The behavior of the ComboBox has changed: the drop-down is not filtered until the text in the filter field has been edited, allowing you to easily copy, alter, or clear the text.
The list of permissions now has a specific custom order in the UI, as follows.
Organization:
Organization settings
Repository and view management
Permissions and user management
Fleet management
Query monitoring
Other
Cluster management:
Cluster management
Organization management
Subdomains
Others
A combined view of permissions is now available to show all roles listed together when there is more than one role under each repository, organization, or system.
For more information, see Aggregate Permissions.
It is now possible to highlight results based on the filters applied in queries. This helps significantly when trying to understand why a query matches the results or when looking for a specific part of the events text.
For more information, see Filter Match Highlighting.
Automation and Alerts
A new button has been added to the Scheduled Searches form allowing import of a Scheduled Search from a template or package.

When creating or updating Scheduled Searches using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of
"packagescope/packagename:actionname"
. Actions in packages will no longer be found if using an unqualified name.

When generating CSV files for attaching to emails or uploading to LogScale in actions, or when using the message template {events_html}, the field @ingesttimestamp is now formatted similarly to @timestamp.
The UI flow for Scheduled Searches has been updated: creating one now takes you directly to the New Scheduled Search form.
The Alert forms no longer show any errors when the alert is disabled.
GraphQL API
Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are
1000
for authenticated and
150
for unauthenticated users. Cluster administrators can adjust these limits with the
GraphQLSelectionSizeLimit
and
UnauthenticatedGraphQLSelectionSizeLimit
dynamic configurations.

The contentHash field on the
File
output type has been reintroduced.
Storage
JVM_TMP_DIR
has been added to the launcher script. This option is used for configuring
java.io.tmpdir
and
jna.tmpdir
for the JVM. The directory will default to
jvm-tmp
inside the directory specified by the
DIRECTORY
setting. This default should alleviate issues starting LogScale on some systems due to the
/tmp
directory being marked as
noexec
.
For more information, see Troubleshooting: Error Starting LogScale due to Exec permissions on /tmp.
Bucket storage cleaning of
tmp
files now only runs on a few nodes in the cluster rather than on all nodes.
Configuration
The new configuration option
LOCAL_STORAGE_PREFILL_PERCENTAGE
has been added. For more information, see
LOCAL_STORAGE_PREFILL_PERCENTAGE
.

Query queueing based on the available memory in the query coordinator is enabled by default by treating the dynamic configuration
QueryCoordinatorMaxHeapFraction
as 0.5 if it has not been set. To disable queueing, set
QueryCoordinatorMaxHeapFraction
to 1000.

Set the default value of
LOCAL_STORAGE_PERCENTAGE
to
85
, and the minimum value to
0
. The default was previously to leave this unset, which is not safe in clusters where bucket storage contains more data than will fit on local drives.

The new environment variable
DISABLE_BUCKET_CLEANING_TMP_FILES
has been introduced. It allows reducing the amount of listing of
tmp
files in the bucket.
Dashboards and Widgets
You can enable the export of Dashboards to a PDF file, with many options available to control the output layout and formatting.
The feature is available to all users who already have access to dashboard data. This is the first of two feature releases, aiming to provide full schedulable PDF reporting capabilities to LogScale.
For more information, see Export Dashboards as PDF.
The new
Gauge
widget is introduced: it allows you to represent values on a fixed scale, offering a visual and intuitive way to monitor key performance metrics.
For more information, see Gauge Widget.
A parameter configuration option has been added to support invalidation of parameter inputs. The format is a comma-separated list of invalid input patterns (regexes).
Introduced a new style option Show 'Others' to the Time Chart Widget: it allows you to show/hide other series when there are more series than the maximum allowed in the chart.
A parameter configuration option has been added to allow setting a custom message when a parameter input is invalid.
New formatting options have been introduced for the
Table
widget, to get actionable insights from your data faster:
Conditional formatting of table cells
Text wrapping and column resizing
Row numbering
Number formatting
Link formatting
Columns hiding
For more information, see Table Widget.
Ingestion
When writing parsers, the fields produced by a test case are now available for autocompletion in the editor.
For more information, see Using the Parser Code Editor.
Log Collector
The Fleet Management tab on the
Fleet Overview
page has been renamed to Data Ingest.
Functions
parseCEF()
and
parseLEEF()
functions now have an option to change the prefix of the header fields.

Field names with special characters are now supported in Array Query Functions using backtick quoting.
For more information, see Using Array Query Functions.
Packages
Filter alerts and Standard alerts are now shown in the same tab Alerts under Assets when installing or viewing installed Packages.
It is now possible to see the type of action in Packages (Marketplace, Installed and Create a package).
Fixed in this release
UI Changes
Queries could "flicker" for a short period causing "negative alerts" to trigger for no reason (negative alerts are alerts that check for the absence of events). This issue has been fixed.
The following issue has been fixed on the
Search
page: if regular expressions contained named groups with special characters (underscore
_
for example), a recent change with the introduction of Filter Match Highlighting would cause a server error and hang the UI.

The following items about Saving Queries have been fixed:
The Search... field for saved queries did not return what would be expected.
Upon reopening the Search... dropdown after having filled out the Search... field, the text would still be present in the Search... field but would not filter the queries.
Added focus on the Search... field when reopening the dropdown.
Automation and Alerts
Notifications on problems with Filter Alerts were not automatically removed when the problem was solved. This issue is now fixed.
An issue causing filter alerts to fail right after a cluster restart has now been fixed.
When used with Filter Alerts, the {events_html} message template would not keep the order of the fields from the Alert query.
GraphQL API
When trying to delete an Alert, Scheduled Search or Dashboard using a mutation for one of the other types, it would end up in a state where it was not deleted, but could not run either. This issue is now fixed.
Storage
A workaround has been identified for cases where segment files on local disk no longer pass their internal checksum test and are detected as "broken" by the background merge process:
Ensure a copy of the file is present in the bucket storage backing up the cluster
Delete the local copy
As a result, any merge attempt involving that file will succeed after the next restart of LogScale.
Dashboards and Widgets
Field values containing
%
would not be resolved correctly in interactions. This issue has been fixed.
Ingestion
The buttons used for editing and deleting an ingest listener were overlapping in Safari on the Ingest Listeners page under a repository. This issue has been fixed.
Functions
Results for empty buckets didn't include the steps after the first aggregator of the subquery. This issue has now been fixed.

The
match()
function, when using a JSON file containing an object with a missing field, could lead to an internal error.

The
regex()
function has been fixed for cases where
\Q...\E
could cause problems for named capturing groups.

The
array:filter()
function has been fixed for an issue that caused incorrect output element values in certain circumstances.
Other
A cluster with very little disk space left could result in excessive logging from
com.humio.distribution.RendezvousSegmentDistribution
.

Fixed a race condition that could leave a query in a state where it would cause an excessive amount of 404 HTTP requests, adding unnecessary noise and a bit of extra load to the system.
A minor logging issue has been fixed:
ClusterHostAliveStats
would log that hosts were "changed from being considered dead to alive" on hosts that had just rebooted, when such hosts actually consider all other nodes alive for a little while, to allow the booting node some time to hear heartbeats from others.

A boot-time version checking issue could cause LogScale to crash on boot when joining a fresh cluster; the first node to join such a cluster would crash.
Packages
Updating a Package failed when using anything other than a personal user token. This issue has been fixed.

Updating a package with a lookup file and a parser/scheduled search/filter alert/alert containing match() would fail if the new
column
parameter did not exist in the old lookup file. This issue has now been fixed.

Aligned the requirements to allow all tokens (with the correct permissions) to install and update Packages.

Fixed a broken link from the saved query asset in
Packages
to the
Search
page.

The alert types in Package Marketplace were showing twice. This is now fixed so one type is properly shown as expected.
Falcon LogScale 1.112.2 LTS (2024-01-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.112.2 | LTS | 2024-01-22 | Cloud | 2024-11-30 | No | 1.70.0 | No |
TAR Checksum | Value |
---|---|
MD5 | bb1877af748c84bba281516a911b0823 |
SHA1 | 297c770af75cfc5700a069878eb08d85e817ece8 |
SHA256 | 6113922ff798079c87265ae4f7e7ecc3c3b8a94b2feeaf4ea2d6e81201d2fc83 |
SHA512 | bc789a19665d430281cfbe1241e0c5a5f4e01edbc5060295cb1215d65d2917a058c9616735ff210e2eaa5c41bba406aa63e17ba973e3cdf5760484e946e75ded |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 21 | 7a37137d187338bc1bf46e3b4daa5659914fbd15758158f33a88db1f268cac81 |
humio-core | 21 | dddd6f2ed3ab4a5b21c40074cbc7a61526887415999273933bdc950740cb3a1b |
kafka | 21 | 0572df2b7ac4c78e495fde4180c099375c8e33e8fd20dbec8befd0554c706c7a |
zookeeper | 21 | b920b2354194c91f3e6acee62d205ed6b02379dd780458fda9bddfaf58ac4b23 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.112.2/server-1.112.2.tar.gz
These notes include entries from the following previous releases: 1.112.1
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Removed
Items that have been removed as of this release.
Installation and Deployment
All ZooKeeper-related functionality for LogScale was deprecated in December 2022, and is now removed:
Removed the ZooKeeper status page from the User Interface
Removed the ZooKeeper related GraphQL mutations
Removed the migration support for node IDs created by ZooKeeper, as we no longer support upgrading from versions prior to 1.70.
Depending on your chosen Kafka deployment, ZooKeeper may still be required to support Kafka.
Running on Java 11, 12, 13, 14, 15 and 16 is no longer supported. The minimum supported Java version is 17 starting from this LogScale release.
GraphQL API
The deprecated client mutation ID concept is now being removed from the GraphQL API:
Removed the clientMutationId argument from many mutations.
Removed the clientMutationId field from the returned type of many mutations.
Renamed the ClientMutationID datatype returned from some mutations to BooleanResultType; removed the clientMutationId field on the returned type and replaced it with a boolean field named result.
Most deprecated queries, mutations, and fields have now been removed from the GraphQL API.
Storage
The unused humio-backup symlink inside Docker containers has been removed.
Configuration
Some deprecated configuration variables have now been removed:
GCP_STORAGE_UPLOAD_CONCURRENCY
GCP_STORAGE_DOWNLOAD_CONCURRENCY
They have been replaced by
S3_STORAGE_CONCURRENCY
andGCP_STORAGE_CONCURRENCY
settings that internally handle rate-limiting responses from the bucket provider.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following REST endpoints for deleting events have been deprecated:
/api/v1/dataspaces/(id)/deleteevents
/api/v1/repositories/(id)/deleteevents
The new GraphQL mutation redactEvents should be used instead.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Automation and Alerts
We have changed how Standard Alerts handle query warnings. Previously, LogScale only triggered alerts if there were no query warnings: all query warnings were treated as errors, so an alert that produced results did not trigger, and the alert was shown with an error in LogScale. Now, alerts trigger despite most query warnings, and the alert status shows a warning instead of an error. Most query warnings mean that not all data was queried. The previous behaviour prevented an alert from triggering in cases where it would not have triggered had all data been available (for instance, an alert that triggers when a count of events drops below a threshold), but it also stopped some alerts that would still have triggered with all data available. In other words, you would almost never get an alert that you should not have gotten, but you would sometimes not get an alert that you should have gotten. This trade-off has now been reversed. With this change, we no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and a few query warnings should make the alert fail. For more information, see Diagnosing Alerts.
Upgrades
Changes that may occur or be required during an upgrade.
Security
xmlsec has been upgraded to 2.3.4 to address CVE-2023-44483 issue.
Configuration
Docker containers have been upgraded to Java 21.
New features and improvements
Installation and Deployment
Configure LogScale to write fatal JVM error logs in the JVM logging directory, which is specified using
JVM_LOG_DIR
variable. The default directory is/logs/humio
.
UI Changes
Most tables inside the LogScale UI now support resizing columns, except the Table widget used during search.
The behavior of the ComboBox has changed: the drop-down is not filtered until the text in the filter field has been edited, allowing you to easily copy, alter, or clear the text.
The list of permissions now has a specific custom order in the UI, as follows.
Organization:
Organization settings
Repository and view management
Permissions and user management
Fleet management
Query monitoring
Other
Cluster management:
Cluster management
Organization management
Subdomains
Others
A combined view of permissions is now available to show all roles listed together when there is more than one role under each repository, organization, or system.
For more information, see Aggregate Permissions.
It is now possible to highlight results based on the filters applied in queries. This helps significantly when trying to understand why a query matches the results or when looking for a specific part of the events text.
For more information, see Filter Match Highlighting.
Automation and Alerts
A new button has been added to the Scheduled Searches form, allowing import of a Scheduled Search from a template or package.
When creating or updating Scheduled Searches using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of the form "packagescope/packagename:actionname". Actions in packages will no longer be found when using an unqualified name.
When generating CSV files for attaching to emails or uploading to LogScale in actions, or when using the message template {events_html}, the field @ingesttimestamp is now formatted similarly to @timestamp.
The UI flow for Scheduled Searches has been updated: clicking to create one now goes directly to the New Scheduled Search form.
The Alert forms no longer show errors when the alert is disabled.
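The qualified action name format described above can be split into its parts with a small helper. This is an illustrative sketch only: the function and the example names are assumptions, not part of any LogScale API.

```python
def parse_qualified_action_name(name: str):
    """Split a qualified action name of the form
    "packagescope/packagename:actionname" into its parts.

    Returns a (scope, package, action) tuple, or None when the
    name is unqualified (such names are no longer resolved).
    Illustrative helper only, not part of the LogScale API.
    """
    if "/" not in name or ":" not in name:
        return None
    scope_and_package, _, action = name.rpartition(":")
    scope, _, package = scope_and_package.partition("/")
    if not (scope and package and action):
        return None
    return scope, package, action

# Hypothetical example names:
print(parse_qualified_action_name("myscope/mypackage:email-action"))
```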
GraphQL API
Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are 1000 for authenticated and 150 for unauthenticated users. Cluster administrators can adjust these limits with the GraphQLSelectionSizeLimit and UnauthenticatedGraphQLSelectionSizeLimit dynamic configurations.
The contentHash field on the File output type has been reintroduced.
Storage
JVM_TMP_DIR has been added to the launcher script. This option configures java.io.tmpdir and jna.tmpdir for the JVM. The directory defaults to jvm-tmp inside the directory specified by the DIRECTORY setting. This default should alleviate issues starting LogScale on some systems due to the /tmp directory being marked as noexec.
For more information, see Troubleshooting: Error Starting LogScale due to Exec permissions on /tmp.
Bucket storage cleaning of
tmp
files now only runs on a few nodes in the cluster rather than on all nodes.
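The JVM_TMP_DIR default described above can be sketched as follows. The precedence logic is an assumption about the launcher script's behavior, shown only to illustrate how the setting composes with DIRECTORY.

```python
import os

def resolve_jvm_tmp_dir(env: dict) -> str:
    """Pick the JVM temp directory the way the launcher script is
    described to: an explicit JVM_TMP_DIR wins; otherwise default to
    "jvm-tmp" inside the data directory given by DIRECTORY.
    (Assumed behavior, for illustration only.)"""
    explicit = env.get("JVM_TMP_DIR")
    if explicit:
        return explicit
    return os.path.join(env["DIRECTORY"], "jvm-tmp")

# The resolved path would then be handed to the JVM via
# -Djava.io.tmpdir=... and -Djna.tmpdir=...
```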
Configuration
The new configuration option LOCAL_STORAGE_PREFILL_PERCENTAGE has been added. For more information, see LOCAL_STORAGE_PREFILL_PERCENTAGE.
Query queueing based on the available memory in the query coordinator is enabled by default by treating the dynamic configuration QueryCoordinatorMaxHeapFraction as 0.5 if it has not been set. To disable queueing, set QueryCoordinatorMaxHeapFraction to 1000.
Set the default value of LOCAL_STORAGE_PERCENTAGE to 85, and the minimum value to 0. The default was previously to leave this unset, which is not safe in clusters where bucket storage contains more data than will fit on local drives.
The new environment variable DISABLE_BUCKET_CLEANING_TMP_FILES has been introduced. It reduces the amount of listing of tmp files in the bucket.
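As a sketch of what a percentage-of-disk threshold like the LOCAL_STORAGE_PERCENTAGE default of 85 means in practice (the arithmetic is illustrative; the actual segment-eviction logic is internal to LogScale):

```python
def exceeds_local_storage_limit(used_bytes: int, total_bytes: int,
                                limit_percentage: int = 85) -> bool:
    """Illustrative check: is local disk usage above the configured
    percentage of total capacity? The default mirrors the new
    LOCAL_STORAGE_PERCENTAGE default of 85."""
    # Integer cross-multiplication avoids float rounding issues.
    return used_bytes * 100 > total_bytes * limit_percentage
```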
Dashboards and Widgets
You can enable the export of Dashboards to a PDF file, with many options available to control the output layout and formatting.
The feature is available to all users who already have access to dashboard data. This is the first of two feature releases, aiming to provide full schedulable PDF reporting capabilities to LogScale.
For more information, see Export Dashboards as PDF.
The new Gauge widget is introduced: it allows you to represent values on a fixed scale, offering a visual and intuitive way to monitor key performance metrics. For more information, see Gauge Widget.
A parameter configuration option has been added to support invalidation of parameter inputs. The format for this is a comma separated list of invalid input patterns (regexes).
Introduced a new style option Show 'Others' to the Time Chart Widget: it allows you to show/hide other series when there are more series than the maximum allowed in the chart.
A parameter configuration option has been added to allow setting a custom message when a parameter input is invalid.
New formatting options have been introduced for the Table widget, to get actionable insights from your data faster:
Conditional formatting of table cells
Text wrapping and column resizing
Row numbering
Number formatting
Link formatting
Columns hiding
For more information, see Table Widget.
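The invalid-input parameter option mentioned above takes a comma-separated list of regexes. A minimal sketch of how such a deny-list could be evaluated (the splitting and re.search matching are assumptions for illustration, not the widget's actual implementation):

```python
import re

def is_invalid_parameter_input(value: str, invalid_patterns: str) -> bool:
    """Return True when `value` matches any regex in the
    comma-separated deny-list `invalid_patterns`.
    Assumes the patterns themselves contain no commas."""
    for pattern in invalid_patterns.split(","):
        pattern = pattern.strip()
        if pattern and re.search(pattern, value):
            return True
    return False
```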
Ingestion
When writing parsers, the fields produced by a test case are now available for autocompletion in the editor.
For more information, see Using the Parser Code Editor.
Log Collector
The Fleet Management tab on the Fleet Overview page has been renamed to Data Ingest.
Functions
The parseCEF() and parseLEEF() functions now have an option to change the prefix of the header fields.
Field names with special characters are now supported in Array Query Functions using backtick quoting.
For more information, see Using Array Query Functions.
Packages
Filter alerts and Standard alerts are now shown in the same tab Alerts under Assets when installing or viewing installed Packages.
It is now possible to see the type of action in Packages (Marketplace, Installed and Create a package).
Fixed in this release
UI Changes
Queries could "flicker" for a short period causing "negative alerts" to trigger for no reason (negative alerts are alerts that check for the absence of events). This issue has been fixed.
The following issue has been fixed on the Search page: if regular expressions contained named groups with special characters (underscore _, for example), a recent change with the introduction of Filter Match Highlighting would cause a server error and hang the UI.
The following items about saving queries have been fixed:
The Search... field for saved queries did not return what would be expected.
Upon reopening the dropdown after having filled out the Search... field, the text would still be present in the Search... field but would not filter the queries.
Added focus on the Search... field when reopening the dropdown.
Automation and Alerts
Notifications on problems with Filter Alerts were not automatically removed when the problem was solved. This issue is now fixed.
An issue causing Filter Alerts to fail right after a cluster restart has now been fixed.
When used with Filter Alerts, the {events_html} message template would not keep the order of the fields from the Alert query.
GraphQL API
When trying to delete an Alert, Scheduled Search or Dashboard using a mutation for one of the other types, it would end up in a state where it was not deleted, but could not run either. This issue is now fixed.
Storage
A workaround has been identified for cases where segment files on local disk no longer pass their internal checksum test and are detected as "broken" by the background merge process:
Ensure a copy of the local file is present in the bucket storage, backing up the cluster.
Delete the local copy.
As a result, any merge attempt involving that file will succeed after the next restart of LogScale.
Dashboards and Widgets
Field values containing
%
would not be resolved correctly in interactions. This issue has been fixed.
Ingestion
The buttons used for editing and deleting an ingest listener were overlapping in Safari on the Ingest Listeners page under a repository. This issue has been fixed.
Functions
Results for empty buckets didn't include the steps after the first aggregator of the subquery. This issue has now been fixed.
The match() function could lead to an internal error when using a JSON file containing an object with a missing field. This issue has been fixed.
The regex() function has been fixed for cases where \Q...\E could cause problems for named capturing groups.
The array:filter() function has been fixed for an issue that caused incorrect output element values in certain circumstances.
Other
A cluster with very little disk space left could result in excessive logging from com.humio.distribution.RendezvousSegmentDistribution.
Fixed a race condition that could leave a query in a state where it caused an excessive number of 404 HTTP requests, adding unnecessary noise and a bit of extra load to the system.
A minor logging issue has been fixed: ClusterHostAliveStats would log that hosts were "changed from being considered dead to alive" on hosts that had just rebooted, when such hosts actually consider all other nodes alive for a little while, to allow the booting node some time to hear heartbeats from others.
A boot-time version checking issue has been fixed that could cause LogScale to crash on boot when joining a fresh cluster: the first node to join that cluster would crash.
Packages
Updating a Package failed when using anything other than a personal user token. This issue has been fixed.
Updating a package with a lookup file and a parser, scheduled search, filter alert, or alert containing match would fail if the new column parameter did not exist in the old lookup file. This issue has now been fixed.
The requirements have been aligned to allow all tokens (with the correct permissions) to install and update Packages.
Fixed a broken link from the saved query asset in Packages to the Search page.
The alert types in Package Marketplace were showing twice; this is now fixed so each type shows once, as expected.
Falcon LogScale 1.112.1 LTS (2023-11-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.112.1 | LTS | 2023-11-15 | Cloud | 2024-11-30 | No | 1.70.0 | No |
TAR Checksum | Value |
---|---|
MD5 | adc304d42f49666a11a9343ce0b1cf45 |
SHA1 | 2d46f12e23448be8e966780e3bbefb8c24706615 |
SHA256 | ee624502c5a88774ac03ca56984c4a1aa76186f4d848b878106189c45d4855e0 |
SHA512 | 9231d4e6a250d7d9eaeaaf67b99979c5cbfe070d0a8f57b816e9cb0a76c3d76a93957268ece8a6ad0d296816a4a08c3259f85dd7e075b739f8d7351243ec9842 |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 21 | 307d54f45c193743e6ef1e6b81cb6e278b77460351a9f1a4b1b3c4b14c9dd198 |
humio-core | 21 | 73ff5f4ce9f0b4d5a7dace1f7858d06948ff9ea05cda2acd45ffd1c2ff1e055b |
kafka | 21 | 39b5cf13a792c55b935bfbff81de9c384bd8555fe0ebf572debb639eb5638390 |
zookeeper | 21 | ccb43bdf0b2ca238b79b98069b9cf050fc39a60a5bd55f5e7709e76b6bab72ea |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.112.1/server-1.112.1.tar.gz
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Removed
Items that have been removed as of this release.
Installation and Deployment
All ZooKeeper-related functionality for LogScale was deprecated in December 2022, and is now removed:
Removed the ZooKeeper status page from the User Interface
Removed the ZooKeeper related GraphQL mutations
Removed the migration support for node IDs created by ZooKeeper, as we no longer support upgrading from versions prior to 1.70.
Depending on your chosen Kafka deployment, ZooKeeper may still be required to support Kafka.
Running on Java 11, 12, 13, 14, 15 and 16 is no longer supported. The minimum supported Java version is 17 starting from this LogScale release.
GraphQL API
The deprecated client mutation ID concept is now being removed from the GraphQL API:
Removed the clientMutationId argument from many mutations.
Removed the clientMutationId field from the returned type of many mutations.
Renamed the ClientMutationID datatype returned from some mutations to BooleanResultType; removed the clientMutationId field on the returned type and replaced it with a boolean field named result.
Most deprecated queries, mutations, and fields have now been removed from the GraphQL API.
Storage
The unused humio-backup symlink inside Docker containers has been removed.
Configuration
Some deprecated configuration variables have now been removed:
GCP_STORAGE_UPLOAD_CONCURRENCY
GCP_STORAGE_DOWNLOAD_CONCURRENCY
They have been replaced by
S3_STORAGE_CONCURRENCY
andGCP_STORAGE_CONCURRENCY
settings that internally handle rate-limiting responses from the bucket provider.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following REST endpoints for deleting events have been deprecated:
/api/v1/dataspaces/(id)/deleteevents
/api/v1/repositories/(id)/deleteevents
The new GraphQL mutation redactEvents should be used instead.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Automation and Alerts
We have changed how Standard Alerts handle query warnings. Previously, LogScale only triggered alerts if there were no query warnings: all query warnings were treated as errors, so an alert that produced results did not trigger, and the alert was shown with an error in LogScale. Now, alerts trigger despite most query warnings, and the alert status shows a warning instead of an error. Most query warnings mean that not all data was queried. The previous behaviour prevented an alert from triggering in cases where it would not have triggered had all data been available (for instance, an alert that triggers when a count of events drops below a threshold), but it also stopped some alerts that would still have triggered with all data available. In other words, you would almost never get an alert that you should not have gotten, but you would sometimes not get an alert that you should have gotten. This trade-off has now been reversed. With this change, we no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and a few query warnings should make the alert fail. For more information, see Diagnosing Alerts.
Upgrades
Changes that may occur or be required during an upgrade.
Security
xmlsec has been upgraded to 2.3.4 to address CVE-2023-44483 issue.
Configuration
Docker containers have been upgraded to Java 21.
New features and improvements
Installation and Deployment
Configure LogScale to write fatal JVM error logs in the JVM logging directory, which is specified using
JVM_LOG_DIR
variable. The default directory is/logs/humio
.
UI Changes
Most tables inside the LogScale UI now support resizing columns, except the Table widget used during search.
The behavior of the ComboBox has changed: the drop-down is not filtered until the text in the filter field has been edited, allowing you to easily copy, alter, or clear the text.
The list of permissions now has a specific custom order in the UI, as follows.
Organization:
Organization settings
Repository and view management
Permissions and user management
Fleet management
Query monitoring
Other
Cluster management:
Cluster management
Organization management
Subdomains
Others
A combined view of permissions is now available to show all roles listed together when there is more than one role under each repository, organization, or system.
For more information, see Aggregate Permissions.
It is now possible to highlight results based on the filters applied in queries. This helps significantly when trying to understand why a query matches the results or when looking for a specific part of the events text.
For more information, see Filter Match Highlighting.
Automation and Alerts
A new button has been added to the Scheduled Searches form, allowing import of a Scheduled Search from a template or package.
When creating or updating Scheduled Searches using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of the form "packagescope/packagename:actionname". Actions in packages will no longer be found when using an unqualified name.
When generating CSV files for attaching to emails or uploading to LogScale in actions, or when using the message template {events_html}, the field @ingesttimestamp is now formatted similarly to @timestamp.
The UI flow for Scheduled Searches has been updated: clicking to create one now goes directly to the New Scheduled Search form.
The Alert forms no longer show errors when the alert is disabled.
GraphQL API
The contentHash field on the
File
output type has been reintroduced.
Storage
JVM_TMP_DIR has been added to the launcher script. This option configures java.io.tmpdir and jna.tmpdir for the JVM. The directory defaults to jvm-tmp inside the directory specified by the DIRECTORY setting. This default should alleviate issues starting LogScale on some systems due to the /tmp directory being marked as noexec.
For more information, see Troubleshooting: Error Starting LogScale due to Exec permissions on /tmp.
Bucket storage cleaning of
tmp
files now only runs on a few nodes in the cluster rather than on all nodes.
Configuration
The new configuration option LOCAL_STORAGE_PREFILL_PERCENTAGE has been added. For more information, see LOCAL_STORAGE_PREFILL_PERCENTAGE.
Query queueing based on the available memory in the query coordinator is enabled by default by treating the dynamic configuration QueryCoordinatorMaxHeapFraction as 0.5 if it has not been set. To disable queueing, set QueryCoordinatorMaxHeapFraction to 1000.
Set the default value of LOCAL_STORAGE_PERCENTAGE to 85, and the minimum value to 0. The default was previously to leave this unset, which is not safe in clusters where bucket storage contains more data than will fit on local drives.
The new environment variable DISABLE_BUCKET_CLEANING_TMP_FILES has been introduced. It reduces the amount of listing of tmp files in the bucket.
Dashboards and Widgets
You can enable the export of Dashboards to a PDF file, with many options available to control the output layout and formatting.
The feature is available to all users who already have access to dashboard data. This is the first of two feature releases, aiming to provide full schedulable PDF reporting capabilities to LogScale.
For more information, see Export Dashboards as PDF.
The new Gauge widget is introduced: it allows you to represent values on a fixed scale, offering a visual and intuitive way to monitor key performance metrics. For more information, see Gauge Widget.
A parameter configuration option has been added to support invalidation of parameter inputs. The format for this is a comma separated list of invalid input patterns (regexes).
Introduced a new style option Show 'Others' to the Time Chart Widget: it allows you to show/hide other series when there are more series than the maximum allowed in the chart.
A parameter configuration option has been added to allow setting a custom message when a parameter input is invalid.
New formatting options have been introduced for the Table widget, to get actionable insights from your data faster:
Conditional formatting of table cells
Text wrapping and column resizing
Row numbering
Number formatting
Link formatting
Columns hiding
For more information, see Table Widget.
Ingestion
When writing parsers, the fields produced by a test case are now available for autocompletion in the editor.
For more information, see Using the Parser Code Editor.
Log Collector
The Fleet Management tab on the Fleet Overview page has been renamed to Data Ingest.
Functions
The parseCEF() and parseLEEF() functions now have an option to change the prefix of the header fields.
Field names with special characters are now supported in Array Query Functions using backtick quoting.
For more information, see Using Array Query Functions.
Packages
Filter alerts and Standard alerts are now shown in the same tab Alerts under Assets when installing or viewing installed Packages.
It is now possible to see the type of action in Packages (Marketplace, Installed and Create a package).
Fixed in this release
UI Changes
Queries could "flicker" for a short period causing "negative alerts" to trigger for no reason (negative alerts are alerts that check for the absence of events). This issue has been fixed.
The following issue has been fixed on the Search page: if regular expressions contained named groups with special characters (underscore _, for example), a recent change with the introduction of Filter Match Highlighting would cause a server error and hang the UI.
The following items about saving queries have been fixed:
The Search... field for saved queries did not return what would be expected.
Upon reopening the dropdown after having filled out the Search... field, the text would still be present in the Search... field but would not filter the queries.
Added focus on the Search... field when reopening the dropdown.
Automation and Alerts
Notifications on problems with Filter Alerts were not automatically removed when the problem was solved. This issue is now fixed.
An issue causing Filter Alerts to fail right after a cluster restart has now been fixed.
When used with Filter Alerts, the {events_html} message template would not keep the order of the fields from the Alert query.
GraphQL API
When trying to delete an Alert, Scheduled Search or Dashboard using a mutation for one of the other types, it would end up in a state where it was not deleted, but could not run either. This issue is now fixed.
Storage
A workaround has been identified for cases where segment files on local disk no longer pass their internal checksum test and are detected as "broken" by the background merge process:
Ensure a copy of the local file is present in the bucket storage, backing up the cluster.
Delete the local copy.
As a result, any merge attempt involving that file will succeed after the next restart of LogScale.
Dashboards and Widgets
Field values containing
%
would not be resolved correctly in interactions. This issue has been fixed.
Ingestion
The buttons used for editing and deleting an ingest listener were overlapping in Safari on the Ingest Listeners page under a repository. This issue has been fixed.
Functions
Results for empty buckets didn't include the steps after the first aggregator of the subquery. This issue has now been fixed.
The match() function could lead to an internal error when using a JSON file containing an object with a missing field. This issue has been fixed.
The regex() function has been fixed for cases where \Q...\E could cause problems for named capturing groups.
The array:filter() function has been fixed for an issue that caused incorrect output element values in certain circumstances.
Other
A cluster with very little disk space left could result in excessive logging from com.humio.distribution.RendezvousSegmentDistribution.
Fixed a race condition that could leave a query in a state where it caused an excessive number of 404 HTTP requests, adding unnecessary noise and a bit of extra load to the system.
A minor logging issue has been fixed: ClusterHostAliveStats would log that hosts were "changed from being considered dead to alive" on hosts that had just rebooted, when such hosts actually consider all other nodes alive for a little while, to allow the booting node some time to hear heartbeats from others.
A boot-time version checking issue has been fixed that could cause LogScale to crash on boot when joining a fresh cluster: the first node to join that cluster would crash.
Packages
Updating a Package failed when using anything other than a personal user token. This issue has been fixed.
Updating a package with a lookup file and a parser, scheduled search, filter alert, or alert containing match would fail if the new column parameter did not exist in the old lookup file. This issue has now been fixed.
The requirements have been aligned to allow all tokens (with the correct permissions) to install and update Packages.
Fixed a broken link from the saved query asset in Packages to the Search page.
The alert types in Package Marketplace were showing twice; this is now fixed so each type shows once, as expected.
Falcon LogScale 1.112.0 GA (2023-10-24)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.112.0 | GA | 2023-10-24 | Cloud | 2024-11-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Automation and Alerts
We have changed how Standard Alerts handle query warnings. Previously, LogScale only triggered alerts if there were no query warnings: all query warnings were treated as errors, so an alert that produced results did not trigger, and the alert was shown with an error in LogScale. Now, alerts trigger despite most query warnings, and the alert status shows a warning instead of an error. Most query warnings mean that not all data was queried. The previous behaviour prevented an alert from triggering in cases where it would not have triggered had all data been available (for instance, an alert that triggers when a count of events drops below a threshold), but it also stopped some alerts that would still have triggered with all data available. In other words, you would almost never get an alert that you should not have gotten, but you would sometimes not get an alert that you should have gotten. This trade-off has now been reversed. With this change, we no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and a few query warnings should make the alert fail. For more information, see Diagnosing Alerts.
Upgrades
Changes that may occur or be required during an upgrade.
Storage
This release introduces a change to the internal storage format used for sharing global data. Once upgraded to v1.112 or higher, it will not be possible to downgrade to a version lower than 1.112.
New features and improvements
Installation and Deployment
Configure LogScale to write fatal JVM error logs in the JVM logging directory, which is specified using
JVM_LOG_DIR
variable. The default directory is/logs/humio
.
UI Changes
The behavior of the ComboBox has changed: the drop-down is not filtered until the text in the filter field has been edited, allowing you to easily copy, alter or clear the text.
The list of permissions now has a specific custom order in the UI, as follows.
Organization:
Organization settings
Repository and view management
Permissions and user management
Fleet management
Query monitoring
Other
Cluster management:
Cluster management
Organization management
Subdomains
Others
A combined view of permissions is now available to show all roles listed together when there is more than one role under each repository, organization, or system.
For more information, see Aggregate Permissions.
Automation and Alerts
The Alert forms no longer show errors when the alert is disabled.
Dashboards and Widgets
You can enable the export of Dashboards to a PDF file, with many options available to control the output layout and formatting.
The feature is available to all users who already have access to dashboard data. This is the first of two feature releases, aiming to provide full schedulable PDF reporting capabilities to LogScale.
For more information, see Export Dashboards as PDF.
The new `Gauge` widget is introduced: it allows you to represent values on a fixed scale, offering a visual and intuitive way to monitor key performance metrics.

For more information, see Gauge Widget.
Fixed in this release
UI Changes
Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.
Queries could "flicker" for a short period causing "negative alerts" to trigger for no reason (negative alerts are alerts that check for the absence of events). This issue has been fixed.
Automation and Alerts
Notifications on problems with Filter Alerts were not automatically removed when the problem was solved. This issue is now fixed.
GraphQL API
When trying to delete an Alert, Scheduled Search or Dashboard using a mutation for one of the other types, it would end up in a state where it was not deleted, but could not run either. This issue is now fixed.
Other
A minor logging issue has been fixed: `ClusterHostAliveStats` would log that hosts were "changed from being considered dead to alive" on hosts that had just rebooted, when such hosts actually consider all other nodes alive for a little while, to allow the booting node some time to hear heartbeats from others.
Packages
The alert types in Package Marketplace were showing twice; this is now fixed so each type is shown once, as expected.
Falcon LogScale 1.111.1 GA (2023-10-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.111.1 | GA | 2023-10-28 | Cloud | 2024-11-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Fixed in this release
UI Changes
Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.
Falcon LogScale 1.111.0 GA (2023-10-10)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.111.0 | GA | 2023-10-10 | Cloud | 2024-11-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Automation and Alerts
In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale only triggers alerts if there are no query warnings. Starting with the upcoming 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.

Up until now, all query warnings have been treated as errors. This means that the alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents the alert from triggering in cases where it would not have triggered if all data had been available; for instance, an alert that triggers when a count of events drops below a threshold. On the other hand, it makes some alerts not trigger, even though they still would have if all data was available. That means that currently you will almost never get an alert that you should not have gotten, but you will sometimes not get an alert that you should have gotten. We plan to revert this.

When this change happens, we will no longer recommend setting the configuration option `ALERT_DESPITE_WARNINGS` to `true`, since it treats all query warnings as non-errors, and there are a few query warnings that should make the alert fail.
Removed
Items that have been removed as of this release.
Storage
The unused `humio-backup` symlink inside Docker containers has been removed.

Configuration
Some deprecated configuration variables have now been removed:
GCP_STORAGE_UPLOAD_CONCURRENCY
GCP_STORAGE_DOWNLOAD_CONCURRENCY
They have been replaced by the `S3_STORAGE_CONCURRENCY` and `GCP_STORAGE_CONCURRENCY` settings, which internally handle rate-limiting responses from the bucket provider.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following REST endpoints for deleting events have been deprecated:

/api/v1/dataspaces/(id)/deleteevents

/api/v1/repositories/(id)/deleteevents

The new GraphQL mutation redactEvents should be used instead.
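To make the migration concrete, here is a hedged sketch of building a redactEvents GraphQL call in place of the deprecated REST endpoints. The input field names and the epoch-millis time window below are assumptions, not a verified schema; check your cluster's GraphQL schema before use.

```python
import json

# Hedged sketch: a redactEvents GraphQL request replacing the deprecated
# REST delete endpoints. Field names in "input" are assumptions.
def build_redact_events_request(repository, query, start_ms, end_ms):
    mutation = (
        "mutation Redact($input: RedactEventsInputType!) "
        "{ redactEvents(input: $input) }"
    )
    return json.dumps({
        "query": mutation,
        "variables": {
            "input": {
                "repositoryName": repository,  # assumed field name
                "query": query,                # filter selecting events to redact
                "start": start_ms,             # assumed epoch-millis window
                "end": end_ms,
            }
        },
    })

body = build_redact_events_request("my-repo", "sensitive=secret", 0, 1700000000000)
```

The resulting JSON body would be POSTed to the cluster's GraphQL endpoint with a bearer token, in the same way as any other mutation.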
New features and improvements
Storage
`JVM_TMP_DIR` has been added to the launcher script. This option is used for configuring `java.io.tmpdir` and `jna.tmpdir` for the JVM. The directory will default to `jvm-tmp` inside the directory specified by the `DIRECTORY` setting. This default should alleviate issues starting LogScale on some systems due to the `/tmp` directory being marked as `noexec`.

For more information, see Troubleshooting: Error Starting LogScale due to Exec permissions on /tmp.
Bucket storage cleaning of `tmp` files now only runs on a few nodes in the cluster rather than on all nodes.
Configuration
The new environment variable `DISABLE_BUCKET_CLEANING_TMP_FILES` has been introduced. It allows reducing the amount of listing of `tmp` files in the bucket.
Dashboards and Widgets
New formatting options have been introduced for the `Table` widget, to get actionable insights from your data faster:

Conditional formatting of table cells

Text wrapping and column resizing

Row numbering

Number formatting

Link formatting

Column hiding
For more information, see Table Widget.
Ingestion
When writing parsers, the fields produced by a test case are now available for autocompletion in the editor.
For more information, see Using the Parser Code Editor.
Functions
Field names with special characters are now supported in Array Query Functions using backtick quoting.
For more information, see Using Array Query Functions.
Fixed in this release
UI Changes
The following issue has been fixed on the `Search` page: if regular expressions contained named groups with special characters (underscore `_` for example), a recent change with the introduction of Filter Match Highlighting would cause a server error and hang the UI.

The following items about Saving Queries have been fixed:

The Search... field for saved queries did not return what would be expected.

Upon reopening the dropdown after having filled out the Search... field, the text would still be present in the Search... field but would not filter the queries.

Added focus on the Search... field when reopening the dropdown.
Automation and Alerts
When used with Filter Alerts, the {events_html} message template would not keep the order of the fields from the Alert query.
Dashboards and Widgets
Field values containing `%` would not be resolved correctly in interactions. This issue has been fixed.
Functions
Results for empty buckets didn't include the steps after the first aggregator of the subquery. This issue has now been fixed.
Packages
Updating of a Package failed when using anything other than a personal user token. This issue has been fixed.
Aligned the requirements to allow all tokens (with the correct permissions) to install and update Packages.
Falcon LogScale 1.110.1 GA (2023-10-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.110.1 | GA | 2023-10-28 | Cloud | 2024-11-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Fixed in this release
UI Changes
Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.
Falcon LogScale 1.110.0 GA (2023-10-03)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.110.0 | GA | 2023-10-03 | Cloud | 2024-11-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Automation and Alerts
In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale only triggers alerts if there are no query warnings. Starting with the upcoming 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.

Up until now, all query warnings have been treated as errors. This means that the alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents the alert from triggering in cases where it would not have triggered if all data had been available; for instance, an alert that triggers when a count of events drops below a threshold. On the other hand, it makes some alerts not trigger, even though they still would have if all data was available. That means that currently you will almost never get an alert that you should not have gotten, but you will sometimes not get an alert that you should have gotten. We plan to revert this.

When this change happens, we will no longer recommend setting the configuration option `ALERT_DESPITE_WARNINGS` to `true`, since it treats all query warnings as non-errors, and there are a few query warnings that should make the alert fail.
New features and improvements
GraphQL API
The contentHash field on the `File` output type has been reintroduced.
Dashboards and Widgets
A parameter configuration option has been added to support invalidation of parameter inputs. The format for this is a comma separated list of invalid input patterns (regexes).
A parameter configuration option has been added to allow setting a custom message when a parameter input is invalid.
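As an illustration of how a comma-separated list of invalid-input regexes can be applied, here is a minimal Python sketch. It mirrors the concept only and is not LogScale's implementation; the patterns in `INVALID` are hypothetical examples.

```python
import re

# Illustrative sketch of the new dashboard parameter option: a value is
# rejected if it matches any pattern in a comma-separated regex list.
def is_invalid(value, invalid_patterns_csv):
    patterns = [p.strip() for p in invalid_patterns_csv.split(",") if p.strip()]
    return any(re.search(p, value) for p in patterns)

# Hypothetical patterns: reject shell metacharacters and blank-only input.
INVALID = r"[;|&], ^\s*$"
```

With these patterns, `is_invalid("a;b", INVALID)` rejects the value while `is_invalid("hostname-01", INVALID)` accepts it, and the custom message option described above would supply the error text shown to the user.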
Packages
Filter alerts and Standard alerts are now shown in the same Alerts tab under Assets when installing or viewing installed Packages.
It is now possible to see the type of action in Packages (Marketplace, Installed and Create a package).
Fixed in this release
Storage
A workaround solution has been identified for those cases where segment files on local disk no longer pass their internal checksum test and are detected as "broken" by the background merge process.
Ensure a copy of the local file is present in the bucket storage, backing up the cluster
Delete the local copy
As a result, any merge attempt involving that file will succeed after the next restart of LogScale.
Ingestion
The buttons used for editing and deleting an ingest listener were overlapping in Safari on the Ingest Listeners page under a repository. This issue has been fixed.
Functions
The `regex()` function has been fixed for cases where `\Q...\E` could cause problems for named capturing groups.

The `array:filter()` function has been fixed for an issue that caused incorrect output element values in certain circumstances.
Other
A boot-time version checking issue could cause LogScale to crash on boot when joining a fresh cluster; the first node to join such a cluster would crash. This issue has been fixed.
Packages
Fixed a broken link from the saved query asset in `Packages` to the `Search` page.
Falcon LogScale 1.109.1 GA (2023-10-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.109.1 | GA | 2023-10-28 | Cloud | 2024-11-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Fixed in this release
UI Changes
Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.
Falcon LogScale 1.109.0 GA (2023-09-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.109.0 | GA | 2023-09-26 | Cloud | 2024-11-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Automation and Alerts
In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale only triggers alerts if there are no query warnings. Starting with the upcoming 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.

Up until now, all query warnings have been treated as errors. This means that the alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents the alert from triggering in cases where it would not have triggered if all data had been available; for instance, an alert that triggers when a count of events drops below a threshold. On the other hand, it makes some alerts not trigger, even though they still would have if all data was available. That means that currently you will almost never get an alert that you should not have gotten, but you will sometimes not get an alert that you should have gotten. We plan to revert this.

When this change happens, we will no longer recommend setting the configuration option `ALERT_DESPITE_WARNINGS` to `true`, since it treats all query warnings as non-errors, and there are a few query warnings that should make the alert fail.
Upgrades
Changes that may occur or be required during an upgrade.
Configuration
Docker containers have been upgraded to Java 21.
New features and improvements
Automation and Alerts
A new button has been added to the Scheduled Searches form, allowing you to import a Scheduled Search from a template or package.

When creating or updating Scheduled Searches using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of the form packagescope/packagename:actionname. Actions in packages will no longer be found if using an unqualified name.

When generating CSV files for attaching to emails or uploading to LogScale in actions, or when using the message template {events_html}, the field @ingesttimestamp is now formatted similarly to @timestamp.

The UI flow for Scheduled Searches has been updated: you are now taken directly to the New Scheduled Search form.
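The qualified action name format packagescope/packagename:actionname can be illustrated with a small parsing sketch; the package and action names used here are hypothetical, and LogScale resolves these names server-side.

```python
# Sketch: splitting the qualified action name format
# "packagescope/packagename:actionname" into its parts.
def parse_qualified_action(name):
    if "/" in name and ":" in name:
        scope_and_pkg, action = name.rsplit(":", 1)
        scope, package = scope_and_pkg.split("/", 1)
        return {"scope": scope, "package": package, "action": action}
    # Unqualified names are no longer resolved for package actions.
    return None

parsed = parse_qualified_action("crowdstrike/fdr:notify-soc")
```

An unqualified name such as "notify-soc" yields no package reference here, mirroring the note that unqualified names will no longer be found.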
Configuration
The new configuration option `LOCAL_STORAGE_PREFILL_PERCENTAGE` has been added.

For more information, see LOCAL_STORAGE_PREFILL_PERCENTAGE.

The default value of `LOCAL_STORAGE_PERCENTAGE` is now set to `85`, and the minimum value to `0`. The default was previously to leave this unset, which is not safe in clusters where bucket storage contains more data than will fit on local drives.
Log Collector
The Fleet Management tab on the `Fleet Overview` page is now renamed to Data Ingest.
Functions
The `parseCEF()` and `parseLEEF()` functions now have an option to change the prefix of the header fields.
Fixed in this release
Automation and Alerts
An issue causing filter alerts to fail right after a cluster restart has now been fixed.
Other
A cluster with very little disk space left could result in excessive logging from `com.humio.distribution.RendezvousSegmentDistribution`.
Packages
Updating a package with a lookup file and a parser/scheduled search/filter alert/alert containing match would fail if the new `column` parameter did not exist in the old lookup file. This issue has now been fixed.
Falcon LogScale 1.108.0 GA (2023-09-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.108.0 | GA | 2023-09-19 | Cloud | 2024-11-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Automation and Alerts
In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale only triggers alerts if there are no query warnings. Starting with the upcoming 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.

Up until now, all query warnings have been treated as errors. This means that the alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents the alert from triggering in cases where it would not have triggered if all data had been available; for instance, an alert that triggers when a count of events drops below a threshold. On the other hand, it makes some alerts not trigger, even though they still would have if all data was available. That means that currently you will almost never get an alert that you should not have gotten, but you will sometimes not get an alert that you should have gotten. We plan to revert this.

When this change happens, we will no longer recommend setting the configuration option `ALERT_DESPITE_WARNINGS` to `true`, since it treats all query warnings as non-errors, and there are a few query warnings that should make the alert fail.
Removed
Items that have been removed as of this release.
Installation and Deployment
All ZooKeeper-related functionality for LogScale was deprecated in December 2022, and is now removed:
Removed the ZooKeeper status page from the User Interface
Removed the ZooKeeper related GraphQL mutations
Removed the migration support for node IDs created by ZooKeeper, as we no longer support upgrading from versions prior to 1.70.
Depending on your chosen Kafka deployment, ZooKeeper may still be required to support Kafka.
GraphQL API
The deprecated client mutation ID concept is now being removed from the GraphQL API:
Removed the clientMutationId argument for a lot of mutations.
Removed the clientMutationId field from the returned type for a lot of mutations.
Renamed the ClientMutationID datatype returned from some mutations to the `BooleanResultType` datatype. Removed the clientMutationId field on the returned type and replaced it with a boolean field named `result`.

Most deprecated queries, mutations and fields have now been removed from the GraphQL API.
New features and improvements
Installation and Deployment
The following adjustments have been made to the launcher script:
Removed UnlockDiagnosticVMOptions
Raised default heap size to 75% of host memory, up from 50%
Moved -XX:CompileCommand settings into the mandatory launch options, to prevent accidentally removing them when customizing HUMIO_JVM_PERFORMANCE_OPTS
Set -XX:MaxDirectMemorySize to 1/5GB per CPU core as a default.
Print a warning if the sum of the heap size and the direct memory setting exceeds the total available memory.
Configuration
Query queueing based on the available memory in the query coordinator is enabled by default by treating the dynamic configuration `QueryCoordinatorMaxHeapFraction` as 0.5 if it has not been set. To disable queueing, set `QueryCoordinatorMaxHeapFraction` to 1000.
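To illustrate, here is a hedged sketch of setting that dynamic configuration via the GraphQL API; the mutation name and input shape are assumptions, so verify against your cluster's GraphQL schema before relying on them.

```python
import json

# Hedged sketch: build a GraphQL request that disables query queueing by
# setting QueryCoordinatorMaxHeapFraction to 1000 (mutation shape assumed).
def build_set_dynamic_config(config, value):
    mutation = (
        "mutation { setDynamicConfig(input: "
        f'{{ config: {config}, value: "{value}" }}) }}'
    )
    return json.dumps({"query": mutation})

body = build_set_dynamic_config("QueryCoordinatorMaxHeapFraction", "1000")
```

The request body would be sent to the cluster's GraphQL endpoint by a user with cluster administration rights.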
Dashboards and Widgets
Introduced a new style option Show 'Others' to the Time Chart Widget: it allows you to show/hide other series when there are more series than the maximum allowed in the chart.
Fixed in this release
Functions
Fixed a bug where `join()` queries could result in a memory leak from their subqueries not being properly cleaned up.
Falcon LogScale 1.107.0 GA (2023-09-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.107.0 | GA | 2023-09-12 | Cloud | 2024-11-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Automation and Alerts
In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale only triggers alerts if there are no query warnings. Starting with the upcoming 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.

Up until now, all query warnings have been treated as errors. This means that the alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents the alert from triggering in cases where it would not have triggered if all data had been available; for instance, an alert that triggers when a count of events drops below a threshold. On the other hand, it makes some alerts not trigger, even though they still would have if all data was available. That means that currently you will almost never get an alert that you should not have gotten, but you will sometimes not get an alert that you should have gotten. We plan to revert this.

When this change happens, we will no longer recommend setting the configuration option `ALERT_DESPITE_WARNINGS` to `true`, since it treats all query warnings as non-errors, and there are a few query warnings that should make the alert fail.
Removed
Items that have been removed as of this release.
Installation and Deployment
Running on Java 11, 12, 13, 14, 15 and 16 is no longer supported. The minimum supported Java version is 17 starting from this LogScale release.
New features and improvements
UI Changes
Most tables inside the LogScale UI now support resizing columns, except the `Table` widget used during search.

It is now possible to highlight results based on the filters applied in queries. This helps significantly when trying to understand why a query matches the results, or when looking for a specific part of the events text.
For more information, see Filter Match Highlighting.
Configuration
GCS bucketing and query streaming now use the same proxy configuration as the overall system proxy and S3 proxy, for example: `HTTP_PROXY_HOST`, `HTTP_PROXY_PORT`, `HTTP_PROXY_USERNAME`, `HTTP_PROXY_PASSWORD`.
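To clarify how the four variables combine, here is an illustrative helper; the URL scheme and the fallback port are placeholders for illustration, not documented LogScale behavior.

```python
# Illustrative only: assemble a proxy URL from the HTTP_PROXY_* settings
# listed above. Scheme and fallback port are placeholders, not LogScale's
# actual internals.
def proxy_url(env):
    host = env.get("HTTP_PROXY_HOST")
    if not host:
        return None  # no proxy configured
    port = env.get("HTTP_PROXY_PORT", "8080")  # placeholder fallback
    user = env.get("HTTP_PROXY_USERNAME")
    password = env.get("HTTP_PROXY_PASSWORD")
    auth = f"{user}:{password}@" if user and password else ""
    return f"http://{auth}{host}:{port}"

url = proxy_url({"HTTP_PROXY_HOST": "proxy.example.com", "HTTP_PROXY_PORT": "8080"})
```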
Fixed in this release
Functions
The `match()` function, when using a JSON file containing an object with a missing field, could lead to an internal error. This issue has been fixed.
Falcon LogScale 1.106.6 LTS (2024-01-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.106.6 | LTS | 2024-01-22 | Cloud | 2024-09-30 | No | 1.70.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | 731ff35cbdf1b8239240a23e960f61b1 |
SHA1 | 98de7c0b7ce4c381065b74806ac78fd6cdee32ba |
SHA256 | 92392ab60d33e766199ad07c00e9f55020cca33711580ec5b637350608530db3 |
SHA512 | 06d18de13a169cb89811abaa0f89a4824ee208bb9efb76cad950ba770758dbc7c42297eec6cf982d48abadbfd76af3e51b558a0b26ce4e5f75b1e32280545d51 |
Docker Image | SHA256 Checksum |
---|---|
humio | c6000d8e21b4670a537992438d38db8e6912679a1bb8ab5fae3850271c174ed3 |
humio-core | 9f8d3e32abe2c2ca4402d0a91adf4780556f40041fdd5a385387bfcbf205d9df |
kafka | b955111d2e83b1838cb3d266f3b121424de82114189fd5d4f6001f38619304c4 |
zookeeper | 05f6801ef1035b76a7224dd0b088c549708d3c88d176854330b6f068a00e4b7d |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.106.6/server-1.106.6.tar.gz
These notes include entries from the following previous releases: 1.106.2, 1.106.4, 1.106.5
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Automation and Alerts
In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale only triggers alerts if there are no query warnings. Starting with the upcoming 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.

Up until now, all query warnings have been treated as errors. This means that the alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents the alert from triggering in cases where it would not have triggered if all data had been available; for instance, an alert that triggers when a count of events drops below a threshold. On the other hand, it makes some alerts not trigger, even though they still would have if all data was available. That means that currently you will almost never get an alert that you should not have gotten, but you will sometimes not get an alert that you should have gotten. We plan to revert this.

When this change happens, we will no longer recommend setting the configuration option `ALERT_DESPITE_WARNINGS` to `true`, since it treats all query warnings as non-errors, and there are a few query warnings that should make the alert fail.
Upgrades
Changes that may occur or be required during an upgrade.
Security
xmlsec has been upgraded to 2.3.4 to address CVE-2023-44483.
New features and improvements
Installation and Deployment
The following adjustments have been made to the launcher script:
Removed UnlockDiagnosticVMOptions
Raised default heap size to 75% of host memory, up from 50%
Moved -XX:CompileCommand settings into the mandatory launch options, to prevent accidentally removing them when customizing HUMIO_JVM_PERFORMANCE_OPTS
Set -XX:MaxDirectMemorySize to 1/5GB per CPU core as a default.
Print a warning if the sum of the heap size and the direct memory setting exceeds the total available memory.
UI Changes
The Show in context dialog now closes when the button in the dialog is clicked.
The fields and values in the Fields Panel and in the Event List are now sorted case-insensitively.
Automation and Alerts
It is now possible to import and export Filter Alerts in Packages from the UI.
When creating or updating Filter Alerts using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of the form packagescope/packagename:actionname. Actions in packages will no longer be found if using an unqualified name.

The UI flow for Alerts has been updated: you are now directly presented with the New alert form. Importing an alert from a template or package is done from the new button located on top of the New alert form.

When installing or updating a package with an Alert or Scheduled search referencing an action that is not part of the package, the error is now shown in the UI. Previously, a generic error was shown.
Added a status field to some of the logs for Standard Alerts and Filter Alerts as well as Scheduled Searches. The field shows whether the current run of the job resulted in a `Success` or `Failure` for the Alert or Scheduled Search.

For more information, see Monitoring Alert Execution through the humio-activity Repository.
When installing a package, all actions referenced by Alerts and Scheduled searches in the package must be contained in the package. Previously, missing actions were just ignored.
It is now possible to create Packages containing Filter Alerts, as well as importing such packages, using the API.
GraphQL API
Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are `1000` for authenticated and `150` for unauthenticated users. Cluster administrators can adjust these limits with the `GraphQLSelectionSizeLimit` and `UnauthenticatedGraphQLSelectionSizeLimit` dynamic configurations.

The following GraphQL mutations have been changed so that the actions field can either contain IDs or names of actions:
createAlert
updateAlert
createScheduledSearch
updateScheduledSearch
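As a rough client-side aid for the selection-size limits described above, the following sketch estimates how many fields a GraphQL document selects. It is a naive token count, not the server's exact accounting, and will over-count arguments; treat it as an approximation only.

```python
import re

# Rough heuristic: estimate the number of selected fields in a GraphQL
# document, to stay under the new limits (1000 authenticated / 150
# unauthenticated by default). Not the server's exact accounting.
def estimate_selected_fields(document):
    # Strip string literals and comments, then count bare name tokens.
    stripped = re.sub(r'"[^"]*"|#[^\n]*', "", document)
    tokens = re.findall(r"[_A-Za-z][_0-9A-Za-z]*", stripped)
    keywords = {"query", "mutation", "subscription", "fragment", "on",
                "true", "false", "null"}
    return sum(1 for t in tokens if t not in keywords)

n = estimate_selected_fields("query { searchDomains { name description } }")
```

For the sample query above, three field selections are counted (searchDomains, name, description).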
Configuration
GCS bucketing and query streaming now use the same proxy configuration as overall system proxy and S3 proxy. Example:
HTTP_PROXY_HOST
,HTTP_PROXY_PORT
,HTTP_PROXY_USERNAME
,HTTP_PROXY_PASSWORD
Dashboards and Widgets
The text color styling option of the Note Widget is now included when importing a dashboard template or exporting it to a yaml file.
The maximum number of entries suggested in the dropdown of a parameter field of type File Parameter has been increased to 10,000.
Ingestion
The ability to remove fields when parsing data has been enabled for all users.
For more information, see Removing Fields.
Audit logs for Ingest Tokens now include the ingest token name.
Log Collector
You can now toggle columns on the instance table, thereby specifying which information should be shown.
In Fleet Management, it is now possible to discard the draft of a configuration and rollback to the published version.
For more information, see Edit a Remote Configuration.
Functions
The `rename()` function has been enhanced: it is now possible to rename multiple fields using an array in its `field` argument. This is backwards compatible with giving separate `field` and `as` arguments.

The new query function `wildcard()` is introduced. This function makes it easy to search for case-insensitive patterns on dashboards, or in ad-hoc queries.

The new query function `crypto:md5()` is introduced. This function computes the MD5 hash of a given array of fields.

Support for decimal values as exponent and divisor is now added in the `math:pow()` and `math:mod()` functions, respectively.

The memory consumption of the `formatTime()` function has been decreased.
Fixed in this release
UI Changes
Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.
The URL would not be updated when selecting a time interval in the distribution chart on the `Search` page. This issue is now fixed.
Automation and Alerts
If polling queries were slow, then Scheduled Searches could fire twice. This issue is now fixed.
Filter Alerts installed from a package would show up under General and not under the Package name. This issue has been fixed.
Falcon LogScale repository actions have now been fixed for cases where they would ingest data into a repository even though ingest was blocked.
With Scheduled Searches installed from a package, if you edited the scheduled search and then updated the package, then you would get two copies of the scheduled search. This issue is now fixed.
Changes to uploaded files due to a package update would be kept even though the package update failed and other changes were rolled back. This wrong behavior has been fixed.
Dashboards and Widgets
Queries on a dashboard have been fixed as they would be invalid if the dashboard filter contained a single-line comment.
Widgets description tips on dashboards have been fixed as they would not show or have the same text for multiple widgets.
If you chose a page size larger than the number of rows, the page number and page size buttons would disappear. The `Table` widget now always shows the pagination buttons on the `Search` page, where auto page size is turned off. On dashboards, where auto page size is turned on, the existing behaviour remains.
Log Collector
`Fleet Overview` in Fleet Management would hang and not display any data. This behavior has been fixed.
Functions
Fixed a bug where `join()` queries could result in a memory leak from their subqueries not being properly cleaned up.

The `hash()` query function would sometimes compute incorrect hashes when the field was formatted in UTF-8. This is now fixed.

Fixed an issue that could result in cluster performance degradation when using `join()` under certain circumstances.

Field names in the query used to export results to CSV had not been quoted correctly: they are now quoted as expected.

The `format()` function has been fixed, as the US date format modifier resulted in the EU date format instead.
Other
Fixed a race condition that could leave a query in a state where it caused an excessive number of 404 HTTP requests, adding unnecessary noise and a bit of extra load to the system.
The following repository issues have been fixed:
After multiple attempts in quick succession to create a repository with the same name, repositories would become inaccessible.
Some repositories could only be created partially and would be left partially initialized in the LogScale internal architecture.
Falcon LogScale 1.106.5 LTS (2023-11-15)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.106.5 | LTS | 2023-11-15 | Cloud | 2024-09-30 | No | 1.70.0 | No |
TAR Checksum | Value |
---|---|
MD5 | dddd2f72fc2570f6545f59472f11fe6c |
SHA1 | e57584aad90a7b1ac21c4e28c9ad09ec626dd648 |
SHA256 | 6af935589fd8bd3747a4f461e550ccac82d4ffa3e7f27f871cd14ea7e5bf8a49 |
SHA512 | 699e45b16ec8afee195cd718d43f507e2c8365d7cd8ddd7561462f43c0e7d0970fd2de5259f8c3ae3b95305416cef906581299668594c37e0834828bb3662cb1 |
Docker Image | SHA256 Checksum |
---|---|
humio | b3d38ec90dcdad1839ce9e7c384e29c5049ecc8c5e20ef4826c6232524015bfa |
humio-core | 9ad2f73627069c87f45e39c55b622bf81707fc3d0a795aaeaedf7550b601e9c4 |
kafka | 1b7862afa2ae5e25e7c187c51106c7da68c64df1cbbc8e020bd155f85e560580 |
zookeeper | f39e5dd61ec2a0a65b50e0ea5dd6670ed5babf4eab2b85923678fb1f3fef7241 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.106.5/server-1.106.5.tar.gz
These notes include entries from the following previous releases: 1.106.2, 1.106.4
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Automation and Alerts
In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale will only trigger alerts if there are no query warnings. Starting with upcoming 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.
Up until now, all query warnings have been treated as errors. This means that the alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents the alert from triggering in cases where it would not have triggered had all data been available: for instance, an alert that triggers when a count of events drops below a threshold. On the other hand, it makes some alerts not trigger, even though they would still have triggered if all data was available. This means that currently you will almost never get an alert that you should not have gotten, but you will sometimes not get an alert that you should have gotten. We plan to change this behaviour.
When this change happens, we no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and there are a few query warnings that should make the alert fail.
Upgrades
Changes that may occur or be required during an upgrade.
Security
xmlsec has been upgraded to 2.3.4 to address CVE-2023-44483.
New features and improvements
Installation and Deployment
The following adjustments have been made to the launcher script:
Removed UnlockDiagnosticVMOptions.
Raised the default heap size to 75% of host memory, up from 50%.
Moved the -XX:CompileCommand settings into the mandatory launch options, to prevent accidentally removing them when customizing HUMIO_JVM_PERFORMANCE_OPTS.
Set -XX:MaxDirectMemorySize to 1/5 GB per CPU core as a default.
Added a warning if the sum of the heap size and the direct memory setting exceeds the total available memory.
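As a rough sketch of the new defaults, assuming "75% of host memory" and "1/5 GB per CPU core" are computed as shown below (the actual launcher script logic may differ):

```shell
# Hypothetical illustration of the launcher's new memory defaults.
host_mem_gb=64   # example host: 64 GB RAM
cores=16         # example host: 16 CPU cores

heap_gb=$(( host_mem_gb * 75 / 100 ))   # default heap: 75% of host memory
direct_gb=$(( cores / 5 ))              # MaxDirectMemorySize: 1/5 GB per core

# Warn when heap plus direct memory exceeds total available memory.
if [ $(( heap_gb + direct_gb )) -gt "$host_mem_gb" ]; then
  echo "WARNING: heap (${heap_gb}G) + direct memory (${direct_gb}G) exceeds host memory (${host_mem_gb}G)"
fi

echo "-Xmx${heap_gb}g -XX:MaxDirectMemorySize=${direct_gb}g"
```

With the example host above this yields `-Xmx48g -XX:MaxDirectMemorySize=3g`, which is well within 64 GB, so no warning is printed.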
UI Changes
The Show in context dialog now closes when the button in the dialog is clicked.
The fields and values in the Fields Panel and in the Event List are now sorted case-insensitively.
Automation and Alerts
It is now possible to import and export Filter Alerts in Packages from the UI.
When creating or updating Filter Alerts using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of "packagescope/packagename:actionname". Actions in packages will no longer be found if using an unqualified name.
The UI flow for Alerts has been updated: clicking the new alert button, now located at the top of the page, presents the New alert form directly. Importing an alert from a template or package is done from the new New alert form.
When installing or updating a package with an Alert or Scheduled Search referencing an action that is not part of the package, the error is now shown in the UI. Previously, a generic error was shown.
Added a status field to some of the logs for Standard Alerts, Filter Alerts, and Scheduled Searches. The field shows whether the current run of the job resulted in a Success or Failure for the Alert or Scheduled Search. For more information, see Monitoring Alert Execution through the humio-activity Repository.
When installing a package, all actions referenced by Alerts and Scheduled Searches in the package must be contained in the package. Previously, missing actions were simply ignored.
It is now possible to create Packages containing Filter Alerts, as well as importing such packages, using the API.
GraphQL API
The following GraphQL mutations have been changed so that the actions field can either contain IDs or names of actions:
createAlert
updateAlert
createScheduledSearch
updateScheduledSearch
Configuration
GCS bucketing and query streaming now use the same proxy configuration as the overall system proxy and S3 proxy, for example: HTTP_PROXY_HOST, HTTP_PROXY_PORT, HTTP_PROXY_USERNAME, HTTP_PROXY_PASSWORD.
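For example, the shared proxy settings might look like this in the environment configuration (the hostname and credentials below are placeholders; only the variable names come from the release note):

```shell
# Placeholder values for the shared proxy configuration.
export HTTP_PROXY_HOST=proxy.example.com
export HTTP_PROXY_PORT=3128
export HTTP_PROXY_USERNAME=logscale-svc
export HTTP_PROXY_PASSWORD=changeme
```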
Dashboards and Widgets
The text color styling option of the Note Widget is now included when importing a dashboard template or exporting it to a YAML file.
The maximum number of entries suggested in the dropdown of a File Parameter field has been increased to 10,000.
Ingestion
The ability to remove fields when parsing data has been enabled for all users.
For more information, see Removing Fields.
Audit logs for Ingest Tokens now include the ingest token name.
Log Collector
You can now toggle columns on the instance table, thereby specifying which information should be shown.
In Fleet Management, it is now possible to discard the draft of a configuration and rollback to the published version.
For more information, see Edit a Remote Configuration.
Functions
The rename() function has been enhanced: it is now possible to rename multiple fields using an array in its field argument. This is backwards compatible with giving separate field and as arguments.
The new query function wildcard() is introduced. This function makes it easy to search for case-insensitive patterns on dashboards, or in ad-hoc queries.
The new query function crypto:md5() is introduced. This function computes the MD5 hash of a given array of fields.
Support for decimal values as exponent and divisor has been added to the math:pow() and math:mod() functions, respectively.
The memory consumption of the formatTime() function has been decreased.
Fixed in this release
UI Changes
Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.
The URL would not be updated when selecting a time interval in the distribution chart on the Search page. This issue is now fixed.
Automation and Alerts
If polling queries were slow, then Scheduled Searches could fire twice. This issue is now fixed.
Filter Alerts installed from a package would show up under General and not under the Package name. This issue has been fixed.
Falcon LogScale repository actions have now been fixed for cases where they would ingest data into a repository even though ingest was blocked.
With Scheduled Searches installed from a package, if you edited the scheduled search and then updated the package, then you would get two copies of the scheduled search. This issue is now fixed.
Changes to uploaded files due to a package update would be kept even though the package update failed and other changes were rolled back. This incorrect behavior has been fixed.
Dashboards and Widgets
Dashboard queries would become invalid if the dashboard filter contained a single-line comment. This issue is now fixed.
Widget description tips on dashboards would not show, or would show the same text for multiple widgets. This issue is now fixed.
If you chose a page size larger than the number of rows, the page number and page size buttons would disappear. The Table widget now always shows the pagination buttons on the Search page, where auto page size is turned off. On dashboards, where auto page size is turned on, the existing behaviour remains.
Log Collector
Fleet Overview in Fleet Management would hang and not display any data. This behavior has been fixed.
Functions
Fixed a bug where join() queries could result in a memory leak from their subqueries not being properly cleaned up.
The hash() query function would sometimes compute incorrect hashes when the field was formatted in UTF-8. This is now fixed.
Fixed an issue that could result in cluster performance degradation when using join() under certain circumstances.
Field names in the query used to export results to CSV were not quoted correctly. This has now been fixed.
The format() function has been fixed, as the US date format modifier resulted in the EU date format instead.
Other
Fixed a race condition that could leave a query in a state where it caused an excessive number of 404 HTTP requests, adding unnecessary noise and a bit of extra load to the system.
The following repository issues have been fixed:
After multiple attempts in quick succession to create a repository with the same name, repositories would become inaccessible.
Some repositories could only be created partially and would be left partially initialized in the LogScale internal architecture.
Falcon LogScale 1.106.4 LTS (2023-10-28)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.106.4 | LTS | 2023-10-28 | Cloud | 2024-09-30 | No | 1.70.0 | No |
TAR Checksum | Value |
---|---|
MD5 | 6cd245d416e5a117199c20b0ee8806a3 |
SHA1 | 642252be4f22397d72ad1c34d2e7724b5cf4cd09 |
SHA256 | cd2aa9be2d02daaff60f89742e82bc92b20017de95b74c3c96b70323382290af |
SHA512 | eed3d4f6c68902b401f0006cc07eafb33b24e8cbe87bb632281dcb51269f11f153fe58f68d244b107209254860bf54e68e763ce9afec1201f20c2ea30970ffb8 |
Docker Image | SHA256 Checksum |
---|---|
humio | effaba4975f3ca3ab75e07853c4bf4cc9936b694e8b632f74151e2ac45057c50 |
humio-core | cf6104f1b607f8a8a21ae5e6d73c3639fdade6e95bdbd1ee2cc236725f6d0860 |
kafka | c071905918a20309c0292e978e57ac1f382e578af01e5a0b7ec748508cd1c31f |
zookeeper | 30e4aa0ec27f29e480c5158778dae29f01894a2f307eeaffe9049a3d74a5f072 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.106.4/server-1.106.4.tar.gz
These notes include entries from the following previous releases: 1.106.2
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Automation and Alerts
In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale will only trigger alerts if there are no query warnings. Starting with upcoming 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.
Up until now, all query warnings have been treated as errors. This means that the alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents the alert from triggering in cases where it would not have triggered had all data been available: for instance, an alert that triggers when a count of events drops below a threshold. On the other hand, it makes some alerts not trigger, even though they would still have triggered if all data was available. This means that currently you will almost never get an alert that you should not have gotten, but you will sometimes not get an alert that you should have gotten. We plan to change this behaviour.
When this change happens, we no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and there are a few query warnings that should make the alert fail.
New features and improvements
Installation and Deployment
The following adjustments have been made to the launcher script:
Removed UnlockDiagnosticVMOptions.
Raised the default heap size to 75% of host memory, up from 50%.
Moved the -XX:CompileCommand settings into the mandatory launch options, to prevent accidentally removing them when customizing HUMIO_JVM_PERFORMANCE_OPTS.
Set -XX:MaxDirectMemorySize to 1/5 GB per CPU core as a default.
Added a warning if the sum of the heap size and the direct memory setting exceeds the total available memory.
UI Changes
The Show in context dialog now closes when the button in the dialog is clicked.
The fields and values in the Fields Panel and in the Event List are now sorted case-insensitively.
Automation and Alerts
It is now possible to import and export Filter Alerts in Packages from the UI.
When creating or updating Filter Alerts using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of "packagescope/packagename:actionname". Actions in packages will no longer be found if using an unqualified name.
The UI flow for Alerts has been updated: clicking the new alert button, now located at the top of the page, presents the New alert form directly. Importing an alert from a template or package is done from the new New alert form.
When installing or updating a package with an Alert or Scheduled Search referencing an action that is not part of the package, the error is now shown in the UI. Previously, a generic error was shown.
Added a status field to some of the logs for Standard Alerts, Filter Alerts, and Scheduled Searches. The field shows whether the current run of the job resulted in a Success or Failure for the Alert or Scheduled Search. For more information, see Monitoring Alert Execution through the humio-activity Repository.
When installing a package, all actions referenced by Alerts and Scheduled Searches in the package must be contained in the package. Previously, missing actions were simply ignored.
It is now possible to create Packages containing Filter Alerts, as well as importing such packages, using the API.
GraphQL API
The following GraphQL mutations have been changed so that the actions field can either contain IDs or names of actions:
createAlert
updateAlert
createScheduledSearch
updateScheduledSearch
Configuration
GCS bucketing and query streaming now use the same proxy configuration as the overall system proxy and S3 proxy, for example: HTTP_PROXY_HOST, HTTP_PROXY_PORT, HTTP_PROXY_USERNAME, HTTP_PROXY_PASSWORD.
Dashboards and Widgets
The text color styling option of the Note Widget is now included when importing a dashboard template or exporting it to a YAML file.
The maximum number of entries suggested in the dropdown of a File Parameter field has been increased to 10,000.
Ingestion
The ability to remove fields when parsing data has been enabled for all users.
For more information, see Removing Fields.
Audit logs for Ingest Tokens now include the ingest token name.
Log Collector
You can now toggle columns on the instance table, thereby specifying which information should be shown.
In Fleet Management, it is now possible to discard the draft of a configuration and rollback to the published version.
For more information, see Edit a Remote Configuration.
Functions
The rename() function has been enhanced: it is now possible to rename multiple fields using an array in its field argument. This is backwards compatible with giving separate field and as arguments.
The new query function wildcard() is introduced. This function makes it easy to search for case-insensitive patterns on dashboards, or in ad-hoc queries.
The new query function crypto:md5() is introduced. This function computes the MD5 hash of a given array of fields.
Support for decimal values as exponent and divisor has been added to the math:pow() and math:mod() functions, respectively.
The memory consumption of the formatTime() function has been decreased.
Fixed in this release
UI Changes
Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.
The URL would not be updated when selecting a time interval in the distribution chart on the Search page. This issue is now fixed.
Automation and Alerts
If polling queries were slow, then Scheduled Searches could fire twice. This issue is now fixed.
Filter Alerts installed from a package would show up under General and not under the Package name. This issue has been fixed.
Falcon LogScale repository actions have now been fixed for cases where they would ingest data into a repository even though ingest was blocked.
With Scheduled Searches installed from a package, if you edited the scheduled search and then updated the package, then you would get two copies of the scheduled search. This issue is now fixed.
Changes to uploaded files due to a package update would be kept even though the package update failed and other changes were rolled back. This incorrect behavior has been fixed.
Dashboards and Widgets
Dashboard queries would become invalid if the dashboard filter contained a single-line comment. This issue is now fixed.
Widget description tips on dashboards would not show, or would show the same text for multiple widgets. This issue is now fixed.
If you chose a page size larger than the number of rows, the page number and page size buttons would disappear. The Table widget now always shows the pagination buttons on the Search page, where auto page size is turned off. On dashboards, where auto page size is turned on, the existing behaviour remains.
Log Collector
Fleet Overview in Fleet Management would hang and not display any data. This behavior has been fixed.
Functions
Fixed a bug where join() queries could result in a memory leak from their subqueries not being properly cleaned up.
The hash() query function would sometimes compute incorrect hashes when the field was formatted in UTF-8. This is now fixed.
Fixed an issue that could result in cluster performance degradation when using join() under certain circumstances.
Field names in the query used to export results to CSV were not quoted correctly. This has now been fixed.
The format() function has been fixed, as the US date format modifier resulted in the EU date format instead.
Other
The following repository issues have been fixed:
After multiple attempts in quick succession to create a repository with the same name, repositories would become inaccessible.
Some repositories could only be created partially and would be left partially initialized in the LogScale internal architecture.
Falcon LogScale 1.106.3 Not Released (2023-10-28)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.106.3 | Not Released | 2023-10-28 | Internal Only | 2024-10-31 | No | 1.70.0 | No |
Available for download two days after release.
Not released.
Falcon LogScale 1.106.2 LTS (2023-09-27)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.106.2 | LTS | 2023-09-27 | Cloud | 2024-09-30 | No | 1.70.0 | No |
TAR Checksum | Value |
---|---|
MD5 | c8dec3b68f22324e82506bb409f9a70e |
SHA1 | 1b4fdf70a60b32ef37f815dd3a3f6db587a6dedc |
SHA256 | 2c7ea89974973d9b96a5618186c8a79943aa6b266e7c59ee4a219c6b407d6fc9 |
SHA512 | ed03c0e0501baf77dd67a18dec85610de2432463205482535e4ff6316351d49643c87e0de1c16e3c0b9ebbdd8962f06acddfeb36c2d57e80d7adad5b25e84eb0 |
Docker Image | SHA256 Checksum |
---|---|
humio | 2c3f8914be314d8b149b958073fe5b55299f5ead0b79ec57982f4724a81adfef |
humio-core | 9d621536c495cc79bb75dfc4ec355f2a617174722fd22da9625347f7b84d6d41 |
kafka | 468ea5a11fedbe97b33de836030b052be7052ee0a622efaa072e2ba19b70b2f7 |
zookeeper | b437dc0eb991f17a3a99d8a230ad77f620e600a8b0800a5863e9be8d2a2c7945 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.106.2/server-1.106.2.tar.gz
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Automation and Alerts
In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale will only trigger alerts if there are no query warnings. Starting with upcoming 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.
Up until now, all query warnings have been treated as errors. This means that the alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents the alert from triggering in cases where it would not have triggered had all data been available: for instance, an alert that triggers when a count of events drops below a threshold. On the other hand, it makes some alerts not trigger, even though they would still have triggered if all data was available. This means that currently you will almost never get an alert that you should not have gotten, but you will sometimes not get an alert that you should have gotten. We plan to change this behaviour.
When this change happens, we no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and there are a few query warnings that should make the alert fail.
New features and improvements
Installation and Deployment
The following adjustments have been made to the launcher script:
Removed UnlockDiagnosticVMOptions.
Raised the default heap size to 75% of host memory, up from 50%.
Moved the -XX:CompileCommand settings into the mandatory launch options, to prevent accidentally removing them when customizing HUMIO_JVM_PERFORMANCE_OPTS.
Set -XX:MaxDirectMemorySize to 1/5 GB per CPU core as a default.
Added a warning if the sum of the heap size and the direct memory setting exceeds the total available memory.
UI Changes
The Show in context dialog now closes when the button in the dialog is clicked.
The fields and values in the Fields Panel and in the Event List are now sorted case-insensitively.
Automation and Alerts
It is now possible to import and export Filter Alerts in Packages from the UI.
When creating or updating Filter Alerts using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of "packagescope/packagename:actionname". Actions in packages will no longer be found if using an unqualified name.
The UI flow for Alerts has been updated: clicking the new alert button, now located at the top of the page, presents the New alert form directly. Importing an alert from a template or package is done from the new New alert form.
When installing or updating a package with an Alert or Scheduled Search referencing an action that is not part of the package, the error is now shown in the UI. Previously, a generic error was shown.
Added a status field to some of the logs for Standard Alerts, Filter Alerts, and Scheduled Searches. The field shows whether the current run of the job resulted in a Success or Failure for the Alert or Scheduled Search. For more information, see Monitoring Alert Execution through the humio-activity Repository.
When installing a package, all actions referenced by Alerts and Scheduled Searches in the package must be contained in the package. Previously, missing actions were simply ignored.
It is now possible to create Packages containing Filter Alerts, as well as importing such packages, using the API.
GraphQL API
The following GraphQL mutations have been changed so that the actions field can either contain IDs or names of actions:
createAlert
updateAlert
createScheduledSearch
updateScheduledSearch
Configuration
GCS bucketing and query streaming now use the same proxy configuration as the overall system proxy and S3 proxy, for example: HTTP_PROXY_HOST, HTTP_PROXY_PORT, HTTP_PROXY_USERNAME, HTTP_PROXY_PASSWORD.
Dashboards and Widgets
The text color styling option of the Note Widget is now included when importing a dashboard template or exporting it to a YAML file.
The maximum number of entries suggested in the dropdown of a File Parameter field has been increased to 10,000.
Ingestion
The ability to remove fields when parsing data has been enabled for all users.
For more information, see Removing Fields.
Audit logs for Ingest Tokens now include the ingest token name.
Log Collector
You can now toggle columns on the instance table, thereby specifying which information should be shown.
In Fleet Management, it is now possible to discard the draft of a configuration and rollback to the published version.
For more information, see Edit a Remote Configuration.
Functions
The rename() function has been enhanced: it is now possible to rename multiple fields using an array in its field argument. This is backwards compatible with giving separate field and as arguments.
The new query function wildcard() is introduced. This function makes it easy to search for case-insensitive patterns on dashboards, or in ad-hoc queries.
The new query function crypto:md5() is introduced. This function computes the MD5 hash of a given array of fields.
Support for decimal values as exponent and divisor has been added to the math:pow() and math:mod() functions, respectively.
The memory consumption of the formatTime() function has been decreased.
Fixed in this release
UI Changes
The URL would not be updated when selecting a time interval in the distribution chart on the Search page. This issue is now fixed.
Automation and Alerts
If polling queries were slow, then Scheduled Searches could fire twice. This issue is now fixed.
Filter Alerts installed from a package would show up under General and not under the Package name. This issue has been fixed.
Falcon LogScale repository actions have now been fixed for cases where they would ingest data into a repository even though ingest was blocked.
With Scheduled Searches installed from a package, if you edited the scheduled search and then updated the package, then you would get two copies of the scheduled search. This issue is now fixed.
Changes to uploaded files due to a package update would be kept even though the package update failed and other changes were rolled back. This incorrect behavior has been fixed.
Dashboards and Widgets
Dashboard queries would become invalid if the dashboard filter contained a single-line comment. This issue is now fixed.
Widget description tips on dashboards would not show, or would show the same text for multiple widgets. This issue is now fixed.
If you chose a page size larger than the number of rows, the page number and page size buttons would disappear. The Table widget now always shows the pagination buttons on the Search page, where auto page size is turned off. On dashboards, where auto page size is turned on, the existing behaviour remains.
Log Collector
Fleet Overview in Fleet Management would hang and not display any data. This behavior has been fixed.
Functions
Fixed a bug where join() queries could result in a memory leak from their subqueries not being properly cleaned up.
The hash() query function would sometimes compute incorrect hashes when the field was formatted in UTF-8. This is now fixed.
Fixed an issue that could result in cluster performance degradation when using join() under certain circumstances.
Field names in the query used to export results to CSV were not quoted correctly. This has now been fixed.
The format() function has been fixed, as the US date format modifier resulted in the EU date format instead.
Other
The following repository issues have been fixed:
After multiple attempts in quick succession to create a repository with the same name, repositories would become inaccessible.
Some repositories could only be created partially and would be left partially initialized in the LogScale internal architecture.
Falcon LogScale 1.106.1 GA (2023-09-18)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.106.1 | GA | 2023-09-18 | Cloud | 2024-09-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
New features and improvements
Installation and Deployment
The following adjustments have been made to the launcher script:
Removed UnlockDiagnosticVMOptions.
Raised the default heap size to 75% of host memory, up from 50%.
Moved the -XX:CompileCommand settings into the mandatory launch options, to prevent accidentally removing them when customizing HUMIO_JVM_PERFORMANCE_OPTS.
Set -XX:MaxDirectMemorySize to 1/5 GB per CPU core as a default.
Added a warning if the sum of the heap size and the direct memory setting exceeds the total available memory.
Configuration
GCS bucketing and query streaming now use the same proxy configuration as the overall system proxy and S3 proxy, for example: HTTP_PROXY_HOST, HTTP_PROXY_PORT, HTTP_PROXY_USERNAME, HTTP_PROXY_PASSWORD.
Fixed in this release
Functions
Fixed a bug where join() queries could result in a memory leak from their subqueries not being properly cleaned up.
Falcon LogScale 1.106.0 GA (2023-09-05)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.106.0 | GA | 2023-09-05 | Cloud | 2024-09-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Automation and Alerts
In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale will only trigger alerts if there are no query warnings. Starting with upcoming 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.
Up until now, all query warnings have been treated as errors. This means that the alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents the alert from triggering in cases where it would not have triggered had all data been available: for instance, an alert that triggers when a count of events drops below a threshold. On the other hand, it makes some alerts not trigger, even though they would still have triggered if all data was available. This means that currently you will almost never get an alert that you should not have gotten, but you will sometimes not get an alert that you should have gotten. We plan to change this behaviour.
When this change happens, we no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and there are a few query warnings that should make the alert fail.
New features and improvements
Automation and Alerts
When installing or updating a package with an Alert or Scheduled search referencing an action that is not part of the package, the error is now shown in the UI. Previously, a generic error was shown.
Dashboards and Widgets
The text color styling option of the Note Widget is now included when importing a dashboard template or exporting it to a yaml file.
The maximum number of entries suggested in the dropdown of a parameter field of type File Parameter has been increased to 10,000.
Log Collector
You can now toggle columns on the instance table, thereby specifying which information should be shown.
Functions
The
rename()
function has been enhanced: it is now possible to rename multiple fields using an array in its
field
argument. This is backwards compatible with giving separate
field
and
as
arguments.
Fixed in this release
Dashboards and Widgets
Fixed an issue where queries on a dashboard would become invalid if the dashboard filter contained a single-line comment.
Fixed widget description tips on dashboards, which would either not show or show the same text for multiple widgets.
Falcon LogScale 1.105.0 GA (2023-08-29)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.105.0 | GA | 2023-08-29 | Cloud | 2024-09-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Fixed in this release
Other
Keyboard navigation did not work in the jump panel.
Falcon LogScale 1.104.0 GA (2023-08-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.104.0 | GA | 2023-08-22 | Cloud | 2024-09-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
New features and improvements
Log Collector
In Fleet Management, it is now possible to discard the draft of a configuration and rollback to the published version.
For more information, see Edit a Remote Configuration.
Functions
The new query function
crypto:md5()
is introduced. This function computes the MD5 hash of a given array of fields.
Support for decimal values as exponent and divisor has been added to the
math:pow()
and
math:mod()
functions, respectively.
Fixed in this release
Automation and Alerts
If polling queries were slow, then Scheduled Searches could fire twice. This issue is now fixed.
Dashboards and Widgets
If you chose a page size larger than the number of rows, the page number and page size buttons would disappear. The
Table
widget now always shows the pagination buttons on the
Search
page, where auto page size is turned off. On dashboards, where auto page size is turned on, the existing behaviour remains.
Functions
Field names in the query used to export results to CSV were not quoted correctly. This is now fixed.
Falcon LogScale 1.103.0 GA (2023-08-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.103.0 | GA | 2023-08-15 | Cloud | 2024-09-30 | No | 1.70.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
New features and improvements
Automation and Alerts
It is now possible to import and export Filter Alerts in Packages from the UI.
When installing a package, all actions referenced by Alerts and Scheduled Searches in the package must be contained in the package. Previously, missing actions were silently ignored.
Ingestion
The ability to remove fields when parsing data has been enabled for all users.
For more information, see Removing Fields.
Fixed in this release
Automation and Alerts
Filter Alerts installed from a package would show up under General and not under the Package name. This issue has been fixed.
Changes to uploaded files made as part of a package update would be kept even though the package update failed and other changes were rolled back. This incorrect behavior has been fixed.
Log Collector
Fleet Overview
in Fleet Management would hang and not display any data. This behavior has been fixed.
Functions
The
hash()
query function would sometimes compute incorrect hashes when the field was encoded in UTF-8. This is now fixed.
Falcon LogScale 1.102.0 GA (2023-08-08)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.102.0 | GA | 2023-08-08 | Cloud | 2024-09-30 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
New features and improvements
UI Changes
The Show in context dialog now closes when the button in the dialog is clicked.
The fields and values in the Fields Panel and in the Event List are now sorted case-insensitively.
Automation and Alerts
When creating or updating Filter Alerts using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of
"packagescope/packagename:actionname"
. Actions in packages will no longer be found when using an unqualified name.
The UI flow for Alerts has been updated: clicking the create button now presents you directly with the New alert form. Importing an alert from a template or package is done from a button located on top of the new New alert form.
Added a status field to some of the logs for Standard Alerts and Filter Alerts, as well as Scheduled Searches. The field shows whether the current run of the job resulted in a
Success
or
Failure
for the Alert or Scheduled Search.
For more information, see Monitoring Alert Execution through the humio-activity Repository.
It is now possible to create Packages containing Filter Alerts, as well as importing such packages, using the API.
GraphQL API
The following GraphQL mutations have been changed so that the actions field can either contain IDs or names of actions:
createAlert
updateAlert
createScheduledSearch
updateScheduledSearch
Ingestion
Audit logs for Ingest Tokens now include the ingest token name.
Functions
The new query function
wildcard()
is introduced. This function makes it easy to search for case-insensitive patterns on dashboards, or in ad-hoc queries.
Fixed in this release
UI Changes
The URL would not be updated when selecting a time interval in the distribution chart on the
Search
page. This issue is now fixed.
Automation and Alerts
With Scheduled Searches installed from a package, if you edited the scheduled search and then updated the package, then you would get two copies of the scheduled search. This issue is now fixed.
Functions
Fixed an issue that could result in cluster performance degradation using
join()
under certain circumstances.
Falcon LogScale 1.101.1 GA (2023-10-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.101.1 | GA | 2023-10-28 | Cloud | 2024-09-30 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Fixed in this release
UI Changes
Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.
Falcon LogScale 1.101.0 GA (2023-08-01)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.101.0 | GA | 2023-08-01 | Cloud | 2024-09-30 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
New features and improvements
Functions
The memory consumption of the
formatTime()
function has been decreased.
Fixed in this release
Automation and Alerts
Falcon LogScale repository actions have now been fixed for cases where they would ingest data into a repository even though ingest was blocked.
Functions
The
format()
function has been fixed as the US date format modifier resulted in the EU date format instead.
Other
The following repository issues have been fixed:
After multiple attempts in quick succession to create a repository with the same name, repositories could become inaccessible.
Some repositories could only be created partially and would be left partially initialized in the internal architecture used by LogScale.
Falcon LogScale 1.100.3 LTS (2024-01-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.100.3 | LTS | 2024-01-22 | Cloud | 2024-08-31 | No | 1.44.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | 42f1f33754b9295212786d4171c62aab |
SHA1 | b014f6731601d95c9a59257887303f133419ef52 |
SHA256 | c4838d355912d76be123e4694ad6483d7073469634c38a295e0f59d9bb54dc98 |
SHA512 | 7ecf7e438f387256d27c8bbae3ef1e1378c1e9d05b52993e9dd964c922c5732ee93d13d0f4480844781229d6cc5caab97e6ff1576cc04163bd71fd804195a106 |
Docker Image | SHA256 Checksum |
---|---|
humio | a78c9ac49b3f3136a914a95410dc200facb05e0c72549dbba939c6d053c3e75b |
humio-core | d2776980b1618355cfc378925b77bcee61362f7fd176652199b3cc0306268662 |
kafka | 85ae3de0fc78bc8a450fd15e4f385f11c5145ca9af9933801aaa4f0cfcc715cc |
zookeeper | 351829776e29d0560952b2165e9263738d336bb409293354921af2da995be480 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.100.3/server-1.100.3.tar.gz
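A downloaded tarball can be checked against the SHA256 value from the table above with sha256sum -c. The sketch below is self-contained, so it creates a stand-in file and verifies its own computed hash; for a real download, use the tarball from the URL above together with the SHA256 value listed in the release table:

```shell
# Stand-in file so the example runs anywhere; replace it with the real
# tarball, and $expected with the SHA256 from the release table.
printf 'example' > server-1.100.3.tar.gz

expected=$(sha256sum server-1.100.3.tar.gz | awk '{print $1}')

# sha256sum -c expects "<hash>  <filename>" (two spaces) on stdin and
# prints "<filename>: OK" when the file matches.
echo "$expected  server-1.100.3.tar.gz" | sha256sum -c -
```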
These notes include entries from the following previous releases: 1.100.0, 1.100.1, 1.100.2
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Removed
Items that have been removed as of this release.
GraphQL API
The deprecated
RegistryPackage
datatype has been deleted, along with the deprecated mutations and fields using it:
installPackageFromRegistry mutation
updatePackageFromRegistry mutation
package in the
Searchdomain
datatype
Upgrades
Changes that may occur or be required during an upgrade.
Security
xmlsec has been upgraded to 2.3.4 to address the CVE-2023-44483 issue.
Installation and Deployment
Permit running LogScale on Java 20. Docker containers have been upgraded to be based on Java 20.
Other
The Kafka client has been upgraded to 3.4.1. The Kafka broker has been upgraded to 3.4.1 in the Kafka container.
New features and improvements
Security
All view permission tokens created from now on will not be able to run queries based on the user who created them (the legacy behavior, due to queries requiring a user). They will however be able to run queries on behalf of the organization, given the right permissions.
Existing view permission tokens and their resources (scheduled searches, alerts, etc.) are unaffected by this change. For any view permission tokens created after this change, the scheduled searches, alerts, etc. created using these tokens will run based on the organization instead of the user who created the token.
This addresses the issue where, for example, alerts created using a view permission token would fail to run if the user who created the token was removed from the organization, or if the permissions needed to run the alert were removed from the user. With the new behaviour, the alert will continue working even if the user is removed or loses the required permissions to run the alert.
In the unlikely event where an external actor hits the audit log without an IP set, we will now log
null
instead of defaulting to the local IP.
Migration from the legacy Organization Shared Dashboard IP filter to the Dashboard Security Policies for sharing dashboards will be done by Creating an IP Filter corresponding to the old filter. If the migration can be performed, this IP Filter will be set on all shared dashboards and set as the Shared Dashboard IP filter Security Policy for the organization. If migration cannot be done, a notification will be displayed to the organization owner explaining how to complete the migration manually. Migration cannot be done when there is a shared dashboard that has an IP filter other than the legacy Organization Shared Dashboard IP filter.
Introducing organization query ownership, permission tokens and organization level security policies features.
For more information, see Organization Owned Queries, Repository & View Permissions, Security Policies.
UI Changes
Organization and system level permissions can now be handled through the UI.
When duplicating an alert, you are now redirected straight to the New alert page.
For more information, see Reusing an Alert.
Filter alerts now have an updated In preview label which no longer behaves like a button but shows a message when hovering over.
Automation and Alerts
More attributes have been added to Filter alerts:
Filter alerts will now be able to catch up with up to 24 hours of delay (ingest delays + delays in actions).
Filter alerts will now trigger on events that are unavailable for up to 10 minutes due to query warnings.
For more information, see Filter Alerts.
A new option has been added for Alerts and Scheduled Searches.
For more information, see Managing Alerts.
Improvements have been made in the UI:
When Creating an Alert from a Query, the alert type — Standard or Filter — is auto-selected based on query detection.
Added a trigger limit field in the Filter Alerts form.
Actions are now selected in Alerts and Scheduled Searches forms using a ComboBox component.
Changed the behaviour of the + button for Actions selection in the Alerts and Scheduled Searches forms; it will now take you to the form where you create a new action instead of adding an action to that entity.
GraphQL API
For the updateMaxAutoShardCount and blockIngest GraphQL mutations, it is no longer required to be root; instead, the caller must have the
ManageCluster
permission.
Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are
1000
for authenticated and
150
for unauthenticated users. Cluster administrators can adjust these limits with the
GraphQLSelectionSizeLimit
and
UnauthenticatedGraphQLSelectionSizeLimit
dynamic configurations.
The userId input field on the updateDashboardToken mutation is now optional and deprecated in favor of the queryOwnershipType field. If userId is set to anything other than the calling user's ID, an exception will be thrown.
A GraphQL API has been added to read the current tag groupings on a repository.
For more information, see repository() .
QueryOnlyAccessTokens GraphQL query field previously used for a prototype has now been removed.
API
For auto sharding operations (
GET
,
UPDATE
,
DELETE
) in the Cluster Management API, it is no longer required to be root; instead, the caller must have the
ManageCluster
permission.
Configuration
The following configuration parameters have been added:
FILTER_ALERTS_MAX_CATCH_UP_LIMIT
sets how far back filter alerts will be able to catch up with delays.
FILTER_ALERTS_MAX_WAIT_FOR_MISSING_DATA
sets for how long filter alerts will wait for query warnings about missing data to disappear.
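As a hedged sketch, these two parameters could be exported as environment variables for the LogScale process. The parameter names come from the entry above; the values below are illustrative only (chosen to match the 24-hour catch-up and 10-minute wait described in the Filter alerts entry for this release), and the accepted value format should be checked against the LogScale configuration reference:

```shell
# Illustrative values only -- verify the accepted format (duration strings
# vs. plain numbers) against the LogScale configuration reference.
export FILTER_ALERTS_MAX_CATCH_UP_LIMIT=24h
export FILTER_ALERTS_MAX_WAIT_FOR_MISSING_DATA=10m
```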
The following configuration parameters for storage concurrency are now deprecated:
GCP_STORAGE_UPLOAD_CONCURRENCY
GCP_STORAGE_DOWNLOAD_CONCURRENCY
They are replaced by new variables:
If unassigned, the new variables will populate with the largest value from the deprecated variables, until these are removed.
The new configuration parameters
FILTER_ALERT_MAX_EMAIL_TRIGGER_LIMIT
and
FILTER_ALERT_MAX_NON_EMAIL_TRIGGER_LIMIT
now allow setting the trigger limit for filter alerts; the allowed value depends on whether the alert has email actions attached or not.
Introduced the new Dynamic Configuration option
QueryPartitionAutoBalance
which turns automatic balancing of query partitions across nodes on or off.
For more information, see Dynamic Configuration Parameters.
Dashboards and Widgets
When clicking
on a dashboard widget, the query will now use the live setting of the dashboard. Also, parameter values are carried over.For more information, see Manage Widgets.
Log Collector
A new fleet metric has been added to the
Fleet overview
page.
For more information, see Falcon Log Collector Manage your Fleet.
Quick filters have been added on the
Fleet Overview
(Status and Config) and
Config overview
(Status) pages.
For more information, see Falcon Log Collector Manage your Fleet.
A menu item has been added to the
Fleet Overview
page, which now allows you to unenroll a collector from Fleet Management.
For more information, see Manage Falcon Log Collector Instance Enrollment.
Functions
Parameter
ignoreCase
has been added to the
in()
function, to allow for case-insensitive searching. The default is to search case-sensitively for the provided values.
Changed the approximation algorithm used for counting distinct values in
count(myField, distinct=true)
and
fieldstats()
. Any query using one of the aforementioned functions may report a different number, which in most cases will be more accurate than previous estimates.
Other
License keys using the format applied before 2021 are no longer supported. Obsolete license keys start with the string
eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9
. If your license key is obsolete, contact Support to request an equivalent license key in the new format before you upgrade LogScale. All versions of LogScale since 2020 support the new license key format.
For more information, see License Installation.
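Since obsolete keys are identifiable by their leading string, a quick local check is possible before upgrading. The prefix comes from the entry above; the key value below is a placeholder, standing in for however your deployment stores the license key:

```shell
# Placeholder key for demonstration; substitute your actual license key.
LICENSE_KEY='eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.payload.signature'

# Keys starting with this prefix use the obsolete pre-2021 format.
case "$LICENSE_KEY" in
  eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9*)
    echo "obsolete pre-2021 license format: contact Support before upgrading" ;;
  *)
    echo "license key is not in the obsolete format" ;;
esac
```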
A
Tag groupings
page is now available under the repository Settings tab, showing the tag groupings currently in use on a repository.
Fixed in this release
Security
Hidden validation issues that would prevent saving changes to the Security Policies configuration have now been fixed.
UI Changes
Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.
Fixed an issue where query parameters would be extracted from comments in the query.
Fixed an error that was thrown when attempting to export fields to CSV containing spaces.
Fixed an issue where the default query prefix would override exceptions to default role bindings if no query prefix was set in the exceptions. The default query prefix set in the default role now only affects views that are not defined as an exception to the default rule.
Automation and Alerts
Filter alerts with a query ending with a comment would not run. This issue has now been fixed.
GraphQL API
The GraphQL query used by the front page could not return all views and repositories a user had access to, because of an issue with the default roles on groups. This issue has now been fixed.
Configuration
Fixed wrong behaviour in the
StaticQueryFractionOfCores
dynamic configuration. The intent of this configuration is to limit queries from one organization (or user, on single-organization clusters) to run on at most a certain percentage of mapper threads, effectively throttling queries to prevent one organization from consuming all capacity. Throttled queries from one organization could still block queries from other organizations and prevent them from running, leaving mapper threads idle. This behaviour has now been fixed.
Dashboards and Widgets
When Using Saved Queries in Interactions, the interaction would not be kept if the saved query was created from a template with the button. This issue is now fixed.
Description tips that were partly hidden in
Table
widgets are now correctly displayed in dashboards.
Fixed the parameter form, which could not be opened when asterisks were used as quoted identifiers in the query.
On charts, the legend tooltip was sometimes hidden towards the bottom of the chart. It has now been fixed to stay within the chart boundaries.
The rendering of JSON in the
Event List
widget is now faster and consumes less memory.
In Dashboard Link, the targeted dashboard could not display correctly if the dashboard was renamed. The issue has been fixed by using the dashboard ID instead of the name as reference.
When using the
sort()
function with the
Bar Chart
widget, it would only stay sorted for a while. The issue has been fixed, and the widget now remains sorted in the same order as the underlying data.
Ingestion
A 500 status code was issued when ingesting to
/api/v1/ingest/json
with no assigned parser. It now ingests the rawstring.
Functions
Fixed an issue where syntax coloring and code completion would stop working in certain cases (using multiple saved queries, or an aggregate function in
case
).
Fixed the
bucket()
and
timeChart()
functions, as they could lead to partially missing results when used in combination with
window()
.
Other
BucketStorageUploadLatencyJob
could incorrectly report that LogScale was falling behind on bucket uploads. This issue has been fixed.
Fixed a race that could leave a query in a state where it caused an excessive amount of 404 HTTP requests, adding unnecessary noise and a bit of extra load to the system.
Packages
Upgrading a Package could result in a conflict for unchanged items when those items had fields beginning or ending with spaces. This issue has now been fixed.
Falcon LogScale 1.100.2 LTS (2023-11-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.100.2 | LTS | 2023-11-15 | Cloud | 2024-08-31 | No | 1.44.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | c99f3f0c7a456d11a34a7fc2ab95e20e |
SHA1 | e8a3d07f1caff582bc735bd3e41d9dde46a383ec |
SHA256 | 84327f98133e658e21a9a96086bd1278fd2b3b283bf11cb272a942b1f5808642 |
SHA512 | c3b84cc9262dae85a43f739f4234d0079099cdbf5601275f3e7f58dc7473a510d2325b32d48f7c3a2391412333e66b173522a7817c3864ecda7f137f323e0d7f |
Docker Image | SHA256 Checksum |
---|---|
humio | 11c959196ced9534737388aac9b75bb19790d19187f766c71590390e33562482 |
humio-core | f040d8eb6bd18c2a261900a0c4a2539aad5d8724c32c3f29c612326d9ce48c40 |
kafka | 482857019187364223678e5ca21f16f3b34587f62ddf941866e136f633a69ea1 |
zookeeper | cbf4679f4097c0b05242bb228897b9bdf44c41b44bbd5beccf4ed22165c95352 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.100.2/server-1.100.2.tar.gz
These notes include entries from the following previous releases: 1.100.0, 1.100.1
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Removed
Items that have been removed as of this release.
GraphQL API
The deprecated
RegistryPackage
datatype has been deleted, along with the deprecated mutations and fields using it:
installPackageFromRegistry mutation
updatePackageFromRegistry mutation
package in the
Searchdomain
datatype
Upgrades
Changes that may occur or be required during an upgrade.
Security
xmlsec has been upgraded to 2.3.4 to address the CVE-2023-44483 issue.
Installation and Deployment
Permit running LogScale on Java 20. Docker containers have been upgraded to be based on Java 20.
Other
The Kafka client has been upgraded to 3.4.1. The Kafka broker has been upgraded to 3.4.1 in the Kafka container.
New features and improvements
Security
All view permission tokens created from now on will not be able to run queries based on the user who created them (the legacy behavior, due to queries requiring a user). They will however be able to run queries on behalf of the organization, given the right permissions.
Existing view permission tokens and their resources (scheduled searches, alerts, etc.) are unaffected by this change. For any view permission tokens created after this change, the scheduled searches, alerts, etc. created using these tokens will run based on the organization instead of the user who created the token.
This addresses the issue where, for example, alerts created using a view permission token would fail to run if the user who created the token was removed from the organization, or if the permissions needed to run the alert were removed from the user. With the new behaviour, the alert will continue working even if the user is removed or loses the required permissions to run the alert.
In the unlikely event where an external actor hits the audit log without an IP set, we will now log
null
instead of defaulting to the local IP.
Migration from the legacy Organization Shared Dashboard IP filter to the Dashboard Security Policies for sharing dashboards will be done by Creating an IP Filter corresponding to the old filter. If the migration can be performed, this IP Filter will be set on all shared dashboards and set as the Shared Dashboard IP filter Security Policy for the organization. If migration cannot be done, a notification will be displayed to the organization owner explaining how to complete the migration manually. Migration cannot be done when there is a shared dashboard that has an IP filter other than the legacy Organization Shared Dashboard IP filter.
Introducing organization query ownership, permission tokens and organization level security policies features.
For more information, see Organization Owned Queries, Repository & View Permissions, Security Policies.
UI Changes
Organization and system level permissions can now be handled through the UI.
When duplicating an alert, you are now redirected straight to the New alert page.
For more information, see Reusing an Alert.
Filter alerts now have an updated In preview label which no longer behaves like a button but shows a message when hovering over.
Automation and Alerts
More attributes have been added to Filter alerts:
Filter alerts will now be able to catch up with up to 24 hours of delay (ingest delays + delays in actions).
Filter alerts will now trigger on events that are unavailable for up to 10 minutes due to query warnings.
For more information, see Filter Alerts.
A new option has been added for Alerts and Scheduled Searches.
For more information, see Managing Alerts.
Improvements have been made in the UI:
When Creating an Alert from a Query, the alert type — Standard or Filter — is auto-selected based on query detection.
Added a trigger limit field in the Filter Alerts form.
Actions are now selected in Alerts and Scheduled Searches forms using a ComboBox component.
Changed the behaviour of the + button for Actions selection in the Alerts and Scheduled Searches forms; it will now take you to the form where you create a new action instead of adding an action to that entity.
GraphQL API
For the updateMaxAutoShardCount and blockIngest GraphQL mutations, it is no longer required to be root; instead, the caller must have the
ManageCluster
permission.
The userId input field on the updateDashboardToken mutation is now optional and deprecated in favor of the queryOwnershipType field. If userId is set to anything other than the calling user's ID, an exception will be thrown.
A GraphQL API has been added to read the current tag groupings on a repository.
For more information, see repository() .
QueryOnlyAccessTokens GraphQL query field previously used for a prototype has now been removed.
API
For auto sharding operations (
GET
,
UPDATE
,
DELETE
) in the Cluster Management API, it is no longer required to be root; instead, the caller must have the
ManageCluster
permission.
Configuration
The following configuration parameters have been added:
FILTER_ALERTS_MAX_CATCH_UP_LIMIT
sets how far back filter alerts will be able to catch up with delays.
FILTER_ALERTS_MAX_WAIT_FOR_MISSING_DATA
sets for how long filter alerts will wait for query warnings about missing data to disappear.
The following configuration parameters for storage concurrency are now deprecated:
GCP_STORAGE_UPLOAD_CONCURRENCY
GCP_STORAGE_DOWNLOAD_CONCURRENCY
They are replaced by new variables:
If unassigned, the new variables will populate with the largest value from the deprecated variables, until these are removed.
The new configuration parameters
FILTER_ALERT_MAX_EMAIL_TRIGGER_LIMIT
and
FILTER_ALERT_MAX_NON_EMAIL_TRIGGER_LIMIT
now allow setting the trigger limit for filter alerts; the allowed value depends on whether the alert has email actions attached or not.
Introduced the new Dynamic Configuration option
QueryPartitionAutoBalance
which turns automatic balancing of query partitions across nodes on or off.
For more information, see Dynamic Configuration Parameters.
Dashboards and Widgets
When clicking
on a dashboard widget, the query will now use the live setting of the dashboard. Also, parameter values are carried over.For more information, see Manage Widgets.
Log Collector
A new fleet metric has been added to the
Fleet overview
page.
For more information, see Falcon Log Collector Manage your Fleet.
Quick filters have been added on the
Fleet Overview
(Status and Config) and
Config overview
(Status) pages.
For more information, see Falcon Log Collector Manage your Fleet.
A menu item has been added to the
Fleet Overview
page, which now allows you to unenroll a collector from Fleet Management.
For more information, see Manage Falcon Log Collector Instance Enrollment.
Functions
Parameter
ignoreCase
has been added to the
in()
function, to allow for case-insensitive searching. The default is to search case-sensitively for the provided values.
Changed the approximation algorithm used for counting distinct values in
count(myField, distinct=true)
and
fieldstats()
. Any query using one of the aforementioned functions may report a different number, which in most cases will be more accurate than previous estimates.
Other
License keys using the format applied before 2021 are no longer supported. Obsolete license keys start with the string
eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9
. If your license key is obsolete, contact Support to request an equivalent license key in the new format before you upgrade LogScale. All versions of LogScale since 2020 support the new license key format.
For more information, see License Installation.
A
Tag groupings
page is now available under the repository Settings tab, showing the tag groupings currently in use on a repository.
Fixed in this release
Security
Hidden validation issues that would prevent saving changes to the Security Policies configuration have now been fixed.
UI Changes
Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.
Fixed an issue where query parameters would be extracted from comments in the query.
Fixed an error that was thrown when attempting to export fields to CSV containing spaces.
Fixed an issue where the default query prefix would override exceptions to default role bindings if no query prefix was set in the exceptions. The default query prefix set in the default role now only affects views that are not defined as an exception to the default rule.
Automation and Alerts
Filter alerts with a query ending with a comment would not run. This issue has now been fixed.
GraphQL API
The GraphQL query used by the front page could not return all views and repositories a user had access to, because of an issue with the default roles on groups. This issue has now been fixed.
Configuration
Wrong behaviour in the
StaticQueryFractionOfCores
dynamic configuration. The intent of this configuration is to limit queries from one organization (a user, on single-organization clusters) to running on at most a certain percentage of mapper threads, effectively throttling queries to prevent one organization from consuming all capacity. Throttled queries from one organization could still block queries from other organizations and prevent them from running, leaving mapper threads idle. This behaviour has now been fixed.
Dashboards and Widgets
When Using Saved Queries in Interactions, the interaction would not be kept if the saved query was created from a template with the button. This issue is now fixed.
Description tips that were partly hidden in
Table
widgets are now correctly displayed in dashboards.

Fixed the parameter form, which could not be opened when asterisks were used as quoted identifiers in the query.
On charts, the legend tooltip was sometimes hidden towards the bottom of the chart. It has now been fixed to stay within the chart boundaries.
The rendering of JSON in the
Event List
widget is now faster and consumes less memory.

In Dashboard Link, the targeted dashboard could not display correctly if the dashboard was renamed. The issue has been fixed by using the dashboard ID instead of the name as the reference.
When using the
sort()
function with the Bar Chart
widget, it would only stay sorted for a while. The issue has been fixed and it now remains sorted in the same order as the underlying data.
Ingestion
A 500 status code was issued when ingesting to
/api/v1/ingest/json
with no assigned parser. It now ingests the rawstring.
Functions
Fixed an issue where syntax coloring and code completion would stop working in certain cases (using multiple saved queries, or aggregate function in
case
).

Fixed
bucket()
and timeChart()
functions, as they could lead to partially missing results when used in combination with window()
.
Other
BucketStorageUploadLatencyJob
could incorrectly report that LogScale was falling behind on bucket uploads. This issue has been fixed.

Fixed a race condition that could leave a query in a state where it caused an excessive number of 404 HTTP requests, adding unnecessary noise and extra load to the system.
Packages
Upgrading a Package could result in a conflict for unchanged items when those items had fields beginning or ending with spaces. This issue has now been fixed.
Falcon LogScale 1.100.1 LTS (2023-10-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.100.1 | LTS | 2023-10-28 | Cloud | 2024-08-31 | No | 1.44.0 | No |
TAR Checksum | Value |
---|---|
MD5 | b6c4ba4a2ad739834e88487addd0337d |
SHA1 | f7af84bba1bdf85ebec2a88afbccfe21086d848b |
SHA256 | e54266bbfc953464f65d071363ee33407910c9b0c4fccf9dffdb30b1252bebe8 |
SHA512 | 6b6d54cc24e40e5fb079d0b708f81b21d9765cb1c24e90bc0830599d3421d1c3ed329193865ab6a8326e31e4cfcea442b1d328d95b7fabd951dd19aaf0ace684 |
Docker Image | SHA256 Checksum |
---|---|
humio | c912f26dff073f1b79c4f1cb54a590f8608d41f23af653c85f46b917f5752561 |
humio-core | 9c28429f6ed743e54ad849e4211291c91725ab73fb72c03fc65f7f14f54ee21b |
kafka | b6950dddb92ab5df6b63e77d6df96a06ebc1d0675e0889c1f35f745d9904188d |
zookeeper | ac7de0749fac8ac57a0098c5dd7c1c0a2169950233be015c2264bd24ab830015 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.100.1/server-1.100.1.tar.gz
These notes include entries from the following previous releases: 1.100.0
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Removed
Items that have been removed as of this release.
GraphQL API
The deprecated
RegistryPackage
datatype has been deleted, along with the deprecated mutations and fields using it:
installPackageFromRegistry mutation
updatePackageFromRegistry mutation
the package field in the
Searchdomain
datatype
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Permit running LogScale on Java 20. Docker containers have been upgraded to be based on Java 20.
Other
The Kafka client has been upgraded to 3.4.1. The Kafka broker has been upgraded to 3.4.1 in the Kafka container.
New features and improvements
Security
All view permission tokens created from now on will no longer be able to run queries as the user who created them (the legacy behavior, due to queries requiring a user). They will, however, be able to run queries on behalf of the organization, given the right permissions.
Existing view permission tokens and the resources (scheduled searches, alerts, etc.) are unaffected by this change. For any view permission tokens created after this change, the scheduled searches, alerts, etc. created using these tokens will run based on the organization instead of the user who created the token.
This addresses the issue where, for example, alerts created using a view permission token would fail to run if the user who created the token was removed from the organization, or if the permissions needed to run the alert were removed from the user. With the new behaviour, the alert continues working even if the user is removed or loses the required permissions to run the alert.
In the unlikely event where an external actor hits the audit log without an IP set, we will now log
null
instead of defaulting to the local IP.

Migration from the legacy Organization Shared Dashboard IP filter to the Dashboard Security Policies for sharing dashboards will be done by creating an IP Filter corresponding to the old filter. If the migration can be performed, this IP Filter will be set on all shared dashboards and set as the Shared Dashboard IP filter Security Policy for the organization. If migration cannot be done, a notification will be displayed to the organization owner explaining how to complete the migration manually. Migration cannot be done when there is a shared dashboard that has an IP filter other than the legacy Organization Shared Dashboard IP filter.
Introducing organization query ownership, permission tokens, and organization-level security policies.
For more information, see Organization Owned Queries, Repository & View Permissions, Security Policies.
UI Changes
Organization and system level permissions can now be handled through the UI.
When duplicating an alert, you are now redirected straight to the New alert page.
For more information, see Reusing an Alert.
Filter alerts now have an updated In preview label, which no longer behaves like a button but shows a message when hovering over it.
Automation and Alerts
More attributes have been added to Filter alerts:
Filter alerts will now be able to catch up with up to 24 hours of delay (ingest delays + delays in actions).
Filter alerts will now trigger on events that are unavailable for up to 10 minutes due to query warnings.
For more information, see Filter Alerts.
A new option has been added for Alerts and Scheduled Searches. For more information, see Managing Alerts.
Improvements have been made in the UI:
When Creating an Alert from a Query, the alert type — Standard or Filter — is auto-selected based on query detection.
Added a trigger limit field in the Filter Alerts form.
Actions are now selected in Alerts and Scheduled Searches forms using a ComboBox component.
Changed the behaviour of the + button for Actions selection in the Alerts and Scheduled Searches forms; it will now take you to the form where you create a new action instead of adding an action to that entity.
GraphQL API
For the updateMaxAutoShardCount and blockIngest GraphQL mutations, it is no longer required to be root; instead, the caller must have the
ManageCluster
permission.

The userId input field on the updateDashboardToken mutation is now optional and deprecated in favor of the queryOwnershipType field. If userId is set to anything other than the calling user's ID, an exception will be thrown.
A GraphQL API has been added to read the current tag groupings on a repository.
For more information, see repository() .
The QueryOnlyAccessTokens GraphQL query field, previously used for a prototype, has now been removed.
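The updateDashboardToken change above can be sketched as a GraphQL call. Note this is only a sketch: the notes confirm that queryOwnershipType replaces the deprecated userId field, but the other argument names and the selection set here are assumptions.

```graphql
# Sketch only: argument names other than queryOwnershipType/userId are
# assumptions; consult the GraphQL schema for the actual signature.
mutation {
  updateDashboardToken(
    id: "dashboard-id"
    token: "token-id"
    queryOwnershipType: Organization
  ) {
    __typename
  }
}
```

Setting userId to any value other than the calling user's ID will raise an exception, so omitting it in favor of queryOwnershipType is the forward-compatible choice.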
API
For auto sharding operations (
GET
,UPDATE
,DELETE
) in the Cluster Management API, it is no longer required to be root; instead, the caller must have the
permission.
Configuration
The following configuration parameters have been added:
FILTER_ALERTS_MAX_CATCH_UP_LIMIT
to set how far back filter alerts will be able to catch up with delays.

FILTER_ALERTS_MAX_WAIT_FOR_MISSING_DATA
to set for how long filter alerts will wait for query warnings about missing data to disappear.
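For example, these parameters could be set in the cluster's environment configuration. The values below mirror the behavior described for filter alerts elsewhere in these notes (up to 24 hours of catch-up, up to 10 minutes of waiting), but the exact accepted value syntax is an assumption:

```
# Illustrative values; check the configuration reference for accepted formats
FILTER_ALERTS_MAX_CATCH_UP_LIMIT=24h
FILTER_ALERTS_MAX_WAIT_FOR_MISSING_DATA=10m
```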
The following configuration parameters for storage concurrency are now deprecated:
GCP_STORAGE_UPLOAD_CONCURRENCY
GCP_STORAGE_DOWNLOAD_CONCURRENCY
They are replaced by new variables:
If unassigned, the new variables will populate with the largest value from the deprecated variables, until these are removed.
The new configuration parameters
FILTER_ALERT_MAX_EMAIL_TRIGGER_LIMIT
and FILTER_ALERT_MAX_NON_EMAIL_TRIGGER_LIMIT
now allow setting the trigger limit for filter alerts; the allowed value depends on whether the alert has email actions attached or not.

Introduced the new Dynamic Configuration option
QueryPartitionAutoBalance
which turns automatic balancing of query partitions across nodes on or off.

For more information, see Dynamic Configuration Parameters.
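Dynamic configuration options such as this one are typically toggled through the GraphQL API rather than environment variables. As a sketch (the setDynamicConfig mutation name and its input shape are assumptions here, not confirmed by these notes):

```graphql
mutation {
  # Turn automatic query-partition balancing on (shape assumed)
  setDynamicConfig(input: { config: QueryPartitionAutoBalance, value: "true" })
}
```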
Dashboards and Widgets
When clicking
on a dashboard widget, the query will now use the live setting of the dashboard. Parameter values are also carried over.

For more information, see Manage Widgets.
Log Collector
A new fleet metric has been added to the
Fleet overview
page. For more information, see Falcon Log Collector Manage your Fleet.
Quick filters have been added on
Fleet Overview
(Status and Config) and on Config overview
(Status) pages. For more information, see Falcon Log Collector Manage your Fleet.
A menu item has been added to
Fleet Overview
page, which now allows unenrolling a collector from Fleet Management. For more information, see Manage Falcon Log Collector Instance Enrollment.
Functions
Parameter
ignoreCase
has been added to the in()
function, to allow for case-insensitive searching. The default is to search case-sensitively for the provided values.

Changed the approximation algorithm used for counting distinct values in
count(myField, distinct=true)
and fieldstats()
. Any query using one of the aforementioned functions may report a different number, which in most cases will be more accurate than previous estimates.
Other
License keys using the format applied before 2021 are no longer supported. Obsolete license formats start with the string
eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9
. If your license key is obsolete, contact Support before you upgrade LogScale to request an equivalent license key in the new format. All versions of LogScale since 2020 support the new license key format. For more information, see License Installation.
Tag groupings
page is now available under the repository Settings tab, showing the tag groupings currently in use on the repository.
Fixed in this release
Security
Hidden validation issues that would prevent saving changes to the Security Policies configuration have now been fixed.
UI Changes
The Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight saving time.
Fixed an issue where query parameters would be extracted from comments in the query.
Fixed an error that was thrown when attempting to export fields containing spaces to CSV.
Fixed the default query prefixes which would override exceptions to default role bindings if no query prefix is set in the exceptions. The default query prefix set in the default role will now only impact views that are not defined as an exception to the default rule.
Automation and Alerts
Filter alerts with a query ending with a comment would not run. This issue has now been fixed.
GraphQL API
The GraphQL query used by the front page could not return all views and repositories a user had access to, because of an issue with the default roles on groups. This issue has now been fixed.
Configuration
Wrong behaviour in the
StaticQueryFractionOfCores
dynamic configuration. The intent of this configuration is to limit queries from one organization (a user, on single-organization clusters) to running on at most a certain percentage of mapper threads, effectively throttling queries to prevent one organization from consuming all capacity. Throttled queries from one organization could still block queries from other organizations and prevent them from running, leaving mapper threads idle. This behaviour has now been fixed.
Dashboards and Widgets
When Using Saved Queries in Interactions, the interaction would not be kept if the saved query was created from a template with the button. This issue is now fixed.
Description tips that were partly hidden in
Table
widgets are now correctly displayed in dashboards.

Fixed the parameter form, which could not be opened when asterisks were used as quoted identifiers in the query.
On charts, the legend tooltip was sometimes hidden towards the bottom of the chart. It has now been fixed to stay within the chart boundaries.
The rendering of JSON in the
Event List
widget is now faster and consumes less memory.

In Dashboard Link, the targeted dashboard could not display correctly if the dashboard was renamed. The issue has been fixed by using the dashboard ID instead of the name as the reference.
When using the
sort()
function with the Bar Chart
widget, it would only stay sorted for a while. The issue has been fixed and it now remains sorted in the same order as the underlying data.
Ingestion
A 500 status code was issued when ingesting to
/api/v1/ingest/json
with no assigned parser. It now ingests the rawstring.
Functions
Fixed an issue where syntax coloring and code completion would stop working in certain cases (using multiple saved queries, or aggregate function in
case
).

Fixed
bucket()
and timeChart()
functions, as they could lead to partially missing results when used in combination with window()
.
Other
BucketStorageUploadLatencyJob
could incorrectly report that LogScale was falling behind on bucket uploads. This issue has been fixed.
Packages
Upgrading a Package could result in a conflict for unchanged items when those items had fields beginning or ending with spaces. This issue has now been fixed.
Falcon LogScale 1.100.0 LTS (2023-08-16)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.100.0 | LTS | 2023-08-16 | Cloud | 2024-08-31 | No | 1.44.0 | No |
TAR Checksum | Value |
---|---|
MD5 | 21d63c1c73f770ef58d5adc06ea1841d |
SHA1 | fafb23178c2ed5dc84dab13265f7dc89b8940de2 |
SHA256 | d51e51ae8e8301044be0bd3c617bde63a6f83b787ded74b61e8e5ded573cad15 |
SHA512 | f0f0b76ceef499ccfe0f1090b0fcaba3fcb16b1cdd61d6d1420d1f3ae84267c251778fc6bd4a1b982fc31c2e5d05af69f784d60daa876101190b3ecb21b53388 |
Docker Image | SHA256 Checksum |
---|---|
humio | fae9d70da0bfe10cb6502029cf0eeb23787f1af56fb773ad894b285052b9f9af |
humio-core | 4ddd216beb45f6bd70f59b0137fb8a36f5b32dfecfebb427059b97b118521d16 |
kafka | 11eb764c06ea5015fc803453c43ea38c034ed66c807aa46ef273d2bf406c7986 |
zookeeper | 1f6d7261f2e2970dd4c67812d5103d3c5dbc228b27f45e18cecf4ab74335969a |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.100.0/server-1.100.0.tar.gz
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Removed
Items that have been removed as of this release.
GraphQL API
The deprecated
RegistryPackage
datatype has been deleted, along with the deprecated mutations and fields using it:
installPackageFromRegistry mutation
updatePackageFromRegistry mutation
the package field in the
Searchdomain
datatype
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Permit running LogScale on Java 20. Docker containers have been upgraded to be based on Java 20.
Other
The Kafka client has been upgraded to 3.4.1. The Kafka broker has been upgraded to 3.4.1 in the Kafka container.
New features and improvements
Security
All view permission tokens created from now on will no longer be able to run queries as the user who created them (the legacy behavior, due to queries requiring a user). They will, however, be able to run queries on behalf of the organization, given the right permissions.
Existing view permission tokens and the resources (scheduled searches, alerts, etc.) are unaffected by this change. For any view permission tokens created after this change, the scheduled searches, alerts, etc. created using these tokens will run based on the organization instead of the user who created the token.
This addresses the issue where, for example, alerts created using a view permission token would fail to run if the user who created the token was removed from the organization, or if the permissions needed to run the alert were removed from the user. With the new behaviour, the alert continues working even if the user is removed or loses the required permissions to run the alert.
In the unlikely event where an external actor hits the audit log without an IP set, we will now log
null
instead of defaulting to the local IP.

Migration from the legacy Organization Shared Dashboard IP filter to the Dashboard Security Policies for sharing dashboards will be done by creating an IP Filter corresponding to the old filter. If the migration can be performed, this IP Filter will be set on all shared dashboards and set as the Shared Dashboard IP filter Security Policy for the organization. If migration cannot be done, a notification will be displayed to the organization owner explaining how to complete the migration manually. Migration cannot be done when there is a shared dashboard that has an IP filter other than the legacy Organization Shared Dashboard IP filter.
Introducing organization query ownership, permission tokens, and organization-level security policies.
For more information, see Organization Owned Queries, Repository & View Permissions, Security Policies.
UI Changes
Organization and system level permissions can now be handled through the UI.
When duplicating an alert, you are now redirected straight to the New alert page.
For more information, see Reusing an Alert.
Filter alerts now have an updated In preview label, which no longer behaves like a button but shows a message when hovering over it.
Automation and Alerts
More attributes have been added to Filter alerts:
Filter alerts will now be able to catch up with up to 24 hours of delay (ingest delays + delays in actions).
Filter alerts will now trigger on events that are unavailable for up to 10 minutes due to query warnings.
For more information, see Filter Alerts.
A new option has been added for Alerts and Scheduled Searches. For more information, see Managing Alerts.
Improvements have been made in the UI:
When Creating an Alert from a Query, the alert type — Standard or Filter — is auto-selected based on query detection.
Added a trigger limit field in the Filter Alerts form.
Actions are now selected in Alerts and Scheduled Searches forms using a ComboBox component.
Changed the behaviour of the + button for Actions selection in the Alerts and Scheduled Searches forms; it will now take you to the form where you create a new action instead of adding an action to that entity.
GraphQL API
For the updateMaxAutoShardCount and blockIngest GraphQL mutations, it is no longer required to be root; instead, the caller must have the
ManageCluster
permission.

The userId input field on the updateDashboardToken mutation is now optional and deprecated in favor of the queryOwnershipType field. If userId is set to anything other than the calling user's ID, an exception will be thrown.
A GraphQL API has been added to read the current tag groupings on a repository.
For more information, see repository() .
The QueryOnlyAccessTokens GraphQL query field, previously used for a prototype, has now been removed.
API
For auto sharding operations (
GET
,UPDATE
,DELETE
) in the Cluster Management API, it is no longer required to be root; instead, the caller must have the
permission.
Configuration
The following configuration parameters have been added:
FILTER_ALERTS_MAX_CATCH_UP_LIMIT
to set how far back filter alerts will be able to catch up with delays.

FILTER_ALERTS_MAX_WAIT_FOR_MISSING_DATA
to set for how long filter alerts will wait for query warnings about missing data to disappear.
The following configuration parameters for storage concurrency are now deprecated:
GCP_STORAGE_UPLOAD_CONCURRENCY
GCP_STORAGE_DOWNLOAD_CONCURRENCY
They are replaced by new variables:
If unassigned, the new variables will populate with the largest value from the deprecated variables, until these are removed.
The new configuration parameters
FILTER_ALERT_MAX_EMAIL_TRIGGER_LIMIT
and FILTER_ALERT_MAX_NON_EMAIL_TRIGGER_LIMIT
now allow setting the trigger limit for filter alerts; the allowed value depends on whether the alert has email actions attached or not.

Introduced the new Dynamic Configuration option
QueryPartitionAutoBalance
which turns automatic balancing of query partitions across nodes on or off.

For more information, see Dynamic Configuration Parameters.
Dashboards and Widgets
When clicking
on a dashboard widget, the query will now use the live setting of the dashboard. Parameter values are also carried over.

For more information, see Manage Widgets.
Log Collector
A new fleet metric has been added to the
Fleet overview
page. For more information, see Falcon Log Collector Manage your Fleet.
Quick filters have been added on
Fleet Overview
(Status and Config) and on Config overview
(Status) pages. For more information, see Falcon Log Collector Manage your Fleet.
A menu item has been added to
Fleet Overview
page, which now allows unenrolling a collector from Fleet Management. For more information, see Manage Falcon Log Collector Instance Enrollment.
Functions
Parameter
ignoreCase
has been added to the in()
function, to allow for case-insensitive searching. The default is to search case-sensitively for the provided values.

Changed the approximation algorithm used for counting distinct values in
count(myField, distinct=true)
and fieldstats()
. Any query using one of the aforementioned functions may report a different number, which in most cases will be more accurate than previous estimates.
Other
License keys using the format applied before 2021 are no longer supported. Obsolete license formats start with the string
eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9
. If your license key is obsolete, contact Support before you upgrade LogScale to request an equivalent license key in the new format. All versions of LogScale since 2020 support the new license key format. For more information, see License Installation.
Tag groupings
page is now available under the repository Settings tab, showing the tag groupings currently in use on the repository.
Fixed in this release
Security
Hidden validation issues that would prevent saving changes to the Security Policies configuration have now been fixed.
UI Changes
Fixed an issue where query parameters would be extracted from comments in the query.
Fixed an error that was thrown when attempting to export fields containing spaces to CSV.
Fixed the default query prefixes which would override exceptions to default role bindings if no query prefix is set in the exceptions. The default query prefix set in the default role will now only impact views that are not defined as an exception to the default rule.
Automation and Alerts
Filter alerts with a query ending with a comment would not run. This issue has now been fixed.
GraphQL API
The GraphQL query used by the front page could not return all views and repositories a user had access to, because of an issue with the default roles on groups. This issue has now been fixed.
Configuration
Wrong behaviour in the
StaticQueryFractionOfCores
dynamic configuration. The intent of this configuration is to limit queries from one organization (a user, on single-organization clusters) to running on at most a certain percentage of mapper threads, effectively throttling queries to prevent one organization from consuming all capacity. Throttled queries from one organization could still block queries from other organizations and prevent them from running, leaving mapper threads idle. This behaviour has now been fixed.
Dashboards and Widgets
When Using Saved Queries in Interactions, the interaction would not be kept if the saved query was created from a template with the button. This issue is now fixed.
Description tips that were partly hidden in
Table
widgets are now correctly displayed in dashboards.

Fixed the parameter form, which could not be opened when asterisks were used as quoted identifiers in the query.
On charts, the legend tooltip was sometimes hidden towards the bottom of the chart. It has now been fixed to stay within the chart boundaries.
The rendering of JSON in the
Event List
widget is now faster and consumes less memory.

In Dashboard Link, the targeted dashboard could not display correctly if the dashboard was renamed. The issue has been fixed by using the dashboard ID instead of the name as the reference.
When using the
sort()
function with the Bar Chart
widget, it would only stay sorted for a while. The issue has been fixed and it now remains sorted in the same order as the underlying data.
Ingestion
A 500 status code was issued when ingesting to
/api/v1/ingest/json
with no assigned parser. It now ingests the rawstring.
Functions
Fixed an issue where syntax coloring and code completion would stop working in certain cases (using multiple saved queries, or aggregate function in
case
).

Fixed
bucket()
and timeChart()
functions, as they could lead to partially missing results when used in combination with window()
.
Other
BucketStorageUploadLatencyJob
could incorrectly report that LogScale was falling behind on bucket uploads. This issue has been fixed.
Packages
Upgrading a Package could result in a conflict for unchanged items when those items had fields beginning or ending with spaces. This issue has now been fixed.
Falcon LogScale 1.99.0 GA (2023-07-18)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.99.0 | GA | 2023-07-18 | Cloud | 2024-08-31 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
New features and improvements
Log Collector
A menu item has been added to
Fleet Overview
page, which now allows unenrolling a collector from Fleet Management. For more information, see Manage Falcon Log Collector Instance Enrollment.
Functions
Parameter
ignoreCase
has been added to the in()
function, to allow for case-insensitive searching. The default is to search case-sensitively for the provided values.
Fixed in this release
GraphQL API
The GraphQL query used by the front page could not return all views and repositories a user had access to, because of an issue with the default roles on groups. This issue has now been fixed.
Configuration
Wrong behaviour in the
StaticQueryFractionOfCores
dynamic configuration. The intent of this configuration is to limit queries from one organization (a user, on single-organization clusters) to running on at most a certain percentage of mapper threads, effectively throttling queries to prevent one organization from consuming all capacity. Throttled queries from one organization could still block queries from other organizations and prevent them from running, leaving mapper threads idle. This behaviour has now been fixed.
Packages
Upgrading a Package could result in a conflict for unchanged items when those items had fields beginning or ending with spaces. This issue has now been fixed.
Falcon LogScale 1.98.0 GA (2023-07-11)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.98.0 | GA | 2023-07-11 | Cloud | 2024-08-31 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
New features and improvements
Automation and Alerts
Improvements have been made in the UI:
When Creating an Alert from a Query, the alert type — Standard or Filter — is auto-selected based on query detection.
Added a trigger limit field in the Filter Alerts form.
Actions are now selected in Alerts and Scheduled Searches forms using a ComboBox component.
Changed the behaviour of the + button for Actions selection in the Alerts and Scheduled Searches forms; it will now take you to the form where you create a new action instead of adding an action to that entity.
GraphQL API
The QueryOnlyAccessTokens GraphQL query field, previously used for a prototype, has now been removed.
Configuration
The new configuration parameters
FILTER_ALERT_MAX_EMAIL_TRIGGER_LIMIT
and FILTER_ALERT_MAX_NON_EMAIL_TRIGGER_LIMIT
now allow setting the trigger limit for filter alerts; the allowed value depends on whether the alert has email actions attached or not.
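As with the other filter-alert settings, these would be set in the cluster environment. The values below are purely illustrative — the notes do not state defaults, allowed ranges, or validation rules for these parameters:

```
# Illustrative values only; email-action alerts get a separate (lower) limit
FILTER_ALERT_MAX_EMAIL_TRIGGER_LIMIT=5
FILTER_ALERT_MAX_NON_EMAIL_TRIGGER_LIMIT=100
```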
Falcon LogScale 1.97.0 GA (2023-07-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.97.0 | GA | 2023-07-04 | Cloud | 2024-08-31 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
New features and improvements
Security
All view permission tokens created from now on will no longer be able to run queries as the user who created them (the legacy behavior, due to queries requiring a user). They will, however, be able to run queries on behalf of the organization, given the right permissions.
Existing view permission tokens and the resources (scheduled searches, alerts, etc.) are unaffected by this change. For any view permission tokens created after this change, the scheduled searches, alerts, etc. created using these tokens will run based on the organization instead of the user who created the token.
This addresses the issue where, for example, alerts created using a view permission token would fail to run if the user who created the token was removed from the organization, or if the permissions needed to run the alert were removed from the user. With the new behaviour, the alert continues working even if the user is removed or loses the required permissions to run the alert.
Migration from the legacy Organization Shared Dashboard IP filter to the Dashboard Security Policies for sharing dashboards will be done by creating an IP Filter corresponding to the old filter. If the migration can be performed, this IP Filter will be set on all shared dashboards and set as the Shared Dashboard IP filter Security Policy for the organization. If migration cannot be done, a notification will be displayed to the organization owner explaining how to complete the migration manually. Migration cannot be done when there is a shared dashboard that has an IP filter other than the legacy Organization Shared Dashboard IP filter.
Introducing organization query ownership, permission tokens and organization level security policies features.
For more information, see Organization Owned Queries, Repository & View Permissions, Security Policies.
UI Changes
Organization and system level permissions can now be handled through the UI.
Automation and Alerts
More attributes have been added to Filter alerts:
Filter alerts will now be able to catch up with up to 24 hours of delay (ingest delays + delays in actions).
Filter alerts will now trigger on events that are unavailable for up to 10 minutes due to query warnings.
For more information, see Filter Alerts.
A new option has been added for Alerts and Scheduled Searches. For more information, see Managing Alerts.
GraphQL API
A GraphQL API has been added to read the current tag groupings on a repository.
For more information, see repository() .
Configuration
The following configuration parameters have been added:
FILTER_ALERTS_MAX_CATCH_UP_LIMIT to set how far back filter alerts will be able to catch up with delays.
FILTER_ALERTS_MAX_WAIT_FOR_MISSING_DATA to set for how long filter alerts will wait for query warnings about missing data to disappear.
The following configuration parameters for storage concurrency are now deprecated:
GCP_STORAGE_UPLOAD_CONCURRENCY
GCP_STORAGE_DOWNLOAD_CONCURRENCY
They are replaced by new variables. If unassigned, the new variables will take the largest value from the deprecated variables, until the deprecated variables are removed.
Dashboards and Widgets
When clicking on a dashboard widget, the query will now use the live setting of the dashboard. Parameter values are also carried over. For more information, see Manage Widgets.
Log Collector
Quick filters have been added on the Fleet Overview (Status and Config) and Config overview (Status) pages. For more information, see Falcon Log Collector Manage your Fleet.
Other
A Tag groupings page is now available under the repository Settings tab to see the tag groupings currently in use on a repository.
Fixed in this release
Automation and Alerts
Filter alerts with a query ending with a comment would not run. This issue has now been fixed.
Dashboards and Widgets
The rendering of JSON in the Event List widget is now faster and consumes less memory.
When using the sort() function with the Bar Chart widget, it would only stay sorted for a while. The issue has been fixed and it now remains sorted in the same order as the underlying data.
Ingestion
A 500 status code was issued when ingesting to /api/v1/ingest/json with no assigned parser. It now ingests the rawstring.
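As an illustrative sketch of what a call to this endpoint looks like, the request below is assembled but not sent; the cluster URL, token, and event shape are assumptions for illustration, not taken from these notes — consult the LogScale ingest API documentation for the exact payload format.

```python
import json

BASE_URL = "https://example.logscale.local"   # hypothetical cluster address
INGEST_TOKEN = "YOUR-INGEST-TOKEN"            # placeholder, not a real token

def build_ingest_request(events):
    """Return the URL, headers, and JSON body for an ingest call
    to the /api/v1/ingest/json endpoint mentioned above."""
    url = f"{BASE_URL}/api/v1/ingest/json"
    headers = {
        "Authorization": f"Bearer {INGEST_TOKEN}",
        "Content-Type": "application/json",
    }
    body = json.dumps(events)
    return url, headers, body

# Build (but do not send) a request carrying one hypothetical event.
url, headers, body = build_ingest_request(
    [{"message": "user logged in", "level": "info"}]
)
print(url.endswith("/api/v1/ingest/json"))
```

Any HTTP client can then POST `body` to `url` with `headers`; with this release, an event arriving without an assigned parser is stored as the rawstring instead of failing with a 500.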
Falcon LogScale 1.96.0 GA (2023-06-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.96.0 | GA | 2023-06-27 | Cloud | 2024-08-31 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Upgrades
Changes that may occur or be required during an upgrade.
Other
The Kafka client has been upgraded to 3.4.1. The Kafka broker has been upgraded to 3.4.1 in the Kafka container.
New features and improvements
UI Changes
When duplicating an alert, you are now redirected straight to the New alert page.
For more information, see Reusing an Alert.
Filter alerts now have an updated In preview label, which no longer behaves like a button but shows a message when hovering over it.
GraphQL API
For the updateMaxAutoShardCount and blockIngest GraphQL mutations, it is no longer required to be root; instead, the caller must have the ManageCluster permission.
The userId input field on the updateDashboardToken mutation is now optional and deprecated in favor of the queryOwnershipType field. If userId is set to anything other than the calling user's ID, an exception will be thrown.
API
For auto sharding operations (GET, UPDATE, DELETE) in the Cluster Management API, it is no longer required to be root; instead, the caller must have the ManageCluster permission.
Fixed in this release
Dashboards and Widgets
When using Saved Queries in Interactions, the interaction would not be kept if the saved query was created from a template with the button. This issue is now fixed.
Description tips that were partly hidden in Table widgets are now correctly visualized in dashboards.
In Dashboard Links, the targeted dashboard could not display correctly if the dashboard was renamed. The issue has been fixed by using the dashboard ID instead of the name as reference.
Functions
Fixed an issue where syntax coloring and code completion would stop working in certain cases (using multiple saved queries, or an aggregate function in case).
Falcon LogScale 1.95.0 GA (2023-06-20)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.95.0 | GA | 2023-06-20 | Cloud | 2024-08-31 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Removed
Items that have been removed as of this release.
GraphQL API
The deprecated RegistryPackage datatype has been deleted, along with the deprecated mutations and fields using it:
installPackageFromRegistry mutation
updatePackageFromRegistry mutation
the package field in the Searchdomain datatype
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Permit running LogScale on Java 20. Docker containers have been upgraded to be based on Java 20.
Fixed in this release
UI Changes
Fixed an issue where query parameters would be extracted from comments in the query.
Fixed an error that was thrown when attempting to export fields to CSV containing spaces.
Fixed the default query prefixes which would override exceptions to default role bindings if no query prefix is set in the exceptions. The default query prefix set in the default role will now only impact views that are not defined as an exception to the default rule.
Dashboards and Widgets
Fixed the parameter form which could not be opened when asterisks were used as quoted identifiers in the query.
On charts, the legend tooltip was sometimes hidden towards the bottom of the chart. It has now been fixed to stay within the chart boundaries.
Other
BucketStorageUploadLatencyJob could incorrectly report that LogScale was falling behind on bucket uploads. This issue has been fixed.
Falcon LogScale 1.94.2 LTS (2023-11-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.94.2 | LTS | 2023-11-15 | Cloud | 2024-07-31 | No | 1.44.0 | No |
TAR Checksum | Value |
---|---|
MD5 | badc10344b739942bd0f02ca2fd26033 |
SHA1 | 5e47a9376b87d16d0fdd7120132cc3ecb9ba0a16 |
SHA256 | c47ac4e150334899a5fad7ef9aeb3cae759ad6e619781d5ee627fa05fb091dfa |
SHA512 | ebb165a31f919d57b20b802569f816a544a5fcfbfae4f5cb15d616a00316dbe353ed296ad388136739a6200fc5ae209f88a6dc7f913a72f6ac35a3cdd4029936 |
Docker Image | SHA256 Checksum |
---|---|
humio | cb5a118e0001da009a6234068e58c1aa3b873965ef2eac13ea0f28f0d388b49c |
humio-core | 3ec3fbe5a57b17f359240891783deb1b5f24b1576eabd6fdc1a874cb4499d78d |
kafka | 7329a337457b6e498a70be4bc3f7c6e516c8b77f17075209120d1be2c862db5c |
zookeeper | ddcd922ea39fa5b593d8501956883fb23ede29786f2241d1247493081c05bfe8 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.94.2/server-1.94.2.tar.gz
These notes include entries from the following previous releases: 1.94.0, 1.94.1
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Removed
Items that have been removed as of this release.
API
Some REST and GraphQL APIs are degraded and deprecated due to the introduction of AutomaticSegmentDistribution and AutomaticDigesterDistribution. The deprecated elements will be removed in a future release, once upgrade compatibility with version 1.88.0 is dropped. We expect this to be no earlier than September 2023.
The following REST endpoints are deprecated, as they no longer have an effect and return meaningless results:
api/v1/clusterconfig/segments/prune-replicas
api/v1/clusterconfig/segments/distribute-evenly
api/v1/clusterconfig/segments/distribute-evenly-reshuffle-all
api/v1/clusterconfig/segments/distribute-evenly-to-host
api/v1/clusterconfig/segments/distribute-evenly-from-host
api/v1/clusterconfig/segments/partitions
api/v1/clusterconfig/segments/partitions/setdefaults
api/v1/clusterconfig/segments/set-replication-defaults
api/v1/clusterconfig/partitions/setdefaults
api/v1/clusterconfig/ingestpartitions/distribute-evenly-from-host
api/v1/clusterconfig/ingestpartitions/setdefaults
api/v1/clusterconfig/ingestpartitions (POST only; GET will continue to work)
The following GraphQL mutations are deprecated, as they no longer have an effect and return meaningless results:
startDataRedistribution
updateStoragePartitionScheme
The IngestPartitionScheme mutation is not deprecated, but as it updates state that is overwritten by automation, we recommend against using it — it exists solely to serve as a debugging tool.
The following GraphQL fields on the cluster object are deprecated, and return meaningless values:
ingestPartitionsWarnings
suggestedIngestPartitions
storagePartitions
storagePartitionsWarnings
suggestedStoragePartitions
storageDivergence
The following fields in the return value of the api/v1/clusterconfig/segments/segment-stats endpoint are deprecated and degraded to always be 0:
reapply_targetSize
reapply_targetBytes
reapply_targetSegments
reapply_inboundBytes
reapply_inboundSegments
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Storage
Digest partition updates are now less aggressive when a node goes offline. When a node goes offline or online, creating a well-balanced table can require changes to partitions other than those where the changed node appears. This can cause more digest reassignment than we'd like, so we're changing the behavior of the automation. We'll now only generate optimally balanced tables in reaction to nodes being registered or unregistered from the cluster, and in reaction to the digest replication factor changing. The rest of the time, we'll take the previously generated balanced table as a starting point and make very minimal node replacements in it to ensure partitions are properly replicated to live nodes.
It is no longer allowed for nodes to delete bucketed mini-segments involved in queries off local disks before the queries are done. This should help ensure queries do not "miss" querying these files if they are deleted while a query is running.
Metadata on segments in memory is now represented in a manner that requires less memory at runtime after booting. For a cluster with many segments, the heap required for the global snapshot is in the range of 3-6 times the size of the snapshot on disk. This change reduces the memory requirements for long retention compared to previous versions. Note that for a short time during boot of a node, the memory requirement is closer to 10-15 times the size of the snapshot on disk.
Configuration
The NEW_INGEST_ONLY_NODE_SEMANTICS variable has been removed, since opting out of the new ingestonly behavior is no longer supported. The behavior has been the default since 1.79.0. For more information, see Falcon LogScale 1.79.0 GA (2023-02-28), LogScale Operational Architecture.
Upgrades
Changes that may occur or be required during an upgrade.
Security
xmlsec has been upgraded to 2.3.4 to address CVE-2023-44483.
New features and improvements
UI Changes
A new tutorial built on a dedicated demo data view is available for environments that do not have access to the legacy tutorial based on a sandbox repository.
The DeleteRepositoryOrView data permission is now visible in the UI on Cloud environments.
The Time Selector now only allows zooming out to approximately 4,000 years.
The ChangeRetention data permission is now enabled on Cloud environments.
When reaching the default capped output in the table() and sort() query functions, a warning now suggests you can set a new value using the limit parameter.
Documentation
A new LogScale Kubernetes Reference Architecture page has been added, describing the LogScale reference architecture when deploying LogScale using Kubernetes.
A new Regular Expression Syntax page has been added, with extended details of the supported regular expression syntax and the differences between the LogScale implementation and others such as Java and Perl.
Automation and Alerts
The Alert and Scheduled Search jobs no longer produce logs about specific alerts or scheduled searches in the humio repository. The logs are still sent to the humio-activity repository, which in a normal setup is also ingested into the humio repository. So before, the logs would normally be duplicated; now they are not. The only difference between the two types of logs is that the logs from the humio-activity repository all have loglevel equal to INFO. You can use the severity field instead to distinguish between the severity of the logs.
The possibility to mark alerts and scheduled searches as favorites has been removed.
Improvements in the layout of Alerts and Scheduled Searches, which now have updated forms.
The Actions overview now has quick filters for showing only actions of specific types.
The Scheduled Searches overview now shows the status of scheduled searches with a colored dot to make it easy to spot failing scheduled searches.
Improvements in the Alerts and Scheduled Searches permissions, which are now renamed to Run on behalf of, and have a clearer help text.
The Alerts overview now has quick filters for showing only standard alerts or filter alerts. It also shows the status of alerts with a colored dot to make it easy to spot failing alerts.
GraphQL API
The Usage page has been updated to support queries that are in progress for longer than the GraphQL timeout allows.
The semantics of the SolitarySegmentSize field on the ClusterNode datatype have changed: it now counts bytes that only exist on that node, rather than bytes that only exist on that node and have been underreplicated for a while.
The GraphQL schema for UsageStats has been updated to reflect that queries can be in progress.
Mutations enableAlert and disableAlert have been added for enabling and disabling an alert without changing other fields.
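The mutation names enableAlert and disableAlert come from the notes above; the sketch below composes a GraphQL request body for them. The argument names (viewName, id) and the request shape are assumptions for illustration — check the LogScale GraphQL schema for the exact signature.

```python
import json

def build_alert_toggle_payload(view_name, alert_id, enable=True):
    """Compose a GraphQL request body for enableAlert or disableAlert.

    Only the mutation names are taken from the release notes; the input
    field names here are hypothetical.
    """
    mutation_name = "enableAlert" if enable else "disableAlert"
    query = (
        f"mutation($viewName: String!, $id: String!) {{\n"
        f"  {mutation_name}(input: {{viewName: $viewName, id: $id}})\n"
        f"}}"
    )
    return json.dumps({
        "query": query,
        "variables": {"viewName": view_name, "id": alert_id},
    })

# Build a payload that would enable a hypothetical alert.
payload = build_alert_toggle_payload("my-repo", "alert-123")
print("enableAlert" in payload)
```

The resulting JSON string is what a client would POST to the GraphQL endpoint; toggling `enable=False` swaps in the disableAlert mutation without touching any other alert fields.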
Configuration
Setting the SHARED_DASHBOARDS_ENABLED environment variable to false now disables the option of creating links for sharing dashboards. For more information, see Disabling Access to Shared Dashboards.
Added support for using Google Cloud Storage via Workload Identity, rather than an explicit service account, for bucket storage and for export of query results to a bucket.
For more information, see Google Cloud Bucket Storage with Workload Identity.
The new MAX_EVENT_FIELD_COUNT_IN_PARSER variable is introduced to control the number of fields allowed within the parser, but not when storing the event.
Dashboards and Widgets
New parsing of Template Expressions has been implemented in the UI for improved performance.
When creating or editing interactions you can now visualize any unused parameter bindings, with the option to remove them.
For more information, see Unused parameters bindings.
Improved performance on the Search page, especially when events contain large JSON objects.
A new limit of 49 series has been set when using wide format data (one field per series) in the Scatter Chart Widget (the first field is always the x axis). No such limit applies to long format data (series defined by one groupby column).
The empty list alias is now available as an input option for parameter bindings, so that Multi-value Parameters can be set explicitly to have the value of an empty list. For more information, see Empty list alias.
Parameter labels are now used instead of parameter IDs when displaying the list of parameters that a widget / query is waiting on.
Ingestion
Parser timeouts have been changed to take thread time into account. This should make parsers more resilient to long Garbage Collector stalls.
For more information, see Parser Timeout.
Log Collector
Added a new test status for configurations, which allows you to try out a configuration on one or more instances before it's published.
For more information, see Test a Remote Configuration.
Functions
Performance improvements when using the regex() function or regex syntax.
In the parseTimestamp() function, special format specifiers like seconds are now recognized independently of capitalization, to allow case-insensitive matches.
Other
Reduced the amount of memory used when multiple queries use the match() function with the same arguments. Before, if you ran many queries that used the same file, the contents of the file would be represented multiple times in memory, once for each query. This could put you at risk of exhausting the server's memory if the files were large. With this change, the file contents are shared between the queries and represented only once. This enables the server to run more queries and/or handle larger files. For more information, see Lookup Files Operations.
When the Kafka broker set changes at runtime, track that set and use as bootstrap servers for Kafka whenever LogScale needs to create a new Kafka client at runtime. This allows replacing all Kafka brokers (incrementally, moving their work to new servers) without restarting LogScale. Note that the set is not persisted across restart of LogScale, so when restarting LogScale, make sure to provide an up to date set of bootstrap servers.
The following cluster management features are now enabled:
AutomaticJobDistribution
AutomaticDigesterDistribution
AutomaticSegmentDistribution
For more information, see Digest Rules.
Fixed in this release
UI Changes
Turned off the light bulb in the query editor as it was causing technical issues.
Fixed an issue where the filter would remain applied in the saved or recent queries when switching tabs in the menu.
Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.
Fixed the order of the timezones in the timezone dropdown on the Search and Dashboards pages.
An error about lacking permissions that appeared when updating the organization settings has been fixed. Now, if you have permission to view the Organization Settings page, you can also update information on it.
Dashboards and Widgets
Labels of FixedList Parameter values have been fixed, so that they default to the value instead of rendering an empty string.
Fixed an issue where certain widget options would be ignored when importing a dashboard template or installing a package.
The following issues have been fixed on dashboards:
A dashboard would sometimes be perceived as changed on the server even though it was not.
A Discard unsaved changes prompt would appear when creating and applying new parameters.
Fixed the Manage interactions page, where Event List Interactions were not scrollable.
Fixed a wrong behavior on the Interactions overview page when creating a new interaction: if the interaction panel was open, the repository options dropdown would appear in it instead of in the Create new interaction dialog.
Queries
An edge case has been fixed where query workers could fail to include mini-segments if the mini-segments were merged at a bad time, causing queries to be missing the data in those segments.
Functions
The select() function has been fixed as it wasn't preserving tags.
The format() function has been fixed, as the combination of the hexadecimal modifier with grouping would not always work.
The rename() function would drop the field if the field and as arguments were identical; this issue has now been fixed.
The regex engine has been fixed for issues impacting nested repeats and giving false negatives, as in expressions such as (x{2}:){3}.
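To illustrate the kind of expression affected, here is the nested-repeat pattern from the entry above, checked against Python's re engine — used purely as a reference for the expected behavior, not as LogScale's engine:

```python
import re

# The expression from the fix above: (x{2}:){3} repeats the group "xx:"
# exactly three times. A correct engine must match three repeats with no
# false negatives.
pattern = re.compile(r"(x{2}:){3}")

assert pattern.fullmatch("xx:xx:xx:") is not None   # three repeats: match
assert pattern.fullmatch("xx:xx:") is None          # only two repeats: no match
print("nested repeat checks passed")
```

The fixed LogScale engine should agree with both assertions; before the fix, expressions like this could fail to match input they should have accepted.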
Other
Some merged segments could temporarily be missing from query results right after an ephemeral node reboot. This issue has been fixed.
The following Node-Level Metrics that showed incorrect results are now fixed: primary-disk-usage, secondary-disk-usage, cluster-time-skew, temp-disk-usage-bytes.
Fixed an issue that could cause segments to appear missing in queries, due to the presence of deleted mini-segments with the same target as live mini-segments.
Early Access
Automation and Alerts
This release includes filter alerts in Early Access. Filter alerts aim to replace existing alerts for use cases where the query does not contain any aggregates.
Filter alerts:
Trigger on individual events and send notifications per event.
Guarantee at-least-once delivery of events to actions, within the limits described below.
Currently only support delays (ingest delays + delays in actions) of up to 1 hour and limit the number of notifications to 15 per minute per alert. Before filter alerts go into public GA, those limits will be raised.
For more information, see Alerts.
Falcon LogScale 1.94.1 LTS (2023-10-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.94.1 | LTS | 2023-10-28 | Cloud | 2024-07-31 | No | 1.44.0 | No |
TAR Checksum | Value |
---|---|
MD5 | eb464db335e468ebc1e4d5ce2c4c9bbb |
SHA1 | 50539def54f28defd1e7609dcf3a914dfc5af021 |
SHA256 | 815ece20b80e4bd40f98891377cc5c5a798fb6d1459584eae33ac1f3dad88adc |
SHA512 | f3bbc09f1d861b0cee819f669ea9341b52d3d0a56250307804a50f8338b36f89270806708a623404d97a36f726ad2677036b52ca137bef2f3f1f31902a4b8e88 |
Docker Image | SHA256 Checksum |
---|---|
humio | 14f52128e6db97854786d535a67c95a07bba628961e565d55382a9a4ba85a8e7 |
humio-core | 14f52128e6db97854786d535a67c95a07bba628961e565d55382a9a4ba85a8e7 |
kafka | d219b84fe3feb2a3f10da27c33a4471584f50296306f9e740d6469e4d38a04c6 |
zookeeper | a3e76d48d8029aa0579928ee260e5a8aa92ac7efbea4daa8d67339929506ce91 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.94.1/server-1.94.1.tar.gz
These notes include entries from the following previous releases: 1.94.0
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Removed
Items that have been removed as of this release.
API
Some REST and GraphQL APIs are degraded and deprecated due to the introduction of AutomaticSegmentDistribution and AutomaticDigesterDistribution. The deprecated elements will be removed in a future release, once upgrade compatibility with version 1.88.0 is dropped. We expect this to be no earlier than September 2023.
The following REST endpoints are deprecated, as they no longer have an effect and return meaningless results:
api/v1/clusterconfig/segments/prune-replicas
api/v1/clusterconfig/segments/distribute-evenly
api/v1/clusterconfig/segments/distribute-evenly-reshuffle-all
api/v1/clusterconfig/segments/distribute-evenly-to-host
api/v1/clusterconfig/segments/distribute-evenly-from-host
api/v1/clusterconfig/segments/partitions
api/v1/clusterconfig/segments/partitions/setdefaults
api/v1/clusterconfig/segments/set-replication-defaults
api/v1/clusterconfig/partitions/setdefaults
api/v1/clusterconfig/ingestpartitions/distribute-evenly-from-host
api/v1/clusterconfig/ingestpartitions/setdefaults
api/v1/clusterconfig/ingestpartitions (POST only; GET will continue to work)
The following GraphQL mutations are deprecated, as they no longer have an effect and return meaningless results:
startDataRedistribution
updateStoragePartitionScheme
The IngestPartitionScheme mutation is not deprecated, but as it updates state that is overwritten by automation, we recommend against using it — it exists solely to serve as a debugging tool.
The following GraphQL fields on the cluster object are deprecated, and return meaningless values:
ingestPartitionsWarnings
suggestedIngestPartitions
storagePartitions
storagePartitionsWarnings
suggestedStoragePartitions
storageDivergence
The following fields in the return value of the api/v1/clusterconfig/segments/segment-stats endpoint are deprecated and degraded to always be 0:
reapply_targetSize
reapply_targetBytes
reapply_targetSegments
reapply_inboundBytes
reapply_inboundSegments
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Storage
Digest partition updates are now less aggressive when a node goes offline. When a node goes offline or online, creating a well-balanced table can require changes to partitions other than those where the changed node appears. This can cause more digest reassignment than we'd like, so we're changing the behavior of the automation. We'll now only generate optimally balanced tables in reaction to nodes being registered or unregistered from the cluster, and in reaction to the digest replication factor changing. The rest of the time, we'll take the previously generated balanced table as a starting point and make very minimal node replacements in it to ensure partitions are properly replicated to live nodes.
It is no longer allowed for nodes to delete bucketed mini-segments involved in queries off local disks before the queries are done. This should help ensure queries do not "miss" querying these files if they are deleted while a query is running.
Metadata on segments in memory is now represented in a manner that requires less memory at runtime after booting. For a cluster with many segments, the heap required for the global snapshot is in the range of 3-6 times the size of the snapshot on disk. This change reduces the memory requirements for long retention compared to previous versions. Note that for a short time during boot of a node, the memory requirement is closer to 10-15 times the size of the snapshot on disk.
Configuration
The NEW_INGEST_ONLY_NODE_SEMANTICS variable has been removed, since opting out of the new ingestonly behavior is no longer supported. The behavior has been the default since 1.79.0. For more information, see Falcon LogScale 1.79.0 GA (2023-02-28), LogScale Operational Architecture.
New features and improvements
UI Changes
A new tutorial built on a dedicated demo data view is available for environments that do not have access to the legacy tutorial based on a sandbox repository.
The DeleteRepositoryOrView data permission is now visible in the UI on Cloud environments.
The Time Selector now only allows zooming out to approximately 4,000 years.
The ChangeRetention data permission is now enabled on Cloud environments.
When reaching the default capped output in the table() and sort() query functions, a warning now suggests you can set a new value using the limit parameter.
Documentation
A new LogScale Kubernetes Reference Architecture page has been added, describing the LogScale reference architecture when deploying LogScale using Kubernetes.
A new Regular Expression Syntax page has been added, with extended details of the supported regular expression syntax and the differences between the LogScale implementation and others such as Java and Perl.
Automation and Alerts
The Alert and Scheduled Search jobs no longer produce logs about specific alerts or scheduled searches in the humio repository. The logs are still sent to the humio-activity repository, which in a normal setup is also ingested into the humio repository. So before, the logs would normally be duplicated; now they are not. The only difference between the two types of logs is that the logs from the humio-activity repository all have loglevel equal to INFO. You can use the severity field instead to distinguish between the severity of the logs.
The possibility to mark alerts and scheduled searches as favorites has been removed.
Improvements in the layout of Alerts and Scheduled Searches, which now have updated forms.
The Actions overview now has quick filters for showing only actions of specific types.
The Scheduled Searches overview now shows the status of scheduled searches with a colored dot to make it easy to spot failing scheduled searches.
Improvements in the Alerts and Scheduled Searches permissions, which are now renamed to Run on behalf of, and have a clearer help text.
The Alerts overview now has quick filters for showing only standard alerts or filter alerts. It also shows the status of alerts with a colored dot to make it easy to spot failing alerts.
GraphQL API
The Usage page has been updated to support queries that are in progress for longer than the GraphQL timeout allows.
The semantics of the SolitarySegmentSize field on the ClusterNode datatype have changed: it now counts bytes that only exist on that node, rather than bytes that only exist on that node and have been underreplicated for a while.
The GraphQL schema for UsageStats has been updated to reflect that queries can be in progress.
Mutations enableAlert and disableAlert have been added for enabling and disabling an alert without changing other fields.
Configuration
Setting the SHARED_DASHBOARDS_ENABLED environment variable to false now disables the option of creating links for sharing dashboards. For more information, see Disabling Access to Shared Dashboards.
Added support for using Google Cloud Storage via Workload Identity, rather than an explicit service account, for bucket storage and for export of query results to a bucket.
For more information, see Google Cloud Bucket Storage with Workload Identity.
The new MAX_EVENT_FIELD_COUNT_IN_PARSER variable is introduced to control the number of fields allowed within the parser, but not when storing the event.
Dashboards and Widgets
New parsing of Template Expressions has been implemented in the UI for improved performance.
When creating or editing interactions you can now visualize any unused parameter bindings, with the option to remove them.
For more information, see Unused parameters bindings.
Improved performance on the Search page, especially when events contain large JSON objects.
A new limit of 49 series has been set when using wide format data (one field per series) in the Scatter Chart Widget (the first field is always the x axis). No such limit applies to long format data (series defined by one groupby column).
The empty list alias is now available as an input option for parameter bindings, so that Multi-value Parameters can be set explicitly to have the value of an empty list. For more information, see Empty list alias.
Parameter labels are now used instead of parameter IDs when displaying the list of parameters that a widget / query is waiting on.
Ingestion
Parser timeouts have been changed to take thread time into account. This should make parsers more resilient to long Garbage Collector stalls.
For more information, see Parser Timeout.
Log Collector
Added a new test status for configurations, which allows you to try out a configuration on one or more instances before it's published.
For more information, see Test a Remote Configuration.
Functions
Performance improvements when using the regex() function or regex syntax.
In the parseTimestamp() function, special format specifiers like seconds are now recognized independently of capitalization, to allow case-insensitive matches.
Other
Reduced the amount of memory used when multiple queries use the match() function with the same arguments. Before, if you ran many queries that used the same file, the contents of the file would be represented multiple times in memory, once for each query. This could put you at risk of exhausting the server's memory if the files were large. With this change, the file contents are shared between the queries and represented only once. This enables the server to run more queries and/or handle larger files. For more information, see Lookup Files Operations.
When the Kafka broker set changes at runtime, track that set and use as bootstrap servers for Kafka whenever LogScale needs to create a new Kafka client at runtime. This allows replacing all Kafka brokers (incrementally, moving their work to new servers) without restarting LogScale. Note that the set is not persisted across restart of LogScale, so when restarting LogScale, make sure to provide an up to date set of bootstrap servers.
The following cluster management features are now enabled:
AutomaticJobDistribution
AutomaticDigesterDistribution
AutomaticSegmentDistribution
For more information, see Digest Rules.
Fixed in this release
UI Changes
Turned off the light bulb in the query editor as it was causing technical issues.
Fixed an issue where the filter would remain applied in the saved or recent queries when switching tabs in the menu.
The Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight saving time.
Fixed the order of the timezones in the timezone dropdown on the Search and Dashboards pages.
An error about lacking permissions that appeared when updating the organization settings has been fixed. Now, if you have permissions to view the Organization Settings page, you can also update information on it.
Automation and Alerts
Dashboards and Widgets
Labels of FixedList Parameter values have been fixed, so that they default to the value instead of rendering an empty string.
Fixed an issue where certain widget options would be ignored when importing a dashboard template or installing a package.
The following issues have been fixed on dashboards:
A dashboard would sometimes be perceived as changed on the server even though it was not.
Discard unsaved changes would appear when creating and applying new parameters.
Fixed the Manage interactions page, where Event List Interactions were not scrollable.
Fixed wrong behaviour on the Interactions overview page when creating a new interaction: if the interaction panel was open, the repository options dropdown would open in it instead of in the Create new interaction dialog.
Queries
An edge case has been fixed where query workers could fail to include mini-segments if the mini-segments were merged at a bad time, causing queries to be missing the data in those segments.
Functions
The select() function has been fixed as it wasn't preserving tags.
The format() function has been fixed, as the combination of the hexadecimal modifier with grouping would not always work.
The rename() function would drop the field if the field and as arguments were identical; this issue has now been fixed.
The regex engine has been fixed for issues impacting nested repeats and giving false negatives, as in expressions such as (x{2}:){3}.
Other
Some merged segments could temporarily be missing from query results right after an ephemeral node reboot. This issue has been fixed.
The following Node-Level Metrics that showed incorrect results are now fixed: primary-disk-usage, secondary-disk-usage, cluster-time-skew, temp-disk-usage-bytes.
Fixed an issue that could cause segments to appear missing in queries, due to the presence of deleted mini-segments with the same target as live mini-segments.
Early Access
Automation and Alerts
This release includes filter alerts in Early Access. Filter alerts aim to replace existing alerts for use cases where the query does not contain any aggregates.
Filter alerts:
Trigger on individual events and send notifications per event.
Guarantee at-least-once delivery of events to actions, within the limits described below.
Currently only support delays (ingest delays plus delays in actions) of up to 1 hour and limit the number of notifications to 15 per minute per alert. Before the feature moves out of Early Access, those limits will be raised.
For more information, see Alerts.
Falcon LogScale 1.94.0 LTS (2023-07-05)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.94.0 | LTS | 2023-07-05 | Cloud | 2024-07-31 | No | 1.44.0 | No |
TAR Checksum | Value |
---|---|
MD5 | a100dcfdab967319d89d19bef26db9af |
SHA1 | 6a811035c3b79b48cdc30d81d1acbe94dd31e118 |
SHA256 | ee5d7491b7dbf0622d95b4382e9061699f7d10644f607309cf3f3c06976539f4 |
SHA512 | 62b92b63446626fbbf063f1b249efbd38dc87da939b1a17f5c47884e02e8825ff4f6f03edf0c49117a005e3c9852377638a1dc6e71e5ba11f748800db293ea00 |
Docker Image | SHA256 Checksum |
---|---|
humio | a7f0df994aa81ffe6c417d2e1ca7a86a300a6ae1c9c17d3415cedbeb4315c686 |
humio-core | feb4b24681e28deb6f415d518624c665a92451cea5dad54b75fd81351ef3dadc |
kafka | 9c0d4fa13b873432c405f26a58df36f81893f30dc9a1f11e92cc033d2801e208 |
zookeeper | 7fe8f047891922b3180c7d138f2ab28935f184e22b24e9bdbe436b0ab910de65 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.94.0/server-1.94.0.tar.gz
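The published checksums above can be used to verify a download before installation. A minimal sketch in Python (the helper names are our own; pass it the tarball from the download link, e.g. server-1.94.0.tar.gz):

```python
import hashlib

# Compute the SHA256 hex digest of arbitrary bytes.
def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Compare a downloaded file against a published checksum.
def verify_tarball(path: str, expected: str) -> bool:
    with open(path, "rb") as f:
        return sha256_hex(f.read()) == expected.lower()
```

The same approach works for the MD5/SHA1/SHA512 values by swapping the hashlib constructor.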
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Removed
Items that have been removed as of this release.
API
Degrade and deprecate some REST and GraphQL APIs due to the introduction of AutomaticSegmentDistribution and AutomaticDigesterDistribution. The deprecated elements will be removed in a future release, once the upgrade compatibility with version 1.88.0 is dropped. We expect this to be no earlier than September 2023.
The following REST endpoints are deprecated, as they no longer have an effect and return meaningless results:
api/v1/clusterconfig/segments/prune-replicas
api/v1/clusterconfig/segments/distribute-evenly
api/v1/clusterconfig/segments/distribute-evenly-reshuffle-all
api/v1/clusterconfig/segments/distribute-evenly-to-host
api/v1/clusterconfig/segments/distribute-evenly-from-host
api/v1/clusterconfig/segments/partitions
api/v1/clusterconfig/segments/partitions/setdefaults
api/v1/clusterconfig/segments/set-replication-defaults
api/v1/clusterconfig/partitions/setdefaults
api/v1/clusterconfig/ingestpartitions/distribute-evenly-from-host
api/v1/clusterconfig/ingestpartitions/setdefaults
api/v1/clusterconfig/ingestpartitions (POST only, GET will continue to work)
The following GraphQL mutations are deprecated, as they no longer have an effect and return meaningless results:
startDataRedistribution
updateStoragePartitionScheme
The IngestPartitionScheme mutation is not deprecated, but as it updates state that is overwritten by automation, we recommend against using it — it exists solely to serve as a debugging tool.
The following GraphQL fields on the cluster object are deprecated, and return meaningless values:
ingestPartitionsWarnings
suggestedIngestPartitions
storagePartitions
storagePartitionsWarnings
suggestedStoragePartitions
storageDivergence
reapply_targetSize
The following fields in the return value of the api/v1/clusterconfig/segments/segment-stats endpoint are deprecated and degraded to always be 0:
reapply_targetBytes
reapply_targetSegments
reapply_inboundBytes
reapply_inboundSegments
Behavior Changes
Scripts or environments which make use of these tools should be checked and updated for the new configuration:
Storage
Be less aggressive about updating the digest partitions when a node goes offline. When a node goes offline or online, creating a well-balanced table can require changes to partitions other than those where the changed node appears. This can cause more digest reassignment than we'd like, so we're changing the behavior of the automation. We'll now only generate optimally balanced tables in reaction to nodes being registered or unregistered from the cluster, and in reaction to the digest replication factor changing. The rest of the time, we'll take the previously generated balanced table as a starting point, and do very minimal node replacements in it to ensure partitions are properly replicated to live nodes.
It is no longer allowed for nodes to delete bucketed mini-segments involved in queries off local disks before the queries are done. This should help ensure queries do not "miss" querying these files if they are deleted while a query is running.
Metadata on segments in memory is now represented in a manner that requires less memory at runtime after booting. The heap required for the global snapshot is in the range of 3-6 times the size of the snapshot on disk, for a cluster with many segments. This change reduces the memory requirements for long retention compared to previous versions. Note that for a short time during boot of a node the memory requirement is closer to 10-15 times the size of the snapshot on disk.
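The "minimal node replacement" strategy described above can be illustrated with a small sketch. All names are invented for illustration and the real balancing logic is internal to LogScale; the point is that only dead entries in a previously balanced table are replaced, rather than regenerating the whole table:

```python
from collections import Counter

# table: partition id -> list of replica node ids (the previously balanced table)
# live:  set of node ids currently alive
def repair_partitions(table, live):
    load = Counter(n for nodes in table.values() for n in nodes if n in live)
    repaired = {}
    for part, nodes in table.items():
        new_nodes = []
        for n in nodes:
            if n in live:
                new_nodes.append(n)  # keep live replicas exactly where they are
                continue
            # Replace a dead node with the least-loaded live node that is
            # not already serving this partition (ties broken by name).
            candidates = sorted(
                (c for c in live if c not in new_nodes and c not in nodes),
                key=lambda c: (load[c], c),
            )
            if candidates:
                new_nodes.append(candidates[0])
                load[candidates[0]] += 1
        repaired[part] = new_nodes
    return repaired
```

Because live replicas are never moved, a node flapping offline and back causes far less digest reassignment than rebuilding an optimally balanced table each time.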
Configuration
Remove NEW_INGEST_ONLY_NODE_SEMANTICS since we no longer support opting out of the new ingestonly behavior. The behavior has been the default since 1.79.0.
For more information, see Falcon LogScale 1.79.0 GA (2023-02-28), LogScale Operational Architecture.
New features and improvements
UI Changes
A new tutorial built on a dedicated demo data view is available for environments that do not have access to the legacy tutorial based on a sandbox repository.
The DeleteRepositoryOrView data permission is now visible in the UI on Cloud environments.
The Time Selector now only allows zooming out to approximately 4,000 years.
The ChangeRetention data permission is now enabled on Cloud environments.
When reaching the default capped output in the table() and sort() query functions, a warning now suggests you can set a new value using the limit parameter.
Documentation
A new LogScale Kubernetes Reference Architecture page has been added, describing the LogScale reference architecture when deploying LogScale using Kubernetes.
A new Regular Expression Syntax page has been added with extended details of the supported regular expression syntax and the differences between LogScale's support and other implementations such as Java and Perl.
Automation and Alerts
The Alert and Scheduled Search jobs no longer produce logs about specific alerts or scheduled searches in the humio repository. The logs are still sent to the humio-activity repository, which in a normal setup is also ingested into the humio repository. So before, the logs would normally be duplicated; now they are not. The only difference between the two types of logs is that the logs from the humio-activity repository all have loglevel equal to INFO. You can use the severity field instead to distinguish between the severity of the logs.
The possibility to mark alerts and scheduled searches as favorites has been removed.
Improvements in the layout of Alerts and Scheduled Searches, which now have updated forms.
The Actions overview now has quick filters for showing only actions of specific types.
The Scheduled Searches overview now shows the status of scheduled searches with a colored dot to make it easy to spot failing scheduled searches.
Improvements in the Alerts and Scheduled Searches permissions, which are now renamed to Run on behalf of, and have a more clarifying help text.
The Alerts overview now has quick filters for showing only standard alerts or filter alerts. It also shows the status of alerts with a colored dot to make it easy to spot failing alerts.
GraphQL API
The Usage page has been updated to support queries that are in progress for longer than the GraphQL timeout allows.
The semantics of the field SolitarySegmentSize on the ClusterNode datatype has changed from counting bytes that only exist on that node and which have been underreplicated for a while, to counting bytes that only exist on that node.
The GraphQL schema for UsageStats has been updated to reflect that queries can be in progress.
Mutations enableAlert and disableAlert have been added for enabling and disabling an alert without changing other fields.
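As a sketch, such a mutation could be invoked with a plain GraphQL POST body. Only the mutation names (enableAlert, disableAlert) come from this entry; the argument name and its type are assumptions for illustration:

```python
import json

# Build a GraphQL request body toggling an alert on or off.
# The "id" argument and its type are assumed, not taken from the release notes.
def alert_toggle_payload(alert_id: str, enable: bool = True) -> str:
    mutation = "enableAlert" if enable else "disableAlert"
    query = f"mutation Toggle($id: String!) {{ {mutation}(id: $id) }}"
    return json.dumps({"query": query, "variables": {"id": alert_id}})
```

The resulting string would be POSTed to the GraphQL endpoint with the usual authorization headers.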
Configuration
Setting the SHARED_DASHBOARDS_ENABLED environment variable to false now disables the option of creating links for sharing dashboards.
For more information, see Disabling Access to Shared Dashboards.
Added support for using Google Cloud Storage access via Workload Identity rather than an explicit service account, for bucket storage and for export of query results to a bucket.
For more information, see Google Cloud Bucket Storage with Workload Identity.
The new MAX_EVENT_FIELD_COUNT_IN_PARSER configuration variable is introduced to control the number of fields allowed within the parser, but not when storing the event.
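For example, the variable would be set in the LogScale environment configuration like any other option; the value shown here is purely illustrative, not a documented default:

```
MAX_EVENT_FIELD_COUNT_IN_PARSER=1000
```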
Dashboards and Widgets
New parsing of Template Expressions has been implemented in the UI for improved performance.
When creating or editing interactions you can now visualize any unused parameter bindings, with the option to remove them.
For more information, see Unused parameters bindings.
Improved performance on the Search page, especially when events contain large JSON objects.
A new limit of 49 series has been set when using the wide format data (one field per series) in the Scatter Chart Widget (the first field is always the x axis). No such limit applies to long format data (series defined by one groupby column).
The empty list alias is now available as an input option for parameter bindings, so that Multi-value Parameters can be set explicitly to have the value of an empty list.
For more information, see Empty list alias.
Parameter labels are now used instead of parameter IDs when displaying the list of parameters that a widget / query is waiting on.
Ingestion
Parser timeouts have been changed to take thread time into account. This should make parsers more resilient to long Garbage Collector stalls.
For more information, see Parser Timeout.
Log Collector
Added a new test status for configurations, which allows you to try out a configuration on one or more instances before it's published.
For more information, see Test a Remote Configuration.
Functions
Performance improvements when using the regex() function or regex syntax.
In the parseTimestamp() function, special format specifiers like seconds are now recognized independently of capitalization to allow case-insensitive matching.
Other
Reduced the amount of memory used when multiple queries use the match() function with the same arguments. Before, if you ran many queries that used the same file, the contents of the file would be represented multiple times in memory, once for each query. This could put you at risk of exhausting the server's memory if the files were large. With this change the file contents are shared between the queries and represented only once. This enables the server to run more queries and/or handle larger files.
For more information, see Lookup Files Operations.
When the Kafka broker set changes at runtime, track that set and use as bootstrap servers for Kafka whenever LogScale needs to create a new Kafka client at runtime. This allows replacing all Kafka brokers (incrementally, moving their work to new servers) without restarting LogScale. Note that the set is not persisted across restart of LogScale, so when restarting LogScale, make sure to provide an up to date set of bootstrap servers.
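When restarting after a full broker replacement, the up-to-date bootstrap list is supplied through configuration. Assuming the KAFKA_SERVERS environment variable used by LogScale, with purely illustrative host names:

```
KAFKA_SERVERS=new-kafka-1:9092,new-kafka-2:9092,new-kafka-3:9092
```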
The following cluster management features are now enabled:
AutomaticJobDistribution
AutomaticDigesterDistribution
AutomaticSegmentDistribution
For more information, see Digest Rules.
Fixed in this release
UI Changes
Turned off the light bulb in the query editor as it was causing technical issues.
Fixed an issue where the filter would remain applied in the saved or recent queries when switching tabs in the menu.
Fixed the order of the timezones in the timezone dropdown on the Search and Dashboards pages.
An error about lacking permissions that appeared when updating the organization settings has been fixed. Now, if you have permissions to view the Organization Settings page, you can also update information on it.
Automation and Alerts
Dashboards and Widgets
Labels of FixedList Parameter values have been fixed, so that they default to the value instead of rendering an empty string.
Fixed an issue where certain widget options would be ignored when importing a dashboard template or installing a package.
The following issues have been fixed on dashboards:
A dashboard would sometimes be perceived as changed on the server even though it was not.
Discard unsaved changes would appear when creating and applying new parameters.
Fixed the Manage interactions page, where Event List Interactions were not scrollable.
Fixed wrong behaviour on the Interactions overview page when creating a new interaction: if the interaction panel was open, the repository options dropdown would open in it instead of in the Create new interaction dialog.
Queries
An edge case has been fixed where query workers could fail to include mini-segments if the mini-segments were merged at a bad time, causing queries to be missing the data in those segments.
Functions
The select() function has been fixed as it wasn't preserving tags.
The format() function has been fixed, as the combination of the hexadecimal modifier with grouping would not always work.
The rename() function would drop the field if the field and as arguments were identical; this issue has now been fixed.
The regex engine has been fixed for issues impacting nested repeats and giving false negatives, as in expressions such as (x{2}:){3}.
Other
Some merged segments could temporarily be missing from query results right after an ephemeral node reboot. This issue has been fixed.
The following Node-Level Metrics that showed incorrect results are now fixed: primary-disk-usage, secondary-disk-usage, cluster-time-skew, temp-disk-usage-bytes.
Fixed an issue that could cause segments to appear missing in queries, due to the presence of deleted mini-segments with the same target as live mini-segments.
Early Access
Automation and Alerts
This release includes filter alerts in Early Access. Filter alerts aim to replace existing alerts for use cases where the query does not contain any aggregates.
Filter alerts:
Trigger on individual events and send notifications per event.
Guarantee at-least-once delivery of events to actions, within the limits described below.
Currently only support delays (ingest delays plus delays in actions) of up to 1 hour and limit the number of notifications to 15 per minute per alert. Before the feature moves out of Early Access, those limits will be raised.
For more information, see Alerts.
Falcon LogScale 1.93.0 GA (2023-06-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.93.0 | GA | 2023-06-06 | Cloud | 2024-07-31 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
New features and improvements
Automation and Alerts
The possibility to mark alerts and scheduled searches as favorites has been removed.
Improvements in the layout of Alerts and Scheduled Searches, which now have updated forms.
The Actions overview now has quick filters for showing only actions of specific types.
The Scheduled Searches overview now shows the status of scheduled searches with a colored dot to make it easy to spot failing scheduled searches.
Improvements in the Alerts and Scheduled Searches permissions, which are now renamed to Run on behalf of, and have a more clarifying help text.
The Alerts overview now has quick filters for showing only standard alerts or filter alerts. It also shows the status of alerts with a colored dot to make it easy to spot failing alerts.
GraphQL API
The semantics of the field SolitarySegmentSize on the ClusterNode datatype has changed from counting bytes that only exist on that node and which have been underreplicated for a while, to counting bytes that only exist on that node.
Dashboards and Widgets
Improved performance on the Search page, especially when events contain large JSON objects.
A new limit of 49 series has been set when using the wide format data (one field per series) in the Scatter Chart Widget (the first field is always the x axis). No such limit applies to long format data (series defined by one groupby column).
Ingestion
Parser timeouts have been changed to take thread time into account. This should make parsers more resilient to long Garbage Collector stalls.
For more information, see Parser Timeout.
Fixed in this release
Dashboards and Widgets
Labels of FixedList Parameter values have been fixed, so that they default to the value instead of rendering an empty string.
Functions
The format() function has been fixed, as the combination of the hexadecimal modifier with grouping would not always work.
Early Access
Automation and Alerts
This release includes filter alerts in Early Access. Filter alerts aim to replace existing alerts for use cases where the query does not contain any aggregates.
Filter alerts:
Trigger on individual events and send notifications per event.
Guarantee at-least-once delivery of events to actions, within the limits described below.
Currently only support delays (ingest delays plus delays in actions) of up to 1 hour and limit the number of notifications to 15 per minute per alert. Before the feature moves out of Early Access, those limits will be raised.
For more information, see Alerts.
Falcon LogScale 1.92.0 GA (2023-05-30)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.92.0 | GA | 2023-05-30 | Cloud | 2024-07-31 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Behavior Changes
Scripts or environments which make use of these tools should be checked and updated for the new configuration:
Storage
Be less aggressive about updating the digest partitions when a node goes offline. When a node goes offline or online, creating a well-balanced table can require changes to partitions other than those where the changed node appears. This can cause more digest reassignment than we'd like, so we're changing the behavior of the automation. We'll now only generate optimally balanced tables in reaction to nodes being registered or unregistered from the cluster, and in reaction to the digest replication factor changing. The rest of the time, we'll take the previously generated balanced table as a starting point, and do very minimal node replacements in it to ensure partitions are properly replicated to live nodes.
It is no longer allowed for nodes to delete bucketed mini-segments involved in queries off local disks before the queries are done. This should help ensure queries do not "miss" querying these files if they are deleted while a query is running.
Metadata on segments in memory is now represented in a manner that requires less memory at runtime after booting. The heap required for the global snapshot is in the range of 3-6 times the size of the snapshot on disk, for a cluster with many segments. This change reduces the memory requirements for long retention compared to previous versions. Note that for a short time during boot of a node the memory requirement is closer to 10-15 times the size of the snapshot on disk.
Configuration
Remove NEW_INGEST_ONLY_NODE_SEMANTICS since we no longer support opting out of the new ingestonly behavior. The behavior has been the default since 1.79.0.
For more information, see Falcon LogScale 1.79.0 GA (2023-02-28), LogScale Operational Architecture.
New features and improvements
UI Changes
A new tutorial built on a dedicated demo data view is available for environments that do not have access to the legacy tutorial based on a sandbox repository.
The DeleteRepositoryOrView data permission is now visible in the UI on Cloud environments.
The Time Selector now only allows zooming out to approximately 4,000 years.
The ChangeRetention data permission is now enabled on Cloud environments.
Documentation
A new LogScale Kubernetes Reference Architecture page has been added, describing the LogScale reference architecture when deploying LogScale using Kubernetes.
A new Regular Expression Syntax page has been added with extended details of the supported regular expression syntax and the differences between LogScale's support and other implementations such as Java and Perl.
GraphQL API
The Usage page has been updated to support queries that are in progress for longer than the GraphQL timeout allows.
The GraphQL schema for UsageStats has been updated to reflect that queries can be in progress.
Configuration
Setting the SHARED_DASHBOARDS_ENABLED environment variable to false now disables the option of creating links for sharing dashboards.
For more information, see Disabling Access to Shared Dashboards.
Added support for using Google Cloud Storage access via Workload Identity rather than an explicit service account, for bucket storage and for export of query results to a bucket.
For more information, see Google Cloud Bucket Storage with Workload Identity.
The new MAX_EVENT_FIELD_COUNT_IN_PARSER configuration variable is introduced to control the number of fields allowed within the parser, but not when storing the event.
Dashboards and Widgets
New parsing of Template Expressions has been implemented in the UI for improved performance.
When creating or editing interactions you can now visualize any unused parameter bindings, with the option to remove them.
For more information, see Unused parameters bindings.
The empty list alias is now available as an input option for parameter bindings, so that Multi-value Parameters can be set explicitly to have the value of an empty list.
For more information, see Empty list alias.
Parameter labels are now used instead of parameter IDs when displaying the list of parameters that a widget / query is waiting on.
Queries
Polling a query on /queryjobs can now delay the response a bit in order to allow returning a potentially done response. The typical effective delay is less than 2 seconds, and the positive effect is saving the extra poll roundtrip that would otherwise need to happen before the query completed. This in particular makes simple queries complete faster from the viewpoint of the client, as they do not have to wait for an extra poll roundtrip in most cases.
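On the client side this shortens a typical poll loop by one round trip for short queries. A generic sketch (poll is a stand-in callable for an HTTP GET of the queryjob status; the endpoint shape is not taken from this entry):

```python
import time

# Poll a queryjob-style endpoint until it reports done, with a timeout.
# With the server-side delay described above, short queries typically
# return done=True on the very first poll.
def wait_for_done(poll, interval: float = 0.5, timeout: float = 30.0) -> dict:
    deadline = time.monotonic() + timeout
    while True:
        result = poll()
        if result.get("done"):
            return result
        if time.monotonic() > deadline:
            raise TimeoutError("query did not complete in time")
        time.sleep(interval)
```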
Other
Reduced the amount of memory used when multiple queries use the match() function with the same arguments. Before, if you ran many queries that used the same file, the contents of the file would be represented multiple times in memory, once for each query. This could put you at risk of exhausting the server's memory if the files were large. With this change the file contents are shared between the queries and represented only once. This enables the server to run more queries and/or handle larger files.
For more information, see Lookup Files Operations.
When the Kafka broker set changes at runtime, track that set and use as bootstrap servers for Kafka whenever LogScale needs to create a new Kafka client at runtime. This allows replacing all Kafka brokers (incrementally, moving their work to new servers) without restarting LogScale. Note that the set is not persisted across restart of LogScale, so when restarting LogScale, make sure to provide an up to date set of bootstrap servers.
Fixed in this release
Security
Verified that LogScale does not use the affected Akka dependency component in CVE-2023-31442 by default, and has taken additional precautions to notify customers.
For:
LogScale Cloud/Falcon Long Term Repository:
This CVE does not impact LogScale Cloud or LTR customers.
LogScale Self-Hosted:
Exposure to risk:
Potential risk is only present if a self-hosted customer has modified the Akka parameters to the non-default value akka.io.dns.resolver = async-dns during initial setup.
By default, LogScale does not use this configuration parameter.
CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.
Steps to mitigate:
Setting akka.io.dns.resolver to its default value (inet-address) will mitigate the potential risk.
On versions older than 1.92.0:
Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset the parameter or pass a different value to it.
CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.
UI Changes
Fixed an issue where the filter would remain applied in the saved or recent queries when switching tabs in the menu.
Fixed the order of the timezones in the timezone dropdown on the Search and Dashboards pages.
Automation and Alerts
Fixed an issue that could cause some rarely occurring errors when running alerts to not show up on the alert.
Dashboards and Widgets
Fixed an issue where certain widget options would be ignored when importing a dashboard template or installing a package.
Fixed wrong behaviour on the Interactions overview page when creating a new interaction: if the interaction panel was open, the repository options dropdown would open in it instead of in the Create new interaction dialog.
Other
The following Node-Level Metrics that showed incorrect results are now fixed: primary-disk-usage, secondary-disk-usage, cluster-time-skew, temp-disk-usage-bytes.
Falcon LogScale 1.91.0 Not Released (2023-05-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.91.0 | Not Released | 2023-05-23 | Internal Only | 2024-05-31 | No | 1.44.0 | No |
Available for download two days after release.
Not released.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Falcon LogScale 1.90.0 Not Released (2023-05-16)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.90.0 | Not Released | 2023-05-16 | Internal Only | 2024-05-31 | No | 1.44.0 | No |
Available for download two days after release.
Not released.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Falcon LogScale 1.89.0 GA (2023-05-11)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.89.0 | GA | 2023-05-11 | Cloud | 2024-07-31 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Removed
Items that have been removed as of this release.
API
Degrade and deprecate some REST and GraphQL APIs due to the introduction of AutomaticSegmentDistribution and AutomaticDigesterDistribution. The deprecated elements will be removed in a future release, once the upgrade compatibility with version 1.88.0 is dropped. We expect this to be no earlier than September 2023.
The following REST endpoints are deprecated, as they no longer have an effect and return meaningless results:
api/v1/clusterconfig/segments/prune-replicas
api/v1/clusterconfig/segments/distribute-evenly
api/v1/clusterconfig/segments/distribute-evenly-reshuffle-all
api/v1/clusterconfig/segments/distribute-evenly-to-host
api/v1/clusterconfig/segments/distribute-evenly-from-host
api/v1/clusterconfig/segments/partitions
api/v1/clusterconfig/segments/partitions/setdefaults
api/v1/clusterconfig/segments/set-replication-defaults
api/v1/clusterconfig/partitions/setdefaults
api/v1/clusterconfig/ingestpartitions/distribute-evenly-from-host
api/v1/clusterconfig/ingestpartitions/setdefaults
api/v1/clusterconfig/ingestpartitions (POST only, GET will continue to work)
The following GraphQL mutations are deprecated, as they no longer have an effect and return meaningless results:
startDataRedistribution
updateStoragePartitionScheme
The IngestPartitionScheme mutation is not deprecated, but as it updates state that is overwritten by automation, we recommend against using it — it exists solely to serve as a debugging tool.
The following GraphQL fields on the cluster object are deprecated, and return meaningless values:
ingestPartitionsWarnings
suggestedIngestPartitions
storagePartitions
storagePartitionsWarnings
suggestedStoragePartitions
storageDivergence
reapply_targetSize
The following fields in the return value of the api/v1/clusterconfig/segments/segment-stats endpoint are deprecated and degraded to always be 0:
reapply_targetBytes
reapply_targetSegments
reapply_inboundBytes
reapply_inboundSegments
New features and improvements
Automation and Alerts
The Alert and Scheduled Search jobs no longer produce logs about specific alerts or scheduled searches in the humio repository. The logs are still sent to the humio-activity repository, which in a normal setup is also ingested into the humio repository; previously the logs would normally be duplicated, and now they are not. The only difference between the two types of logs is that the logs from the humio-activity repository all have loglevel equal to INFO. You can use the severity field instead to distinguish the severity of the logs.
GraphQL API
Mutations enableAlert and disableAlert have been added for enabling and disabling an alert without changing other fields.
Configuration
Automatic rebalancing of existing segments onto cluster nodes has been enabled.
Manual editing of the segment partition table is no longer supported. The table is no longer displayed in the Cluster Administration UI. Segments will be distributed onto cluster nodes based on the following node-level settings:
ZONE defines a node's zone. The balancing logic will attempt to distribute segment replicas across as many zones as possible.
The target disk usage percentage determines how much of the node disk is considered usable for storing segment data during a rebalance. The balancing logic will attempt to keep nodes equally full, while considering the node zone and segment replication factor. This can be configured via GraphQL using the setTargetDiskUsagePercentage mutation. The default value is 90.
Nodes with a NODE_ROLES setting that excludes segment storage will not receive segments as part of a rebalance.
Log Collector
Added a new test status for configurations, which allows you to try out a configuration on one or more instances before it's published.
For more information, see Test a Remote Configuration.
Other
The following cluster management features are now enabled:
AutomaticJobDistribution
AutomaticDigesterDistribution
AutomaticSegmentDistribution
For more information, see Digest Rules.
Fixed in this release
UI Changes
The Search page would reload when using the browser's history navigation buttons. This issue has now been fixed.
An error for lacking permissions that appeared when updating the organization settings has been fixed. Now, if you have permission to view the Organization Settings page, you can also update information on it.
Automation and Alerts
Dashboards and Widgets
The following issues have been fixed on dashboards:
A dashboard would sometimes be perceived as changed on the server even though it was not.
A Discard unsaved changes prompt would appear when creating and applying new parameters.
Queries
An edge case has been fixed where query workers could fail to include mini-segments if the mini-segments were merged at a bad time, causing queries to be missing the data in those segments.
Functions
Other
Some merged segments could temporarily be missing from query results right after an ephemeral node reboot. This issue has been fixed.
Fixed an issue that could cause segments to appear missing in queries, due to the presence of deleted mini-segments with the same target as live mini-segments.
Falcon LogScale 1.88.2 LTS (2023-07-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.88.2 | LTS | 2023-07-04 | Cloud | 2024-05-31 | No | 1.44.0 | No |
TAR Checksum | Value |
---|---|
MD5 | 0ff76e4d337ae9ad8c97f50afb0c9934 |
SHA1 | 1c654d18a443dcf58a4ac42755aa5121bf73aed4 |
SHA256 | 7a020339813f78cadf3a6c162950c891772ca4fd01500c3d21fb0b5c6b66b3a7 |
SHA512 | 6ae6bfff348447d78de037970a4ab2b84e4b60fdc134be4d68d54b70b1dd4676d5ac7059940222f9df97e89665db52b5549c3c421b5759fcf7d1c75879f7a1ec |
Docker Image | SHA256 Checksum |
---|---|
humio | dc1b3fca6b642ba3c784d675153ab955e3d6010954e3efcde1a1388bcace1690 |
humio-core | fed8b88befc50966e8b9f2003c840094790762af993e77fdb9c13e02975d0c11 |
kafka | c24e1d5d6ee54ff52d57d7dae47ab20d8be0b5e4d49f2ca1b92503a5e0629478 |
zookeeper | b5dbea2a18ef00f271267665a86c1e084c3251e53db1510ddf54b1206390b5e3 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.88.2/server-1.88.2.tar.gz
These notes include entries from the following previous releases: 1.88.0, 1.88.1
Bug fixes and updates.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Storage
It is no longer allowed for nodes to delete bucketed mini-segments involved in queries off local disks before the queries are done. This should help ensure queries do not "miss" querying these files if they are deleted while a query is running.
Change how downloads from bucket storage are prioritized for queries. Previously the highest priority query was allowed to download as many segments as it liked. We now try to estimate how much work a query has available in local segments, and prioritize fetching segments for those queries that are close to running out of local work and becoming blocked for that reason.
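The new prioritization can be sketched roughly as follows; this is a simplified model with hypothetical names, not LogScale's actual scheduler. Instead of letting the highest-priority query monopolize downloads, the query closest to running out of local work is served first:

```python
from dataclasses import dataclass

@dataclass
class Query:
    priority: int            # higher = more important
    local_bytes_left: int    # work still available in locally present segments
    missing_segments: int    # segments that must be fetched from bucket storage

def next_download(queries):
    """Pick the query whose bucket downloads should be served next.

    Rather than always serving the highest-priority query, prefer queries
    that are close to exhausting their local work and would otherwise block.
    """
    candidates = [q for q in queries if q.missing_segments > 0]
    if not candidates:
        return None
    # Sort by how close the query is to stalling, then by priority.
    return min(candidates, key=lambda q: (q.local_bytes_left, -q.priority))

qs = [
    Query(priority=10, local_bytes_left=500_000, missing_segments=3),
    Query(priority=1,  local_bytes_left=1_000,   missing_segments=2),
]
print(next_download(qs).priority)  # 1 — the nearly-stalled query is served first
```

Under this policy the low-priority query with almost no local work left is fetched for first, keeping all running queries supplied with segments.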
Upgrades
Changes that may occur or be required during an upgrade.
Other
Docker images have been upgraded to Java 19.0.2 to address CVE-2022-45688.
SnakeYAML has been upgraded to 2.0 to address CVE-2022-1471.
New features and improvements
UI Changes
The view permission ChangeDashboardReadonlyToken is now also required when creating and deleting shared dashboard tokens.
Improvements in UI table visualizations: long column header text is now always left-aligned (instead of center-aligned and stacked on top of each other) and uses a different color.
Organization-level query blocking has been added to the Organization Settings UI.
For more information, see Organization Query Monitor.
Event List Interactions are now accessible from the Repository and View Settings page.
Automation and Alerts
Clicking the button in Alerts will now show every unique label that has been created on any alert in the same repository, so you don't need to retype a label when adding the same label to another alert. This feature also applies to Scheduled Searches.
The error message for an Alert or Scheduled Search on their edit pages now has a button for clearing the error, while the dismiss icon just closes the message without clearing errors.
When creating a new Alert, you now have a pulldown menu that suggests labels that you've previously created for other alerts. The same applies to Scheduled Searches.
For more information, see Creating Alerts.
The default time window for Alerts has been updated:
When creating an alert from the Alerts page, the default query time window has been changed to match the default throttle time.
When creating an alert from the Search page, the default Throttle period has been changed to match that of the query time window set.
For more information, see Creating Alerts.
When enabling an Alert or Scheduled Search with no actions, an inline warning message now appears instead of a message box.
GraphQL API
The following GraphQL mutations can now also be performed with the ChangeOrganizationPermissions permission:
The following GraphQL mutations can now also be performed with the ChangeSystemPermissions permission:
The following GraphQL queries and mutations can now also be performed with either the ChangeOrganizationPermissions or the ChangeSystemPermissions permission, depending on the group:
The permissions required to list IP filters have been updated. You can now also list IP filters with one of the following permissions:
The querySearchDomain GraphQL query now allows you to search for Views and Repositories based on your permissions — previously, enforcing specific permissions caused errors.
Configuration
New configuration parameters have been added allowing control of client.rack for our Kafka consumers:
KAFKA_CLIENT_RACK_ENV_VAR is read to find the name of the variable that holds the value. It defaults to ZONE, which is the same variable applied to the LogScale node zones by default.
Using the "S3 Intelligent-Tiering" storage class in AWS S3 selectively, on files that LogScale knows, continues to be supported. It is controlled by the new dynamic configuration BucketStorageUploadInfrequentThresholdDays, which sets the minimum number of days of remaining retention the data must have in order to switch from the default "S3 Standard" to the "Intelligent" tier.
The decision is made only at the point of upload to the bucket; existing objects in the bucket are not modified.
The bucket must be configured to disallow the optional Archive Access and Deep Archive Access tiers, as those do not have instant access, which LogScale requires. As a consequence, do not enable automatic archiving within the S3 Intelligent-Tiering storage class.
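The upload-time decision can be illustrated with a small sketch; the function name and parameters are hypothetical, but the rule follows the description above: data with at least the threshold's worth of retention remaining goes to Intelligent-Tiering, everything else to the default Standard class.

```python
from datetime import date, timedelta

def storage_class_for(segment_end: date, retention_days: int,
                      threshold_days: int, today: date) -> str:
    """Pick the S3 storage class at upload time.

    Illustrative sketch of the described rule, governed by the dynamic
    configuration BucketStorageUploadInfrequentThresholdDays: if the data
    still has at least `threshold_days` of retention left, upload as
    INTELLIGENT_TIERING; otherwise keep the default STANDARD class.
    """
    expires = segment_end + timedelta(days=retention_days)
    remaining = (expires - today).days
    return "INTELLIGENT_TIERING" if remaining >= threshold_days else "STANDARD"

today = date(2023, 7, 1)
print(storage_class_for(date(2023, 6, 1), 365, 30, today))  # INTELLIGENT_TIERING
print(storage_class_for(date(2023, 6, 1), 32, 30, today))   # STANDARD
```

Because the check runs only at upload, an object's storage class never changes retroactively, which matches the note that existing bucket objects are not modified.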
The new configuration parameter SEGMENT_READ_FADVICE has been introduced.
The following cluster-level setting has been introduced, editable via GraphQL mutations:
setSegmentReplicationFactor configures the desired number of segment replicas. This is also configurable via the DEFAULT_SEGMENT_REPLICATION_FACTOR configuration parameter. If configured via both the environment variable and the GraphQL mutation, the mutation takes precedence.
For new clusters the default is 1. For clusters upgrading from older versions, the initial value is taken from the STORAGE_REPLICATION_FACTOR environment variable, if set. If the variable is not set, the value is taken from the replication factor of the storage partition table prior to the upgrade; this means that upgrading clusters should see no change to their replication factor, unless specified via STORAGE_REPLICATION_FACTOR.
The feature can be disabled in case of problems via either the GraphQL mutation setAllowRebalanceExistingSegments or the environment variable DEFAULT_ALLOW_REBALANCE_EXISTING_SEGMENTS. If you need to disable the feature, please reach out to Support and share your concerns so we can try to address them. We intend to remove the option to handle segment partitions manually in the future.
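The precedence rules above can be summarized in a short sketch; the function is hypothetical and simplifies the real resolution, but it encodes the stated order: GraphQL mutation over environment variable over the legacy value (STORAGE_REPLICATION_FACTOR or the pre-upgrade partition table), with 1 as the default for new clusters.

```python
def effective_replication_factor(mutation_value=None, env_value=None,
                                 legacy_value=None):
    """Resolve the segment replication factor per the stated precedence.

    A sketch (function name is hypothetical): a value set via the
    setSegmentReplicationFactor GraphQL mutation wins over the
    DEFAULT_SEGMENT_REPLICATION_FACTOR environment variable; an upgrading
    cluster with neither set keeps its legacy value, and a new cluster
    defaults to 1.
    """
    if mutation_value is not None:
        return mutation_value
    if env_value is not None:
        return int(env_value)
    if legacy_value is not None:
        return legacy_value
    return 1  # default for new clusters

print(effective_replication_factor(mutation_value=3, env_value="2"))  # 3
print(effective_replication_factor(env_value="2"))                    # 2
print(effective_replication_factor())                                 # 1
```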
The AutomaticDigesterDistribution feature is now disabled by default. While the feature works, it can cause performance issues on very large installs if nodes are rebooted repeatedly. Future versions work around this issue, but for 1.88 patch versions we prefer simply disabling the feature.
Dashboards and Widgets
When using the Edit in search view item on a dashboard widget, the values set in parameters in the query are also carried over into the search view.
Introduced a new setting for dashboard parameters configuration to defer query execution: the dashboard will not execute any queries on page load until the user provides a value to the parameter.
For more information, see Configuring Dashboard Parameters.
The new interaction type has been introduced, allowing users to create an interaction that will trigger a new search.
For more information, see Manage Dashboard Interactions, Creating Event List Interactions.
You can now save interactions with a saved query on the Search page. Interactions in saved queries are also supported in Packages.
For more information, see Creating Event List Interactions.
The new interaction type Update Parameters has been introduced. This interaction allows you to update parameters in the context you're working in — on the dashboard or on the Search page.
For more information, see Update Parameters.
The combo box has been updated to show multiple selections as "pills".
You can now delete or duplicate Event List Interactions from the Interactions overview page.
For more information, see Deleting & Duplicating Event List Interactions.
Multivalued parameters have been introduced to pass an array of values to the query. The support is limited to the Dashboards page.
For more information, see Multi-value Parameters.
When Setting Up a Dashboard Interaction, the {{ startTime }} and {{ endTime }} special variables now work differently, depending on whether the query, widget or dashboard is running in Live mode or not. They now work as follows:
In a live query or dashboard, the startTime variable will contain the relative time, whereas endTime will be empty.
In a non-live query or dashboard, startTime will be the absolute start time when the query was last run. endTime, similarly, will have the end time of when the query was last run.
Interactive elements in visualizations now have the pointer cursor.
Log Collector
On the Config Overview page, a column showing the state of the configuration has been added. The configuration can be in one of two states.
A menu item has been added on the Config Overview page that links to the Settings page.
When clicking on an Error status on the Fleet Overview page, a dialog with the error details will open.
For more information, see Falcon Log Collector Manage your Fleet.
Fleet Management updates:
Added the Basic Information page with primary information about a specific configuration, e.g. name, description, and number of assigned instances.
The Config Editor used to create and modify LogScale Collector configurations in LogScale has been augmented with context-aware auto-completion, tooltips for keywords, and highlighting of invalid settings.
For more information, see Manage Remote Configurations.
Queries
Reduced the amount of memory used when multiple queries use the match() function with the same arguments. Previously, if you ran many queries that used the same file, the contents of the file would be represented in memory multiple times, once for each query. This could put you at risk of exhausting the server's memory if the files were large. With this change, the file contents are shared between the queries and represented only once, enabling the server to run more queries and/or handle larger files.
For more information, see Lookup Files Operations.
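The sharing described above amounts to caching one parsed copy of each lookup file and handing the same object to every query that references it. This minimal sketch is illustrative only; the names are hypothetical, not LogScale internals:

```python
# One cached copy of a lookup file's contents, shared by reference between
# concurrent queries, instead of one copy per query. (Illustrative sketch.)
_file_cache: dict[str, dict] = {}

def load_lookup_file(name: str, read_file) -> dict:
    """Return the parsed contents of `name`, loading it at most once."""
    if name not in _file_cache:
        _file_cache[name] = read_file(name)
    return _file_cache[name]

loads = []
def fake_reader(name):
    # Stand-in for reading and parsing the uploaded file from disk.
    loads.append(name)
    return {"10.0.0.1": "host-a"}

q1 = load_lookup_file("hosts.csv", fake_reader)  # first query: file is read
q2 = load_lookup_file("hosts.csv", fake_reader)  # second query: cache hit
print(q1 is q2, len(loads))  # True 1 — both queries share one in-memory copy
```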
Improvements to the query scheduler logic for "shelving", i.e., pausing queries considered too expensive. The pause/unpause logic is now more responsive and unpauses queries faster when they become eligible to run.
Polling a query on /queryjobs can now delay the response slightly in order to allow returning a done response. The typical effective delay is less than 2 seconds, and the positive effect is saving the extra poll roundtrip that would otherwise need to happen before the query completed. This in particular makes simple queries complete faster from the viewpoint of the client, as they do not have to wait for an extra poll roundtrip in most cases.
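This behavior is a form of long polling. The following server-side sketch shows the idea; the helper is hypothetical, not the actual /queryjobs implementation: rather than answering immediately with done=False, the handler waits briefly in case the query finishes, saving the client one round trip.

```python
import time

def poll_queryjob(check_done, max_wait: float = 2.0, step: float = 0.05):
    """Answer a poll, holding the response up to `max_wait` seconds.

    If the query completes within the window, the same poll returns
    done=True instead of forcing the client to poll again.
    (Hypothetical sketch of the described long-poll behavior.)
    """
    deadline = time.monotonic() + max_wait
    while True:
        if check_done():
            return {"done": True}
        if time.monotonic() >= deadline:
            return {"done": False}
        time.sleep(step)

# Simulate a query that completes ~0.1s after the poll arrives.
finish_at = time.monotonic() + 0.1
resp = poll_queryjob(lambda: time.monotonic() >= finish_at)
print(resp)  # {'done': True} — one poll, no extra round trip
```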
Functions
Performance improvements have been made to the match() query function in cases where ignoreCase=true is used together with either mode=cidr or mode=string.
The base64Decode() query function has been updated such that, when decoding to UTF-8, invalid code points are replaced with a placeholder character.
When IOCs are not available, the ioc:lookup() query function will now produce an error. Previously, it only produced a warning.
The memory usage of the selectLast() and groupBy() functions has been improved.
Other
When the automatic segment rebalancing feature is enabled, ignore the segment storage table when evaluating whether dead ephemeral nodes can be removed automatically.
The Create Repositories permission now also allows LogScale Self-Hosted users to create repositories.
Packages
The size limit of packages' lookup files now adheres to the MAX_FILEUPLOAD_SIZE configuration parameter. Previously the size limit was 1MB.
For more information, see Exporting the Package.
Fixed in this release
Security
We have verified that, by default, LogScale does not use the Akka dependency component affected by CVE-2023-31442, and have taken additional precautions to notify customers.
For:
LogScale Cloud/Falcon Long Term Repository:
This CVE does not impact LogScale Cloud or LTR customers.
LogScale Self-Hosted:
Exposure to risk:
Potential risk is only present if a self-hosted customer has modified the Akka parameters to the non-default value akka.io.dns.resolver = async-dns during initial setup. By default, LogScale does not use this configuration parameter.
CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.
Steps to mitigate:
Setting akka.io.dns.resolver to its default value (inet-address) will mitigate the potential risk.
On versions older than 1.92.0:
Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset the parameter or pass a different value.
CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.
UI Changes
The Search page would reload when using the browser's history navigation buttons. This issue has now been fixed.
An issue in the Usage page that could prevent it from showing any data has been fixed.
The Usage page now shows an error if there are any warnings from the query.
The Fields Panel flyout displayed the bottom 10 values rather than the top 10 values. This issue has now been fixed.
For more information, see Displaying Fields.
Dashboards and Widgets
"" was being discarded when creating URLs for interactions. This issue has now been fixed.
Attempting to remove a widget on a dashboard would sometimes remove a different widget than the one selected. This issue has been fixed.
The tooltip in the Time Chart widget would not show any data points. This issue has now been fixed.
Non-breaking space characters (ALT+Space) made Template Expressions unable to be resolved. This issue has been fixed.
'_' was not recognized as a valid first symbol for parameters when parsing queries. This issue has now been fixed.
Fixed an issue where clicking the Inspect link in alert notifications would land on a missing page.
The values of a FixedList Parameter on a dashboard would change sort ordering after being exported to a YAML template file. This issue has been fixed.
Queries
In clusters with bucket storage running queries that take more than 90 minutes, those queries could spuriously fail with a complaint that segments were missing. The issue has now been fixed.
The Export query result to file dialog would not close in some cases. This issue has now been fixed.
Restart of queries based on lookup files has been fixed: only live queries need restarting from changes to uploaded files that they depend on. Scheduled Searches and static queries use the version of the file present when they start and run to completion.
Functions
The groupBy() function would not always warn upon exceeding the default limit. This issue has now been fixed.
Fixed a regression in join() validation, which was introduced in version Falcon LogScale 1.80.0 GA (2023-03-07).
timeChart() provided with unit and groupBy() as the aggregation function would not warn on exceeding the default groupBy() limit. This issue has now been fixed.
Other
An issue that would cause query workers to handle mini-segments for longer than intended has been fixed.
The following audit log issues have been fixed:
The audit log logged the name of the view owning the view bindings instead of the repository it links to. The name now matches the id in the binding log entry.
The audit log for a view update did not use the updated view but the view data before the update.
An uploaded file would sometimes disappear immediately after uploading. This issue has been fixed.
An issue that would cause bucket downloads to retry indefinitely for certain types of segments has been fixed.
Fixed an issue where searching within small subsets of the latest 24 hours in combination with hash filters could result in events that belonged in the time range to not be included in the result. The visible symptom was that narrowing the search span provided more hits.
Fixed bucket downloads that could fail if the segment they were fetching disappeared from global.
In ephemeral-disk mode, a node can now be removed via the UI when it is dead, regardless of any data present on the node: ephemeral mode ensures durability even when nodes are lost without notice.
For more information, see Ephemeral Nodes and Cluster Identity.
Falcon LogScale 1.88.1 LTS (2023-06-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.88.1 | LTS | 2023-06-22 | Cloud | 2024-05-31 | No | 1.44.0 | No |
TAR Checksum | Value |
---|---|
MD5 | a999345b437d7c0fbaddbb8a55e6429a |
SHA1 | 1de90e62be02bdcccbec07f9a29720c3c69dbe63 |
SHA256 | 40b2addbaedfdd3155fbd78b38c72dc8ef64c8a945a5858db38867d92aa54b6a |
SHA512 | 78910febd9b6c7d216bc662ca47987db4e3a9e8c7eb3ee948a90e0c73b7aaf2de287faaec1c7891f8a76b4bc0914019e1ed09bd4919d4b5a23deda31c3d3038d |
Docker Image | SHA256 Checksum |
---|---|
humio | d8436c255ce0c95e231fc533a2240f037f3668f4e5f6bc7d3ea614173a6a5088 |
humio-core | c5e82eac78cf5cf9132d3fb76ccda881ef402c142378eebffae77b4356de2ef9 |
kafka | 5ad6b49d76ca75c91731c02f1c11928eea98efa3d2f1df8800ab55e669045ce0 |
zookeeper | cb4f5e163317fb289110c6372d5b54208f4396c433282b65bbfa3c15596e64cc |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.88.1/server-1.88.1.tar.gz
These notes include entries from the following previous releases: 1.88.0
Security fixes.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Storage
It is no longer allowed for nodes to delete bucketed mini-segments involved in queries off local disks before the queries are done. This should help ensure queries do not "miss" querying these files if they are deleted while a query is running.
Change how downloads from bucket storage are prioritized for queries. Previously the highest priority query was allowed to download as many segments as it liked. We now try to estimate how much work a query has available in local segments, and prioritize fetching segments for those queries that are close to running out of local work and becoming blocked for that reason.
Upgrades
Changes that may occur or be required during an upgrade.
Other
Docker images have been upgraded to Java 19.0.2 to address CVE-2022-45688.
SnakeYAML has been upgraded to 2.0 to address CVE-2022-1471.
New features and improvements
UI Changes
The view permission ChangeDashboardReadonlyToken is now also required when creating and deleting shared dashboard tokens.
Improvements in UI table visualizations: long column header text is now always left-aligned (instead of center-aligned and stacked on top of each other) and uses a different color.
Organization-level query blocking has been added to the Organization Settings UI.
For more information, see Organization Query Monitor.
Event List Interactions are now accessible from the Repository and View Settings page.
Automation and Alerts
Clicking the button in Alerts will now show every unique label that has been created on any alert in the same repository, so you don't need to retype a label when adding the same label to another alert. This feature also applies to Scheduled Searches.
The error message for an Alert or Scheduled Search on their edit pages now has a button for clearing the error, while the dismiss icon just closes the message without clearing errors.
When creating a new Alert, you now have a pulldown menu that suggests labels that you've previously created for other alerts. The same applies to Scheduled Searches.
For more information, see Creating Alerts.
The default time window for Alerts has been updated:
When creating an alert from the Alerts page, the default query time window has been changed to match the default throttle time.
When creating an alert from the Search page, the default Throttle period has been changed to match that of the query time window set.
For more information, see Creating Alerts.
When enabling an Alert or Scheduled Search with no actions, an inline warning message now appears instead of a message box.
GraphQL API
The following GraphQL mutations can now also be performed with the ChangeOrganizationPermissions permission:
The following GraphQL mutations can now also be performed with the ChangeSystemPermissions permission:
The following GraphQL queries and mutations can now also be performed with either the ChangeOrganizationPermissions or the ChangeSystemPermissions permission, depending on the group:
The permissions required to list IP filters have been updated. You can now also list IP filters with one of the following permissions:
The querySearchDomain GraphQL query now allows you to search for Views and Repositories based on your permissions — previously, enforcing specific permissions caused errors.
Configuration
New configuration parameters have been added allowing control of client.rack for our Kafka consumers:
KAFKA_CLIENT_RACK_ENV_VAR is read to find the name of the variable that holds the value. It defaults to ZONE, which is the same variable applied to the LogScale node zones by default.
Using the "S3 Intelligent-Tiering" storage class in AWS S3 selectively, on files that LogScale knows, continues to be supported. It is controlled by the new dynamic configuration BucketStorageUploadInfrequentThresholdDays, which sets the minimum number of days of remaining retention the data must have in order to switch from the default "S3 Standard" to the "Intelligent" tier.
The decision is made only at the point of upload to the bucket; existing objects in the bucket are not modified.
The bucket must be configured to disallow the optional Archive Access and Deep Archive Access tiers, as those do not have instant access, which LogScale requires. As a consequence, do not enable automatic archiving within the S3 Intelligent-Tiering storage class.
The new configuration parameter SEGMENT_READ_FADVICE has been introduced.
The following cluster-level setting has been introduced, editable via GraphQL mutations:
setSegmentReplicationFactor configures the desired number of segment replicas. This is also configurable via the DEFAULT_SEGMENT_REPLICATION_FACTOR configuration parameter. If configured via both the environment variable and the GraphQL mutation, the mutation takes precedence.
For new clusters the default is 1. For clusters upgrading from older versions, the initial value is taken from the STORAGE_REPLICATION_FACTOR environment variable, if set. If the variable is not set, the value is taken from the replication factor of the storage partition table prior to the upgrade; this means that upgrading clusters should see no change to their replication factor, unless specified via STORAGE_REPLICATION_FACTOR.
The feature can be disabled in case of problems via either the GraphQL mutation setAllowRebalanceExistingSegments or the environment variable DEFAULT_ALLOW_REBALANCE_EXISTING_SEGMENTS. If you need to disable the feature, please reach out to Support and share your concerns so we can try to address them. We intend to remove the option to handle segment partitions manually in the future.
The AutomaticDigesterDistribution feature is now disabled by default. While the feature works, it can cause performance issues on very large installs if nodes are rebooted repeatedly. Future versions work around this issue, but for 1.88 patch versions we prefer simply disabling the feature.
Dashboards and Widgets
When using the Edit in search view item on a dashboard widget, the values set in parameters in the query are also carried over into the search view.
Introduced a new setting for dashboard parameters configuration to defer query execution: the dashboard will not execute any queries on page load until the user provides a value to the parameter.
For more information, see Configuring Dashboard Parameters.
The new interaction type has been introduced, allowing users to create an interaction that will trigger a new search.
For more information, see Manage Dashboard Interactions, Creating Event List Interactions.
You can now save interactions with a saved query on the Search page. Interactions in saved queries are also supported in Packages.
For more information, see Creating Event List Interactions.
The new interaction type Update Parameters has been introduced. This interaction allows you to update parameters in the context you're working in — on the dashboard or on the Search page.
For more information, see Update Parameters.
The combo box has been updated to show multiple selections as "pills".
You can now delete or duplicate Event List Interactions from the Interactions overview page.
For more information, see Deleting & Duplicating Event List Interactions.
Multivalued parameters have been introduced to pass an array of values to the query. The support is limited to the Dashboards page.
For more information, see Multi-value Parameters.
When Setting Up a Dashboard Interaction, the {{ startTime }} and {{ endTime }} special variables now work differently, depending on whether the query, widget or dashboard is running in Live mode or not. They now work as follows:
In a live query or dashboard, the startTime variable will contain the relative time, whereas endTime will be empty.
In a non-live query or dashboard, startTime will be the absolute start time when the query was last run. endTime, similarly, will have the end time of when the query was last run.
Interactive elements in visualizations now have the pointer cursor.
Log Collector
On the Config Overview page, a column showing the state of the configuration has been added. The configuration can be in one of two states.
A menu item has been added on the Config Overview page that links to the Settings page.
When clicking on an Error status on the Fleet Overview page, a dialog with the error details will open.
For more information, see Falcon Log Collector Manage your Fleet.
Fleet Management updates:
Added the Basic Information page with primary information about a specific configuration, e.g. name, description, and number of assigned instances.
The Config Editor used to create and modify LogScale Collector configurations in LogScale has been augmented with context-aware auto-completion, tooltips for keywords, and highlighting of invalid settings.
For more information, see Manage Remote Configurations.
Queries
Reduced the amount of memory used when multiple queries use the match() function with the same arguments. Previously, if you ran many queries that used the same file, the contents of the file would be represented in memory multiple times, once for each query. This could put you at risk of exhausting the server's memory if the files were large. With this change, the file contents are shared between the queries and represented only once, enabling the server to run more queries and/or handle larger files.
For more information, see Lookup Files Operations.
Improvements to the query scheduler logic for "shelving", i.e., pausing queries considered too expensive. The pause/unpause logic is now more responsive and unpauses queries faster when they become eligible to run.
Polling a query on `/queryjobs` can now delay the response slightly in order to allow returning a potentially `done` response. The typical effective delay is less than 2 seconds, and the benefit is saving the extra poll round trip that would otherwise be needed before the query completed. In particular, this makes simple queries complete faster from the client's viewpoint, as they do not have to wait for an extra poll round trip in most cases.
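The round-trip saving matters most to clients that poll in a loop. Below is a minimal sketch of such a client-side loop; the `fetch` callable and the `done` field stand in for the real `/queryjobs` request and response, which are assumptions here, not the documented wire format.

```python
import time

def poll_queryjob(fetch, job_id, interval=0.5, timeout=30.0):
    """Poll a query job until the server reports it done.

    `fetch` is any callable taking a job id and returning the decoded
    poll response as a dict with a boolean "done" field (illustrative
    shape only, not the exact LogScale API).
    """
    deadline = time.monotonic() + timeout
    while True:
        response = fetch(job_id)
        if response.get("done"):
            return response
        if time.monotonic() >= deadline:
            raise TimeoutError(f"query job {job_id} not done after {timeout}s")
        time.sleep(interval)
```

With the server-side delay described above, simple queries would typically return `done` on the first or second iteration of such a loop.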
Functions
Performance improvements have been made to the `match()` query function in cases where `ignoreCase=true` is used together with either `mode=cidr` or `mode=string`.
The `base64Decode()` query function has been updated such that, when decoding to UTF-8, invalid code points are replaced with a placeholder character.
When IOCs are not available, the `ioc:lookup()` query function now produces an error. Previously, it only produced a warning.
The memory usage of the `selectLast()` and `groupBy()` functions has been improved.
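The placeholder behavior of `base64Decode()` is analogous to replacement-character decoding in most languages. As an illustration of the general technique (not the LogScale implementation itself), Python's `errors="replace"` substitutes U+FFFD for invalid UTF-8 sequences:

```python
import base64

def decode_b64_utf8(value: str) -> str:
    # Decode base64, then decode the bytes as UTF-8, replacing any
    # invalid sequences with the U+FFFD replacement character.
    raw = base64.b64decode(value)
    return raw.decode("utf-8", errors="replace")

# b"\xff" is not valid UTF-8, so it becomes U+FFFD after decoding.
print(decode_b64_utf8(base64.b64encode(b"ok\xff").decode("ascii")))
```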
Other
When the automatic segment rebalancing feature is enabled, the segment storage table is now ignored when evaluating whether dead ephemeral nodes can be removed automatically.
The `Create Repositories` permission now also allows LogScale Self-Hosted users to create repositories.
Packages
The size limit of packages' lookup files has been changed to adhere to the `MAX_FILEUPLOAD_SIZE` configuration parameter. Previously, the size limit was `1MB`.
For more information, see Exporting the Package.
Fixed in this release
Security
Verified that LogScale does not use the affected Akka dependency component in CVE-2023-31442 by default, and has taken additional precautions to notify customers.
For:
LogScale Cloud/Falcon Long Term Repository:
This CVE does not impact LogScale Cloud or LTR customers.
LogScale Self-Hosted:
Exposure to risk:
Potential risk is only present if a self-hosted customer has modified the Akka parameters to the non-default value `akka.io.dns.resolver = async-dns` during initial setup. By default, LogScale does not use this configuration parameter.
CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.
Steps to mitigate:
Setting `akka.io.dns.resolver` to its default value (`inet-address`) will mitigate the potential risk.
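For self-hosted deployments that did override the resolver, the mitigation is a one-line change in the Akka configuration. The HOCON fragment below shows only that line; the surrounding file layout depends on your deployment.

```
# Restore the default resolver, which is not affected by CVE-2023-31442
akka.io.dns.resolver = inet-address
```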
On versions older than 1.92.0:
Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset or pass a different value to the parameter.
CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.
UI Changes
The Search page would reload when using the browser's history navigation buttons. This issue has now been fixed.
An issue in the Usage page that could cause it to fail to show any data has been fixed.
The Usage page now shows an error if there are any warnings from the query.
The Fields Panel flyout displayed the bottom 10 values rather than the top 10 values. This issue has now been fixed.
For more information, see Displaying Fields.
Dashboards and Widgets
`""` was being discarded when creating URLs for interactions. This issue has now been fixed.
Attempting to remove a widget on a dashboard would sometimes remove a different widget than the one selected. This issue has been fixed.
The tooltip in the Time Chart widget would not show any data points. This issue has now been fixed.
Non-breaking space characters (ALT+Space) made Template Expressions unable to be resolved. This issue has been fixed.
`'_'` was not recognized as a valid first symbol for parameters when parsing queries. This issue has now been fixed.
Fixed an issue where clicking the Inspect link in alert notifications would land on a missing page.
The values of a FixedList Parameter on a dashboard would change sort ordering after being exported to a YAML template file. This issue has been fixed.
Queries
In clusters with bucket storage, queries taking more than 90 minutes could spuriously fail with a complaint that segments were missing. The issue has now been fixed.
The Export query result to file dialog would not close in some cases. This issue has now been fixed.
Restart of queries based on lookup files has been fixed: only live queries need restarting from changes to uploaded files that they depend on. Scheduled Searches and static queries use the version of the file present when they start, and run to completion.
Functions
The `groupBy()` function would not always warn upon exceeding the default `limit`. This issue has now been fixed.
Fixed a regression in `join()` validation, which was introduced in version Falcon LogScale 1.80.0 GA (2023-03-07).
`timeChart()` provided with `unit` and `groupBy()` as the aggregation function would not warn on exceeding the default `groupBy()` `limit`. This issue has now been fixed.
Other
An issue that would cause query workers to handle mini-segments for longer than intended has been fixed.
The following audit log issues have been fixed:
The audit log logged the name of the view owning the view bindings instead of the repository it links to. The name now matches the id in the binding log entry.
The audit log for a view update did not use the updated view, but the view data before the update.
An uploaded file would sometimes disappear immediately after uploading. This issue has been fixed.
An issue that would cause bucket downloads to retry indefinitely for certain types of segments has been fixed.
Fixed bucket downloads that could fail if the segment they were fetching disappeared from global.
In ephemeral-disk mode, a node can now be removed via the UI when it is dead, regardless of any data present on the node: ephemeral mode knows how to ensure durability even when nodes are lost without notice.
For more information, see Ephemeral Nodes and Cluster Identity.
Falcon LogScale 1.88.0 LTS (2023-05-24)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.88.0 | LTS | 2023-05-24 | Cloud | 2024-05-31 | No | 1.44.0 | Yes |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | 4498b5fcb67bc5d9418ddb67d502af19 |
SHA1 | ce9309cb9c9d6f56513ff1e5de4c91f4f23a8b47 |
SHA256 | 9ba3c4f782bbd58751571b247ab3e76b6e2b50f0457d6966c8754e6566569273 |
SHA512 | e5cebea46bb385f268c2e8ca6f7d6d42d12f19fe7704efcbcb41b50e40ccbf318ed3035c79b8f3fdf8623860e24190dde692fce09a938d9f0c2b3486ac436ae1 |
Docker Image | SHA256 Checksum |
---|---|
humio | 607c8b664d97ec29e5a11960d3b37a01580054d5582748721c5ac141c8be72c0 |
humio-core | 071c84efeb896afb372c43515aab1a5b67e61b035e90937311988ffda9c16a53 |
kafka | ccd909da61a4b1c8be82600f749d2a571afb3ee2baa720a77aaebf06ffd334e4 |
zookeeper | 9a2015bfd9a7b7401604bb54f17d9029b4c6dc42cf3b25655b5c7e60f7e1db86 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.88.0/server-1.88.0.tar.gz
Bug fixes and updates.
Behavior Changes
Scripts or environments which make use of these tools should be checked and updated for the new configuration:
Storage
Nodes are no longer allowed to delete bucketed mini-segments involved in queries from local disks before the queries are done. This helps ensure queries do not "miss" querying these files if they are deleted while a query is running.
Change how downloads from bucket storage are prioritized for queries. Previously the highest priority query was allowed to download as many segments as it liked. We now try to estimate how much work a query has available in local segments, and prioritize fetching segments for those queries that are close to running out of local work and becoming blocked for that reason.
Upgrades
Changes that may occur or be required during an upgrade.
Other
Docker images have been upgraded to Java 19.0.2 to address CVE-2022-45688.
SnakeYAML has been upgraded to 2.0 to address CVE-2022-1471.
New features and improvements
UI Changes
The view permission `ChangeDashboardReadonlyToken` is now also required when creating and deleting shared dashboard tokens.
Improvements in UI table visualization: even long column headers' text is now always left-aligned (instead of center-aligned and stacked on top of each other) and uses a different color.
Organization-level query blocking has been added to the Organization Settings UI.
For more information, see Organization Query Monitor.
Event List Interactions are now accessible from the Repository and View Settings page.
Automation and Alerts
Clicking the button in Alerts will now show every unique label that has been created on any alert in the same repository. This means that you don't need to retype a label when you want to add the same label to another alert. This feature also applies to Scheduled Searches.
The error message for an Alert or Scheduled Search on their edit pages now has a button for clearing the error, while the dismiss icon only closes the message without clearing errors.
When creating a new Alert, you now have a pulldown menu that suggests labels that you've previously created for other alerts. The same applies to Scheduled Searches.
For more information, see Creating Alerts.
The default time window for Alerts has been updated:
When creating an alert from the Alerts page, the default query time window has been changed to match the default throttle time.
When creating an alert from the Search page, the default Throttle period has been changed to match that of the query time window set.
For more information, see Creating Alerts.
When enabling an Alert or Scheduled Search with no actions, an inline warning message now appears instead of a message box.
GraphQL API
The following GraphQL mutations can now also be performed with the `ChangeOrganizationPermissions` permission:
The following GraphQL mutations can now also be performed with the `ChangeSystemPermissions` permission:
The following GraphQL queries and mutations can now also be performed with either the `ChangeOrganizationPermissions` or `ChangeSystemPermissions` permission, depending on the group:
The permissions required in order to list IP filters have been updated. You can now also list IP filters with one of the following permissions:
The querySearchDomain GraphQL query now allows you to search for Views and Repositories based on your permissions — previously, enforcing specific permissions caused errors.
Configuration
New configuration parameters have been added allowing control of `client.rack` for our Kafka consumers:
`KAFKA_CLIENT_RACK_ENV_VAR` — this variable is read to find the name of the variable that holds the value. It defaults to `ZONE`, which is the same variable applied to the LogScale node zones by default.
Using the storage class "S3 Intelligent-Tiering" in AWS S3 selectively on files that LogScale knows continues to be supported: it is controlled by the new dynamic configuration `BucketStorageUploadInfrequentThresholdDays`, which sets the minimum number of days of remaining retention for the data in order to switch from the default "S3 Standard" to the "Intelligent" tier. The decision is made only at the point of upload to the bucket; existing objects in the bucket are not modified.
The bucket must be configured to not allow the optional Archive Access and Deep Archive Access tiers, as those do not have instant access, which is required for LogScale. As a consequence, do not enable automatic archiving within the S3 Intelligent-Tiering storage class.
The new configuration parameter `SEGMENT_READ_FADVICE` has been introduced.
The following cluster-level setting has been introduced, editable via GraphQL mutations:
setSegmentReplicationFactor configures the desired number of segment replicas.
This is also configurable via the `DEFAULT_SEGMENT_REPLICATION_FACTOR` configuration parameter. If configured via both the environment variable and the GraphQL mutation, the mutation takes precedence.
For new clusters, the default is `1`. For clusters upgrading from older versions, the initial value is taken from the `STORAGE_REPLICATION_FACTOR` environment variable, if set. If the variable is not set, the value is taken from the replication factor of the storage partition table prior to the upgrade; this means that upgrading clusters should see no change to their replication factor unless specified in `STORAGE_REPLICATION_FACTOR`.
The feature can be disabled in case of problems via either the GraphQL mutation setAllowRebalanceExistingSegments or the environment variable `DEFAULT_ALLOW_REBALANCE_EXISTING_SEGMENTS`.
If you need to disable the feature, please reach out to Support and share your concerns so we can try to address them. We intend to remove the option to handle segment partitions manually in the future.
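The precedence rules for the replication factor can be summarized in a few lines. The sketch below encodes just the described ordering (mutation value over environment variable over the pre-upgrade partition-table value); the function and its argument names are illustrative, not LogScale internals.

```python
def effective_replication_factor(mutation_value=None,
                                 env_value=None,
                                 partition_table_value=None,
                                 new_cluster=False):
    # A value set via the setSegmentReplicationFactor mutation wins.
    if mutation_value is not None:
        return mutation_value
    # Otherwise the environment variable (STORAGE_REPLICATION_FACTOR /
    # DEFAULT_SEGMENT_REPLICATION_FACTOR) applies.
    if env_value is not None:
        return env_value
    # New clusters default to 1; upgrading clusters keep the replication
    # factor of the pre-upgrade storage partition table.
    if new_cluster or partition_table_value is None:
        return 1
    return partition_table_value
```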
The AutomaticDigesterDistribution feature is now disabled by default. While the feature works, it can cause performance issues on very large installs if nodes are rebooted repeatedly. Future versions work around this issue, but for 1.88 patch versions we prefer simply disabling the feature.
Dashboards and Widgets
When using the Edit in search view item on a dashboard widget, the values set in parameters in the query are also carried over into the search view.
Introduced a new setting for dashboard parameters configuration to defer query execution: the dashboard will not execute any queries on page load until the user provides a value to the parameter.
For more information, see Configuring Dashboard Parameters.
The new interaction type has been introduced, allowing users to create an interaction that will trigger a new search.
For more information, see Manage Dashboard Interactions, Creating Event List Interactions.
You can now save interactions with a saved query on the Search page. Interactions in saved queries are also supported in Packages.
For more information, see Creating Event List Interactions.
The new interaction type Update Parameters has been introduced. This interaction allows you to update parameters in the context you're working in — on the dashboard or on the Search page.
For more information, see Update Parameters.
The combo box has been updated to show multiple selections as "pills".
You can now delete or duplicate Event List Interactions from the Interactions overview page.
For more information, see Deleting & Duplicating Event List Interactions.
Multivalued parameters have been introduced to pass an array of values to the query. The support is limited to the Dashboards page.
For more information, see Multi-value Parameters.
When Setting Up a Dashboard Interaction, the `{{ startTime }}` and `{{ endTime }}` special variables now work differently depending on whether the query, widget, or dashboard is running in Live mode. They now work as follows:
In a live query or dashboard, the `startTime` variable will contain the relative time, whereas `endTime` will be empty.
In a non-live query or dashboard, `startTime` will be the absolute start time when the query was last run; `endTime`, similarly, will have the end time of when the query was last run.
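The live/non-live distinction above can be summarized as a small lookup. In this sketch the function name and the way a relative-time string is passed in are illustrative assumptions; only the rule itself (relative start plus empty end when live, absolute timestamps otherwise) comes from the text.

```python
def interaction_time_variables(live, relative_start, last_run_start, last_run_end):
    """Resolve the {{ startTime }} / {{ endTime }} variables for an interaction."""
    if live:
        # Live: startTime is the relative time window; endTime is empty.
        return {"startTime": relative_start, "endTime": ""}
    # Non-live: both are the absolute times of the last query run.
    return {"startTime": str(last_run_start), "endTime": str(last_run_end)}
```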
Interactive elements in visualizations now have the pointer cursor.
Log Collector
On the Config Overview page, a column showing the state of the configuration has been added; the configuration can be in one of two states.
A menu item that links to the Settings page has been added on the Config Overview page.
When clicking on an Error status on the Fleet Overview page, a dialog with the error details will open.
For more information, see Falcon Log Collector Manage your Fleet.
Fleet Management updates:
Added the `Basic Information` page with primary information of a specific configuration, e.g. name, description, and number of assigned instances.
The Config Editor used to create/modify LogScale Collector configurations in LogScale has been augmented with context-aware auto-completion, tooltips for keywords, and highlighting of invalid settings.
For more information, see Manage Remote Configurations.
Queries
Reduced the amount of memory used when multiple queries use the `match()` function with the same arguments. Previously, if you ran many queries that used the same file, the contents of the file would be represented in memory multiple times, once per query, which risked exhausting the server's memory if the files were large. With this change, the file contents are shared between queries and represented only once, enabling the server to run more queries and/or handle larger files.
For more information, see Lookup Files Operations.
Improvements to the query scheduler logic for "shelving", i.e., pausing queries considered too expensive. The pause/unpause logic is now more responsive and unpauses queries faster once they become eligible to run.
Functions
Performance improvements have been made to the `match()` query function in cases where `ignoreCase=true` is used together with either `mode=cidr` or `mode=string`.
The `base64Decode()` query function has been updated such that, when decoding to UTF-8, invalid code points are replaced with a placeholder character.
When IOCs are not available, the `ioc:lookup()` query function now produces an error. Previously, it only produced a warning.
The memory usage of the `selectLast()` and `groupBy()` functions has been improved.
Other
When the automatic segment rebalancing feature is enabled, the segment storage table is now ignored when evaluating whether dead ephemeral nodes can be removed automatically.
The `Create Repositories` permission now also allows LogScale Self-Hosted users to create repositories.
Packages
The size limit of packages' lookup files has been changed to adhere to the `MAX_FILEUPLOAD_SIZE` configuration parameter. Previously, the size limit was `1MB`.
For more information, see Exporting the Package.
Fixed in this release
UI Changes
The Search page would reload when using the browser's history navigation buttons. This issue has now been fixed.
An issue in the Usage page that could cause it to fail to show any data has been fixed.
The Usage page now shows an error if there are any warnings from the query.
The Fields Panel flyout displayed the bottom 10 values rather than the top 10 values. This issue has now been fixed.
For more information, see Displaying Fields.
Dashboards and Widgets
`""` was being discarded when creating URLs for interactions. This issue has now been fixed.
Attempting to remove a widget on a dashboard would sometimes remove a different widget than the one selected. This issue has been fixed.
The tooltip in the Time Chart widget would not show any data points. This issue has now been fixed.
Non-breaking space characters (ALT+Space) made Template Expressions unable to be resolved. This issue has been fixed.
`'_'` was not recognized as a valid first symbol for parameters when parsing queries. This issue has now been fixed.
Fixed an issue where clicking the Inspect link in alert notifications would land on a missing page.
The values of a FixedList Parameter on a dashboard would change sort ordering after being exported to a YAML template file. This issue has been fixed.
Queries
In clusters with bucket storage, queries taking more than 90 minutes could spuriously fail with a complaint that segments were missing. The issue has now been fixed.
The Export query result to file dialog would not close in some cases. This issue has now been fixed.
Restart of queries based on lookup files has been fixed: only live queries need restarting from changes to uploaded files that they depend on. Scheduled Searches and static queries use the version of the file present when they start, and run to completion.
Functions
The `groupBy()` function would not always warn upon exceeding the default `limit`. This issue has now been fixed.
Fixed a regression in `join()` validation, which was introduced in version Falcon LogScale 1.80.0 GA (2023-03-07).
`timeChart()` provided with `unit` and `groupBy()` as the aggregation function would not warn on exceeding the default `groupBy()` `limit`. This issue has now been fixed.
Other
An issue that would cause query workers to handle mini-segments for longer than intended has been fixed.
The following audit log issues have been fixed:
The audit log logged the name of the view owning the view bindings instead of the repository it links to. The name now matches the id in the binding log entry.
The audit log for a view update did not use the updated view, but the view data before the update.
An uploaded file would sometimes disappear immediately after uploading. This issue has been fixed.
An issue that would cause bucket downloads to retry indefinitely for certain types of segments has been fixed.
Fixed bucket downloads that could fail if the segment they were fetching disappeared from global.
In ephemeral-disk mode, a node can now be removed via the UI when it is dead, regardless of any data present on the node: ephemeral mode knows how to ensure durability even when nodes are lost without notice.
For more information, see Ephemeral Nodes and Cluster Identity.
Falcon LogScale 1.87.0 GA (2023-04-25)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.87.0 | GA | 2023-04-25 | Cloud | 2024-05-31 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
New features and improvements
Dashboards and Widgets
When using the Edit in search view item on a dashboard widget, the values set in parameters in the query are also carried over into the search view.
When Setting Up a Dashboard Interaction, the `{{ startTime }}` and `{{ endTime }}` special variables now work differently depending on whether the query, widget, or dashboard is running in Live mode. They now work as follows:
In a live query or dashboard, the `startTime` variable will contain the relative time, whereas `endTime` will be empty.
In a non-live query or dashboard, `startTime` will be the absolute start time when the query was last run; `endTime`, similarly, will have the end time of when the query was last run.
Functions
The `base64Decode()` query function has been updated such that, when decoding to UTF-8, invalid code points are replaced with a placeholder character.
The memory usage of the `selectLast()` and `groupBy()` functions has been improved.
Packages
The size limit of packages' lookup files has been changed to adhere to the `MAX_FILEUPLOAD_SIZE` configuration parameter. Previously, the size limit was `1MB`.
For more information, see Exporting the Package.
Fixed in this release
UI Changes
Dashboards and Widgets
Attempting to remove a widget on a dashboard would sometimes remove a different widget than the one selected. This issue has been fixed.
Non-breaking space characters (ALT+Space) made Template Expressions unable to be resolved. This issue has been fixed.
Queries
In clusters with bucket storage, queries taking more than 90 minutes could spuriously fail with a complaint that segments were missing. The issue has now been fixed.
Functions
Falcon LogScale 1.86.0 GA (2023-04-18)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.86.0 | GA | 2023-04-18 | Cloud | 2024-05-31 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
New features and improvements
Automation and Alerts
When creating a new Alert, you now have a pulldown menu that suggests labels that you've previously created for other alerts. The same applies to Scheduled Searches.
For more information, see Creating Alerts.
Configuration
New configuration parameters have been added allowing control of `client.rack` for our Kafka consumers:
`KAFKA_CLIENT_RACK_ENV_VAR` — this variable is read to find the name of the variable that holds the value. It defaults to `ZONE`, which is the same variable applied to the LogScale node zones by default.
Fixed in this release
Dashboards and Widgets
`""` was being discarded when creating URLs for interactions. This issue has now been fixed.
`'_'` was not recognized as a valid first symbol for parameters when parsing queries. This issue has now been fixed.
Falcon LogScale 1.85.0 GA (2023-04-13)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.85.0 | GA | 2023-04-13 | Cloud | 2024-05-31 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Upgrades
Changes that may occur or be required during an upgrade.
Other
SnakeYAML has been upgraded to 2.0 to address CVE-2022-1471.
New features and improvements
UI Changes
Improvements in UI table visualization: even long column headers' text is now always left-aligned (instead of center-aligned and stacked on top of each other) and uses a different color.
Organization-level query blocking has been added to the Organization Settings UI.
For more information, see Organization Query Monitor.
Automation and Alerts
Clicking the button in Alerts will now show every unique label that has been created on any alert in the same repository. This means that you don't need to retype a label when you want to add the same label to another alert. This feature also applies to Scheduled Searches.
GraphQL API
The following GraphQL mutations can now also be performed with the `ChangeOrganizationPermissions` permission:
The following GraphQL mutations can now also be performed with the `ChangeSystemPermissions` permission:
The following GraphQL queries and mutations can now also be performed with either the `ChangeOrganizationPermissions` or `ChangeSystemPermissions` permission, depending on the group:
The permissions required in order to list IP filters have been updated. You can now also list IP filters with one of the following permissions:
Configuration
Using the storage class "S3 Intelligent-Tiering" in AWS S3 selectively on files that LogScale knows continues to be supported: it is controlled by the new dynamic configuration `BucketStorageUploadInfrequentThresholdDays`, which sets the minimum number of days of remaining retention for the data in order to switch from the default "S3 Standard" to the "Intelligent" tier. The decision is made only at the point of upload to the bucket; existing objects in the bucket are not modified.
The bucket must be configured to not allow the optional Archive Access and Deep Archive Access tiers, as those do not have instant access, which is required for LogScale. As a consequence, do not enable automatic archiving within the S3 Intelligent-Tiering storage class.
The new configuration parameter `SEGMENT_READ_FADVICE` has been introduced.
Dashboards and Widgets
Introduced a new setting for dashboard parameters configuration to defer query execution: the dashboard will not execute any queries on page load until the user provides a value to the parameter.
For more information, see Configuring Dashboard Parameters.
The new interaction type has been introduced, allowing users to create an interaction that will trigger a new search.
For more information, see Manage Dashboard Interactions, Creating Event List Interactions.
Multivalued parameters have been introduced to pass an array of values to the query. The support is limited to the Dashboards page.
For more information, see Multi-value Parameters.
Log Collector
Fleet Management updates:
Added the `Basic Information` page with primary information of a specific configuration, e.g. name, description, and number of assigned instances.
The Config Editor used to create/modify LogScale Collector configurations in LogScale has been augmented with context-aware auto-completion, tooltips for keywords, and highlighting of invalid settings.
For more information, see Manage Remote Configurations.
Queries
Improvements to the query scheduler logic for "shelving", i.e., pausing queries considered too expensive. The pause/unpause logic is now more responsive and unpauses queries faster once they become eligible to run.
Functions
When IOCs are not available, the `ioc:lookup()` query function now produces an error. Previously, it only produced a warning.
Other
The `Create Repositories` permission now also allows LogScale Self-Hosted users to create repositories.
Worker-level query scheduling has been adjusted to avoid long-term starvation of expensive queries.
Fixed in this release
Functions
Fixed a regression in `join()` validation, which was introduced in version Falcon LogScale 1.80.0 GA (2023-03-07).
Fixed an issue where a query with the `join()`, `selfJoin()`, or `selfJoinFilter()` functions would sometimes get cancelled.
Other
An issue that would cause query workers to handle mini-segments for longer than intended has been fixed.
The following audit log issues have been fixed:
The audit log logged the name of the view owning the view bindings instead of the repository it links to. The name now matches the id in the binding log entry.
The audit log for a view update did not use the updated view, but the view data before the update.
An issue that would cause bucket downloads to retry indefinitely for certain types of segments has been fixed.
Falcon LogScale 1.84.0 Not Released (2023-04-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.84.0 | Not Released | 2023-04-04 | Internal Only | 2024-04-30 | No | 1.44.0 | No |
Available for download two days after release.
Not released.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
Falcon LogScale 1.83.0 GA (2023-03-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.83.0 | GA | 2023-03-28 | Cloud | 2024-05-31 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.
New features and improvements
UI Changes
Event List Interactions are now accessible from the Repository and View Settings page.
Automation and Alerts
The default time window for Alerts has been updated:
When creating an alert from the Alerts page, the default query time window has been changed to match the default throttle time.
When creating an alert from the Search page, the default Throttle period has been changed to match that of the query time window set.
For more information, see Creating Alerts.
GraphQL API
The querySearchDomain GraphQL query now allows you to search for Views and Repositories based on your permissions — previously, enforcing specific permissions caused errors.
Dashboards and Widgets
You can now save interactions with a saved query on the Search page. Interactions in saved queries are also supported in Packages.
For more information, see Creating Event List Interactions.
The combo box has been updated to show multiple selections as "pills".
You can now delete or duplicate Event List Interactions from the Interactions overview page.
For more information, see Deleting & Duplicating Event List Interactions.
Interactive elements in visualizations now have the pointer cursor.
Functions
Performance improvements have been made to the `match()` query function in cases where `ignoreCase=true` is used together with either `mode=cidr` or `mode=string`.
Fixed in this release
Other
Fixed bucket downloads that could fail if the segment they were fetching disappeared from global.
Falcon LogScale 1.82.4 LTS (2023-11-20)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.82.4 | LTS | 2023-11-20 | Cloud | 2024-04-30 | No | 1.44.0 | No |
TAR Checksum | Value |
---|---|
MD5 | 702decdef1d72545f0357091f3623c08 |
SHA1 | 49b14563b7ea0d01f87cb78e25e99fcab603c2c8 |
SHA256 | d38ea4ab551ab0f826eab12e772ee78a3870d3d73cfd83bcb4acbeddeb44dd70 |
SHA512 | e039bfaa94a9c9a4c96cd7350b21ff035fd73d1800514ec53aa416ca555178eb87c99da0c939e2f5366909c5fd63526b0d598318f5672da4d016ef788155a4fe |
Docker Image | SHA256 Checksum |
---|---|
humio | f801fd236ff5729012c51c035c1607be2fda7909f79843c82af7535b20e6c6f1 |
humio-core | 0a024a74995adcf6ce9d225e731af34bbfdbe5dee016162a3b6ae073f26ce4c4 |
kafka | fd0838840877fadce404c233cb5a4000e31361e9ee49a8ab95cb36c66a70d67b |
zookeeper | 8022e850fe9e6d38c158d8ea8c03a5f88b0607bd1cc2a5cf96219ab3f87db00f |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.82.4/server-1.82.4.tar.gz
These notes include entries from the following previous releases: 1.82.0, 1.82.1, 1.82.2, 1.82.3
Bug fixes and updates.
New features and improvements
UI Changes
Improvements have been made to the Fields Panel, which would flicker when switching between the Results and Events tabs while the query was live. It now displays the fields of the aggregated query when on the Results tab, and the fields of the events query when on the Events tab.
Queries
Added backend support for organization-level query blocking. Actors with the `BlockQueries` permission are able to block and stop queries running within their organization.
Functions
Other
Added an optional `global` argument to the `stopAllQueries`, `stopStreamingQueries`, `stopHistoricalQueries`, `blockedQueries`, `addToBlocklistById`, and `addToBlocklist` GraphQL operations. The default is `false`, i.e. the operation applies within your own organization only.
Worker-level query scheduling has been adjusted to avoid long-term starvation of expensive queries.
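As a sketch, the new argument can be passed like this (the exact return selection may differ from the schema; check the GraphQL API reference):

```graphql
# With global: true, queries across all organizations are stopped
# (requires system-level access); the default, global: false,
# only affects queries within your own organization.
mutation {
  stopAllQueries(global: true)
}
```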
Fixed in this release
Security
Verified that LogScale does not use the affected Akka dependency component in CVE-2023-31442 by default, and have taken additional precautions to notify customers.
For:
LogScale Cloud/Falcon Long Term Repository:
This CVE does not impact LogScale Cloud or LTR customers.
LogScale Self-Hosted:
Exposure to risk:
Potential risk is only present if a self-hosted customer has modified the Akka parameters to a non-default value of `akka.io.dns.resolver = async-dns` during initial setup. By default, LogScale does not use this configuration parameter.
CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.
Steps to mitigate:
Setting `akka.io.dns.resolver` to its default value (`inet-address`) will mitigate the potential risk.
On versions older than 1.92.0:
Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset or pass a different value to the parameter.
CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.
UI Changes
Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.
Fixed some missing Field Interactions options for the data type in the Event List.
For more information, see Field Data Types.
API
Fixed an issue with API Explorer that could fail to load in some configurations when using cookie authentication.
Dashboards and Widgets
The dropdown menu for dashboard parameter suggestions is now faster and can handle several thousand entries without blocking the UI.
For more information, see Manage Dashboard Parameters.
Functions
Fixed an issue where a query with `join()`, `selfJoin()`, or `selfJoinFilter()` functions would sometimes get cancelled.
Other
Fixed a permission issue for LogScale Self-Hosted having a dependency on the `ManageOrganizations` system permission, which should not apply to that environment; the `ManageCluster` system permission is now sufficient for Self-Hosted.
Fixed an issue where searching within small subsets of the latest 24 hours in combination with hash filters could cause events that belonged in the time range to be excluded from the result. The visible symptom was that narrowing the search span provided more hits.
Fixed an issue that occurred when creating users: when multiple user creation requests were sent at the same time, multiple users were in some cases created with the same name.
Fixed an issue that could cause recently merged mini-segments to be excluded from searches after a reboot.
Falcon LogScale 1.82.3 LTS (2023-07-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.82.3 | LTS | 2023-07-04 | Cloud | 2024-04-30 | No | 1.44.0 | No |
TAR Checksum | Value |
---|---|
MD5 | 0823b20fe8ac1627377ee9c8088915f4 |
SHA1 | c702802c7912a62575d0df66c0e50fbad77ffae2 |
SHA256 | 3ccba6e2d9911d345f19ddfb57ac25614a0ecc566d0c95443fd8cbbd010a4132 |
SHA512 | fc8627e24fc82022248520e2dd594ef74fb02228dfe51aa7d7a1e7a241d0d27606a68a2c1dfdecdd72547683b6b86d70930773e2798ccfadf9b8b18fc151b2eb |
Docker Image | SHA256 Checksum |
---|---|
humio | 159d69c1521724d90bfefc7a5fe22f23ad3cfc386291f5e93828e27d05bb5f34 |
humio-core | a30a197ed0d8e829db54e8738a583275a01717b397e0beac8523e921448bbc4a |
kafka | e808836ef57c65de5cd5c4b39fadd4f62ea0589d7781cab3c70b263713efbff7 |
zookeeper | 771385427652ccd1d166e2e98dd09fcfcc84297303db6a571a4007d5f104dfc7 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.82.3/server-1.82.3.tar.gz
These notes include entries from the following previous releases: 1.82.0, 1.82.1, 1.82.2
Bug fixes and updates.
New features and improvements
UI Changes
Improvements have been made to the Fields Panel, which would flicker when switching between the Results and Events tabs while the query was live. It now displays the fields of the aggregated query when on the Results tab, and the fields of the events query when on the Events tab.
Queries
Added backend support for organization-level query blocking. Actors with the `BlockQueries` permission are able to block and stop queries running within their organization.
Functions
Other
Added an optional `global` argument to the `stopAllQueries`, `stopStreamingQueries`, `stopHistoricalQueries`, `blockedQueries`, `addToBlocklistById`, and `addToBlocklist` GraphQL operations. The default is `false`, i.e. the operation applies within your own organization only.
Worker-level query scheduling has been adjusted to avoid long-term starvation of expensive queries.
Fixed in this release
Security
Verified that LogScale does not use the affected Akka dependency component in CVE-2023-31442 by default, and have taken additional precautions to notify customers.
For:
LogScale Cloud/Falcon Long Term Repository:
This CVE does not impact LogScale Cloud or LTR customers.
LogScale Self-Hosted:
Exposure to risk:
Potential risk is only present if a self-hosted customer has modified the Akka parameters to a non-default value of `akka.io.dns.resolver = async-dns` during initial setup. By default, LogScale does not use this configuration parameter.
CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.
Steps to mitigate:
Setting `akka.io.dns.resolver` to its default value (`inet-address`) will mitigate the potential risk.
On versions older than 1.92.0:
Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset or pass a different value to the parameter.
CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.
UI Changes
Fixed some missing Field Interactions options for the data type in the Event List.
For more information, see Field Data Types.
API
Fixed an issue with API Explorer that could fail to load in some configurations when using cookie authentication.
Dashboards and Widgets
The dropdown menu for dashboard parameter suggestions is now faster and can handle several thousand entries without blocking the UI.
For more information, see Manage Dashboard Parameters.
Functions
Fixed an issue where a query with `join()`, `selfJoin()`, or `selfJoinFilter()` functions would sometimes get cancelled.
Other
Fixed a permission issue for LogScale Self-Hosted having a dependency on the `ManageOrganizations` system permission, which should not apply to that environment; the `ManageCluster` system permission is now sufficient for Self-Hosted.
Fixed an issue where searching within small subsets of the latest 24 hours in combination with hash filters could cause events that belonged in the time range to be excluded from the result. The visible symptom was that narrowing the search span provided more hits.
Fixed an issue that occurred when creating users: when multiple user creation requests were sent at the same time, multiple users were in some cases created with the same name.
Fixed an issue that could cause recently merged mini-segments to be excluded from searches after a reboot.
Falcon LogScale 1.82.2 LTS (2023-06-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.82.2 | LTS | 2023-06-22 | Cloud | 2024-04-30 | No | 1.44.0 | No |
TAR Checksum | Value |
---|---|
MD5 | 155c984fb2f3319e42ca151704a5b3f1 |
SHA1 | 05cae24fac6be7d8cc24bb29c11fa068961d76e7 |
SHA256 | 9a0ca4411a7fbd12dbbbe9582498d9f26552c72e68bfca31f6e812b4bb5bcc81 |
SHA512 | bcf589535e0e5c13ecab26f8ba330dd99f190228337f780d0c77d7ca012d67156ad26916471da4e21e7847074f1b654638117626f34e2c509cd2c736faaa90aa |
Docker Image | SHA256 Checksum |
---|---|
humio | 3efbba813293749fc2a6c7bd8332f17110ca33f92a5aa6ea82dab158defe4456 |
humio-core | 92178d5950198b8a9c5df7764385248f7a58d72c2c3f9eae74f3c7b8492b91b1 |
kafka | 4485b9c15d8a3e9cc122f548e68a6daaba983cb8479f553d8ea11ba4de7c09c5 |
zookeeper | c045a185ddc1f0d852a4f73b6668db78c580b0f17287458d78cc723fce20a778 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.82.2/server-1.82.2.tar.gz
These notes include entries from the following previous releases: 1.82.0, 1.82.1
Security fixes.
New features and improvements
UI Changes
Improvements have been made to the Fields Panel, which would flicker when switching between the Results and Events tabs while the query was live. It now displays the fields of the aggregated query when on the Results tab, and the fields of the events query when on the Events tab.
Queries
Added backend support for organization-level query blocking. Actors with the `BlockQueries` permission are able to block and stop queries running within their organization.
Functions
Other
Added an optional `global` argument to the `stopAllQueries`, `stopStreamingQueries`, `stopHistoricalQueries`, `blockedQueries`, `addToBlocklistById`, and `addToBlocklist` GraphQL operations. The default is `false`, i.e. the operation applies within your own organization only.
Worker-level query scheduling has been adjusted to avoid long-term starvation of expensive queries.
Fixed in this release
Security
Verified that LogScale does not use the affected Akka dependency component in CVE-2023-31442 by default, and have taken additional precautions to notify customers.
For:
LogScale Cloud/Falcon Long Term Repository:
This CVE does not impact LogScale Cloud or LTR customers.
LogScale Self-Hosted:
Exposure to risk:
Potential risk is only present if a self-hosted customer has modified the Akka parameters to a non-default value of `akka.io.dns.resolver = async-dns` during initial setup. By default, LogScale does not use this configuration parameter.
CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.
Steps to mitigate:
Setting `akka.io.dns.resolver` to its default value (`inet-address`) will mitigate the potential risk.
On versions older than 1.92.0:
Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset or pass a different value to the parameter.
CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.
UI Changes
Fixed some missing Field Interactions options for the data type in the Event List.
For more information, see Field Data Types.
API
Fixed an issue with API Explorer that could fail to load in some configurations when using cookie authentication.
Dashboards and Widgets
The dropdown menu for dashboard parameter suggestions is now faster and can handle several thousand entries without blocking the UI.
For more information, see Manage Dashboard Parameters.
Functions
Fixed an issue where a query with `join()`, `selfJoin()`, or `selfJoinFilter()` functions would sometimes get cancelled.
Other
Fixed a permission issue for LogScale Self-Hosted having a dependency on the `ManageOrganizations` system permission, which should not apply to that environment; the `ManageCluster` system permission is now sufficient for Self-Hosted.
Fixed an issue that occurred when creating users: when multiple user creation requests were sent at the same time, multiple users were in some cases created with the same name.
Fixed an issue that could cause recently merged mini-segments to be excluded from searches after a reboot.
Falcon LogScale 1.82.1 LTS (2023-05-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.82.1 | LTS | 2023-05-15 | Cloud | 2024-04-30 | No | 1.44.0 | No |
TAR Checksum | Value |
---|---|
MD5 | fb4b50f131a07565a3f26b880d56fd4c |
SHA1 | 1263a3d79beba7826184e3ad25ecd0e943c7ea9e |
SHA256 | 0d941a09269ae5efaa934568009a4ac4ea8a3ef7100332afe253c82ff75ec17b |
SHA512 | 0277eab1d3069f5369f978b9d582f3093c6856563d4512c590cbed3a381e1fd8c45ddb4185e71c2ff5c2ce9dc2f97822fbe466574c0e638e90c2132db9980e06 |
Docker Image | SHA256 Checksum |
---|---|
humio | 87eaef9b043dd23a65eefd749e9540b27d554fd1f41484695d77815544347937 |
humio-core | 145cb7a5d35dffc62e7239b007721454ba2bd85af50496cc4c461bafbcd472d9 |
kafka | 31a6e03e68efd4d7a2a3be0858726d77c6c7e7be9eadecff52144e8e349b195e |
zookeeper | 4842ee4e9848b389d97e155a9504e9e9c54ae4312c9d47b23529e9bea4fcf85a |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.82.1/server-1.82.1.tar.gz
These notes include entries from the following previous releases: 1.82.0
Bug fixes and updates.
New features and improvements
UI Changes
Improvements have been made to the Fields Panel, which would flicker when switching between the Results and Events tabs while the query was live. It now displays the fields of the aggregated query when on the Results tab, and the fields of the events query when on the Events tab.
Queries
Added backend support for organization-level query blocking. Actors with the `BlockQueries` permission are able to block and stop queries running within their organization.
Functions
Other
Added an optional `global` argument to the `stopAllQueries`, `stopStreamingQueries`, `stopHistoricalQueries`, `blockedQueries`, `addToBlocklistById`, and `addToBlocklist` GraphQL operations. The default is `false`, i.e. the operation applies within your own organization only.
Worker-level query scheduling has been adjusted to avoid long-term starvation of expensive queries.
Fixed in this release
UI Changes
Fixed some missing Field Interactions options for the data type in the Event List.
For more information, see Field Data Types.
API
Fixed an issue with API Explorer that could fail to load in some configurations when using cookie authentication.
Dashboards and Widgets
The dropdown menu for dashboard parameter suggestions is now faster and can handle several thousand entries without blocking the UI.
For more information, see Manage Dashboard Parameters.
Functions
Fixed an issue where a query with `join()`, `selfJoin()`, or `selfJoinFilter()` functions would sometimes get cancelled.
Other
Fixed a permission issue for LogScale Self-Hosted having a dependency on the `ManageOrganizations` system permission, which should not apply to that environment; the `ManageCluster` system permission is now sufficient for Self-Hosted.
Fixed an issue that occurred when creating users: when multiple user creation requests were sent at the same time, multiple users were in some cases created with the same name.
Fixed an issue that could cause recently merged mini-segments to be excluded from searches after a reboot.
Falcon LogScale 1.82.0 LTS (2023-04-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.82.0 | LTS | 2023-04-12 | Cloud | 2024-04-30 | No | 1.44.0 | No |
TAR Checksum | Value |
---|---|
MD5 | 51d366030dddbe5f3b5cf2bf54507d11 |
SHA1 | 877287bb9d431853a530ff647b692a8c40f8fad8 |
SHA256 | 191b56f4e1a9e54bcf284b0280cce865251970d7bd0cd6660f132520b0931432 |
SHA512 | 88e8219b8a60fe259e1953584cee167d2325abf28525ea70bead77998af11b5ebe0df5bcce0a3bef398db95b5f99ac1f3022d8ab24f88aef22bde42f0f4af180 |
Docker Image | SHA256 Checksum |
---|---|
humio | 6087c619f855cf9a1d05c6aaf983feb3b1fdd7d051c8c769683541f45181da79 |
humio-core | 41089d22b64f5e7fb6e1337064afec7816ac84aa5602b1d98f6684fa16eb03e4 |
kafka | 3b3a4a610a5ab9b38dded8cad062aedf863651ca6f3f9c6b81bee753af9c6d6c |
zookeeper | e5887242f44220b313c6c11bb062ce1e3255d65ec9437bc58876ffbf1753ce0a |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.82.0/server-1.82.0.tar.gz
Bug fixes and updates.
New features and improvements
UI Changes
Improvements have been made to the Fields Panel, which would flicker when switching between the Results and Events tabs while the query was live. It now displays the fields of the aggregated query when on the Results tab, and the fields of the events query when on the Events tab.
Queries
Added backend support for organization-level query blocking. Actors with the `BlockQueries` permission are able to block and stop queries running within their organization.
Functions
Other
Added an optional `global` argument to the `stopAllQueries`, `stopStreamingQueries`, `stopHistoricalQueries`, `blockedQueries`, `addToBlocklistById`, and `addToBlocklist` GraphQL operations. The default is `false`, i.e. the operation applies within your own organization only.
Worker-level query scheduling has been adjusted to avoid long-term starvation of expensive queries.
Fixed in this release
API
Fixed an issue with API Explorer that could fail to load in some configurations when using cookie authentication.
Dashboards and Widgets
The dropdown menu for dashboard parameter suggestions is now faster and can handle several thousand entries without blocking the UI.
For more information, see Manage Dashboard Parameters.
Functions
Fixed an issue where a query with `join()`, `selfJoin()`, or `selfJoinFilter()` functions would sometimes get cancelled.
Other
Fixed a permission issue for LogScale Self-Hosted having a dependency on the `ManageOrganizations` system permission, which should not apply to that environment; the `ManageCluster` system permission is now sufficient for Self-Hosted.
Fixed an issue that occurred when creating users: when multiple user creation requests were sent at the same time, multiple users were in some cases created with the same name.
Fixed an issue that could cause recently merged mini-segments to be excluded from searches after a reboot.
Falcon LogScale 1.81.0 GA (2023-03-14)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.81.0 | GA | 2023-03-14 | Cloud | 2024-04-30 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Removed
Items that have been removed as of this release.
Automation and Alerts
The deprecated REST Alert API has been removed.
Other
The deprecated REST Action API endpoint for testing actions has been removed.
Upgrades
Changes that may occur or be required during an upgrade.
Other
OpenSSL in Docker images has been upgraded to address CVE-2023-0286.
New features and improvements
UI Changes
The Query Monitor page is now available at the organization level. Users with the `Monitor queries` organization-level permission get access to the page, where they can see queries running in their organization.
For more information, see Query Monitor, Organization Query Monitor.
Automation and Alerts
The throttle field on alerts can now be imported and exported.
Configuration
The default value for `AUTOSHARDING_TRIGGER_DELAY_MS` has been raised from 20,000 to 3,600,000.
Ingestion
A new ingest endpoint `api/v1/ingest/json` for ingesting JSON objects and JSON arrays has been added.
For more information, see Ingesting Raw JSON Data.
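An illustrative request against the new endpoint might look as follows. The host, token, and payload below are placeholders; see Ingesting Raw JSON Data for the supported payload shapes:

```
# Sketch only - $LOGSCALE_HOST and $INGEST_TOKEN are placeholders.
curl -X POST "https://$LOGSCALE_HOST/api/v1/ingest/json" \
  -H "Authorization: Bearer $INGEST_TOKEN" \
  -H "Content-Type: application/json" \
  -d '[{"status": 200, "path": "/api/login"}, {"status": 500, "path": "/api/jobs"}]'
```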
Other
Event redaction will no longer rewrite mini-segments. Instead, the redaction will be delayed until all mini-segments that would be affected have been merged.
Fixed in this release
Falcon Data Replicator
Fixed a bug where testing new FDR feeds that use S3 Aliasing would fail for valid credentials.
Dashboards and Widgets
The following items have been fixed:
Parameter bindings would not be visible for imported dashboards when configuring interactions.
Imported dashboard containing interactions would be perceived as invalid.
For more information, see Manage Dashboard Interactions.
Functions
Fixed a bug where the query editor would wrongly claim that predicate functions used as match guards were missing an argument to the `field` parameter.
Other
Fixed some issues in the event redaction implementation which could cause the redaction to fail in rare cases.
Fixed an issue which could cause mini-segments to not all be on the same host for a short time, while those mini-segments were being merged. This could cause queries to be unable to query them.
A bug has been fixed that caused recent mini-segments to be missed in queries if the mini-segments were merged during the query.
Falcon LogScale 1.80.0 GA (2023-03-07)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.80.0 | GA | 2023-03-07 | Cloud | 2024-04-30 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Ingestion
Ingested events would not be limited in size if the bulk of the data in the event was in fields other than @rawstring. This will now be enforced. Events that exceed the limit on event size at ingest are handled as follows:
@rawstring is truncated to the maximum allowed length, and all other fields are dropped from the event.
@timestamp becomes the ingest time.
@timezone becomes UTC.
(This is identical to the previous handling of oversized @rawstring).
Upgrades
Changes that may occur or be required during an upgrade.
Other
Kafka client has been upgraded to 3.4.0.
Kafka broker has been upgraded to 3.4.0 in the Kafka container.
The container upgrade is performed for security reasons to resolve CVE-2022-36944, although Kafka should not be affected by it. If you wish to do a rolling upgrade of your Kafka containers, please refer to the Kafka upgrade guide.
New features and improvements
UI Changes
Whether one can create a new repository is now controlled by the `Create repository` permission in the UI.
Configuration
Removed `NEW_VHOST_SELECTION_ENABLED` as a configuration option. The option has been `true` by default since 1.70; an opt-out is no longer needed.
Dashboards and Widgets
Changed the query editor when editing dashboard queries to be the same that is used on the Search page.
Log Collector
New Template feature added to the Fleet Management page, which allows you to:
upload a YAML file when creating a new configuration
export either the published or draft version of a configuration file.
For more information, see Fleet Management Overview.
Queries
Added backend support for an organization-level query monitor. The new `MonitorQueries` permission now allows viewing queries that are running within the organization.
Functions
Saved queries can now be used in subqueries, see Using Functions as Arguments to Other Functions.
Packages
Interactions installed from a package use the new repository where the package is installed.
Fixed in this release
UI Changes
A high CPU usage in the UI since LogScale 1.75 when the Time Zone Selector dropdown was displayed has now been fixed.
Configuration
Automatic generation and updating of the digest partitions table has been enabled, and manual editing is no longer supported. See Digest Rules for reference.
The table will be kept up to date based on the following node-level settings (see Starting a New LogScale Node):
`ZONE` defines a node's zone. The generated table will attempt to distribute segments across as many zones as possible.
Nodes will appear in the table more often if they have many cores. Nodes with fewer cores will appear less often.
Nodes with a `NODE_ROLES` setting that excludes digest work will not appear in the table.
A cluster-level setting has also been introduced: the setDigestReplicationFactor GraphQL mutation configures the replication factor to use for the table. This is also settable via the environment variable `DEFAULT_DIGEST_REPLICATION_FACTOR`.
Automatic management of the digest partition table is now controlled by the environment variable `DEFAULT_ALLOW_UPDATE_DESIRED_DIGESTERS`. We intend to remove the option to handle digest partitions manually in the future.
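Taken together, a node's contribution to the generated table can be sketched with settings like these (all values are illustrative):

```
# Node-level settings (per host)
ZONE=eu-west-1a       # the zone used when distributing digest partitions
NODE_ROLES=all        # a role set that excludes digest work removes the node from the table

# Cluster-level default, as an alternative to the setDigestReplicationFactor mutation
DEFAULT_DIGEST_REPLICATION_FACTOR=2
```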
Dashboards and Widgets
Keyboard combinations cmd+Z/Ctrl+Z no longer deletes the query on dashboard widgets.
Functions
A performance issue in `collect()` when it collected many values has been fixed.
Validation of `join()` and join-like functions in conditional expressions and subqueries not having positional information has been fixed.
Fixed an issue where joins in `case` statements, `match` statements, and subqueries would mark the entire query as erroneous.
Other
Some mini-segments would be excluded from queries in cases where those mini-segments had previously been merged, but the merge was reverted.
Two hosts booted at around the same time would conflict on which vhost number to use, causing one of the hosts to crash.
Avoid caching warnings that some data segments could not be found on any servers. This prevents queries from displaying this warning spuriously.
Mini-segments would be removed too early from nodes which were querying them, causing queries to be missing some data.
Falcon LogScale 1.79.0 GA (2023-02-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.79.0 | GA | 2023-02-28 | Cloud | 2024-04-30 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Configuration
The behavior of nodes using the `ingestonly` role has changed. Such nodes previously did not write to global and did not register themselves in the cluster. They now do both.
The old behavior can be restored by setting `NEW_INGEST_ONLY_NODE_SEMANTICS=false`. If you do this, please reach out to Support and outline your need, as this option will be removed in the near future.
New features and improvements
Automation and Alerts
When creating or editing Alerts and Scheduled Searches, it is now possible to specify another user the alert or scheduled search should run as, via the new organization permission `ChangeTriggersToRunAsOtherUsers`.
It is now checked that the user selected to run the alert or scheduled search has permission to run it. Previously, that was first checked when trying to run the alert or scheduled search.
The new feature checks whether the user trying to create or edit an alert or scheduled search has permission to change it and run it as another user. If the feature is enabled, you can select the user to run an alert or scheduled search as from a list of users.
See Creating Alerts and Scheduled Search Run on Behalf of for more information.
Functions
Fixed in this release
Falcon Data Replicator
Fixed a performance issue when setting `fileDownloadParallelism` to more than `1`. See Adjust Polling Nodes Per Feed for more information.
UI Changes
The Event Distribution Histogram wouldn't show properly after manipulation of the @timestamp field.
Dashboards and Widgets
Fixed dashboard links to the same dashboard, as they would not correctly update the parameters.
In visualizations using the `timeChart()` or `bucket()` functions, an empty page was shown when no results were returned. Consistently with other visualizations, a no-result message is now displayed, such as No results in active time window or Search Completed. No results found, depending on whether Live mode is selected.
Falcon LogScale 1.78.0 GA (2023-02-21)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.78.0 | GA | 2023-02-21 | Cloud | 2024-04-30 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Configuration
Starting from the 1.78 release, the default value for the `MAX_INGEST_REQUEST_SIZE` configuration will be reduced from `1 GB` to `32 MB`.
This value limits the size of ingest requests and rejects oversized requests.
If the request is compressed within HTTP, then this restricts the size after decompressing.
New features and improvements
UI Changes
An explicit logout message now indicates that the user's session has been terminated.
The Time Zone Selector now shows the timezone as instead of when the offset is zero.
Clone items have all been replaced with Duplicate in the UI to be consistent with what they actually do.
Automation and Alerts
When updating or creating new Actions, any server errors will be displayed in a summary under the form. The server errors in the summary will now specify the form field title where the error occurred, to easily identify where the error is.
Removed the side panel when creating or editing Alerts or Scheduled Searches.
Configuration
The default value of `MAX_INGEST_REQUEST_SIZE` has been reduced from `1024 MB` to `32 MB`. This limits the size of ingest requests and rejects oversized requests. If the request is compressed within HTTP, then this restricts the size after decompression.
Functions
The `array:filter()` function is now generally available.
Introduced the new query function `bitfield:extractFlags()`.
More time format modifiers are now supported in the `format()` function:
Full and abbreviated month and day-of-week names, and the century
Date/time composition format `Day Mon DD HH:MM:SS Zone YYYY`, e.g., `Tue Jun 22 16:45:05 GMT+1 1993`.
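As a sketch, and assuming the new modifiers follow the usual strftime-style letters, the composition format could be produced like this:

```logscale
// Sketch only - the modifier letters are assumed to be strftime-style.
// Expected shape of the output: "Tue Jun 22 16:45:05 GMT+1 1993".
format("%a %b %d %H:%M:%S %Z %Y", field=[@timestamp], as=formattedTime)
```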
Other
"Sticky" autoshards no longer prevent the system from tuning their value entirely; they only prevent decreasing the number of shards. The cluster is allowed to raise the number of shards on datasources when it needs to, including for datasources that were set as sticky using the REST API.
An enhancement has been made so that when the number of ingest partitions is increased, fresh partitions are assigned to all non-idle datasources based on the new set of partitions. Before this change, only new datasources (new tag combinations) would use the new partitions. Auto-balancing does not start if there are nodes in the cluster running versions prior to 1.78.0.
Fixed in this release
UI Changes
When exporting a dashboard, alert or scheduled search as a template, the labels' field was missing in the exported YAML.
For more information, see Managing Alerts, Scheduled Searches, Dashboards & Widgets.
Double-clicking in the Event List would open the Inspection Panel instead of making a text selection. It now correctly selects the word being double-clicked.
Automation and Alerts
A typo has been fixed in message ActionWithIdNotFound.
GraphQL API
Pending deletes that would cause nodes to fail to start, reporting a NullPointerException, have been fixed.
Dashboards and Widgets
A newly added, unconfigured dashboard parameter could not be deleted again. This issue has been fixed.
Queries
Updates to the query partition table now only change partitions with dead nodes. This allows queries to continue without requiring resubmission when a previously unknown node joins the cluster.
Hosts listed in the query partition table are now kept up to date as those hosts restart. This prevents an issue where removing too many nodes from a cluster could stop queries from running.
Nodes configured not to run queries no longer start queries locally when the query request cannot be proxied.
Other
Fixed ingest-only nodes that would fail all requests to
/dataspaces
and /repositories
.
Falcon LogScale 1.77.0 GA (2023-02-14)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.77.0 | GA | 2023-02-14 | Cloud | 2024-04-30 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Configuration
Starting from the 1.78 release, the default value for the
MAX_INGEST_REQUEST_SIZE
configuration will be reduced from 1 GB to 32 MB.
This value limits the size of ingest requests and rejects oversized requests.
If the request is compressed within HTTP, the limit applies to the size after decompression.
Behavior Changes
Scripts or environments that rely on the following behavior should be checked and updated:
Ingestion
It is no longer possible to list ingest tokens for system repositories.
New features and improvements
UI Changes
Filtering and group-by icons have been added to the Fields Panel and Inspection Panel detail views.
Documentation
The new Template Language reference information section has been added.
Dashboards and Widgets
Hold ⇧ (Shift) to show unformatted values. Hold ⌥ (Alt on Windows or Option on Mac) to show full legend labels.
startTime
,endTime
, and parameter variables are now also available when working with Template Language expressions on the Search page.
Functions
Introduced the
array:reduceAll()
function.

Like other released Array Query Functions, it requires square brackets.
Other
Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).
Adding more Repositories & Views to a group is now done inside a dialog.
Packages
Repository interactions are now supported in Packages. When exporting a package with dashboard link interactions referencing a dashboard also included in the package, that reference is updated to reflect this in the resulting zip file.
Fixed in this release
Storage
Fixed mini-segment fetches that failed to complete properly during queries when the number of mini-segments involved was too large.
Job-to-node assignment in LogScale has been reworked. Jobs that only needed to run on a subset of nodes in the cluster — such as the job for firing alert notifications or the job enforcing retention settings — would previously select which hosts were responsible for executing the job based on the segment storage table.
The selection is now based on consistent hashing, which means the job assignments should automatically follow the set of live nodes.
It is possible to observe where a given job is running based on logs found with the query
class=*JobAssignments*
.
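The consistent-hashing idea can be sketched with rendezvous (highest-random-weight) hashing; this is an assumption for illustration, as the release note does not specify the exact scheme used:

```python
import hashlib

def assign_job(job_name, live_nodes):
    """Rendezvous (highest-random-weight) hashing: pick the live node
    with the highest hash(job, node) score. A job moves only when its
    current owner leaves, so assignments follow the set of live nodes."""
    def score(node):
        return int(hashlib.sha256(f"{job_name}:{node}".encode()).hexdigest(), 16)
    return max(live_nodes, key=score)

nodes = ["node-1", "node-2", "node-3"]
owner = assign_job("retention-enforcer", nodes)
print(owner in nodes)  # True
```

The useful property is that removing a node other than the current owner does not move the job, which is why assignments stay stable as unrelated nodes come and go.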
Configuration
Nodes are now considered ephemeral only if they set
USING_EPHEMERAL_DISKS
to
true
. Previously, they were ephemeral if they either set that configuration or were using the httponly node role.
Dashboards and Widgets
When importing a dashboard from a template, some widget options (including LegendPosition) were being ignored and reverted to their default value.
The
Table
widget is able to display any search result, yet in the widget dropdown it would often say "Incompatible". It now indicates compatibility correctly. For event type results, the Event List visualization is still preferred and auto-selected.

When using the Export as template functionality, the label field was missing in the exported YAML.
For more information, see Dashboards & Widgets.
If you clone a widget and click Edit in Search View, you would be asked to discard your changes before editing, causing confusion. Now, Edit in Search View is not available until you save or discard using the buttons in the top bar.
For more information, see Manage Widgets.
The
Scatter Chart
widget visualization would, under some conditions, claim to be compatible with any result that has 3 or more fields, yet would not display anything unless the actual data was numeric. The Scatter Chart
visualization now properly detects compatibility and ignores any non-numeric fields in the query result.
Other
Fixed mini-segment downloads during queries, which could cause download retries to fail spuriously even when the download actually succeeded.
Linked to the correct SaaS EULA for SaaS customers.
Fixed a timeout when publishing to the global topic in Kafka, which resulted in input segments for merge being temporarily marked as broken.
Falcon LogScale 1.76.5 LTS (2023-07-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.76.5 | LTS | 2023-07-04 | Cloud | 2024-02-28 | No | 1.44.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | 78d4ef92b3bd402d031e960f9693930d |
SHA1 | 37283886b67fdee02b27254d842721654eaf248a |
SHA256 | e17d9e3ca998d00d0b17e9eae5802b1f2155994e678821b20a07754ef0982e8c |
SHA512 | db232a4058dc81ab97f306872441686d95e29bafdaf67f571bdc42ef7aaad267d9e56444ea23c9d37868ffc6c0e9286e650cee39fe1eead827f5bba0ffd5fe68 |
Docker Image | SHA256 Checksum |
---|---|
humio | dbc00dbca27fc9dfc119fe2e56c2413fec71cc3f9ddc18a3c3c14d61258c67d7 |
humio-core | bc6832bbc6d349d4513d18e4aae7c56f5f649ed4c39bbc9440e588a05ef471d8 |
kafka | d6772836c29001ddb614323802a4cbdbb93de6426af71caf34e49b5abba39f68 |
zookeeper | 46e8c5876ba32632f8e4868a7f6dc366c1c928ce761553fbbd517c5603947d4d |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.76.5/server-1.76.5.tar.gz
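To check a downloaded tarball against the published hashes above, a streaming SHA-256 helper like the following can be used (the file path is a placeholder):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large tarballs need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (placeholder path; compare the result against the table above):
# sha256_of("server-1.76.5.tar.gz")
```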
These notes include entries from the following previous releases: 1.76.1, 1.76.2, 1.76.3, 1.76.4
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Configuration
Starting from 1.78 release, the default value for the
MAX_INGEST_REQUEST_SIZE
configuration will be reduced from1 GB
to32 MB
.This value limits the size of ingest request and rejects oversized requests.
If the request is compressed within HTTP, then this restricts the size after decompressing.
Removed
Items that have been removed as of this release.
API
Removed the API for managing ingest tokens. This has long been deprecated and replaced by a GraphQL API.
Deprecation
Items that have been deprecated and may be removed in a future release.
The
REST
endpoint for testing actions, api/v1/repositories/repoId/alertnotifiers/actionId/test, has been deprecated. The new GraphQL mutations should be used instead.
Upgrades
Changes that may occur or be required during an upgrade.
Other
Java upgraded to 17.0.6 in Docker containers
Kafka upgraded to 3.3.2 for KAFKA-14379
Kafka client upgraded to 3.3.2
Kafka Docker container upgraded to 3.3.2
Kafka client has been upgraded to 3.4.0.
Kafka broker has been upgraded to 3.4.0 in the Kafka container.
The container upgrade is performed for security reasons to resolve CVE-2022-36944, although Kafka should not be affected by it. If you wish to do a rolling upgrade of your Kafka containers, always refer to the Kafka upgrade guide.
Packages
Optimizations in package handling require migration of data during upgrade. This migration is performed automatically. Please note:
While the upgrade of cluster nodes is ongoing, we recommend that you do not install or update any packages, as they may end up in an inconsistent state.
If a package ends up in a bad state during migration, it can be fixed simply by reinstalling the package.
You may experience that accessing the list of installed packages fails, and that creating new dashboards, alerts, parsers, etc. based on package templates does not work as intended.
This should only happen during the cluster upgrade, and should resolve itself once the cluster is fully upgraded.
If the cluster nodes are downgraded, any packages installed or updated while running the new version will not work, and we therefore recommend uninstalling or downgrading those packages prior to downgrading the cluster nodes.
New features and improvements
Security
When creating a new group, you now add the group and its permissions in the same multi-step dialog.
UI Changes
Changes have been made for the three-dot menu (⋮) used for Field Interactions:
It is now available from the Fields Panel and the Inspection Panel, see Searching Data.
Keyboard navigation has been improved.
For field interactions with live queries, the Fields Panel flyout will now display a fixed list of top values, keeping the values from the point in time when the menu was opened.
Suggestions in Query Editor will show for certain function parameters like time formats.
Introduced Search Interactions to add custom event list options for all users in a repository.
For more information, see Event List Interactions.
Event List Interactions are now sorted by name and repository name by default.
Tabs on the
Users
page are renamed: the former Groups and Permissions tab is now Permissions; the former Details tab is now Information. In addition, the Permissions tab is now displayed first; it is also the tab opened by default when navigating to a user from other places in the product. See Manage users & permissions for a description of roles and permissions in the UI.

The Search page now supports timezone picking. The timezone is set on the user's session and remembered between pages.
For more information, see Setting Time Zone.
You can now set your preferred timezone under Manage your Account.
Known field names are now shown as completion suggestions in Query Editor while you type.
Automation and Alerts
The list of Message Templates and Variables is no longer shown in the user interface when editing Actions; instead, a link to the documentation has been added.
GraphQL API
GraphQL API mutations have been added for testing actions without having to save them first. The added mutations are:
testEmailAction
testHumioRepoAction
testOpsGenieAction
testPagerDutyAction
testSlackAction
testSlackPostMessageAction
testUploadFileAction
testVictorOpsAction
testWebhookAction
The previous testAction mutation has been removed.
The new GraphQL API mutations' signature is almost the same as the create mutation for the same action, except that test actions require event data and a trigger name, as the previous testAction mutation did.
As a consequence, the
button is now always enabled in the UI.
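A hedged sketch of what a request payload to one of these mutations might look like; only the mutation name (testEmailAction) comes from the list above, while the argument and field names here are illustrative assumptions:

```python
import json

# Hypothetical HTTP payload shape for one of the new test mutations.
# Only the mutation name (testEmailAction) comes from the release note;
# the argument and field names below are illustrative assumptions.
mutation = """
mutation TestEmail($input: TestEmailActionInput!) {
  testEmailAction(input: $input)
}
"""

payload = json.dumps({
    "query": mutation,
    "variables": {
        "input": {
            "name": "on-call email",
            "recipients": ["oncall@example.com"],
            "triggerName": "disk-alert",        # trigger name is required
            "eventData": '{"host": "web-1"}',   # event data is required
        }
    },
})
print(payload)
```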
Configuration
The ability to keep the same merge target across digest changes is reintroduced. This feature was reverted in an earlier release due to a discovered issue where mini segments for an active merge target could end up spread across hosts. As that issue has been fixed, mini segments should now be stored on the hosts running digest for the target.
A new environment configuration variable
GLOB_ALLOW_LIST_EMAIL_ACTIONS
is introduced. It enables cluster-wide blocking of recipients of Action Type: Email actions that are not in the provided allow list.

New dynamic configuration
FlushSegmentsAndGlobalOnShutdown
. When set, and when
USING_EPHEMERAL_DISKS
is set to
true
, it forces all in-progress segments to be closed and uploaded to the bucket, and also forces a write (and upload) of a global snapshot during shutdown. When not set, this avoids the extra work, and thus shutdown time, spent flushing very recent segments, as those can be resumed on the next boot, assuming the next boot continues on the same Kafka epoch. The default is
false
, which allows faster shutdown.
Dashboards and Widgets
The
Single Value
widget now supports interactions on both the Search and Dashboard pages. See Manage Dashboard Interactions for more details on interactions.

Introduced Dashboard Interactions to add interactive elements to your dashboards.
For more information, see Manage Dashboard Interactions.
It is now possible to set a temporary timezone in dashboards, which will be read from the URL on page load e.g.
tz=Europe/Copenhagen
.
For more information, see Time Interval Settings.
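Constructing a dashboard URL that carries the temporary timezone can be sketched as follows (the base URL is a placeholder; only the tz parameter name comes from the note above):

```python
from urllib.parse import urlencode, urlsplit, parse_qs

# Append the temporary-timezone parameter described above to a dashboard
# URL (the base URL is a placeholder).
base = "https://example.logscale.local/dashboards/abc123"
url = f"{base}?{urlencode({'tz': 'Europe/Copenhagen'})}"
print(url)

# The dashboard reads it back on page load:
tz = parse_qs(urlsplit(url).query)["tz"][0]
print(tz)  # Europe/Copenhagen
```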
Log Collector
Falcon Log Collector Manage your Fleet now supports remote configuration of LogScale Collectors. This gives an administrator the option of managing the configuration of LogScale Collector instances in LogScale, instead of managing configuration files directly on the device where Falcon Log Collector is installed.
For more information, see Falcon Log Collector Manage your Fleet, Falcon Log Collector Releases.
Functions
Queries containing a
join()
function no longer run truly live when the query is set to Live. Instead, these queries run repeatedly at intervals determined by the query engine.
For more information, see Errors when Using Live join() Functions, Widgets with Live join() Functions,
join()
, Special Behaviour for Live Joins.

default()
now supports assigning the same value to multiple fields, by passing multiple field names to the
field
parameter.

The query function
holtwinters()
has been removed from the product.

Using
ioc:lookup()
in a query while the IOC service is disabled will now result in a failed query instead of a warning stating that there are partial results.

selectLast()
and
groupBy()
now use less state size, allowing for larger result sets.

The performance of
in()
is improved when matching with values that do not use the
*
wildcard.
Other
"Sticky" autoshards no longer prevent the system from tuning their value entirely; they only prevent decreasing the number of shards. The cluster is allowed to raise the number of shards on datasources when needed, including those set as sticky via the REST API.
Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).
Fixed in this release
Security
Verified that LogScale does not use the affected Akka dependency component in CVE-2023-31442 by default, and have taken additional precautions to notify customers.
For:
LogScale Cloud/Falcon Long Term Repository:
This CVE does not impact LogScale Cloud or LTR customers.
LogScale Self-Hosted:
Exposure to risk:
Potential risk is only present if a self hosted customer has modified the Akka parameters to a non default value of
akka.io.dns.resolver = async-dns
during initial setup.
By default, LogScale does not use this configuration parameter.
CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.
Steps to mitigate:
Setting akka.io.dns.resolver to default value (inet-address) will mitigate the potential risk.
On versions older than 1.92.0:
Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset or pass a different value to the parameter.
CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.
UI Changes
Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.
Fixed an issue that made switching UI theme report an error and only take effect for the current session.
Fixed an issue where the dashboard page would freeze when the value of a dashboard parameter was changed.
Fixed the UI not showing an error when a query is blocked due to query quota settings.
We have fixed tooltips in the query editor, which were hidden by other elements in the UI.
Automation and Alerts
For self-hosted: Automation for sending emails from Actions no longer uses the IP filter, so administrators no longer need to put Automation on the IP allowlist.
GraphQL API
Pending deletes that would cause nodes to fail to start, reporting a NullPointerException, have been fixed.
Storage
Fixed mini-segment fetches that failed to complete properly during queries when the number of mini-segments involved was too large.
The noise from
MiniSegmentMergeLatencyLoggerJob
has been reduced by being more conservative about when we log mini segments that are unexpectedly not being merged. We have made
MiniSegmentMergeLatencyLoggerJob
take datasource idleness into account.
API
Fixed an issue with API Explorer that could fail to load in some configurations when using cookie authentication.
Configuration
Nodes are now considered ephemeral only if they set
USING_EPHEMERAL_DISKS
to
true
. Previously, they were ephemeral if they either set that configuration or were using the httponly node role.

Fixed an issue where the IOC database could get out of sync. The IOC database will be re-downloaded upon upgrade; therefore, IOCs won't be completely available for a while after the upgrade.
Removed the compression type extreme for the COMPRESSION_TYPE configuration. Specifying extreme now selects the default value high, in order not to cause configuration errors for clusters that specify extreme. We suggest removing COMPRESSION_TYPE from your configuration unless you specify the only other non-default value, fast.
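The fallback described here can be sketched as a simple normalization step; the value names come from the note, while the function itself is illustrative:

```python
# Sketch of the fallback described above: "extreme" is no longer a valid
# COMPRESSION_TYPE, so it now falls back to the default, "high". The
# value names come from the note; the function itself is illustrative.
VALID_COMPRESSION_TYPES = {"high", "fast"}
DEFAULT_COMPRESSION_TYPE = "high"

def normalize_compression_type(value):
    return value if value in VALID_COMPRESSION_TYPES else DEFAULT_COMPRESSION_TYPE

print(normalize_compression_type("extreme"))  # high
print(normalize_compression_type("fast"))     # fast
```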
Ingestion
We have set a maximum number of events to parse under a single timeout, so large batches are allowed to take longer. If you have seen parsers time out not because the parser was actually slow but because many events were processed in a single batch, this change should stop that from happening. Only parsers that are genuinely slow should now time out.
Queries
Fixed query scheduling, which could race with the background recompression of files in a way that resulted in the query missing a file and adding warnings about segment files being missed.
Fixed a failing require from
MiniSegmentsAsTargetSegmentReader
, causing queries to fail in very rare cases.
Functions
Queries ending with
tail()
will no longer be rendered with infinite scroll.
Other
Fixed an issue for the ingest API that made it possible to ingest into system repositories.
Fixed mini-segment downloads during queries, which could cause download retries to fail spuriously even when the download actually succeeded.
Fixed an issue where searching within small subsets of the latest 24 hours in combination with hash filters could result in events that belonged in the time range not being included in the result. The visible symptom was that narrowing the search span produced more hits.
Fixed a timeout when publishing to the global topic in Kafka, which resulted in input segments for merge being temporarily marked as broken.
Falcon LogScale 1.76.4 LTS (2023-06-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.76.4 | LTS | 2023-06-22 | Cloud | 2024-02-28 | No | 1.44.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | db953465da81f218ea623eda9eb1bbe0 |
SHA1 | 2a8103e67e25f5875b0d9fb64df7b200553e2e2a |
SHA256 | c9ea2a7accd3e68b4a38dadc8a4a2db61551501d5faf13fa45422fd62433a9ed |
SHA512 | 04f3e5f98eea9fb40c9af0ca3bf00ed1941dd54d6ab2d3fc030d9ea98fca735a5fc9a9aa3d83dc5d66a47955c0242ba78dca291321c6b53880a7239d1831c259 |
Docker Image | SHA256 Checksum |
---|---|
humio | 764db9a8db6139b18f8e6757c1bfd8b3adfe61cfae07f923a453a1fbe7831fe7 |
humio-core | a885340105544d8054463717a4457cc6cda3b42bdcb5df12a830677b8a232803 |
kafka | 92250700f7357110151726ebbe1cb38f63534c25e24566ffdb5f3d9c042158f3 |
zookeeper | 76417a09f8e5bfcbf647ade6456acce088c5dcb4cb7be02686dcc6c2a30b616e |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.76.4/server-1.76.4.tar.gz
These notes include entries from the following previous releases: 1.76.1, 1.76.2, 1.76.3
Security fixes.
Advance Warning
The following items are due to change in a future release.
Configuration
Starting from the 1.78 release, the default value for the
MAX_INGEST_REQUEST_SIZE
configuration will be reduced from 1 GB to 32 MB.
This value limits the size of ingest requests and rejects oversized requests.
If the request is compressed within HTTP, the limit applies to the size after decompression.
Removed
Items that have been removed as of this release.
API
Removed the API for managing ingest tokens. This has long been deprecated and replaced by a GraphQL API.
Deprecation
Items that have been deprecated and may be removed in a future release.
The
REST
endpoint for testing actions, api/v1/repositories/repoId/alertnotifiers/actionId/test, has been deprecated. The new GraphQL mutations should be used instead.
Upgrades
Changes that may occur or be required during an upgrade.
Other
Java upgraded to 17.0.6 in Docker containers
Kafka upgraded to 3.3.2 for KAFKA-14379
Kafka client upgraded to 3.3.2
Kafka Docker container upgraded to 3.3.2
Kafka client has been upgraded to 3.4.0.
Kafka broker has been upgraded to 3.4.0 in the Kafka container.
The container upgrade is performed for security reasons to resolve CVE-2022-36944, although Kafka should not be affected by it. If you wish to do a rolling upgrade of your Kafka containers, always refer to the Kafka upgrade guide.
Packages
Optimizations in package handling require migration of data during upgrade. This migration is performed automatically. Please note:
While the upgrade of cluster nodes is ongoing, we recommend that you do not install or update any packages, as they may end up in an inconsistent state.
If a package ends up in a bad state during migration, it can be fixed simply by reinstalling the package.
You may experience that accessing the list of installed packages fails, and that creating new dashboards, alerts, parsers, etc. based on package templates does not work as intended.
This should only happen during the cluster upgrade, and should resolve itself once the cluster is fully upgraded.
If the cluster nodes are downgraded, any packages installed or updated while running the new version will not work, and we therefore recommend uninstalling or downgrading those packages prior to downgrading the cluster nodes.
New features and improvements
Security
When creating a new group, you now add the group and its permissions in the same multi-step dialog.
UI Changes
Changes have been made for the three-dot menu (⋮) used for Field Interactions:
It is now available from the Fields Panel and the Inspection Panel, see Searching Data.
Keyboard navigation has been improved.
For field interactions with live queries, the Fields Panel flyout will now display a fixed list of top values, keeping the values from the point in time when the menu was opened.
Suggestions in Query Editor will show for certain function parameters like time formats.
Introduced Search Interactions to add custom event list options for all users in a repository.
For more information, see Event List Interactions.
Event List Interactions are now sorted by name and repository name by default.
Tabs on the
Users
page are renamed: the former Groups and Permissions tab is now Permissions; the former Details tab is now Information. In addition, the Permissions tab is now displayed first; it is also the tab opened by default when navigating to a user from other places in the product. See Manage users & permissions for a description of roles and permissions in the UI.

The Search page now supports timezone picking. The timezone is set on the user's session and remembered between pages.
For more information, see Setting Time Zone.
You can now set your preferred timezone under Manage your Account.
Known field names are now shown as completion suggestions in Query Editor while you type.
Automation and Alerts
The list of Message Templates and Variables is no longer shown in the user interface when editing Actions; instead, a link to the documentation has been added.
GraphQL API
GraphQL API mutations have been added for testing actions without having to save them first. The added mutations are:
testEmailAction
testHumioRepoAction
testOpsGenieAction
testPagerDutyAction
testSlackAction
testSlackPostMessageAction
testUploadFileAction
testVictorOpsAction
testWebhookAction
The previous testAction mutation has been removed.
The new GraphQL API mutations' signature is almost the same as the create mutation for the same action, except that test actions require event data and a trigger name, as the previous testAction mutation did.
As a consequence, the
button is now always enabled in the UI.
Configuration
The ability to keep the same merge target across digest changes is reintroduced. This feature was reverted in an earlier release due to a discovered issue where mini segments for an active merge target could end up spread across hosts. As that issue has been fixed, mini segments should now be stored on the hosts running digest for the target.
A new environment configuration variable
GLOB_ALLOW_LIST_EMAIL_ACTIONS
is introduced. It enables cluster-wide blocking of recipients of Action Type: Email actions that are not in the provided allow list.

New dynamic configuration
FlushSegmentsAndGlobalOnShutdown
. When set, and when
USING_EPHEMERAL_DISKS
is set to
true
, it forces all in-progress segments to be closed and uploaded to the bucket, and also forces a write (and upload) of a global snapshot during shutdown. When not set, this avoids the extra work, and thus shutdown time, spent flushing very recent segments, as those can be resumed on the next boot, assuming the next boot continues on the same Kafka epoch. The default is
false
, which allows faster shutdown.
Dashboards and Widgets
The
Single Value
widget now supports interactions on both the Search and Dashboard pages. See Manage Dashboard Interactions for more details on interactions.

Introduced Dashboard Interactions to add interactive elements to your dashboards.
For more information, see Manage Dashboard Interactions.
It is now possible to set a temporary timezone in dashboards, which will be read from the URL on page load e.g.
tz=Europe/Copenhagen
.
For more information, see Time Interval Settings.
Log Collector
Falcon Log Collector Manage your Fleet now supports remote configuration of LogScale Collectors. This gives an administrator the option of managing the configuration of LogScale Collector instances in LogScale, instead of managing configuration files directly on the device where Falcon Log Collector is installed.
For more information, see Falcon Log Collector Manage your Fleet, Falcon Log Collector Releases.
Functions
Queries containing a
join()
function no longer run truly live when the query is set to Live. Instead, these queries run repeatedly at intervals determined by the query engine.
For more information, see Errors when Using Live join() Functions, Widgets with Live join() Functions,
join()
, Special Behaviour for Live Joins.

default()
now supports assigning the same value to multiple fields, by passing multiple field names to the
field
parameter.

The query function
holtwinters()
has been removed from the product.

Using
ioc:lookup()
in a query while the IOC service is disabled will now result in a failed query instead of a warning stating that there are partial results.

selectLast()
and
groupBy()
now use less state size, allowing for larger result sets.

The performance of
in()
is improved when matching with values that do not use the
*
wildcard.
Other
"Sticky" autoshards no longer prevent the system from tuning their value entirely; they only prevent decreasing the number of shards. The cluster is allowed to raise the number of shards on datasources when needed, including those set as sticky via the REST API.
Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).
Fixed in this release
Security
Verified that LogScale does not use the affected Akka dependency component in CVE-2023-31442 by default, and have taken additional precautions to notify customers.
For:
LogScale Cloud/Falcon Long Term Repository:
This CVE does not impact LogScale Cloud or LTR customers.
LogScale Self-Hosted:
Exposure to risk:
Potential risk is only present if a self hosted customer has modified the Akka parameters to a non default value of
akka.io.dns.resolver = async-dns
during initial setup.
By default, LogScale does not use this configuration parameter.
CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.
Steps to mitigate:
Setting akka.io.dns.resolver to default value (inet-address) will mitigate the potential risk.
On versions older than 1.92.0:
Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset or pass a different value to the parameter.
CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.
UI Changes
Fixed an issue that made switching UI theme report an error and only take effect for the current session.
Fixed an issue where the dashboard page would freeze when the value of a dashboard parameter was changed.
Fixed the UI not showing an error when a query is blocked due to query quota settings.
We have fixed tooltips in the query editor, which were hidden by other elements in the UI.
Automation and Alerts
For self-hosted: Automation for sending emails from Actions no longer uses the IP filter, so administrators no longer need to put Automation on the IP allowlist.
GraphQL API
Pending deletes that would cause nodes to fail to start, reporting a NullPointerException, have been fixed.
Storage
Fixed mini-segment fetches that failed to complete properly during queries when the number of mini-segments involved was too large.
The noise from
MiniSegmentMergeLatencyLoggerJob
has been reduced by being more conservative about when we log mini segments that are unexpectedly not being merged. We have made
MiniSegmentMergeLatencyLoggerJob
take datasource idleness into account.
API
Fixed an issue with API Explorer that could fail to load in some configurations when using cookie authentication.
Configuration
Nodes are now considered ephemeral only if they set
USING_EPHEMERAL_DISKS
to
true
. Previously, they were ephemeral if they either set that configuration or were using the httponly node role.

Fixed an issue where the IOC database could get out of sync. The IOC database will be re-downloaded upon upgrade; therefore, IOCs won't be completely available for a while after the upgrade.
Removed the compression type extreme for the COMPRESSION_TYPE configuration. Specifying extreme now selects the default value high, in order not to cause configuration errors for clusters that specify extreme. We suggest removing COMPRESSION_TYPE from your configuration unless you specify the only other non-default value, fast.
Ingestion
We have set a maximum number of events to parse under a single timeout, so large batches are allowed to take longer. If you have seen parsers time out not because the parser was actually slow but because many events were processed in a single batch, this change should stop that from happening. Only parsers that are genuinely slow should now time out.
Queries
Fixed query scheduling, which could race with the background recompression of files in a way that resulted in the query missing a file and adding warnings about segment files being missed.
Fixed a failing require from
MiniSegmentsAsTargetSegmentReader
, causing queries to fail in very rare cases.
Functions
Queries ending with
tail()
will no longer be rendered with infinite scroll.
Other
Fixed an issue for the ingest API that made it possible to ingest into system repositories.
Fixed mini-segment downloads during queries, where download retries could fail spuriously even though the download had actually succeeded.
Fixed an issue where a timeout when publishing to the global topic in Kafka could temporarily mark input segments for merge as broken.
Falcon LogScale 1.76.3 LTS (2023-04-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.76.3 | LTS | 2023-04-27 | Cloud | 2024-02-28 | No | 1.44.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | 1e3ae5bfd9e5f5678fe34f0665148868 |
SHA1 | cafadf95ebeb23f145b263f9e05c7721a8a6d378 |
SHA256 | 7e532c0071874b5924b2d46c95e884c85f61de82eae228015a0c19f988f7311b |
SHA512 | f7b5ab273c7700554eece6798ebd8eb99cf4baf1e978d17b4dc92bb57f7d8001c0c03d0e03572007853ead7c63ea10191e4291d3abe6adfe18a46a3b7b5063fe |
Docker Image | SHA256 Checksum |
---|---|
humio | 21ff13996def725123764b56f9e9f3a563d4988b2f24f877abae4341db2eb3e9 |
humio-core | e848a7cdbc6d65eeb5bd33eaca3e5b697d2f477e41de294a729382ceaf383aa0 |
kafka | 395dd3f4cf6cc2a0bd02600bf7a08cadc2a840787727f77cb59e89833a48cd97 |
zookeeper | da46f56b5bcbf620c2d3e699c113a6bde0655028ed6ea6695663b7fecd479d3f |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.76.3/server-1.76.3.tar.gz
These notes include entries from the following previous releases: 1.76.1, 1.76.2
Bug fix.
Advance Warning
The following items are due to change in a future release.
Configuration
Starting from the 1.78 release, the default value for the MAX_INGEST_REQUEST_SIZE configuration will be reduced from 1 GB to 32 MB.

This value limits the size of an ingest request; oversized requests are rejected. If the request is compressed within HTTP, the limit applies to the size after decompression.
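Given the upcoming default, a client can guard its payloads before sending; a minimal sketch (the helper and its names are illustrative, not part of any LogScale SDK):

```python
# Sizes from the note above; the helper itself is a hypothetical
# client-side guard, not part of the LogScale API.
OLD_DEFAULT_LIMIT = 1 * 1024 ** 3   # 1 GB, previous default
NEW_DEFAULT_LIMIT = 32 * 1024 ** 2  # 32 MB, default from 1.78 onwards

def within_ingest_limit(payload, limit=NEW_DEFAULT_LIMIT):
    # The limit applies after HTTP decompression, so measure the
    # uncompressed payload.
    return len(payload) <= limit

print(within_ingest_limit(b"x" * 1024))  # True
```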
Removed
Items that have been removed as of this release.
API
Removed the API for managing ingest tokens. This has long been deprecated and replaced by a GraphQL API.
Deprecation
Items that have been deprecated and may be removed in a future release.
The REST endpoint for testing actions, api/v1/repositories/repoId/alertnotifiers/actionId/test, has been deprecated. The new GraphQL mutations should be used instead.
Upgrades
Changes that may occur or be required during an upgrade.
Other
Java upgraded to 17.0.6 in Docker containers
Kafka upgraded to 3.3.2 for KAFKA-14379
Kafka client upgraded to 3.3.2
Kafka Docker container upgraded to 3.3.2
Kafka client has been upgraded to 3.4.0.
Kafka broker has been upgraded to 3.4.0 in the Kafka container.
The container upgrade is performed for security reasons, to resolve CVE-2022-36944, although Kafka should not be affected by that issue. If you wish to do a rolling upgrade of your Kafka containers, always refer to the Kafka upgrade guide.
Packages
Optimizations in package handling require migration of data during upgrade. This migration is performed automatically. Please notice:
While the upgrade of cluster nodes is ongoing, we recommend you do not install or update any packages, as they may end up in an inconsistent state.
If a package ends up in a bad state during migration, it can be fixed simply by reinstalling the package.
You will potentially experience that accessing the list of installed packages will fail, and creating new dashboards, alerts, parsers, etc. based on package templates will not work as intended.
This should only happen during the cluster upgrade, and should resolve itself once the cluster is fully upgraded.
If the cluster nodes are downgraded, any packages installed or updated while running the new version will not work, and we therefore recommend uninstalling or downgrading those packages prior to downgrading the cluster nodes.
New features and improvements
Security
When creating a new group, you now add the group and its permissions in the same multi-step dialog.
UI Changes
Changes have been made for the three-dot menu (⋮) used for Field Interactions:
It is now available from the Fields Panel and the Inspection Panel, see Searching Data.
Keyboard navigation has been improved.
For field interactions with live queries, the Fields Panel flyout will now display a fixed list of top values, keeping the values from the point in time when the menu was opened.
Suggestions in Query Editor will show for certain function parameters like time formats.
Introduced Search Interactions to add custom event list options for all users in a repository.
For more information, see Event List Interactions.
Event List Interactions are now sorted by name and repository name by default.
Tabs on the Users page have been renamed: the former Groups and Permissions tab is now Permissions, and the former Details tab is now Information. In addition, the Permissions tab is now displayed first; it is also the tab opened by default when navigating to a user from other places in the product. See Manage users & permissions for a description of roles and permissions in the UI.

The Search page now supports timezone picking. The timezone will be set on the user's session and remembered between pages.
For more information, see Setting Time Zone.
You can now set your preferred timezone under Manage your Account.
Known field names are now shown as completion suggestions in Query Editor while you type.
Automation and Alerts
The list of Message Templates and Variables is no longer shown in the User Interface when editing Actions; instead, a link to the documentation has been added.
GraphQL API
GraphQL API mutations have been added for testing actions without having to save them first. The added mutations are:
testEmailAction
testHumioRepoAction
testOpsGenieAction
testPagerDutyAction
testSlackAction
testSlackPostMessageAction
testUploadFileAction
testVictorOpsAction
testWebhookAction
The previous testAction mutation has been removed.
The new GraphQL API mutations' signature is almost the same as the create mutation for the same action, except that test actions require event data and a trigger name, as the previous testAction mutation did.
As a consequence, the
button is now always enabled in the UI.
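The test mutations listed above are invoked over the GraphQL API; a minimal sketch of building such a request follows. The mutation name testEmailAction comes from the list above, but the input type name and field names here are hypothetical placeholders, not the documented signature.

```python
import json

# The mutation name testEmailAction is from the release note above; the
# input type name "TestActionInput" and the field names are hypothetical
# placeholders for illustration only.
def build_test_action_request(mutation, variables):
    query = (
        "mutation Test($input: TestActionInput!) { "
        + mutation
        + "(input: $input) { __typename } }"
    )
    return json.dumps({"query": query, "variables": {"input": variables}})

payload = build_test_action_request(
    "testEmailAction",
    {"triggerName": "my-alert", "eventData": "{}"},
)
```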
Configuration
The ability to keep the same merge target across digest changes is reintroduced. This feature was reverted in an earlier release due to a discovered issue where mini segments for an active merge target could end up spread across hosts. As that issue has been fixed, mini segments should now be stored on the hosts running digest for the target.
A new environment configuration variable GLOB_ALLOW_LIST_EMAIL_ACTIONS is introduced. It enables cluster-wide blocking of recipients of Action Type: Email actions that are not in the provided allow list.

New dynamic configuration FlushSegmentsAndGlobalOnShutdown. When set, and when USING_EPHEMERAL_DISKS is set to true, it forces all in-progress segments to be closed and uploaded to the bucket, and also forces a write (and upload) of the global snapshot during shutdown. When not set, the extra work, and the shutdown time spent flushing very recent segments, is avoided, as those segments can be resumed on the next boot, assuming that boot continues on the same Kafka epoch. The default is false, which allows faster shutdown.
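The interaction between the two settings can be sketched as a simple conjunction; this is an illustrative model of the documented behavior, not the actual implementation:

```python
# Sketch of the documented interaction: segments and the global snapshot
# are flushed on shutdown only when BOTH the dynamic configuration
# FlushSegmentsAndGlobalOnShutdown and USING_EPHEMERAL_DISKS are true
# (an illustrative model, not LogScale's actual shutdown code).
def flush_on_shutdown(using_ephemeral_disks, flush_segments_and_global):
    return using_ephemeral_disks and flush_segments_and_global

print(flush_on_shutdown(True, False))  # False: the default favors fast shutdown
```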
Dashboards and Widgets
The Single Value widget now supports interactions on both the Search and Dashboard pages. See Manage Dashboard Interactions for more details on interactions.

Introduced Dashboards Interactions to add interactive elements to your dashboards.
For more information, see Manage Dashboard Interactions.
It is now possible to set a temporary timezone in dashboards, which will be read from the URL on page load, e.g. tz=Europe/Copenhagen. For more information, see Time Interval Settings.
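A dashboard link carrying the temporary timezone can be built like this; the tz parameter is from the note above, while the base URL is a hypothetical placeholder:

```python
from urllib.parse import urlencode

# The tz query parameter is documented above; the base URL is a
# hypothetical placeholder for a real dashboard URL.
base = "https://example.logscale.local/dashboards/my-dashboard"
url = base + "?" + urlencode({"tz": "Europe/Copenhagen"})
print(url)
```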
Log Collector
Falcon Log Collector Manage your Fleet now supports remote configuration of LogScale Collectors. This gives an administrator the option of managing the configuration of LogScale Collector instances in LogScale, instead of managing configuration files directly on the device where Falcon Log Collector is installed.
For more information, see Falcon Log Collector Manage your Fleet, Falcon Log Collector Releases.
Functions
Queries containing a join() function no longer run truly live when the query is set to Live. Instead, these queries will run repeatedly at intervals determined by the query engine. For more information, see Errors when Using Live join() Functions, Widgets with Live join() Functions, join(), Special Behaviour for Live Joins.

default() now supports assigning the same value to multiple fields, by passing multiple field names to the field parameter.

The query function holtwinters() has been removed from the product.

Using ioc:lookup() in a query while the IOC service is disabled will now result in a failed query instead of a warning stating that there are partial results.

selectLast() and groupBy() now use less state size, allowing for larger result sets.

The performance of in() is improved when matching with values that do not use the * wildcard.
Other
"Sticky" autoshards no longer prevent the system from tuning their value entirely; they only prevent it from decreasing the number of shards. The cluster is allowed to raise the number of shards on a datasource when it needs to, including datasources that were set as sticky using the REST API.
Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).
Fixed in this release
UI Changes
Fixed an issue that made switching UI theme report an error and only take effect for the current session.
Fixed an issue where the dashboard page would freeze when the value of a dashboard parameter was changed.
Fixed the UI not showing an error when a query is blocked due to query quota settings.
We have fixed tooltips in the query editor, which were hidden by other elements in the UI.
Automation and Alerts
For self-hosted: Automation for sending emails from Actions no longer uses the IP filter, so administrators no longer need to put Automation on the IP allowlist.
GraphQL API
Fixed pending deletes that could cause nodes to fail to start with a NullPointerException.
Storage
Fixed mini-segment fetches that could fail to complete during queries when the number of mini-segments involved was too large.
The noise from MiniSegmentMergeLatencyLoggerJob has been reduced by being more conservative about when to log mini segments that are unexpectedly not being merged. MiniSegmentMergeLatencyLoggerJob now takes datasource idleness into account.
API
Fixed an issue where API Explorer could fail to load in some configurations when using cookie authentication.
Configuration
Nodes are now considered ephemeral only if they set USING_EPHEMERAL_DISKS to true. Previously, they were ephemeral if they either set that configuration or used the httponly node role.

Fixed an issue where the IOC database could get out of sync. The IOC database will be re-downloaded upon upgrade, so IOCs won't be completely available for a while after the upgrade.
Removed the compression type extreme for the COMPRESSION_TYPE configuration. Specifying extreme will now select the default value high, so that clusters specifying extreme do not get configuration errors. The suggestion is to remove COMPRESSION_TYPE from your configuration unless you specify the only other non-default value, fast.
Ingestion
A maximum has been set on the number of events parsed under a single timeout, so that large batches are allowed to take longer. If you have seen parsers time out not because the parser was slow but because many events were processed in a single batch, this change should stop that from happening. Only parsers that are genuinely slow should now time out.
Queries
Fixed a race between query scheduling and the background recompression of files that could cause a query to miss a segment file and add warnings about segment files being missed.
Fixed a failing require in MiniSegmentsAsTargetSegmentReader that caused queries to fail in very rare cases.
Functions
Queries ending with tail() will no longer be rendered with infinite scroll.
Other
Fixed an issue for the ingest API that made it possible to ingest into system repositories.
Fixed mini-segment downloads during queries, where download retries could fail spuriously even though the download had actually succeeded.
Fixed an issue where a timeout when publishing to the global topic in Kafka could temporarily mark input segments for merge as broken.
Falcon LogScale 1.76.2 LTS (2023-03-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.76.2 | LTS | 2023-03-06 | Cloud | 2024-02-28 | No | 1.44.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | 6e3ffd1b9c487b41516d3aca0952c9d5 |
SHA1 | 02832eec0f41358be3e956d7e9dbab87da190d2e |
SHA256 | 1289683afb0666dbba0c7340b05d1d8ecbdd4f32ed69924108e5065d435f0fc1 |
SHA512 | 6d25eefba440ad98e76721945ac0cc12a02e1cd53ae1b1a35acd1b4b2abfe675583bdedaadefb44a12564680f507709541d43de116907fdc08a1f0a7a2f0d3b9 |
Docker Image | SHA256 Checksum |
---|---|
humio | ccde946345dfc1fe39bd0e36bc80fff02c67791241ecbfb2848842215c505b57 |
humio-core | 52bb05ccf27c04c842b307e2b895eb3bd296e40058c992bb8b898027c49595f2 |
kafka | 37e166cfeacd8ceae63929368c7d6a86560a9bc1a5b0ee174d775ef636f9a223 |
zookeeper | 0021b13408d1fad0b2644ddad81dce83af0fa2fd885b411526a365913b794f39 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.76.2/server-1.76.2.tar.gz
These notes include entries from the following previous releases: 1.76.1
Security fix.
Advance Warning
The following items are due to change in a future release.
Configuration
Starting from the 1.78 release, the default value for the MAX_INGEST_REQUEST_SIZE configuration will be reduced from 1 GB to 32 MB.

This value limits the size of an ingest request; oversized requests are rejected. If the request is compressed within HTTP, the limit applies to the size after decompression.
Removed
Items that have been removed as of this release.
API
Removed the API for managing ingest tokens. This has long been deprecated and replaced by a GraphQL API.
Deprecation
Items that have been deprecated and may be removed in a future release.
The REST endpoint for testing actions, api/v1/repositories/repoId/alertnotifiers/actionId/test, has been deprecated. The new GraphQL mutations should be used instead.
Upgrades
Changes that may occur or be required during an upgrade.
Other
Java upgraded to 17.0.6 in Docker containers
Kafka upgraded to 3.3.2 for KAFKA-14379
Kafka client upgraded to 3.3.2
Kafka Docker container upgraded to 3.3.2
Kafka client has been upgraded to 3.4.0.
Kafka broker has been upgraded to 3.4.0 in the Kafka container.
The container upgrade is performed for security reasons, to resolve CVE-2022-36944, although Kafka should not be affected by that issue. If you wish to do a rolling upgrade of your Kafka containers, always refer to the Kafka upgrade guide.
Packages
Optimizations in package handling require migration of data during upgrade. This migration is performed automatically. Please notice:
While the upgrade of cluster nodes is ongoing, we recommend you do not install or update any packages, as they may end up in an inconsistent state.
If a package ends up in a bad state during migration, it can be fixed simply by reinstalling the package.
You will potentially experience that accessing the list of installed packages will fail, and creating new dashboards, alerts, parsers, etc. based on package templates will not work as intended.
This should only happen during the cluster upgrade, and should resolve itself once the cluster is fully upgraded.
If the cluster nodes are downgraded, any packages installed or updated while running the new version will not work, and we therefore recommend uninstalling or downgrading those packages prior to downgrading the cluster nodes.
New features and improvements
Security
When creating a new group, you now add the group and its permissions in the same multi-step dialog.
UI Changes
Changes have been made for the three-dot menu (⋮) used for Field Interactions:
It is now available from the Fields Panel and the Inspection Panel, see Searching Data.
Keyboard navigation has been improved.
For field interactions with live queries, the Fields Panel flyout will now display a fixed list of top values, keeping the values from the point in time when the menu was opened.
Suggestions in Query Editor will show for certain function parameters like time formats.
Introduced Search Interactions to add custom event list options for all users in a repository.
For more information, see Event List Interactions.
Event List Interactions are now sorted by name and repository name by default.
Tabs on the Users page have been renamed: the former Groups and Permissions tab is now Permissions, and the former Details tab is now Information. In addition, the Permissions tab is now displayed first; it is also the tab opened by default when navigating to a user from other places in the product. See Manage users & permissions for a description of roles and permissions in the UI.

The Search page now supports timezone picking. The timezone will be set on the user's session and remembered between pages.
For more information, see Setting Time Zone.
You can now set your preferred timezone under Manage your Account.
Known field names are now shown as completion suggestions in Query Editor while you type.
Automation and Alerts
The list of Message Templates and Variables is no longer shown in the User Interface when editing Actions; instead, a link to the documentation has been added.
GraphQL API
GraphQL API mutations have been added for testing actions without having to save them first. The added mutations are:
testEmailAction
testHumioRepoAction
testOpsGenieAction
testPagerDutyAction
testSlackAction
testSlackPostMessageAction
testUploadFileAction
testVictorOpsAction
testWebhookAction
The previous testAction mutation has been removed.
The new GraphQL API mutations' signature is almost the same as the create mutation for the same action, except that test actions require event data and a trigger name, as the previous testAction mutation did.
As a consequence, the
button is now always enabled in the UI.
Configuration
The ability to keep the same merge target across digest changes is reintroduced. This feature was reverted in an earlier release due to a discovered issue where mini segments for an active merge target could end up spread across hosts. As that issue has been fixed, mini segments should now be stored on the hosts running digest for the target.
A new environment configuration variable GLOB_ALLOW_LIST_EMAIL_ACTIONS is introduced. It enables cluster-wide blocking of recipients of Action Type: Email actions that are not in the provided allow list.

New dynamic configuration FlushSegmentsAndGlobalOnShutdown. When set, and when USING_EPHEMERAL_DISKS is set to true, it forces all in-progress segments to be closed and uploaded to the bucket, and also forces a write (and upload) of the global snapshot during shutdown. When not set, the extra work, and the shutdown time spent flushing very recent segments, is avoided, as those segments can be resumed on the next boot, assuming that boot continues on the same Kafka epoch. The default is false, which allows faster shutdown.
Dashboards and Widgets
The Single Value widget now supports interactions on both the Search and Dashboard pages. See Manage Dashboard Interactions for more details on interactions.

Introduced Dashboards Interactions to add interactive elements to your dashboards.
For more information, see Manage Dashboard Interactions.
It is now possible to set a temporary timezone in dashboards, which will be read from the URL on page load, e.g. tz=Europe/Copenhagen. For more information, see Time Interval Settings.
Log Collector
Falcon Log Collector Manage your Fleet now supports remote configuration of LogScale Collectors. This gives an administrator the option of managing the configuration of LogScale Collector instances in LogScale, instead of managing configuration files directly on the device where Falcon Log Collector is installed.
For more information, see Falcon Log Collector Manage your Fleet, Falcon Log Collector Releases.
Functions
Queries containing a join() function no longer run truly live when the query is set to Live. Instead, these queries will run repeatedly at intervals determined by the query engine. For more information, see Errors when Using Live join() Functions, Widgets with Live join() Functions, join(), Special Behaviour for Live Joins.

default() now supports assigning the same value to multiple fields, by passing multiple field names to the field parameter.

The query function holtwinters() has been removed from the product.

Using ioc:lookup() in a query while the IOC service is disabled will now result in a failed query instead of a warning stating that there are partial results.

selectLast() and groupBy() now use less state size, allowing for larger result sets.

The performance of in() is improved when matching with values that do not use the * wildcard.
Other
"Sticky" autoshards no longer prevent the system from tuning their value entirely; they only prevent it from decreasing the number of shards. The cluster is allowed to raise the number of shards on a datasource when it needs to, including datasources that were set as sticky using the REST API.
Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).
Fixed in this release
UI Changes
Fixed an issue that made switching UI theme report an error and only take effect for the current session.
Fixed an issue where the dashboard page would freeze when the value of a dashboard parameter was changed.
Fixed the UI not showing an error when a query is blocked due to query quota settings.
We have fixed tooltips in the query editor, which were hidden by other elements in the UI.
Automation and Alerts
For self-hosted: Automation for sending emails from Actions no longer uses the IP filter, so administrators no longer need to put Automation on the IP allowlist.
GraphQL API
Fixed pending deletes that could cause nodes to fail to start with a NullPointerException.
Storage
Fixed mini-segment fetches that could fail to complete during queries when the number of mini-segments involved was too large.
The noise from MiniSegmentMergeLatencyLoggerJob has been reduced by being more conservative about when to log mini segments that are unexpectedly not being merged. MiniSegmentMergeLatencyLoggerJob now takes datasource idleness into account.
Configuration
Nodes are now considered ephemeral only if they set USING_EPHEMERAL_DISKS to true. Previously, they were ephemeral if they either set that configuration or used the httponly node role.

Fixed an issue where the IOC database could get out of sync. The IOC database will be re-downloaded upon upgrade, so IOCs won't be completely available for a while after the upgrade.
Removed the compression type extreme for the COMPRESSION_TYPE configuration. Specifying extreme will now select the default value high, so that clusters specifying extreme do not get configuration errors. The suggestion is to remove COMPRESSION_TYPE from your configuration unless you specify the only other non-default value, fast.
Ingestion
A maximum has been set on the number of events parsed under a single timeout, so that large batches are allowed to take longer. If you have seen parsers time out not because the parser was slow but because many events were processed in a single batch, this change should stop that from happening. Only parsers that are genuinely slow should now time out.
Queries
Fixed a race between query scheduling and the background recompression of files that could cause a query to miss a segment file and add warnings about segment files being missed.
Fixed a failing require in MiniSegmentsAsTargetSegmentReader that caused queries to fail in very rare cases.
Functions
Queries ending with tail() will no longer be rendered with infinite scroll.
Other
Fixed an issue for the ingest API that made it possible to ingest into system repositories.
Fixed mini-segment downloads during queries, where download retries could fail spuriously even though the download had actually succeeded.
Fixed an issue where a timeout when publishing to the global topic in Kafka could temporarily mark input segments for merge as broken.
Falcon LogScale 1.76.1 LTS (2023-02-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.76.1 | LTS | 2023-02-27 | Cloud | 2024-02-28 | No | 1.44.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | 5c03162eebeb9c4fe028bce4140da4d9 |
SHA1 | 56459772d2c7f5c2d21be6650d473bfee0893ab1 |
SHA256 | 04c067f721cb6a3bf3e74ce10d2bda8a12a3ede05c6b181af8a074a430321bfc |
SHA512 | 71e330f43a0825c70bf0fd8b1c3c82cedc554aab87316a906a72c813697c930be58eef2bffe049ff28f13bd0d4b44700e8fd7a54e796724acdf4c063c5c4508c |
Docker Image | SHA256 Checksum |
---|---|
humio | e4a730e769cb84cea8be642eb352763e6596caa249a95857de9052cc4b83ddb4 |
humio-core | 148d662610e09163ce581487ebdec4519960e9f332473b100cd3c6466d52943b |
kafka | c717b3b0c5087cb746bde5381419bf5cc31532a1756f2463ff31477374b89a4a |
zookeeper | 2b228e05f97e8946c323fa40060102de41210e5e38733ffb3dd0b353259c37d3 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.76.1/server-1.76.1.tar.gz
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Configuration
Starting from the 1.78 release, the default value for the MAX_INGEST_REQUEST_SIZE configuration will be reduced from 1 GB to 32 MB.

This value limits the size of an ingest request; oversized requests are rejected. If the request is compressed within HTTP, the limit applies to the size after decompression.
Removed
Items that have been removed as of this release.
API
Removed the API for managing ingest tokens. This has long been deprecated and replaced by a GraphQL API.
Deprecation
Items that have been deprecated and may be removed in a future release.
The REST endpoint for testing actions, api/v1/repositories/repoId/alertnotifiers/actionId/test, has been deprecated. The new GraphQL mutations should be used instead.
Upgrades
Changes that may occur or be required during an upgrade.
Other
Java upgraded to 17.0.6 in Docker containers
Kafka upgraded to 3.3.2 for KAFKA-14379
Kafka client upgraded to 3.3.2
Kafka Docker container upgraded to 3.3.2
Packages
Optimizations in package handling require migration of data during upgrade. This migration is performed automatically. Please notice:
While the upgrade of cluster nodes is ongoing, we recommend you do not install or update any packages, as they may end up in an inconsistent state.
If a package ends up in a bad state during migration, it can be fixed simply by reinstalling the package.
You will potentially experience that accessing the list of installed packages will fail, and creating new dashboards, alerts, parsers, etc. based on package templates will not work as intended.
This should only happen during the cluster upgrade, and should resolve itself once the cluster is fully upgraded.
If the cluster nodes are downgraded, any packages installed or updated while running the new version will not work, and we therefore recommend uninstalling or downgrading those packages prior to downgrading the cluster nodes.
New features and improvements
Security
When creating a new group, you now add the group and its permissions in the same multi-step dialog.
UI Changes
Changes have been made for the three-dot menu (⋮) used for Field Interactions:
It is now available from the Fields Panel and the Inspection Panel, see Searching Data.
Keyboard navigation has been improved.
For field interactions with live queries, the Fields Panel flyout will now display a fixed list of top values, keeping the values from the point in time when the menu was opened.
Suggestions in Query Editor will show for certain function parameters like time formats.
Introduced Search Interactions to add custom event list options for all users in a repository.
For more information, see Event List Interactions.
Event List Interactions are now sorted by name and repository name by default.
Tabs on the Users page have been renamed: the former Groups and Permissions tab is now Permissions, and the former Details tab is now Information. In addition, the Permissions tab is now displayed first; it is also the tab opened by default when navigating to a user from other places in the product. See Manage users & permissions for a description of roles and permissions in the UI.

The Search page now supports timezone picking. The timezone will be set on the user's session and remembered between pages.
For more information, see Setting Time Zone.
You can now set your preferred timezone under Manage your Account.
Known field names are now shown as completion suggestions in Query Editor while you type.
Automation and Alerts
The list of Message Templates and Variables is no longer shown in the User Interface when editing Actions; instead, a link to the documentation has been added.
GraphQL API
GraphQL API mutations have been added for testing actions without having to save them first. The added mutations are:
testEmailAction
testHumioRepoAction
testOpsGenieAction
testPagerDutyAction
testSlackAction
testSlackPostMessageAction
testUploadFileAction
testVictorOpsAction
testWebhookAction
The previous testAction mutation has been removed.
The new GraphQL API mutations' signature is almost the same as the create mutation for the same action, except that test actions require event data and a trigger name, as the previous testAction mutation did.
As a consequence, the
button is now always enabled in the UI.
Configuration
The ability to keep the same merge target across digest changes is reintroduced. This feature was reverted in an earlier release due to a discovered issue where mini segments for an active merge target could end up spread across hosts. As that issue has been fixed, mini segments should now be stored on the hosts running digest for the target.
A new environment configuration variable GLOB_ALLOW_LIST_EMAIL_ACTIONS is introduced. It enables cluster-wide blocking of recipients of Action Type: Email actions that are not in the provided allow list.

New dynamic configuration FlushSegmentsAndGlobalOnShutdown. When set, and when USING_EPHEMERAL_DISKS is set to true, it forces all in-progress segments to be closed and uploaded to the bucket, and also forces a write (and upload) of the global snapshot during shutdown. When not set, the extra work, and the shutdown time spent flushing very recent segments, is avoided, as those segments can be resumed on the next boot, assuming that boot continues on the same Kafka epoch. The default is false, which allows faster shutdown.
Dashboards and Widgets
The Single Value widget now supports interactions on both the Search and Dashboard pages. See Manage Dashboard Interactions for more details on interactions.

Introduced Dashboards Interactions to add interactive elements to your dashboards.
For more information, see Manage Dashboard Interactions.
It is now possible to set a temporary timezone in dashboards, which will be read from the URL on page load, e.g. tz=Europe/Copenhagen. For more information, see Time Interval Settings.
Log Collector
Falcon Log Collector Manage your Fleet now supports remote configuration of LogScale Collectors. This gives an administrator the option of managing the configuration of LogScale Collector instances in LogScale, instead of managing configuration files directly on the device where Falcon Log Collector is installed.
For more information, see Falcon Log Collector Manage your Fleet, Falcon Log Collector Releases.
Functions
Queries containing a
join()
function no longer run truly live when the query is set to Live. Instead, these queries will run repeatedly at intervals determined by the query engine. For more information, see Errors when Using Live join() Functions, Widgets with Live join() Functions,
join()
, Special Behaviour for Live Joins.
default()
now supports assigning the same value to multiple fields, by passing multiple field names to the
field
parameter.
The query function
holtwinters()
has been removed from the product.
Using
ioc:lookup()
in a query while the IOC service is disabled will now result in a failed query instead of a warning stating that there are partial results.
selectLast()
and
groupBy()
now use less state size, allowing for larger result sets.
The performance of
in()
is improved when matching with values that do not use the
*
wildcard.
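The multi-field behavior of default() can be illustrated with a small Python analogy. This simulates the documented semantics on a dict of event fields; it is not LogScale's implementation, and the assumption here is that only missing fields receive the value:

```python
def apply_default(event: dict, fields: list, value: str) -> dict:
    """Analogy for default(field=[...], value=...): give each listed
    field the default value if the event does not already have it."""
    for name in fields:
        if name not in event:
            event[name] = value
    return event

event = {"status": "200"}
apply_default(event, ["method", "status"], "unknown")
print(event)  # {'status': '200', 'method': 'unknown'}
```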
Other
"Sticky" autoshards no longer prevent the system from tuning their value entirely; they only prevent decreasing the number of shards. The cluster is allowed to raise the number of shards on a datasource when it needs to, including datasources that were set as sticky using the REST API.
Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).
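The sticky-autoshard tuning rule described above can be sketched as follows (illustrative only; the function name and shape are hypothetical, not LogScale's code):

```python
def tune_autoshards(current: int, desired: int, sticky: bool) -> int:
    """Sticky only blocks decreases: the cluster may still raise the
    shard count on a sticky datasource, but never lower it."""
    if sticky and desired < current:
        return current
    return desired

print(tune_autoshards(8, 16, sticky=True))  # 16 (raising is allowed)
print(tune_autoshards(8, 4, sticky=True))   # 8 (decrease is blocked)
```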
Fixed in this release
UI Changes
Fixed an issue that made switching the UI theme report an error and only take effect for the current session.
Fixed an issue where the dashboard page would freeze when the value of a dashboard parameter was changed.
Fixed the UI not showing an error when a query gets blocked due to query quota settings.
Fixed tooltips in the query editor that were hidden by other elements in the UI.
Automation and Alerts
For self-hosted: Automation for sending emails from Actions no longer uses the IP filter, so administrators no longer need to put Automation on the IP allowlist.
GraphQL API
Pending deletes that would cause nodes to fail to start, reporting a NullPointerException, have been fixed.
Storage
Fixed mini-segment fetches failing to complete properly during queries when the number of mini-segments involved was too large.
The noise from
MiniSegmentMergeLatencyLoggerJob
has been reduced by being more conservative about when we log mini segments that are unexpectedly not being merged. We have made
MiniSegmentMergeLatencyLoggerJob
take datasource idleness into account.
Configuration
Nodes are now considered ephemeral only if they set
USING_EPHEMERAL_DISKS
to
true
. Previously, they were ephemeral if they either set that configuration, or if they were using the httponly node role.
Fixed an issue where the IOC database could get out of sync. The IOC database will be re-downloaded upon upgrade, therefore IOCs won't be completely available for a while after the upgrade.
Removed compression type
extreme
for configuration
COMPRESSION_TYPE
. Specifying
extreme
will now select the default value of
high
in order not to cause configuration errors for clusters that specify
extreme
. The suggestion is to remove
COMPRESSION_TYPE
from your configurations unless you specify the only other non-default value of
fast
.
Ingestion
We have set a maximum number of events that we will parse under a single timeout, so large batches are allowed to take longer. If you have seen parsers time out not because the parser is actually slow, but because you were processing many events in a single batch, this change should stop that from happening. Only parsers that are genuinely slow should now time out.
Queries
Fixed query scheduling: it could race with the background recompression of files in a way that caused the query to miss a file and add warnings about segment files being missed by the query.
Fixed a failing require from
MiniSegmentsAsTargetSegmentReader
, causing queries to fail in very rare cases.
Functions
Queries ending with
tail()
will no longer be rendered with infinite scroll.
Other
Fixed an issue for the ingest API that made it possible to ingest into system repositories.
Fixed mini-segment downloads during queries: download retries could fail spuriously even when the download actually succeeded.
Fixed a timeout when publishing to the global topic in Kafka, which resulted in input segments for merge being temporarily marked as broken.
Falcon LogScale 1.76.0 GA (2023-02-07)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.76.0 | GA | 2023-02-07 | Cloud | 2024-02-28 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Configuration
Starting from the 1.78 release, the default value for the
MAX_INGEST_REQUEST_SIZE
configuration will be reduced from
1 GB
to
32 MB
. This value limits the size of an ingest request; oversized requests are rejected.
If the request is compressed within HTTP, the limit applies to the size after decompression.
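A client-side pre-check of this limit might look like the following Python sketch. It is illustrative only; LogScale enforces the limit server-side, and the function name is hypothetical:

```python
import gzip

MAX_INGEST_REQUEST_SIZE = 32 * 1024 * 1024  # the new 32 MB default

def within_ingest_limit(body: bytes, gzip_compressed: bool = False) -> bool:
    """The limit applies to the request size after HTTP decompression."""
    size = len(gzip.decompress(body)) if gzip_compressed else len(body)
    return size <= MAX_INGEST_REQUEST_SIZE

payload = b'{"events": []}' * 1000
print(within_ingest_limit(payload))                       # True
print(within_ingest_limit(gzip.compress(payload), True))  # True
```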
Removed
Items that have been removed as of this release.
API
Removed the API for managing ingest tokens. This has long been deprecated and replaced by a GraphQL API.
New features and improvements
UI Changes
Event List Interactions are now sorted by name and repository name by default.
Dashboards and Widgets
The
Single Value
widget now supports interactions on both the Search and Dashboard pages. See Manage Dashboard Interactions for more details on interactions.
It is now possible to set a temporary timezone in dashboards, which will be read from the URL on page load, e.g.
tz=Europe/Copenhagen
. For more information, see Time Interval Settings.
Log Collector
Falcon Log Collector Manage your Fleet now supports remote configuration of LogScale Collectors. This gives an administrator the option of managing the configuration of LogScale Collector instances in LogScale, instead of managing configuration files directly on the device where Falcon Log Collector is installed.
For more information, see Falcon Log Collector Manage your Fleet, Falcon Log Collector Releases.
Functions
Queries containing a
join()
function no longer run truly live when the query is set to Live. Instead, these queries will run repeatedly at intervals determined by the query engine. For more information, see Errors when Using Live join() Functions, Widgets with Live join() Functions,
join()
, Special Behaviour for Live Joins.
Fixed in this release
Other
Fixed an issue for the ingest API that made it possible to ingest into system repositories.
Falcon LogScale 1.75.0 GA (2023-01-31)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.75.0 | GA | 2023-01-31 | Cloud | 2024-02-28 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The
REST
endpoint for testing actions, api/v1/repositories/repoId
/alertnotifiers/actionId
/test, has been deprecated. The new GraphQL mutations should be used instead.
Upgrades
Changes that may occur or be required during an upgrade.
Other
Java upgraded to 17.0.6 in Docker containers
Kafka upgraded to 3.3.2 for KAFKA-14379
Kafka client upgraded to 3.3.2
Kafka Docker container upgraded to 3.3.2
New features and improvements
UI Changes
Suggestions in Query Editor will show for certain function parameters like time formats.
Introduced Search Interactions to add custom event list options for all users in a repository.
For more information, see Event List Interactions.
The Search page now supports timezone picking. The timezone will be set on the user's session and remembered between pages.
For more information, see Setting Time Zone.
You can now set your preferred timezone under Manage your Account.
Known field names are now shown as completion suggestions in Query Editor while you type.
GraphQL API
GraphQL API mutations have been added for testing actions without having to save them first. The added mutations are:
testEmailAction
testHumioRepoAction
testOpsGenieAction
testPagerDutyAction
testSlackAction
testSlackPostMessageAction
testUploadFileAction
testVictorOpsAction
testWebhookAction
The previous testAction mutation has been removed.
The new GraphQL API mutations have almost the same signature as the create mutation for the same action, except that test actions require event data and a trigger name, as the previous testAction mutation did.
As a consequence, the
button is now always enabled in the UI.
Dashboards and Widgets
Introduced Dashboards Interactions to add interactive elements to your dashboards.
For more information, see Manage Dashboard Interactions.
Functions
default()
now supports assigning the same value to multiple fields, by passing multiple field names to the
field
parameter.
selectLast()
and
groupBy()
now use less state size, allowing for larger result sets.
The performance of
in()
is improved when matching with values that do not use the
*
wildcard.
Fixed in this release
UI Changes
Fixed an issue that made switching the UI theme report an error and only take effect for the current session.
Fixed the UI not showing an error when a query gets blocked due to query quota settings.
Automation and Alerts
For self-hosted: Automation for sending emails from Actions no longer uses the IP filter, so administrators no longer need to put Automation on the IP allowlist.
Queries
Fixed a failing require from
MiniSegmentsAsTargetSegmentReader
, causing queries to fail in very rare cases.
Functions
Queries ending with
tail()
will no longer be rendered with infinite scroll.
Other
Fixed unlimited waits for nodes to get in sync, which caused digest coordination to fail to limit the time allowed for a node to get "in sync" on a partition before leadership was assigned to it, in cases where the previous digest leader shut down gracefully.
Falcon LogScale 1.74.0 GA (2023-01-24)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.74.0 | GA | 2023-01-24 | Cloud | 2024-02-28 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
New features and improvements
Security
When creating a new group, you now have to add the group and add permissions for it in the same multi-step dialog.
UI Changes
Changes have been made for the three-dot menu (⋮) used for Field Interactions:
It is now available from the Fields Panel and the Inspection Panel, see Searching Data.
Keyboard navigation has been improved.
For field interactions with live queries, the Fields Panel flyout will now display a fixed list of top values, keeping the values from the point in time when the menu was opened.
Automation and Alerts
The list of Message Templates and Variables is no longer shown in the User Interface when editing Actions; instead, a link to the documentation has been added.
Configuration
A new environment configuration variable
GLOB_ALLOW_LIST_EMAIL_ACTIONS
is introduced. It enables cluster-wide blocking of recipients of Action Type: Email actions that are not in the provided allow list.
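An allow-list of this kind can be modeled with shell-style globs. The following Python sketch is a hedged illustration: the fnmatch-based matching semantics are an assumption, not LogScale's exact rules, and the function name is hypothetical:

```python
from fnmatch import fnmatch

def recipient_allowed(email: str, allow_globs: list) -> bool:
    """Block any email action recipient not matching the allow list."""
    return any(fnmatch(email.lower(), g.lower()) for g in allow_globs)

allow = ["*@example.com", "oncall-*@corp.example.org"]
print(recipient_allowed("alice@example.com", allow))  # True
print(recipient_allowed("mallory@evil.test", allow))  # False
```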
Fixed in this release
UI Changes
Fixed an issue where the dashboard page would freeze when the value of a dashboard parameter was changed.
Fixed tooltips in the query editor that were hidden by other elements in the UI.
Configuration
Some restrictions have been introduced when running with
single-user
as
AUTHENTICATION_METHOD
:
Starting on a machine that already has multiple users will not delete them, but you will be unable to create additional users.
Running
AUTHENTICATION_METHOD=single-user
with multiple users will pick the best candidate based on username and privilege.
Fixed an issue where the environment variable
OIDC_USE_HTTP_PROXY
was not respected. LogScale will now call all OIDC endpoints directly, without going through the HTTP Proxy, when
OIDC_USE_HTTP_PROXY
is set to
false
.
This fixes the known issue previously reported in Falcon LogScale 1.63.1 LTS (2022-11-14), Falcon LogScale 1.63.2 LTS (2022-11-30), Falcon LogScale 1.63.3 LTS (2022-12-21) and Falcon LogScale 1.70.0 LTS (2023-01-16).
Ingestion
We have set a maximum number of events that we will parse under a single timeout, so large batches are allowed to take longer. If you have seen parsers time out not because the parser is actually slow, but because you were processing many events in a single batch, this change should stop that from happening. Only parsers that are genuinely slow should now time out.
Falcon LogScale 1.73.0 GA (2023-01-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.73.0 | GA | 2023-01-17 | Cloud | 2024-02-28 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
New features and improvements
Functions
The query function
holtwinters()
has been removed from the product.
Using
ioc:lookup()
in a query while the IOC service is disabled will now result in a failed query instead of a warning stating that there are partial results.
Fixed in this release
Storage
The noise from
MiniSegmentMergeLatencyLoggerJob
has been reduced by being more conservative about when we log mini segments that are unexpectedly not being merged. We have made
MiniSegmentMergeLatencyLoggerJob
take datasource idleness into account.
Falcon LogScale 1.72.0 GA (2023-01-10)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.72.0 | GA | 2023-01-10 | Cloud | 2024-02-28 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The query function
holtwinters()
is now deprecated and will be removed along with the release of future version 1.73; therefore, its usage in alerts is not recommended.
Fixed in this release
Configuration
Removed compression type
extreme
for configuration
COMPRESSION_TYPE
. Specifying
extreme
will now select the default value of
high
in order not to cause configuration errors for clusters that specify
extreme
. The suggestion is to remove
COMPRESSION_TYPE
from your configurations unless you specify the only other non-default value of
fast
.
Queries
Fixed query scheduling: it could race with the background recompression of files in a way that caused the query to miss a file and add warnings about segment files being missed by the query.
Falcon LogScale 1.71.0 GA (2023-01-03)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.71.0 | GA | 2023-01-03 | Cloud | 2024-02-28 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The query function
holtwinters()
is now deprecated and will be removed along with the release of future version 1.73; therefore, its usage in alerts is not recommended.
Upgrades
Changes that may occur or be required during an upgrade.
Packages
Optimizations in package handling require migration of data during upgrade. This migration is performed automatically. Please note:
While the upgrade of cluster nodes is ongoing, we recommend you do not install or update any packages, as they may end up in an inconsistent state.
If a package ends up in a bad state during migration, it can be fixed simply by reinstalling the package.
You may experience that accessing the list of installed packages fails, and that creating new dashboards, alerts, parsers, etc. based on package templates does not work as intended.
This should only happen during the cluster upgrade, and should resolve itself once the cluster is fully upgraded.
If the cluster nodes are downgraded, any packages installed or updated while running the new version will not work, and we therefore recommend uninstalling or downgrading those packages prior to downgrading the cluster nodes.
New features and improvements
UI Changes
Tabs on the
Users
page are renamed: the former Groups and Permissions tab is now Permissions; the former Details tab is now Information. In addition, the Permissions tab is now displayed first — it is also the tab that will be opened by default when navigating to a user from other places in the product. See Manage users & permissions for a description of roles and permissions in the UI.
Configuration
The ability to keep the same merge target across digest changes is reintroduced. This feature was reverted in an earlier release due to a discovered issue where mini segments for an active merge target could end up spread across hosts. As that issue has been fixed, mini segments should now be stored on the hosts running digest for the target.
New dynamic configuration
FlushSegmentsAndGlobalOnShutdown
. When set, and when
USING_EPHEMERAL_DISKS
is set to
true
, forces all in-progress segments to be closed and uploaded to the bucket, and also forces a write (and upload) of the global snapshot during shutdown. When not set, the extra work (and thus shutdown time) of flushing very recent segments is avoided, as those can be resumed on the next boot, assuming the next boot continues on the same Kafka epoch. The default is
false
, which allows faster shutdown.
Fixed in this release
Configuration
Fixed an issue where the IOC database could get out of sync. The IOC database will be re-downloaded upon upgrade, therefore IOCs won't be completely available for a while after the upgrade.
Falcon LogScale 1.70.2 LTS (2023-03-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.70.2 | LTS | 2023-03-06 | Cloud | 2024-01-31 | No | 1.44.0 | No |
TAR Checksum | Value |
---|---|
MD5 | be3b24f591693dae272c029c64172f9b |
SHA1 | 1ba350d80532d655a8f16585e2442a306e9bd163 |
SHA256 | b44d38ad5ebbc0a8765df609e1fab85ab0421ea4f298e766d5ae042862b4dfe5 |
SHA512 | fdc39cf36ac6ce8f88ceb78d7728cb6beebfae56faa54eef724b8154fac5e4cafee5118621ef2234eb080533243ebe06ef8b0a2f78aa1945f7fc21ccd7e9b010 |
Docker Image | SHA256 Checksum |
---|---|
humio | 80df972f2666dfd6cd0f6d667fa5fa4c0d70da505475fb2feafc2f9ec758d2b2 |
humio-core | f4c2cc95de7ad66e6800fbceb1a22caee567b482dcf6b1181b7369c9aa7c86eb |
kafka | 737cbf227b96a304343f03ed168642ae24ea0ca5f6e3c754ebb26e482b6b887c |
zookeeper | d45fc23dee3e9b4b8dd0be3a88baddba0fef08c21d77119b6d07254fbb34ee8c |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.70.2/server-1.70.2.tar.gz
These notes include entries from the following previous releases: 1.70.0, 1.70.1
Security fix and bug fixes.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following environment variables are deprecated as of this release. Their removal will be announced in a future version.
The recommended steps for migrating off of ZooKeeper are described in Falcon LogScale 1.70.0 LTS (2023-01-16).
Note
When using Operator and Kubernetes deployments, you must upgrade to 0.17.0 of operator to support migration away from the ZooKeeper requirement. See Operator Version 0.17.0.
The query function
holtwinters()
is now deprecated and will be removed along with the release of future version 1.73; therefore, its usage in alerts is not recommended.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
We have enabled a new vhost selection method by default. The way hosts select their vhost number when joining the cluster has changed; the new logic is described on the Node Identifiers documentation page.
The new logic does not depend on ZooKeeper, even for clusters where nodes occasionally lose disk contents, such as on Kubernetes. To smooth migration for clusters using ZooKeeper, the new logic will still interact with ZooKeeper to prevent nodes running a mix of new and old vhost code from fighting over vhost numbers. This is only necessary while migrating.
The recommended steps for migrating off of ZooKeeper are as follows:
Deploy the new LogScale version to all nodes.
Remove
ZOOKEEPER_URL_FOR_NODE_UUID
,ZOOKEEPER_URL
,ZOOKEEPER_PREFIX_FOR_NODE_UUID
,ZOOKEEPER_SESSIONTIMEOUT_FOR_NODE_UUID
from the configuration for all nodes.
Reboot
Once rebooted, LogScale will no longer need ZooKeeper directly, except as an indirect dependency of Kafka. Due to this, the 4 ZooKeeper-related variables are deprecated as of this release and will be removed in a future version.
Since vhost numbers now change when a disk is wiped, cluster administrators for clusters using nodes where
USING_EPHEMERAL_DISKS
is set to
true
will need to ensure that the storage and digest partitioning tables are up to date as hosts join and leave the cluster. Updating the tables is handled automatically if using the LogScale Kubernetes operator, but for clusters that do not use this operator, cluster administrators should run scripts periodically to keep the storage and digest tables up to date. This is not a new requirement for ephemeral clusters, but we're providing a reminder here since it may be needed more frequently now.
The cluster GraphQL query can provide updated tables (the
suggestedIngestPartitions
and
suggestedStoragePartitions
fields), which can then be applied via the updateIngestPartitionScheme and updateStoragePartitionScheme GraphQL mutations.
Should you experience any issue in using this feature, you may opt out by setting
NEW_VHOST_SELECTION_ENABLED=false
. If you do this, please reach out to support with feedback, as we otherwise intend to remove the old vhost selection logic in the coming months.
Note
When using Operator and Kubernetes deployments, you must upgrade to 0.17.0 of operator to support migration away from the ZooKeeper requirement. See Operator Version 0.17.0.
Other
Kafka client has been upgraded to 3.4.0.
Kafka broker has been upgraded to 3.4.0 in the Kafka container.
The container upgrade is performed for security reasons, to resolve the CVE-2022-36944 issue, which Kafka should, however, not be affected by. If you wish to do a rolling upgrade of your Kafka containers, please always refer to the Kafka upgrade guide.
New features and improvements
Dashboards and Widgets
Added support for export and import of dashboards with query based widgets which use a fixed time window.
Other
Added code to ensure all mini-segments for the same target end up located on the same hosts. A change in 1.63 could create a situation where mini-segments for the same merge target wound up on different nodes, which the query code currently assumes can't happen. This could cause
Result is partial
responses to user queries.
Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).
New background task
TagGroupingSuggestionsJob
that reports on the flow rate in repositories with many datasources, highlighting the ones it considers slow, controlled by configuration of segment sizes and flush intervals. The output in the log can inform the decision to add
Tag Grouping
to a repository to reduce the number of slow datasources.
Fixed in this release
Security
Updated Netty to address CVE-2022-41915.
Automation and Alerts
Fixed a bug where a link in the notification for a failed alert would link to a non-existing page.
GraphQL API
Pending deletes that would cause nodes to fail to start, reporting a NullPointerException, have been fixed.
Configuration
Some restrictions have been introduced when running with
single-user
as
AUTHENTICATION_METHOD
:
Starting on a machine that already has multiple users will not delete them, but you will be unable to create additional users.
Running
AUTHENTICATION_METHOD=single-user
with multiple users will pick the best candidate based on username and privilege.
Nodes are now considered ephemeral only if they set
USING_EPHEMERAL_DISKS
to
true
. Previously, they were ephemeral if they either set that configuration, or if they were using the httponly node role.
Fixed an issue where the environment variable
OIDC_USE_HTTP_PROXY
was not respected. LogScale will now call all OIDC endpoints directly, without going through the HTTP Proxy, when
OIDC_USE_HTTP_PROXY
is set to
false
.
This fixes the known issue previously reported in Falcon LogScale 1.63.1 LTS (2022-11-14), Falcon LogScale 1.63.2 LTS (2022-11-30), Falcon LogScale 1.63.3 LTS (2022-12-21) and Falcon LogScale 1.70.0 LTS (2023-01-16).
Dashboards and Widgets
Fixed three bugs in the
Bar Chart
— where the sorting would be wrong with updating query results in the stacked version, flickering would occur when deselecting all series in the legend, and deselecting renamed series in the legend would not have any effect.
Scatter Chart
has been updated:
The x-axis would not update correctly with updated query results.
The trend line toggle in the style panel was invisible.
Fixed an issue with parameters in dashboards, where the values of a fixed list parameter would not have their order maintained when exporting and importing templates.
Other
Fixed a bug where very long string literals in a regex could cause a query/parser to fail with a stack overflow.
Fixed unlimited waits for nodes to get in sync, which caused digest coordination to fail to limit the time allowed for a node to get "in sync" on a partition before leadership was assigned to it, in cases where the previous digest leader shut down gracefully.
Fixed a timeout when publishing to the global topic in Kafka, which resulted in input segments for merge being temporarily marked as broken.
Known Issues
Security
OIDC known issue preventing login:
if you are running LogScale self-hosted, this version will not work in an environment where you are Authenticating with OpenID Connect and you use an HTTP Proxy, but you do not want to call OpenID Connect through the HTTP Proxy.
Please do not upgrade to this version if this is the case — upgrade to Falcon LogScale 1.63.4 LTS (2023-02-01) and Falcon LogScale 1.70.1 LTS (2023-02-01) instead, where this issue has been fixed.
Falcon LogScale 1.70.1 LTS (2023-02-01)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.70.1 | LTS | 2023-02-01 | Cloud | 2024-01-31 | No | 1.44.0 | No |
TAR Checksum | Value |
---|---|
MD5 | e0aa0e799d3f5b3009ef9a636ad96c78 |
SHA1 | 06710b69230c442dc67ba137bd02613cb29b5224 |
SHA256 | 5ada1e5a56d73d1e2532378035c1eccf9fa476b5226ae6bdc9d7e29dc8b2ddc3 |
SHA512 | e1768b9e41ef5940e2defbf10cc908b454ef6f67358b340fbcb648c5e91a98031b385cfcc242ee40ed33d4d17fb5e1eb7418abb287adbfe83e3827ae8e11a54f |
Docker Image | SHA256 Checksum |
---|---|
humio | 72a45d92e868e101a8bd2a7f20f83b37946761bfead3bce3bbb28a9ebf318a50 |
humio-core | f4c2cc95de7ad66e6800fbceb1a22caee567b482dcf6b1181b7369c9aa7c86eb |
kafka | c8500740a3ead1d9e6b48df608b225ffd9c221263a9bcd5f2134b3ade89260ac |
zookeeper | 0b1155a324d0940f6c4aa3e3f2088cec5d8cd77e6883454c59a111028b7cfe82 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.70.1/server-1.70.1.tar.gz
These notes include entries from the following previous releases: 1.70.0
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following environment variables are deprecated as of this release. Their removal will be announced in a future version.
The recommended steps for migrating off of ZooKeeper are described in Falcon LogScale 1.70.0 LTS (2023-01-16).
Note
When using Operator and Kubernetes deployments, you must upgrade to 0.17.0 of operator to support migration away from the ZooKeeper requirement. See Operator Version 0.17.0.
The query function
holtwinters()
is now deprecated and will be removed along with the release of future version 1.73; therefore, its usage in alerts is not recommended.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
We have enabled a new vhost selection method by default. The way hosts select their vhost number when joining the cluster has changed; the new logic is described on the Node Identifiers documentation page.
The new logic does not depend on ZooKeeper, even for clusters where nodes occasionally lose disk contents, such as on Kubernetes. To smooth migration for clusters using ZooKeeper, the new logic will still interact with ZooKeeper to prevent nodes running a mix of new and old vhost code from fighting over vhost numbers. This is only necessary while migrating.
The recommended steps for migrating off of ZooKeeper are as follows:
Deploy the new LogScale version to all nodes.
Remove
ZOOKEEPER_URL_FOR_NODE_UUID
,ZOOKEEPER_URL
,ZOOKEEPER_PREFIX_FOR_NODE_UUID
,ZOOKEEPER_SESSIONTIMEOUT_FOR_NODE_UUID
from the configuration for all nodes.
Reboot
Once rebooted, LogScale will no longer need ZooKeeper directly, except as an indirect dependency of Kafka. Due to this, the 4 ZooKeeper-related variables are deprecated as of this release and will be removed in a future version.
Since vhost numbers now change when a disk is wiped, cluster administrators for clusters using nodes where
USING_EPHEMERAL_DISKS
is set to
true
will need to ensure that the storage and digest partitioning tables are up to date as hosts join and leave the cluster. Updating the tables is handled automatically if using the LogScale Kubernetes operator, but for clusters that do not use this operator, cluster administrators should run scripts periodically to keep the storage and digest tables up to date. This is not a new requirement for ephemeral clusters, but we're providing a reminder here since it may be needed more frequently now.
The cluster GraphQL query can provide updated tables (the
suggestedIngestPartitions
and
suggestedStoragePartitions
fields), which can then be applied via the updateIngestPartitionScheme and updateStoragePartitionScheme GraphQL mutations.
Should you experience any issue in using this feature, you may opt out by setting
NEW_VHOST_SELECTION_ENABLED=false
. If you do this, please reach out to support with feedback, as we otherwise intend to remove the old vhost selection logic in the coming months.
Note
When using Operator and Kubernetes deployments, you must upgrade to 0.17.0 of operator to support migration away from the ZooKeeper requirement. See Operator Version 0.17.0.
New features and improvements
Dashboards and Widgets
Added support for export and import of dashboards with query based widgets which use a fixed time window.
Other
Added code to ensure all mini-segments for the same target end up located on the same hosts. A change in 1.63 could create a situation where mini-segments for the same merge target wound up on different nodes, which the query code currently assumes can't happen. This could cause
Result is partial
responses to user queries.
New background task
TagGroupingSuggestionsJob
that reports on the flow rate in repositories with many datasources, highlighting the ones it considers slow, controlled by configuration of segment sizes and flush intervals. The output in the log can inform the decision to add
Tag Grouping
to a repository to reduce the number of slow datasources.
Fixed in this release
Security
Update Netty to address CVE-2022-41915.
Automation and Alerts
Fixed a bug where a link in the notification for a failed alert would link to a non-existing page.
Configuration
Some restrictions have been introduced when running with `single-user` as `AUTHENTICATION_METHOD`:
Starting on a machine that already has multiple users will not delete them, but you will be unable to create additional users.
Running `AUTHENTICATION_METHOD=single-user` with multiple users will pick the best candidate based on username and privilege.
Fixed an issue where the environment variable `OIDC_USE_HTTP_PROXY` was not respected. LogScale will now call all OIDC endpoints directly, without going through the HTTP proxy, when `OIDC_USE_HTTP_PROXY` is set to `false`.
This fixes the known issue previously reported in Falcon LogScale 1.63.1 LTS (2022-11-14), Falcon LogScale 1.63.2 LTS (2022-11-30), Falcon LogScale 1.63.3 LTS (2022-12-21) and Falcon LogScale 1.70.0 LTS (2023-01-16).
Dashboards and Widgets
Fixed three bugs in the `Bar Chart`: the sorting would be wrong when query results updated in the stacked version, flickering would occur when deselecting all series in the legend, and deselecting renamed series in the legend would have no effect.
`Scatter Chart` has been updated:
The x-axis would not update correctly with updated query results.
The trend line toggle in the style panel was invisible.
Fixed an issue with parameters in dashboards, where the values of a fixed list parameter would not have their order maintained when exporting and importing templates.
Other
Fixed a bug where very long string literals in a regex could cause a query/parser to fail with a stack overflow.
Fixed unlimited waits for nodes to get in sync, which could cause digest coordination to fail. The time allowed for a node to get "in sync" on a partition before leadership is assigned to it is now limited, in cases where the previous digest leader shut down gracefully.
Known Issues
Security
OIDC known issue preventing login:
If you are running LogScale self-hosted, this version will not work in an environment where you are authenticating with OpenID Connect and you use an HTTP proxy, but you do not want to call OpenID Connect through the HTTP proxy.
Please do not upgrade to this version if this is the case — upgrade to Falcon LogScale 1.63.4 LTS (2023-02-01) or Falcon LogScale 1.70.1 LTS (2023-02-01) instead, where this issue has been fixed.
Falcon LogScale 1.70.0 LTS (2023-01-16)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.70.0 | LTS | 2023-01-16 | Cloud | 2024-01-31 | No | 1.44.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | d9bb7d6cbb0ca0bda19849432c379edf |
SHA1 | b6c06ea11e89db8f31c68b1aa2ad278aad0fe433 |
SHA256 | aaae7b77d39c82f5b6931f7be9da2e03ac7cb68bc7f246529562f3900bf23b02 |
SHA512 | 6370a6f003a2f2ac6f59fdbaa8d7578e2eae7658645bc97cc4ed5f3c75489c914cb09a35e8ca5a6b6203f8a084aac769bc6d9d9b7fd9547fa2cfe9cef6639f40 |
Docker Image | SHA256 Checksum |
---|---|
humio | 9bdd95bb499feb635b2fa10e2ccc8d3738173998679e0b59852b61e9f9072b07 |
humio-core | 8693b1c4fe80d51907f4b706e5f464ed990e365b50099ad8b6a8eb33d370b6cb |
kafka | f22f369876933b93084f34d73014697b6cbae24c3d915e8fb590f2289f4361ce |
zookeeper | b4bb9f58e8a519db4850e395824142fe62fe52f8fcf024de1001fd76481e63c7 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.70.0/server-1.70.0.tar.gz
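Before installing, the downloaded tarball can be verified against the published hashes above. A minimal sketch using Python's standard `hashlib` (the bytes overload exists only so the helper is easy to demonstrate; a real check reads the file in chunks):

```python
import hashlib

def sha256_of(path_or_bytes):
    # Compute the SHA256 hex digest of a file path or a bytes object.
    # Reading in 1 MiB chunks avoids loading a multi-GB tarball into memory.
    h = hashlib.sha256()
    if isinstance(path_or_bytes, bytes):
        h.update(path_or_bytes)
    else:
        with open(path_or_bytes, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
    return h.hexdigest()

# Expected SHA256 for server-1.70.0.tar.gz, from the table above:
EXPECTED = "aaae7b77d39c82f5b6931f7be9da2e03ac7cb68bc7f246529562f3900bf23b02"
# sha256_of("server-1.70.0.tar.gz") == EXPECTED  -> archive is intact
```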
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following environment variables are deprecated as of this release. Their removal will be announced in a future version.
The recommended steps for migrating off of ZooKeeper are described in Falcon LogScale 1.70.0 LTS (2023-01-16).
Note
When using Operator and Kubernetes deployments, you must upgrade to 0.17.0 of operator to support migration away from the ZooKeeper requirement. See Operator Version 0.17.0.
The query function `holtwinters()` is now deprecated and will be removed along with the release of future version 1.73; its usage in alerts is therefore not recommended.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
We have enabled a new vhost selection method by default. The way hosts select their vhost number when joining the cluster has changed; the new logic is described on the Node Identifiers documentation page.
The new logic does not depend on ZooKeeper, even for clusters where nodes occasionally lose disk contents, such as Kubernetes. In order to smooth migration for clusters using ZooKeeper, the new logic will still interact with ZooKeeper to avoid nodes using a mix of new and old vhost code from fighting over the vhost numbers. This is only necessary while migrating.
The recommended steps for migrating off of ZooKeeper are as follows:
Deploy the new LogScale version to all nodes.
Remove `ZOOKEEPER_URL_FOR_NODE_UUID`, `ZOOKEEPER_URL`, `ZOOKEEPER_PREFIX_FOR_NODE_UUID`, and `ZOOKEEPER_SESSIONTIMEOUT_FOR_NODE_UUID` from the configuration for all nodes.
Reboot.
Once rebooted, LogScale will no longer need ZooKeeper directly, except as an indirect dependency of Kafka. Due to this, the four ZooKeeper-related variables are deprecated as of this release and will be removed in a future version.
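The configuration-cleanup step amounts to dropping four variables from each node's environment. A small sketch (the variable names come from the list above; the helper itself is hypothetical, for illustration):

```python
# The four ZooKeeper-related variables deprecated by this release.
DEPRECATED_ZOOKEEPER_VARS = {
    "ZOOKEEPER_URL_FOR_NODE_UUID",
    "ZOOKEEPER_URL",
    "ZOOKEEPER_PREFIX_FOR_NODE_UUID",
    "ZOOKEEPER_SESSIONTIMEOUT_FOR_NODE_UUID",
}

def strip_deprecated(env):
    # Return a copy of a node's configuration without the deprecated
    # ZooKeeper variables, matching the migration step above.
    return {k: v for k, v in env.items() if k not in DEPRECATED_ZOOKEEPER_VARS}

cleaned = strip_deprecated({"ZOOKEEPER_URL": "zk1:2181", "HUMIO_PORT": "8080"})
print(sorted(cleaned))
```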
Since vhost numbers now change when a disk is wiped, cluster administrators for clusters using nodes where `USING_EPHEMERAL_DISKS` is set to `true` will need to ensure that the storage and digest partitioning tables are up to date as hosts join and leave the cluster. Updating the tables is handled automatically if using the LogScale Kubernetes operator; for clusters that do not use the operator, cluster administrators should run scripts periodically to keep the storage and digest tables up to date. This is not a new requirement for ephemeral clusters, but it may be needed more frequently now.
The cluster GraphQL query can provide updated tables (the `suggestedIngestPartitions` and `suggestedStoragePartitions` fields), which can then be applied via the updateIngestPartitionScheme and updateStoragePartitionScheme GraphQL mutations.
Should you experience any issue using this feature, you may opt out by setting `NEW_VHOST_SELECTION_ENABLED=false`. If you do, please reach out to support with feedback, as we otherwise intend to remove the old vhost selection logic in the coming months.
Note
When using Operator and Kubernetes deployments, you must upgrade to 0.17.0 of operator to support migration away from the ZooKeeper requirement. See Operator Version 0.17.0.
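The partition-table update described above can be sketched as a pair of GraphQL calls. The query fields and mutation names come from the note itself; the URL, the field selections, and the mutation's input shape are illustrative assumptions rather than the documented schema:

```python
import json

# Hypothetical cluster endpoint; authentication omitted for brevity.
GRAPHQL_URL = "https://logscale.example.com/graphql"

# Fetch the suggested tables from the cluster query (field names per the note;
# the inner selections are assumptions).
SUGGESTIONS_QUERY = """
{
  cluster {
    suggestedIngestPartitions { id nodeIds }
    suggestedStoragePartitions { id nodeIds }
  }
}
"""

def build_update_request(kind, partitions):
    # kind is "Ingest" or "Storage"; builds the JSON body that would be
    # POSTed to apply a suggested table via the corresponding mutation.
    mutation = (
        f"mutation Apply($partitions: [{kind}PartitionInput!]!) {{\n"
        f"  update{kind}PartitionScheme(partitions: $partitions)\n"
        f"}}"
    )
    return {"query": mutation, "variables": {"partitions": partitions}}

body = build_update_request("Storage", [{"id": 0, "nodeIds": [1, 2, 3]}])
print(json.dumps(body, indent=2))
```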
New features and improvements
Dashboards and Widgets
Added support for export and import of dashboards with query based widgets which use a fixed time window.
Other
Add code to ensure all mini-segments for the same target end up located on the same hosts. A change in 1.63 could create a situation where mini-segments for the same merge target wound up on different nodes, which the query code currently assumes cannot happen. This could cause `Result is partial` responses to user queries.
New background task `TagGroupingSuggestionsJob` that reports on flow rate in repositories with many datasources, identifying the ones it considers slow, controlled by configuration of segment sizes and flush intervals. The output in the log can inform the decision to add Tag Grouping to a repository to reduce the number of slow datasources.
Fixed in this release
Security
Update Netty to address CVE-2022-41915.
Automation and Alerts
Fixed a bug where a link in the notification for a failed alert would link to a non-existing page.
Dashboards and Widgets
Fixed three bugs in the `Bar Chart`: the sorting would be wrong when query results updated in the stacked version, flickering would occur when deselecting all series in the legend, and deselecting renamed series in the legend would have no effect.
`Scatter Chart` has been updated:
The x-axis would not update correctly with updated query results.
The trend line toggle in the style panel was invisible.
Fixed an issue with parameters in dashboards, where the values of a fixed list parameter would not have their order maintained when exporting and importing templates.
Other
Fixed a bug where very long string literals in a regex could cause a query/parser to fail with a stack overflow.
Known Issues
Security
OIDC known issue preventing login:
If you are running LogScale self-hosted, this version will not work in an environment where you are authenticating with OpenID Connect and you use an HTTP proxy, but you do not want to call OpenID Connect through the HTTP proxy.
Please do not upgrade to this version if this is the case — upgrade to Falcon LogScale 1.63.4 LTS (2023-02-01) or Falcon LogScale 1.70.1 LTS (2023-02-01) instead, where this issue has been fixed.
Falcon LogScale 1.69.0 GA (2022-12-13)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.69.0 | GA | 2022-12-13 | Cloud | 2024-01-31 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The query function `holtwinters()` is now deprecated and will be removed along with the release of future version 1.73; its usage in alerts is therefore not recommended.
New features and improvements
Storage
Reduced CPU usage of background tasks for the case of high partition count and high datasource count.
Queries
Add support for GET and DELETE requests for queries by external query ID without including the repository name in the URL. The new URL is `/api/v1/queryjobs/QUERYID`. Note that shared dashboard token authentication is not supported on this API. (The existing API at `/api/v1/repositories/REPONAME/queryjobs/QUERYID` remains unmodified and supports POST requests for submitting queries.)
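The two URL shapes can be illustrated with a small helper. The paths come from the note; the helper itself is hypothetical, shown here only to contrast the global endpoint with the per-repository one:

```python
def queryjob_url(base, query_id, repo=None):
    # Without a repository: the new global endpoint (GET/DELETE only).
    # With a repository: the existing endpoint, which also accepts POST
    # to submit new queries.
    if repo is None:
        return f"{base}/api/v1/queryjobs/{query_id}"
    return f"{base}/api/v1/repositories/{repo}/queryjobs/{query_id}"

print(queryjob_url("https://logscale.example.com", "Q1"))
print(queryjob_url("https://logscale.example.com", "Q1", repo="myrepo"))
```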
Other
Add bounds to maximum number of active notifications per user.
Added an option to filter by group and role permission types in the `groupsPage` and `rolesPage` queries.
Throttle publishing to the global-events topic internally, based on the time spent in recent transactions of the same type (digest-related writes are not throttled). See details on the new configuration variable `GLOBAL_THROTTLE_PERCENTAGE` page. Also see the metric `global-operation-time` for measurements of the time spent.
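As a rough model of that throttling: publishers back off in proportion to the time recently spent on transactions of the same type. The linear formula below is an assumption for illustration; only the variable name and the idea of scaling by recent transaction time come from the note:

```python
def throttle_delay_ms(recent_txn_time_ms, throttle_percentage):
    # Delay before the next publish, as a percentage of the time spent in
    # recent transactions of the same type (GLOBAL_THROTTLE_PERCENTAGE).
    # Digest-related writes are exempt and would bypass this entirely.
    return recent_txn_time_ms * throttle_percentage / 100.0

# 50% throttle after 200 ms of recent transaction time:
print(throttle_delay_ms(200.0, 50))
```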
Fixed in this release
Other
Allow creating Kafka topics even if a broker is down.
LogScale no longer considers every host to be alive for a period after rebooting. Only hosts marked as `running` in global will be considered alive. This fixes an issue where a query coordinator might pointlessly direct queries to dead nodes because the coordinator had recently booted.
Falcon LogScale 1.68.0 GA (2022-12-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.68.0 | GA | 2022-12-06 | Cloud | 2024-01-31 | No | 1.44.0 | No |
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The query function `holtwinters()` is now deprecated and will be removed along with the release of future version 1.73; its usage in alerts is therefore not recommended.
New features and improvements
Falcon Data Replicator
Enforcing S3 file size limits (30MB) in FDR feeds. Files will not be ingested if they are above the limit.
UI Changes
Introduced the Social Login Settings feature: all customers with access to the organization identity providers page can now change social login settings in the UI. See Authentication & Identity Providers for details.
No longer possible to add a color to roles. Existing role colors removed from the UI.
Automation and Alerts
Improved performance when storing alert errors and trigger times in global.
Configuration
Set the dynamic configuration `BucketStorageWriteVersion` to `3`. This sets the format for files written to bucket storage to one that allows files larger than 2 GB and incurs less memory pressure when decrypting files during download from the bucket. The new format is supported since 1.44.0 only.
Set the minimum version for the cluster to 1.44.0.
Changed the default value for `AUTHENTICATION_METHOD` from `none` to `single-user`. To set the username and password, use the environment variables:
`SINGLE_USER_USERNAME` — if not set, the default is `user`.
`SINGLE_USER_PASSWORD` — if not set, a random password is generated.
See the `SINGLE_USER_USERNAME` and `SINGLE_USER_PASSWORD` documentation for more details on these variables.
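The documented fallback behavior can be modeled in a few lines. Only the variable names and their defaults come from the note; the helper itself (and the choice of `token_urlsafe` for the random password) is an illustrative assumption:

```python
import secrets

def resolve_single_user_credentials(env):
    # Username falls back to "user" when SINGLE_USER_USERNAME is unset;
    # a random password is generated when SINGLE_USER_PASSWORD is unset.
    username = env.get("SINGLE_USER_USERNAME", "user")
    password = env.get("SINGLE_USER_PASSWORD") or secrets.token_urlsafe(16)
    return username, password

# With nothing configured, you get "user" and a random password:
print(resolve_single_user_credentials({}))
```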
Other
Audit logging has been improved.
New metric `global-operation-time`: tracks local time spent processing each kind of message received on the global-events topic.
Fixed in this release
UI Changes
The warning about unsaved changes being lost on an edited dashboard will now only show when actual changes have been made.
Other
Fixed a bug in the decryption code used when decrypting files downloaded from bucket storage when `version-for-bucket-writes=3`. The bug prevented decryption of files larger than 2 GB.
Fixed an issue where LogScale could log secrets to the debug log when configured to use LDAP or when configured to use SSL for Kafka.
Falcon LogScale 1.67.0 GA (2022-11-29)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.67.0 | GA | 2022-11-29 | Cloud | 2024-01-31 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and updates.
New features and improvements
Functions
A new query function named `createEvents()` has been released. This function creates events from strings and is used for testing queries.
Fixed in this release
UI Changes
URL paths with a repository name and no trailing /search resolved to `Not Found`. The URL /repoName will now again show the search page for the `repoName` repository.
IP Location drilldowns now correctly use the lat and lon field names instead of latitude and longitude.
Falcon LogScale 1.66.0 GA (2022-11-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.66.0 | GA | 2022-11-22 | Cloud | 2024-01-31 | No | 1.30.0 | Yes |
Available for download two days after release.
Bug fixes and updates.
New features and improvements
Configuration
Added a new configuration to control when digest coordination will permit a node that is not "in sync", and to set a time limit. See `MAX_SECS_WAIT_FOR_SYNC_WHEN_CHANGING_DIGEST_LEADER` for all the details.
Other
Make adjustments to the HostsCleanerJob. It will now remove references to missing hosts in fewer writes, and will stop immediately if a host rejoins the cluster.
Fixed in this release
Functions
Fixed a bug seen in version 1.65 where `groupBy()` on multiple fields would sometimes produce multiple rows for the same combination of keys.
Other
Fixed an issue that could cause repeated unnecessary updates of currentHosts for some segments.
Fixed a race where segments that became unavailable due to nodes going away would not become available again when the nodes returned.
Fixed an issue that could cause the error message `Object is missing required member replicationFactor` when downgrading from current versions to older versions. The error message is only a nuisance, since the object failing deserialization is not yet in use in released code.
Packages
Fixed an issue where deleting a parser through an update or uninstall of a package could fail in an unexpected way if the parser was used by an ingest listener or an FDR feed. Now, a proper error message will be shown.
Falcon LogScale 1.65.0 GA (2022-11-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.65.0 | GA | 2022-11-15 | Cloud | 2024-01-31 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and updates.
New features and improvements
Security
The version of Jackson has been upgraded to address the CVE-2022-42003 vulnerability.
UI Changes
The `Repository` icon has been changed to match the new look and feel.
A new UI for event forwarders, located under Organisation Settings, now allows you to configure your event forwarders. See Event Forwarders for details.
Automation and Alerts
Added a query editor warning (yellow wavy lines) for joins in alerts.
Configuration
Added a new dynamic configuration, `UndersizedMergingRetentionPercentage`, with a default value of `20`. This value is used when selecting undersized segments to merge; it controls how wide a time span can be merged together.
The setting is interpreted as a percentage of the repository's retention-by-time setting. A reasonable range is 0 through 90.
Other
Docker images have been upgraded to Java 17.0.5.
Added audit logging for S3 archiving, tracking when it is enabled, disabled, configured, and restarted.
Increased the limits for `bucket()`. The maximum number of series has been raised from 50 to 500, and the maximum number of output events from 10,000 to 100,000.
Avoid writing some messages to global if we can tell up front that the message is unnecessary.
Reduced the scope of a precondition for a particular write to global. This should reduce unnecessary transaction rejections when such writes are bulked together.
Fixed in this release
Configuration
Fixed some issues with the workings of the `BUCKET_STORAGE_MULTIPLE_ENDPOINTS` and `S3_STORAGE_ENDPOINT_BASE` configurations.
The intent of this configuration is to allow users to configure buckets in multiple bucket services, for instance to allow migrating from AWS bucket storage to a local S3 service. When `true`, each bucket in global can have a separate endpoint configuration, as defined in `S3_STORAGE_ENDPOINT_BASE` and similar configurations. This allows an existing cluster running against AWS S3 to begin uploading segments to an on-prem S3 by switching the endpoint base, while still keeping access to existing segments in AWS.
When `false` (the default), the endpoint base configuration is applied to all existing buckets on boot. This is intended for cases where the base URL needs to be changed for all buckets, for instance due to the introduction of a proxy.
The issue was that we were not consistently looking up endpoint URLs in global for the relevant bucket, but instead simply used whichever endpoint URL happened to be defined in configuration at the time. This has been fixed.
Other
The SAML login to Humio using deep links now works correctly.
When a host is removed from global, a job tries to clean up any references to it from other places in global, such as segments. Fixed a bug in this job that meant it did not clean up references on segments that were tombstoned but not yet gone from global. This issue could block cleanup of those segments.
Fixed a minor desynchronization issue related to idle datasources.
Fixed a bug where interaction context menus did not update the query editor in Safari.
Falcon LogScale 1.64.0 GA (2022-11-01)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.64.0 | GA | 2022-11-01 | Cloud | 2024-01-31 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and updates.
New features and improvements
Configuration
Added a new dynamic configuration, `MaxIngestRequestSize`, to allow limiting the size of ingest requests after content-encoding has been applied. The default can be set using the new configuration variable `MAX_INGEST_REQUEST_SIZE`, or applied via the dynamic configuration.
Dashboards and Widgets
It is now possible to specify a dashboard by name in the URL. It is also possible to include the dashboard ID as a parameter in order to have permanent links.
Functions
Improved memory allocation for the query function `split()`.
Removed the restriction that `case` and `match` expressions cannot be used in subqueries.
The query function `split()` now allows splitting arrays that contain arrays. For example, the event `a[0][0]=1, a[1][0]=2` can now be split using `split(a)`, producing two events: `_index=0, a[0]=1` and `_index=1, a[0]=2`.
The query function `join()` now provides information to optimize the query.
The performance of the query function `in()` has been improved when searching in tag fields.
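The nested-array `split()` example above can be modeled in Python to make the semantics concrete. This is only an illustration of the documented input/output pair, not LogScale's implementation:

```python
import re
from collections import defaultdict

def split_array(event, field="a"):
    # One output event per top-level index i of field[i][j]: _index records
    # i, and each inner element is re-rooted at field[j]. Mirrors the
    # example from the note: a[0][0]=1, a[1][0]=2 -> two events.
    groups = defaultdict(dict)
    pattern = re.compile(rf"^{re.escape(field)}\[(\d+)\]\[(\d+)\]$")
    for key, value in event.items():
        m = pattern.match(key)
        if m:
            i, j = int(m.group(1)), int(m.group(2))
            groups[i][f"{field}[{j}]"] = value
    return [{"_index": i, **fields} for i, fields in sorted(groups.items())]

print(split_array({"a[0][0]": 1, "a[1][0]": 2}))
```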
Other
In the internal request log, include the decoded size of the request body after content-encoding has been applied, in the new field `decodedContentLength`. This allows inspecting the compression ratio of incoming requests and the range of values seen. Requests without compression have the `contentLength` value in this new field too.
Fixed in this release
UI Changes
The menu in the event Inspection Panel now copies text correctly again.
Fixed a bug where a disabled item in the main menu could be clicked, which would redirect to the homepage.
Functions
Fixed an issue where `percentile()` would crash on invalid input.
The query functions `bucket()`, `holtwinters()`, `beta:repeating()`, `series()`, `session()`, and `window()` wrongly accepted `now` as a duration. This error has been fixed.
Other
Fixed an issue that could cause merged segments to appear to be missing after a restart, due to the datasource going idle.
Falcon LogScale 1.63.6 LTS (2023-03-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.63.6 | LTS | 2023-03-22 | Cloud | 2023-11-30 | No | 1.30.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | c6a5dedd2c76d1ceb44984d2b6c249ee |
SHA1 | d45f31016599ea0c905c93cf731cb7d2fcd47a6c |
SHA256 | 8c9d3dc61b1f8a2f392ff53dd88602abcdb04b3084c847f54f00e55887d561a2 |
SHA512 | 1a5db287ab19b1c3efe1a5c447e27d7d4d66834f608a777485ea2aec73c8f7963b816ffe7b29758285dd692adf11989be302c3863feda97e0962e723b42c7e5a |
Docker Image | SHA256 Checksum |
---|---|
humio | 8b20c3c128d786c6dd121bab5f6878129343d9d9967b855ed9d9825a13560cd6 |
humio-core | 3d03504a00022115b7a581938dd8d0d7f96a583c57d02efdf15b4127971914df |
kafka | 6c9978f48ade34b4bb12f7e5862c556ba60e2650555db05ae36ac9abd44062ca |
zookeeper | cad51c3f5959cd99609d6abdb28ea938b87942780567a462e897ba34e31d3679 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.63.6/server-1.63.6.tar.gz
These notes include entries from the following previous releases: 1.63.1, 1.63.2, 1.63.3, 1.63.4, 1.63.5
Bug fix.
Removed
Items that have been removed as of this release.
Installation and Deployment
Deprecated feature removal: the file-based backup feature was deprecated in 1.42.0 and is now removed from Humio. The following configs are no longer supported and will do nothing if set:
The `DELETE_BACKUP_AFTER_MILLIS` config is still supported, as it is used for configuring the delay between a file being marked for deletion in Humio, and that file being removed from bucket storage.
Upgrades
Changes that may occur or be required during an upgrade.
Other
Kafka client has been upgraded to 3.4.0.
Kafka broker has been upgraded to 3.4.0 in the Kafka container.
The container upgrade is performed for security reasons to resolve CVE-2022-36944, although Kafka should not be affected by that issue. If you wish to do a rolling upgrade of your Kafka containers, please refer to the Kafka upgrade guide.
New features and improvements
Security
The version of Jackson has been upgraded to address CVE-2022-42003 vulnerability.
Falcon Data Replicator
Added the new `fileDownloadParallelism` setting for FDR feeds to download files from the same SQS message in parallel. See Adjust Polling Nodes Per Feed for all the details.
UI Changes
Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.
Change Humio logo to Falcon LogScale on login and signup pages.
Interactions on JSON data now enabled for JSON arrays in the Event List.
Parsing JSON arrays in drill-down context menus no longer adds a trailing dot to the prefix field name.
The `Single Value` widget has updated properties:
New design for the toggle switch: it is now bigger and has a green/gray color profile instead of blue/gray.
The color profile of the displayed value by trend is now customizable.
Following its name change, mentions of Humio have been changed to Falcon LogScale.
Add Falcon LogScale announcement on login and signup pages.
Contextual drill-down menus for field interactions have been introduced, see Field Interactions. In particular:
Fields in the Inspection Panel now have context menu items, replacing the former buttons; see updates at Inspecting Events.
The Fields Panel on the left-hand side of the User Interface now has context menu items, replacing the former drill-down buttons in the field details flyout (shown when clicking a field in the fields menu). See updates at Displaying Fields.
Fields that have JSON, URL, and Timestamp content will have a drill-down option which will parse the field as a LogScale field. Parsing JSON will automatically use the field name as the prefix for the new field name.
Fields containing numbers (currently JSON only) will have additional drill-down options.
Automation and Alerts
Added two new message templates to actions, `{query_start_s}` and `{query_end_s}`. See Message Templates and Variables for details.
Self-hosted only: the old implementation of how alert queries are run has been removed. As a consequence, the dynamic configuration `UseLegacyAlertJob` has also been removed.
GraphQL API
Added new createDashboardFromTemplateV2 mutation with input parameters aligned with the rest of the create from template mutations.
Dashboards and Widgets
JSON in columns of `Event List` widgets now has fields underlined on hover, and the fields are clickable. This allows drill-downs and copying values easily.
Functions
QueryAPI — Added a `staticMetaData` property to `QueryJobStartedResult`. At the moment it only contains the property `executionMode`, which can be used to communicate hints about the way the backend executes the query to the front end.
Improved the `format()` function:
Fixed an issue where `format()` would output the wrong number of left-padded zeros for decimal conversions.
Formatting large positive numbers as hex no longer causes a loss of bits for integers less than 2^63.
Formatting negative numbers as hex no longer produces unintelligible strings.
Fixed an issue where adding the `#` flag would not display the correct formatting string.
Fixed an issue where specifying the time/date modifier `N` would fail to parse.
Fixed an issue where supplying multiple fields required you to specify the index of the last field as an argument specifier.
Added a length specifier to allow outputting fields as 32-bit integers instead of 64-bit.
Using the type specifier `%F` now tries to format the specified field as a floating point number.
See the `format()` reference documentation page for all the above-mentioned updates to the supported formatting syntax.
QueryAPI — `executionModeHint` renamed to `executionMode`.
Introduced new valid array syntax in the `array:contains()` and `array:regex()` functions:
Changed the expected format of the `array` parameter.
Changed these functions to no longer be experimental.
Other
Add code to ensure all mini-segments for the same target end up located on the same hosts. A change in 1.63 could create a situation where mini-segments for the same merge target wound up on different nodes, which the query code currently assumes cannot happen. This could cause `Result is partial` responses to user queries.
When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.
Added a new dynamic configuration, `UndersizedMergingRetentionPercentage`, with a default value of `20`. This value is used when selecting undersized segments to merge; it controls how wide a time span can be merged together.
The setting is interpreted as a percentage of the repository's retention-by-time setting. A reasonable range is 0 through 90.
New background task that runs at startup. It verifies the checksums present in local segment files, traversing the most recently updated segment files on the local disk, based on their timestamps when Humio starts. If a file has an invalid checksum, it will be renamed to `crc-error.X`, where `X` is the ID of the segment. An error will be logged as well.
Add an additional validation check when uploading files to S3-like bucket storage. Humio will now perform a HEAD request for the file's final location in the bucket to verify that the upload succeeded.
Added use of the HTTP Proxy Client Configuration, if configured, in a lot of places.
Add a script in the tarball distribution's bin directory to check the execution environment, checking common permission issues and other requirements for an environment suitable for running LogScale.
Added a new ingest endpoint for receiving metrics and traces via OpenTelemetry OTLP/http. See Ingesting with OpenTelemetry for all the details.
Empty datasource directories will now be removed from the local file system when starting the server.
Created a new test function for event forwarders, which takes an event forwarder configuration as input and tests whether it is possible to connect to the Kafka server. The current test function, which takes an ID as input and tests an existing event forwarder by ID, is now marked as deprecated.
Use latest version of Java 17 in Docker images.
It is now possible to expand multiple bell notifications.
Fixed in this release
Security
Update Netty to address CVE-2022-41915.
UI Changes
URL paths with a repository name and no trailing /search resolved to `Not Found`. The URL /repoName will now again show the search page for the `repoName` repository.
Changed a missing `@timestamp` field to give a warning instead of an error in the functions `tail()`, `head()`, `bucket()`, and `timeChart()`.
Automation and Alerts
Fixed a bug where a link in the notification for a failed alert would link to a non-existing page.
API
Fixed an issue with API Explorer that could fail to load in some configurations when using cookie authentication.
Configuration
Fixed an issue where the environment variable `OIDC_USE_HTTP_PROXY` was not respected. LogScale will now call all OIDC endpoints directly, without going through the HTTP proxy, when `OIDC_USE_HTTP_PROXY` is set to `false`.
This fixes the known issue previously reported in Falcon LogScale 1.63.1 LTS (2022-11-14), Falcon LogScale 1.63.2 LTS (2022-11-30), Falcon LogScale 1.63.3 LTS (2022-12-21) and Falcon LogScale 1.70.0 LTS (2023-01-16).
Dashboards and Widgets
Fixed a bug where query result containing no valid results was handled incorrectly in visualisation.
Fixed a bug in the `Scatter Chart` widget tooltip, so that only the description of the actual point is shown in the tooltip when hovering the mouse over one point, instead of multiple points.
Functions
Fixed an issue where `match()` would sometimes give errors when `ignoreCase=true` and events contained Latin-1 encoded characters.
Fixed an issue where `NaN` values could cause `groupBy()` queries to fail.
Fixed a bug where the `selfJoin()` function would not apply the `postfilter` parameter.
Other
Fixed unlimited waits for nodes to get in sync, which could cause digest coordination to fail. The time allowed for a node to get "in sync" on a partition before leadership is assigned to it is now limited, in cases where the previous digest leader shut down gracefully.
Fixed an issue where nothing was displayed on the average ingest chart when only one datapoint is present.
It is now possible for a user to use the same personal invite token after the user has been transferred to another organization.
When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.
Fixed a bug in the decryption code used when decrypting files downloaded from bucket storage when `version-for-bucket-writes=3`. The bug prevented decrypting files larger than 2 GB.
Fixed a regression causing a reflective method lookup to fail when Humio is running on a Java version prior to 13.
When a host is removed from global, a job tries to clean up any references to it from other places in global, such as segments. Fixed a bug in this job that meant it didn't clean up references on segments that were tombstoned but not yet gone from global. This could block cleanup of those segments.
Fixed an issue that could cause event redaction tasks to fail to complete, if a segment having events redacted was deleted due to retention.
Fixed an issue causing a content-length check for bucket uploads to fail when encryption was enabled. The content-length check is not normally enabled, so this should only affect clusters that have disabled ETag-based validation.
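The kind of check involved can be sketched as a stand-in. The function, its error handling, and the encryption overhead figure are all hypothetical, not LogScale's code; the sketch only shows why a length computed before encryption would fail a check against the encrypted payload:

```python
def check_content_length(declared: int, payload: bytes) -> None:
    """Hypothetical content-length check for a bucket upload: fail if the
    payload size differs from the declared length, e.g. because encryption
    changed the size after the length was computed."""
    if len(payload) != declared:
        raise ValueError(f"content-length mismatch: declared {declared}, got {len(payload)}")

plaintext = b"0123456789"
encrypted = plaintext + b"\x00" * 28  # assumed overhead for IV and auth tag
check_content_length(len(encrypted), encrypted)  # passes: length declared after encryption
```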
Fixed an issue where LogScale could log secrets to the debug log when configured to use LDAP or when configured to use SSL for Kafka.
Known Issues
Security
OIDC known issue preventing login:
if you are running LogScale self-hosted, this version will not work in an environment where you are authenticating with OpenID Connect and you use an HTTP Proxy, but you do not want to call OpenID Connect through the HTTP Proxy.
Please do not upgrade to this version if this is the case; upgrade instead to Falcon LogScale 1.63.4 LTS (2023-02-01) or Falcon LogScale 1.70.1 LTS (2023-02-01), where this issue has been fixed.
Falcon LogScale 1.63.5 LTS (2023-03-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.63.5 | LTS | 2023-03-06 | Cloud | 2023-11-30 | No | 1.30.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | e4684d95d5cf279a36f9581d395dbd6b |
SHA1 | b179a38c49fb4d4c9b298ea867f69f97e1eb5dc8 |
SHA256 | a0bfa4e22b5800a5ca2f117914d30a68df66fd1ddfe5452cc97960832e3714b8 |
SHA512 | 168dcec977536d77f3f793b811b41e483b6ca17b942d305953ce6099d1d01de7d7a14fa135681d330ef0ce5b9bff94457fba265064e838f1fedca49f8e032f3d |
Docker Image | SHA256 Checksum |
---|---|
humio | c78c2e3e9b3956346aa8ad0b90b68ad42c17e25e22b2b45ad0c9d564aa6f6e29 |
humio-core | 4811ddd5298fa41d7ccf123f83aa060d77a2583528208dfdfe6c98b298f931f3 |
kafka | adae65ce4cb90bf64fcef617d13d896a531f08cca8286b7241be62013f3c801a |
zookeeper | 4e146244b375c139e49e3247b24cdfd8adfded7a07b9dba074ebb68878033619 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.63.5/server-1.63.5.tar.gz
These notes include entries from the following previous releases: 1.63.1, 1.63.2, 1.63.3, 1.63.4
Security fix.
Removed
Items that have been removed as of this release.
Installation and Deployment
Deprecated feature removal: the file-based backup feature was deprecated in 1.42.0 and is now removed from Humio. The following configs are no longer supported and will do nothing if set:
The `DELETE_BACKUP_AFTER_MILLIS` config is still supported, as it is used for configuring the delay between a file being marked for deletion in Humio and that file being removed from bucket storage.
Upgrades
Changes that may occur or be required during an upgrade.
Other
The Kafka client has been upgraded to 3.4.0.
The Kafka broker has been upgraded to 3.4.0 in the Kafka container.
The container upgrade is performed for security reasons, to resolve CVE-2022-36944, although Kafka should not be affected by that issue. If you wish to do a rolling upgrade of your Kafka containers, always refer to the Kafka upgrade guide.
New features and improvements
Security
The version of Jackson has been upgraded to address the CVE-2022-42003 vulnerability.
Falcon Data Replicator
Added the new `fileDownloadParallelism` setting for FDR feeds to download files from the same SQS message in parallel. See Adjust Polling Nodes Per Feed for all the details.
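The idea behind the setting can be sketched with a small, hypothetical downloader. Only the setting name comes from the release note; the function, the parameter mapping, and the dummy download step are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def download_all(file_refs, download_one, file_download_parallelism=4):
    """Download the files referenced by one SQS message, running up to
    file_download_parallelism downloads at a time, preserving input order."""
    with ThreadPoolExecutor(max_workers=file_download_parallelism) as pool:
        return list(pool.map(download_one, file_refs))

# usage with a dummy downloader standing in for the real S3 fetch
results = download_all(["f1", "f2", "f3"], lambda ref: f"data:{ref}",
                       file_download_parallelism=2)
print(results)  # ['data:f1', 'data:f2', 'data:f3']
```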
UI Changes
Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.
Change Humio logo to Falcon LogScale on login and signup pages.
Interactions on JSON data are now enabled for JSON arrays in the Event List.
Parsing JSON arrays in drill-down context menus no longer adds a trailing dot to the prefix field name.
The `Single Value` widget has updated properties:
New design for the toggle switch: it is now bigger and has a green/gray color profile instead of blue/gray.
The color profile of the displayed trend value is now customizable.
Following its name change, mentions of Humio have been changed to Falcon LogScale.
Add Falcon LogScale announcement on login and signup pages.
Contextual drill-down menus for field interactions have been introduced, see Field Interactions. In particular:
Fields in the Inspection Panel are now provided with context menu items, replacing the former buttons; see updates at Inspecting Events.
The Fields Panel on the left-hand side of the User Interface is now provided with context menu items, replacing the former drill-down buttons in the field details flyout (shown when clicking a field in the fields menu). See updates at Displaying Fields.
Fields that have JSON, URL and Timestamp content will have a drill-down option which will parse the field as a LogScale field. Parsing JSON will automatically use the field name as a prefix for the new field name.
Fields containing numbers (currently JSON only) will have additional drill-down options.
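The prefixing rule mentioned above (new field names derived from the source field name) can be sketched like this; the helper is hypothetical and the exact naming scheme LogScale uses may differ:

```python
import json

def parse_json_field(field_name, value):
    """Flatten a JSON string into new fields prefixed with the source
    field name, roughly in the spirit of the JSON drill-down option."""
    out = {}
    def walk(prefix, node):
        if isinstance(node, dict):
            for k, v in node.items():
                walk(f"{prefix}.{k}", v)
        elif isinstance(node, list):
            for i, v in enumerate(node):
                walk(f"{prefix}[{i}]", v)
        else:
            out[prefix] = node
    walk(field_name, json.loads(value))
    return out

print(parse_json_field("payload", '{"user": {"id": 7}, "tags": ["a"]}'))
# {'payload.user.id': 7, 'payload.tags[0]': 'a'}
```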
Automation and Alerts
Added two new message templates to actions, `{query_start_s}` and `{query_end_s}`. See Message Templates and Variables for details.
Self-hosted only: the old implementation of how alert queries are run has been removed. As a consequence, the dynamic configuration `UseLegacyAlertJob` has also been removed.
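A sketch of how such message templates substitute into an action message. Only the template names come from the release note; the rendering helper, the millisecond inputs, and the epoch-seconds interpretation of the `_s` suffix are assumptions:

```python
def render_template(template, query_start_ms, query_end_ms):
    """Hypothetical renderer: replaces {query_start_s}/{query_end_s}
    placeholders with the query window boundaries in epoch seconds."""
    values = {
        "query_start_s": query_start_ms // 1000,
        "query_end_s": query_end_ms // 1000,
    }
    out = template
    for name, value in values.items():
        out = out.replace("{" + name + "}", str(value))
    return out

msg = render_template("window {query_start_s}..{query_end_s}",
                      1700000000000, 1700000060000)
print(msg)  # window 1700000000..1700000060
```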
GraphQL API
Added new createDashboardFromTemplateV2 mutation with input parameters aligned with the rest of the create from template mutations.
Dashboards and Widgets
JSON in and formats columns in `Event List` widgets now have fields underlined on hover and are clickable. This allows drill-downs and copying values easily.
Functions
QueryAPI: Added the `staticMetaData` property to `QueryJobStartedResult`. At the moment it only contains the property `executionMode`, which can be used to communicate hints about the way the backend executes the query to the front-end.
Improved the `format()` function:
Fixed an issue where the `format()` function would output the wrong number of left-padded zeros for decimal conversions.
Formatting large positive numbers as hex no longer causes a loss of bits for integers less than 2^63.
Formatting negative numbers as hex no longer produces unintelligible strings.
Fixed an issue where adding the `#` flag would not display the correct formatting string.
Fixed an issue where specifying the time/date modifier `N` would fail to parse.
Fixed an issue where supplying multiple fields required you to specify the index of the last field as an argument specifier.
Added a length specifier to allow for outputting fields as 32-bit integers instead of 64-bit.
Using the type specifier `%F` now tries to format the specified field as a floating point.
See the `format()` reference documentation page for all the above-mentioned updates on the supported formatting syntax.
QueryAPI: `executionModeHint` renamed to `executionMode`.
Introduced new valid array syntax in the `array:contains()` and `array:regex()` functions:
Changed the expected format of the `array` parameter.
Changed these functions to no longer be experimental.
Other
Added code to ensure all mini-segments for the same target end up located on the same hosts. A change in 1.63 could create a situation where mini-segments for the same merge target wound up on different nodes, which the query code currently assumes cannot happen. This could cause `Result is partial` responses to user queries.
When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.
Added a new dynamic configuration, `UndersizedMergingRetentionPercentage`, with a default value of `20`. This configuration value is used when selecting undersized segments to merge; it controls how wide a time span can be merged together. The setting is interpreted as a percentage of the repository's retention-by-time setting. A reasonable range is 0 through 90.
Added a new background task that runs at startup. It verifies the checksums present in local segment files, traversing the most recently updated segment files on the local disk, using the timestamps they have when Humio starts. If a file has an invalid checksum, it will be renamed to `crc-error.X`, where `X` is the ID of the segment. An error will be logged as well.
Added an additional validation check when uploading files to S3-like bucket storage. Humio will now perform a HEAD request for the file's final location in the bucket to verify that the upload succeeded.
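The startup checksum pass can be sketched as follows. Only the `crc-error.X` naming comes from the release note; the use of zlib's CRC-32, the file layout, and the helper itself are assumptions:

```python
import os
import tempfile
import zlib

def verify_segment(path: str, segment_id: str, expected_crc: int) -> bool:
    """Hypothetical sketch of the startup checksum pass: if a local segment
    file's CRC does not match, rename it to crc-error.<id> and report failure."""
    with open(path, "rb") as f:
        actual = zlib.crc32(f.read())
    if actual != expected_crc:
        os.rename(path, os.path.join(os.path.dirname(path), f"crc-error.{segment_id}"))
        return False
    return True

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "segment.dat")
    with open(p, "wb") as f:
        f.write(b"events")
    ok = verify_segment(p, "42", zlib.crc32(b"corrupted"))  # wrong checksum
    print(ok, os.path.exists(os.path.join(d, "crc-error.42")))  # False True
```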
Added use of the HTTP Proxy Client Configuration, if configured, in many places.
Added a script in the tarball distribution's bin directory that checks the execution environment for common permission issues and other requirements of an environment suitable for running LogScale.
Added a new ingest endpoint for receiving metrics and traces via OpenTelemetry OTLP/http. See Ingesting with OpenTelemetry for all the details.
Empty datasource directories will now be removed from the local file system when the server starts.
Created a new test function for event forwarders, which takes an event forwarder configuration as input and tests whether it is possible to connect to the Kafka server. The current test function, which takes an ID as input and tests an existing event forwarder by ID, is now marked as deprecated.
Use latest version of Java 17 in Docker images.
It is now possible to expand multiple bell notifications.
Fixed in this release
Security
Update Netty to address CVE-2022-41915.
UI Changes
URL paths with a repository name and no trailing /search resolved to `Not Found`. The URL /repoName will now again show the search page for the `repoName` repository.
A missing `@timestamp` field now gives a warning instead of an error in the functions `tail()`, `head()`, `bucket()`, and `timeChart()`.
Automation and Alerts
Fixed a bug where a link in the notification for a failed alert would link to a non-existing page.
Configuration
Fixed an issue where the environment variable `OIDC_USE_HTTP_PROXY` was not respected. LogScale will now call all OIDC endpoints directly, without going through the HTTP Proxy, when `OIDC_USE_HTTP_PROXY` is set to `false`.
This fixes the known issue previously reported in Falcon LogScale 1.63.1 LTS (2022-11-14), Falcon LogScale 1.63.2 LTS (2022-11-30), Falcon LogScale 1.63.3 LTS (2022-12-21) and Falcon LogScale 1.70.0 LTS (2023-01-16).
Dashboards and Widgets
Fixed a bug where a query result containing no valid results was handled incorrectly in visualisations.
Fixed a bug in the `Scatter Chart` widget tooltip so that only the description of the hovered point is shown, instead of descriptions for multiple points.
Functions
Fixed an issue where `match()` would sometimes give errors when `ignoreCase=true` and events contained latin1 encoded characters.
Fixed an issue where `NaN` values could cause `groupBy()` queries to fail.
Fixed a bug where the `selfJoin()` function would not apply the `postfilter` parameter.
Other
Fixed an issue where unlimited waits for nodes to get in sync could cause digest coordination to fail. The time allowed for a node to get "in sync" on a partition before leadership is assigned to it is now limited, in cases where the previous digest leader shut down gracefully.
Fixed an issue where nothing was displayed on the average ingest chart when only one datapoint was present.
It is now possible for a user to use the same personal invite token after the user has been transferred to another organization.
When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.
Fixed a bug in the decryption code used when decrypting files downloaded from bucket storage when `version-for-bucket-writes=3`. The bug prevented decrypting files larger than 2 GB.
Fixed a regression causing a reflective method lookup to fail when Humio is running on a Java version prior to 13.
When a host is removed from global, a job tries to clean up any references to it from other places in global, such as segments. Fixed a bug in this job that meant it didn't clean up references on segments that were tombstoned but not yet gone from global. This could block cleanup of those segments.
Fixed an issue that could cause event redaction tasks to fail to complete, if a segment having events redacted was deleted due to retention.
Fixed an issue causing a content-length check for bucket uploads to fail when encryption was enabled. The content-length check is not normally enabled, so this should only affect clusters that have disabled ETag-based validation.
Fixed an issue where LogScale could log secrets to the debug log when configured to use LDAP or when configured to use SSL for Kafka.
Known Issues
Security
OIDC known issue preventing login:
if you are running LogScale self-hosted, this version will not work in an environment where you are authenticating with OpenID Connect and you use an HTTP Proxy, but you do not want to call OpenID Connect through the HTTP Proxy.
Please do not upgrade to this version if this is the case; upgrade instead to Falcon LogScale 1.63.4 LTS (2023-02-01) or Falcon LogScale 1.70.1 LTS (2023-02-01), where this issue has been fixed.
Falcon LogScale 1.63.4 LTS (2023-02-01)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.63.4 | LTS | 2023-02-01 | Cloud | 2023-11-30 | No | 1.30.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | b8b3067213532194d2510cc1b12b1ed8 |
SHA1 | 5418f0926f51fafb413500fbdbb41df496a7fb0d |
SHA256 | 994882158ac89800a9418fe8d2b51426dc794ec1fbc5dc1d365530c8449fefca |
SHA512 | 262f662af3647d77352e1abe990be003f98a2abf50be0350b538ca8cc13ff56ec6414f3a045758d64bbf4e137918877fe56d70a1b3ef01e1405da4514742d66e |
Docker Image | SHA256 Checksum |
---|---|
humio | 8861250649894e744ecb070db48a50c5ffdf071f7bc26a3b1bc41970262da77e |
humio-core | b310155c2958037385eb20527dad3bd10a0854c0761ef9991ca4c48d01074a42 |
kafka | 4d9fb7715fd851192005c1cf8180e9ff9a27b84f58e720b36f4d4f3baa5a4449 |
zookeeper | d00d8999eb2880d7d0ef307210b7c5aa3949b2c16ec1d471af36d9344a8501c9 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.63.4/server-1.63.4.tar.gz
These notes include entries from the following previous releases: 1.63.1, 1.63.2, 1.63.3
Bug fixes and updates.
Removed
Items that have been removed as of this release.
Installation and Deployment
Deprecated feature removal: the file-based backup feature was deprecated in 1.42.0 and is now removed from Humio. The following configs are no longer supported and will do nothing if set:
The `DELETE_BACKUP_AFTER_MILLIS` config is still supported, as it is used for configuring the delay between a file being marked for deletion in Humio and that file being removed from bucket storage.
New features and improvements
Security
The version of Jackson has been upgraded to address the CVE-2022-42003 vulnerability.
Falcon Data Replicator
Added the new `fileDownloadParallelism` setting for FDR feeds to download files from the same SQS message in parallel. See Adjust Polling Nodes Per Feed for all the details.
UI Changes
Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.
Change Humio logo to Falcon LogScale on login and signup pages.
Interactions on JSON data are now enabled for JSON arrays in the Event List.
Parsing JSON arrays in drill-down context menus no longer adds a trailing dot to the prefix field name.
The `Single Value` widget has updated properties:
New design for the toggle switch: it is now bigger and has a green/gray color profile instead of blue/gray.
The color profile of the displayed trend value is now customizable.
Following its name change, mentions of Humio have been changed to Falcon LogScale.
Add Falcon LogScale announcement on login and signup pages.
Contextual drill-down menus for field interactions have been introduced, see Field Interactions. In particular:
Fields in the Inspection Panel are now provided with context menu items, replacing the former buttons; see updates at Inspecting Events.
The Fields Panel on the left-hand side of the User Interface is now provided with context menu items, replacing the former drill-down buttons in the field details flyout (shown when clicking a field in the fields menu). See updates at Displaying Fields.
Fields that have JSON, URL and Timestamp content will have a drill-down option which will parse the field as a LogScale field. Parsing JSON will automatically use the field name as a prefix for the new field name.
Fields containing numbers (currently JSON only) will have additional drill-down options.
Automation and Alerts
Added two new message templates to actions, `{query_start_s}` and `{query_end_s}`. See Message Templates and Variables for details.
Self-hosted only: the old implementation of how alert queries are run has been removed. As a consequence, the dynamic configuration `UseLegacyAlertJob` has also been removed.
GraphQL API
Added new createDashboardFromTemplateV2 mutation with input parameters aligned with the rest of the create from template mutations.
Dashboards and Widgets
JSON in and formats columns in `Event List` widgets now have fields underlined on hover and are clickable. This allows drill-downs and copying values easily.
Functions
QueryAPI: Added the `staticMetaData` property to `QueryJobStartedResult`. At the moment it only contains the property `executionMode`, which can be used to communicate hints about the way the backend executes the query to the front-end.
Improved the `format()` function:
Fixed an issue where the `format()` function would output the wrong number of left-padded zeros for decimal conversions.
Formatting large positive numbers as hex no longer causes a loss of bits for integers less than 2^63.
Formatting negative numbers as hex no longer produces unintelligible strings.
Fixed an issue where adding the `#` flag would not display the correct formatting string.
Fixed an issue where specifying the time/date modifier `N` would fail to parse.
Fixed an issue where supplying multiple fields required you to specify the index of the last field as an argument specifier.
Added a length specifier to allow for outputting fields as 32-bit integers instead of 64-bit.
Using the type specifier `%F` now tries to format the specified field as a floating point.
See the `format()` reference documentation page for all the above-mentioned updates on the supported formatting syntax.
QueryAPI: `executionModeHint` renamed to `executionMode`.
Introduced new valid array syntax in the `array:contains()` and `array:regex()` functions:
Changed the expected format of the `array` parameter.
Changed these functions to no longer be experimental.
Other
Added code to ensure all mini-segments for the same target end up located on the same hosts. A change in 1.63 could create a situation where mini-segments for the same merge target wound up on different nodes, which the query code currently assumes cannot happen. This could cause `Result is partial` responses to user queries.
When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.
Added a new dynamic configuration, `UndersizedMergingRetentionPercentage`, with a default value of `20`. This configuration value is used when selecting undersized segments to merge; it controls how wide a time span can be merged together. The setting is interpreted as a percentage of the repository's retention-by-time setting. A reasonable range is 0 through 90.
Added a new background task that runs at startup. It verifies the checksums present in local segment files, traversing the most recently updated segment files on the local disk, using the timestamps they have when Humio starts. If a file has an invalid checksum, it will be renamed to `crc-error.X`, where `X` is the ID of the segment. An error will be logged as well.
Added an additional validation check when uploading files to S3-like bucket storage. Humio will now perform a HEAD request for the file's final location in the bucket to verify that the upload succeeded.
Added use of the HTTP Proxy Client Configuration, if configured, in many places.
Added a script in the tarball distribution's bin directory that checks the execution environment for common permission issues and other requirements of an environment suitable for running LogScale.
Added a new ingest endpoint for receiving metrics and traces via OpenTelemetry OTLP/http. See Ingesting with OpenTelemetry for all the details.
Empty datasource directories will now be removed from the local file system when the server starts.
Created a new test function for event forwarders, which takes an event forwarder configuration as input and tests whether it is possible to connect to the Kafka server. The current test function, which takes an ID as input and tests an existing event forwarder by ID, is now marked as deprecated.
Use latest version of Java 17 in Docker images.
It is now possible to expand multiple bell notifications.
Fixed in this release
Security
Update Netty to address CVE-2022-41915.
UI Changes
URL paths with a repository name and no trailing /search resolved to `Not Found`. The URL /repoName will now again show the search page for the `repoName` repository.
A missing `@timestamp` field now gives a warning instead of an error in the functions `tail()`, `head()`, `bucket()`, and `timeChart()`.
Automation and Alerts
Fixed a bug where a link in the notification for a failed alert would link to a non-existing page.
Configuration
Fixed an issue where the environment variable `OIDC_USE_HTTP_PROXY` was not respected. LogScale will now call all OIDC endpoints directly, without going through the HTTP Proxy, when `OIDC_USE_HTTP_PROXY` is set to `false`.
This fixes the known issue previously reported in Falcon LogScale 1.63.1 LTS (2022-11-14), Falcon LogScale 1.63.2 LTS (2022-11-30), Falcon LogScale 1.63.3 LTS (2022-12-21) and Falcon LogScale 1.70.0 LTS (2023-01-16).
Dashboards and Widgets
Fixed a bug where a query result containing no valid results was handled incorrectly in visualisations.
Fixed a bug in the `Scatter Chart` widget tooltip so that only the description of the hovered point is shown, instead of descriptions for multiple points.
Functions
Fixed an issue where `match()` would sometimes give errors when `ignoreCase=true` and events contained latin1 encoded characters.
Fixed an issue where `NaN` values could cause `groupBy()` queries to fail.
Fixed a bug where the `selfJoin()` function would not apply the `postfilter` parameter.
Other
Fixed an issue where unlimited waits for nodes to get in sync could cause digest coordination to fail. The time allowed for a node to get "in sync" on a partition before leadership is assigned to it is now limited, in cases where the previous digest leader shut down gracefully.
Fixed an issue where nothing was displayed on the average ingest chart when only one datapoint was present.
It is now possible for a user to use the same personal invite token after the user has been transferred to another organization.
When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.
Fixed a bug in the decryption code used when decrypting files downloaded from bucket storage when `version-for-bucket-writes=3`. The bug prevented decrypting files larger than 2 GB.
Fixed a regression causing a reflective method lookup to fail when Humio is running on a Java version prior to 13.
When a host is removed from global, a job tries to clean up any references to it from other places in global, such as segments. Fixed a bug in this job that meant it didn't clean up references on segments that were tombstoned but not yet gone from global. This could block cleanup of those segments.
Fixed an issue that could cause event redaction tasks to fail to complete, if a segment having events redacted was deleted due to retention.
Fixed an issue causing a content-length check for bucket uploads to fail when encryption was enabled. The content-length check is not normally enabled, so this should only affect clusters that have disabled ETag-based validation.
Fixed an issue where LogScale could log secrets to the debug log when configured to use LDAP or when configured to use SSL for Kafka.
Known Issues
Security
OIDC known issue preventing login:
if you are running LogScale self-hosted, this version will not work in an environment where you are authenticating with OpenID Connect and you use an HTTP Proxy, but you do not want to call OpenID Connect through the HTTP Proxy.
Please do not upgrade to this version if this is the case; upgrade instead to Falcon LogScale 1.63.4 LTS (2023-02-01) or Falcon LogScale 1.70.1 LTS (2023-02-01), where this issue has been fixed.
Falcon LogScale 1.63.3 LTS (2022-12-21)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.63.3 | LTS | 2022-12-21 | Cloud | 2023-11-30 | No | 1.30.0 | No |
Hide file hashes
TAR Checksum | Value |
---|---|
MD5 | 74b7a4bc3143245a361537e572c5afb2 |
SHA1 | ef312b1b71d041d1b9a4ebfd7ef5108c6fff24b3 |
SHA256 | c7fe720697a9c624db9cc26268883b7389092c3c0dbc5541b643ff12cf62226d |
SHA512 | 5fb5b6dee09abafc1f45b2cce0889e315cd1aa6fc79dd393bfb78eac5bd51047ada7670d65fe4bbc7908b0791c17048552eb03abf675fb70b74778b5142db523 |
Docker Image | SHA256 Checksum |
---|---|
humio | ba99848f525941344e972b9fddc437042aaaa851bbba6a10f3e54aebccafbe62 |
humio-core | ef99f9f84bfd3258f4abc92e6b3f1ec300003caaf93cb330a1a52f419e9e6e76 |
kafka | d96ae0f7043c57f0f1b0ae79a03d27fdb38854232f63dc341987158b6703b09f |
zookeeper | 357e3e93a78cc19e7d8aad157e6b1416c36fbf31153897d40adc9aa02b9b9b61 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.63.3/server-1.63.3.tar.gz
These notes include entries from the following previous releases: 1.63.1, 1.63.2
Bug fixes and updates.
Removed
Items that have been removed as of this release.
Installation and Deployment
Deprecated feature removal: the file-based backup feature was deprecated in 1.42.0 and is now removed from Humio. The following configs are no longer supported and will do nothing if set:
The `DELETE_BACKUP_AFTER_MILLIS` config is still supported, as it is used for configuring the delay between a file being marked for deletion in Humio and that file being removed from bucket storage.
New features and improvements
Security
The version of Jackson has been upgraded to address the CVE-2022-42003 vulnerability.
Falcon Data Replicator
Added the new `fileDownloadParallelism` setting for FDR feeds to download files from the same SQS message in parallel. See Adjust Polling Nodes Per Feed for all the details.
UI Changes
Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.
Change Humio logo to Falcon LogScale on login and signup pages.
Interactions on JSON data are now enabled for JSON arrays in the Event List.
Parsing JSON arrays in drill-down context menus no longer adds a trailing dot to the prefix field name.
The `Single Value` widget has updated properties:
New design for the toggle switch: it is now bigger and has a green/gray color profile instead of blue/gray.
The color profile of the displayed trend value is now customizable.
Following its name change, mentions of Humio have been changed to Falcon LogScale.
Add Falcon LogScale announcement on login and signup pages.
Contextual drill-down menus for field interactions have been introduced, see Field Interactions. In particular:
Fields in the Inspection Panel are now provided with context menu items, replacing the former buttons; see updates at Inspecting Events.
The Fields Panel on the left-hand side of the User Interface is now provided with context menu items, replacing the former drill-down buttons in the field details flyout (shown when clicking a field in the fields menu). See updates at Displaying Fields.
Fields that have JSON, URL and Timestamp content will have a drill-down option which will parse the field as a LogScale field. Parsing JSON will automatically use the field name as a prefix for the new field name.
Fields containing numbers (currently JSON only) will have additional drill-down options.
Automation and Alerts
Added two new message templates to actions, `{query_start_s}` and `{query_end_s}`. See Message Templates and Variables for details.
Self-hosted only: the old implementation of how alert queries are run has been removed. As a consequence, the dynamic configuration `UseLegacyAlertJob` has also been removed.
GraphQL API
Added new createDashboardFromTemplateV2 mutation with input parameters aligned with the rest of the create from template mutations.
Dashboards and Widgets
JSON in and formats columns in `Event List` widgets now have fields underlined on hover and are clickable. This allows drill-downs and copying values easily.
Functions
QueryAPI: Added the `staticMetaData` property to `QueryJobStartedResult`. At the moment it only contains the property `executionMode`, which can be used to communicate hints about the way the backend executes the query to the front-end.
Improved the `format()` function:
Fixed an issue where the `format()` function would output the wrong number of left-padded zeros for decimal conversions.
Formatting large positive numbers as hex no longer causes a loss of bits for integers less than 2^63.
Formatting negative numbers as hex no longer produces unintelligible strings.
Fixed an issue where adding the `#` flag would not display the correct formatting string.
Fixed an issue where specifying the time/date modifier `N` would fail to parse.
Fixed an issue where supplying multiple fields required you to specify the index of the last field as an argument specifier.
Added a length specifier to allow for outputting fields as 32-bit integers instead of 64-bit.
Using the type specifier `%F` now tries to format the specified field as a floating point.
See the `format()` reference documentation page for all the above-mentioned updates on the supported formatting syntax.
QueryAPI: `executionModeHint` renamed to `executionMode`.
Introduced new valid array syntax in the `array:contains()` and `array:regex()` functions:
Changed the expected format of the `array` parameter.
Changed these functions to no longer be experimental.
Other
Added code to ensure all mini-segments for the same target end up located on the same hosts. A change in 1.63 could create a situation where mini-segments for the same merge target wound up on different nodes, which the query code currently assumes cannot happen. This could cause `Result is partial` responses to user queries.
When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.
Added a new dynamic configuration, `UndersizedMergingRetentionPercentage`, with a default value of `20`. This configuration value is used when selecting undersized segments to merge; it controls how wide a time span can be merged together. The setting is interpreted as a percentage of the repository's retention-by-time setting. A reasonable range is 0 through 90.
Added a new background task that runs at startup. It verifies the checksums present in local segment files, traversing the most recently updated segment files on the local disk, using the timestamps they have when Humio starts. If a file has an invalid checksum, it will be renamed to `crc-error.X`, where `X` is the ID of the segment. An error will be logged as well.
Added an additional validation check when uploading files to S3-like bucket storage. Humio will now perform a HEAD request for the file's final location in the bucket to verify that the upload succeeded.
Added use of the HTTP Proxy Client Configuration, if configured, in many more places.
Add a script in the tarball distribution's bin directory to check the execution environment, checking common permission issues and other requirements for an environment suitable for running LogScale.
Added a new ingest endpoint for receiving metrics and traces via OpenTelemetry OTLP/http. See Ingesting with OpenTelemetry for all the details.
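OTLP/http conventionally exposes per-signal paths such as /v1/metrics and /v1/traces; the exact LogScale base path and authentication scheme are described at Ingesting with OpenTelemetry. A minimal sketch of building such a request, with a hypothetical host and token:

```python
import json
import urllib.request

# Hypothetical values -- consult Ingesting with OpenTelemetry for the real endpoint.
BASE_URL = "https://logscale.example.com/api/v1/ingest/otlp"
INGEST_TOKEN = "your-ingest-token"

def build_otlp_request(signal: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) an OTLP/http JSON request for 'metrics' or 'traces'."""
    return urllib.request.Request(
        url=f"{BASE_URL}/v1/{signal}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {INGEST_TOKEN}",
        },
        method="POST",
    )

req = build_otlp_request("traces", {"resourceSpans": []})
```

In practice an OpenTelemetry Collector or SDK exporter would send protobuf-encoded payloads to the same paths.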
Empty datasource directories will now be removed from the local file system while starting the server.
Created a new test function for event forwarders, which takes an event forwarder configuration as input and tests whether it is possible to connect to the Kafka server. The current test function, which takes an ID as input and tests an existing event forwarder by ID, is now marked as deprecated.
Use the latest version of Java 17 in Docker images.
It is now possible to expand multiple bell notifications.
Fixed in this release
Security
Update Netty to address CVE-2022-41915.
UI Changes
URL paths with a repository name and no trailing /search resolved to Not Found. The URL /repoName will now again show the search page for the repoName repository.
Changed missing @timestamp field to give a warning instead of an error in the functions tail(), head(), bucket(), and timeChart().
Automation and Alerts
Fixed a bug where a link in the notification for a failed alert would link to a non-existing page.
Dashboards and Widgets
Fixed a bug where query result containing no valid results was handled incorrectly in visualisation.
Fixed a bug in the Scatter Chart widget tooltip, so that only the description of the actual point is shown when hovering the mouse over a point, instead of multiple points.
Functions
Fixed an issue where match() would sometimes give errors when ignoreCase=true and events contained Latin-1 encoded characters.
Fixed an issue where NaN values could cause groupBy() queries to fail.
Fixed a bug where the selfJoin() function would not apply the postfilter parameter.
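The match() fix above concerns case-insensitive comparison of Latin-1 encoded events. As a language-neutral illustration (not LogScale code), case folding has to happen on decoded text, since lowercasing raw Latin-1 bytes is not well defined for characters outside ASCII:

```python
def fold_latin1(raw: bytes) -> str:
    """Decode Latin-1 bytes, then case-fold the resulting text.
    Byte-level lowercasing would mishandle characters like 0xC5 ('Å')."""
    return raw.decode("latin-1").casefold()
```

For example, the Latin-1 bytes for "Ångström" fold to "ångström" only after decoding.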
Other
Fixed an issue where nothing was displayed on the average ingest chart in case only one datapoint is present.
It is now possible for a user to use the same personal invite token after the user has been transferred to another organization.
When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.
Fixed a bug in the decryption code used when decrypting files downloaded from bucket storage when version-for-bucket-writes=3. The bug prevented decryption of files larger than 2 GB.
Fixed a regression causing a reflective method lookup to fail when Humio is running on a Java version prior to 13.
When a host is removed from global, a job tries to clean up any references to it from other places in global, such as segments. Fixed a bug in this job that meant it didn't clean up references on segments that were tombstoned but not yet gone from global. This could block cleanup of those segments.
Fix an issue that could cause event redaction tasks to fail to complete, if a segment having events redacted was deleted due to retention.
Fix an issue causing a content-length check for bucket uploads to fail when encryption was enabled. The content-length check is not normally enabled, so this should only affect clusters that have disabled ETag-based validation.
Fixed an issue where LogScale could log secrets to the debug log when configured to use LDAP or when configured to use SSL for Kafka.
Known Issues
Security
OIDC known issue preventing login:
if you are running LogScale self-hosted, this version will not work in an environment where you are Authenticating with OpenID Connect and you use an HTTP Proxy, but you do not want to call OpenIDConnect through the HTTP Proxy.
Please do not upgrade to this version if this is the case — upgrade to Falcon LogScale 1.63.4 LTS (2023-02-01) or Falcon LogScale 1.70.1 LTS (2023-02-01) instead, where this issue has been fixed.
Falcon LogScale 1.63.2 LTS (2022-11-30)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.63.2 | LTS | 2022-11-30 | Cloud | 2023-11-30 | No | 1.30.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 4b6142d1443af26cbe05f794f8fb65ec |
SHA1 | 2d21db39a7d4de74df05aea99baab1ad5d5def35 |
SHA256 | fe66c9be4aab6b027a5ebc5711fedaeee0aea088c7a20981dfdca07739325f1a |
SHA512 | efb7e6aaa4dbbfd1662552fb3f07b6895bcb2220b69da34a6deb228bd13b286a38ac470f436a0378fa24701d13094da3d3524ebd0f9580a9a599ca5c |
Docker Image | SHA256 Checksum |
---|---|
humio | dd063817c4b302708422213deaa45ac8c799dfc199ae9dc8e80c18a12f882c28 |
humio-core | 2f52662a0191e3723c65ac344b8aedb8a0b4a31f324823857fc2dcd13262c5e4 |
kafka | cfc1d3f130db496dff29c8c8bf3897253615b03521584bd1f3e17da7a398b67f |
zookeeper | 3d485ef41d36cbc71b583bee7153f75d660c2d1bf42c88fa6dbc21557eac0123 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.63.2/server-1.63.2.tar.gz
These notes include entries from the following previous releases: 1.63.1
Bug fixes and updates.
Removed
Items that have been removed as of this release.
Installation and Deployment
Deprecated feature removal: the file-based backup feature was deprecated in 1.42.0 and is now removed from Humio. The following configs are no longer supported and will do nothing if set:
The DELETE_BACKUP_AFTER_MILLIS config is still supported, as it is used for configuring the delay between a file being marked for deletion in Humio and that file being removed from bucket storage.
New features and improvements
Security
The version of Jackson has been upgraded to address CVE-2022-42003 vulnerability.
Falcon Data Replicator
Added the new fileDownloadParallelism setting for FDR feeds to download files from the same SQS message in parallel. See Adjust Polling Nodes Per Feed for all the details.
UI Changes
Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.
Change Humio logo to Falcon LogScale on login and signup pages.
Interactions on JSON data now enabled for JSON arrays in the Event List.
Parsing JSON arrays in drill-down context menus no longer adds a trailing dot to the prefix field name.
The Single Value widget has updated properties:
New design for the toggle switch: it is now bigger and has a green/gray color profile instead of blue/gray.
The color profile of the displayed value by trend is now customizable.
Following its name change, mentions of Humio have been changed to Falcon LogScale.
Add Falcon LogScale announcement on login and signup pages.
Contextual drill-down menus for field interactions have been introduced, see Field Interactions. In particular:
Fields in the Inspection Panel are now provided with context menu items, replacing the former buttons; see updates at Inspecting Events.
The Fields Panel on the left-hand side of the User Interface is now provided with context menu items, replacing the former drill-down buttons in the field details flyout (shown when clicking a field in the fields menu). See updates at Displaying Fields.
Fields that have JSON, URL, and Timestamp content will have a drill-down option which parses the field as a LogScale field. Parsing JSON will automatically use the field name as the prefix for the new field name.
Fields containing numbers (currently JSON only) will have additional drill-down options.
Automation and Alerts
Added two new message templates to actions, {query_start_s} and {query_end_s}. See Message Templates and Variables for details.
Self-hosted only: the old implementation of how alert queries are run has been removed. As a consequence, the dynamic configuration UseLegacyAlertJob has also been removed.
GraphQL API
Added new createDashboardFromTemplateV2 mutation with input parameters aligned with the rest of the create from template mutations.
Dashboards and Widgets
JSON in formatted columns of Event List widgets now has fields underlined on hover, and they are clickable. This allows drill-downs and copying values easily.
Functions
QueryAPI — Added staticMetaData property to QueryJobStartedResult. At the moment it only contains the property executionMode, which can be used to communicate hints about the way the backend executes the query to the front-end.
Improved the format() function:
Fixed an issue where the format() function would output the wrong number of left-padded zeros for decimal conversions.
Formatting large positive numbers as hex no longer causes a loss of bits for integers less than 2^63.
Formatting negative numbers as hex no longer produces unintelligible strings.
Fixed an issue where adding the # flag would not display the correct formatting string.
Fixed an issue where specifying the time/date modifier N would fail to parse.
Fixed an issue where supplying multiple fields required you to specify the index of the last field as an argument specifier.
Added a length specifier to allow for outputting fields as 32-bit integers instead of 64-bit.
Using the type specifier %F now tries to format the specified field as a floating point.
See the format() reference documentation page for all of the above updates to the supported formatting syntax.
QueryAPI — executionModeHint renamed to executionMode.
Introduced new valid array syntax in array:contains() and array:regex() functions:
Changed the expected format of the array parameter.
Changed these functions to no longer be experimental.
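The hex-related format() fixes above follow a common pattern: render integers as fixed-width two's-complement so that negative and large values get stable strings. A sketch in Python (illustration only; LogScale's own format() syntax is documented on its reference page):

```python
def to_hex64(n: int) -> str:
    """64-bit two's-complement hex: negative values yield a stable
    16-digit string instead of a sign-prefixed one."""
    return format(n & 0xFFFFFFFFFFFFFFFF, "016x")

def to_hex32(n: int) -> str:
    """Same idea at 32-bit width, as a length specifier would select."""
    return format(n & 0xFFFFFFFF, "08x")
```

Masking before formatting is what prevents both the lost bits near 2^63 and the unintelligible output for negative inputs.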
Other
When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.
Added a new dynamic configuration UndersizedMergingRetentionPercentage, with a default value of 20. This configuration value is used when selecting undersized segments to merge; it controls how wide a time span can be merged together. The setting is interpreted as a percentage of the repository's retention-by-time setting. A reasonable range is 0 through 90.
New background task that runs at startup. It verifies the checksums present in local segment files, traversing the most recently updated segment files on the local disk, using the timestamps they have when Humio starts. If a file has an invalid checksum, it will be renamed to crc-error.X, where X is the ID of the segment, and an error will be logged as well.
Added an additional validation check when uploading files to S3-like bucket storage. Humio will now perform a HEAD request for the file's final location in the bucket to verify that the upload succeeded.
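The arithmetic implied by UndersizedMergingRetentionPercentage is simple; this sketch shows the assumed relationship (the clamping to the 0-90 range is my assumption for illustration, not documented behavior):

```python
def max_merge_span_days(retention_days: float, pct: float = 20.0) -> float:
    """Widest time span of undersized segments eligible for merging,
    interpreted as a percentage of the repository's retention-by-time
    setting. Default percentage matches the documented default of 20."""
    pct = max(0.0, min(pct, 90.0))  # assumed clamp to the 'reasonable range'
    return retention_days * pct / 100.0
```

With 30-day retention and the default of 20, undersized segments spanning up to six days could be merged together.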
Added use of the HTTP Proxy Client Configuration, if configured, in many more places.
Add a script in the tarball distribution's bin directory to check the execution environment, checking common permission issues and other requirements for an environment suitable for running LogScale.
Added a new ingest endpoint for receiving metrics and traces via OpenTelemetry OTLP/http. See Ingesting with OpenTelemetry for all the details.
Empty datasource directories will now be removed from the local file system while starting the server.
Created a new test function for event forwarders, which takes an event forwarder configuration as input and tests whether it is possible to connect to the Kafka server. The current test function, which takes an ID as input and tests an existing event forwarder by ID, is now marked as deprecated.
Use the latest version of Java 17 in Docker images.
It is now possible to expand multiple bell notifications.
Fixed in this release
UI Changes
URL paths with a repository name and no trailing /search resolved to Not Found. The URL /repoName will now again show the search page for the repoName repository.
Changed missing @timestamp field to give a warning instead of an error in the functions tail(), head(), bucket(), and timeChart().
Dashboards and Widgets
Fixed a bug where query result containing no valid results was handled incorrectly in visualisation.
Fixed a bug in the Scatter Chart widget tooltip, so that only the description of the actual point is shown when hovering the mouse over a point, instead of multiple points.
Functions
Fixed an issue where match() would sometimes give errors when ignoreCase=true and events contained Latin-1 encoded characters.
Fixed an issue where NaN values could cause groupBy() queries to fail.
Fixed a bug where the selfJoin() function would not apply the postfilter parameter.
Other
Fixed an issue where nothing was displayed on the average ingest chart in case only one datapoint is present.
It is now possible for a user to use the same personal invite token after the user has been transferred to another organization.
When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.
Fixed a regression causing a reflective method lookup to fail when Humio is running on a Java version prior to 13.
When a host is removed from global, a job tries to clean up any references to it from other places in global, such as segments. Fixed a bug in this job that meant it didn't clean up references on segments that were tombstoned but not yet gone from global. This could block cleanup of those segments.
Fix an issue that could cause event redaction tasks to fail to complete, if a segment having events redacted was deleted due to retention.
Fix an issue causing a content-length check for bucket uploads to fail when encryption was enabled. The content-length check is not normally enabled, so this should only affect clusters that have disabled ETag-based validation.
Known Issues
Security
OIDC known issue preventing login:
if you are running LogScale self-hosted, this version will not work in an environment where you are Authenticating with OpenID Connect and you use an HTTP Proxy, but you do not want to call OpenIDConnect through the HTTP Proxy.
Please do not upgrade to this version if this is the case — upgrade to Falcon LogScale 1.63.4 LTS (2023-02-01) or Falcon LogScale 1.70.1 LTS (2023-02-01) instead, where this issue has been fixed.
Falcon LogScale 1.63.1 LTS (2022-11-14)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.63.1 | LTS | 2022-11-14 | Cloud | 2023-11-30 | No | 1.30.0 | No |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.63.1/server-1.63.1.tar.gz
Bug fixes and updates.
Removed
Items that have been removed as of this release.
Installation and Deployment
Deprecated feature removal: the file-based backup feature was deprecated in 1.42.0 and is now removed from Humio. The following configs are no longer supported and will do nothing if set:
The DELETE_BACKUP_AFTER_MILLIS config is still supported, as it is used for configuring the delay between a file being marked for deletion in Humio and that file being removed from bucket storage.
New features and improvements
Security
The version of Jackson has been upgraded to address CVE-2022-42003 vulnerability.
Falcon Data Replicator
Added the new fileDownloadParallelism setting for FDR feeds to download files from the same SQS message in parallel. See Adjust Polling Nodes Per Feed for all the details.
UI Changes
Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.
Change Humio logo to Falcon LogScale on login and signup pages.
Interactions on JSON data now enabled for JSON arrays in the Event List.
Parsing JSON arrays in drill-down context menus no longer adds a trailing dot to the prefix field name.
The Single Value widget has updated properties:
New design for the toggle switch: it is now bigger and has a green/gray color profile instead of blue/gray.
The color profile of the displayed value by trend is now customizable.
Following its name change, mentions of Humio have been changed to Falcon LogScale.
Add Falcon LogScale announcement on login and signup pages.
Contextual drill-down menus for field interactions have been introduced, see Field Interactions. In particular:
Fields in the Inspection Panel are now provided with context menu items, replacing the former buttons; see updates at Inspecting Events.
The Fields Panel on the left-hand side of the User Interface is now provided with context menu items, replacing the former drill-down buttons in the field details flyout (shown when clicking a field in the fields menu). See updates at Displaying Fields.
Fields that have JSON, URL, and Timestamp content will have a drill-down option which parses the field as a LogScale field. Parsing JSON will automatically use the field name as the prefix for the new field name.
Fields containing numbers (currently JSON only) will have additional drill-down options.
Automation and Alerts
Added two new message templates to actions, {query_start_s} and {query_end_s}. See Message Templates and Variables for details.
Self-hosted only: the old implementation of how alert queries are run has been removed. As a consequence, the dynamic configuration UseLegacyAlertJob has also been removed.
GraphQL API
Added new createDashboardFromTemplateV2 mutation with input parameters aligned with the rest of the create from template mutations.
Dashboards and Widgets
JSON in formatted columns of Event List widgets now has fields underlined on hover, and they are clickable. This allows drill-downs and copying values easily.
Functions
QueryAPI — Added staticMetaData property to QueryJobStartedResult. At the moment it only contains the property executionMode, which can be used to communicate hints about the way the backend executes the query to the front-end.
Improved the format() function:
Fixed an issue where the format() function would output the wrong number of left-padded zeros for decimal conversions.
Formatting large positive numbers as hex no longer causes a loss of bits for integers less than 2^63.
Formatting negative numbers as hex no longer produces unintelligible strings.
Fixed an issue where adding the # flag would not display the correct formatting string.
Fixed an issue where specifying the time/date modifier N would fail to parse.
Fixed an issue where supplying multiple fields required you to specify the index of the last field as an argument specifier.
Added a length specifier to allow for outputting fields as 32-bit integers instead of 64-bit.
Using the type specifier %F now tries to format the specified field as a floating point.
See the format() reference documentation page for all of the above updates to the supported formatting syntax.
QueryAPI — executionModeHint renamed to executionMode.
Introduced new valid array syntax in array:contains() and array:regex() functions:
Changed the expected format of the array parameter.
Changed these functions to no longer be experimental.
Other
When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.
Added a new dynamic configuration UndersizedMergingRetentionPercentage, with a default value of 20. This configuration value is used when selecting undersized segments to merge; it controls how wide a time span can be merged together. The setting is interpreted as a percentage of the repository's retention-by-time setting. A reasonable range is 0 through 90.
New background task that runs at startup. It verifies the checksums present in local segment files, traversing the most recently updated segment files on the local disk, using the timestamps they have when Humio starts. If a file has an invalid checksum, it will be renamed to crc-error.X, where X is the ID of the segment, and an error will be logged as well.
Added an additional validation check when uploading files to S3-like bucket storage. Humio will now perform a HEAD request for the file's final location in the bucket to verify that the upload succeeded.
Added use of the HTTP Proxy Client Configuration, if configured, in many more places.
Add a script in the tarball distribution's bin directory to check the execution environment, checking common permission issues and other requirements for an environment suitable for running LogScale.
Added a new ingest endpoint for receiving metrics and traces via OpenTelemetry OTLP/http. See Ingesting with OpenTelemetry for all the details.
Empty datasource directories will now be removed from the local file system while starting the server.
Created a new test function for event forwarders, which takes an event forwarder configuration as input and tests whether it is possible to connect to the Kafka server. The current test function, which takes an ID as input and tests an existing event forwarder by ID, is now marked as deprecated.
Use the latest version of Java 17 in Docker images.
It is now possible to expand multiple bell notifications.
Fixed in this release
UI Changes
Changed missing @timestamp field to give a warning instead of an error in the functions tail(), head(), bucket(), and timeChart().
Dashboards and Widgets
Fixed a bug where query result containing no valid results was handled incorrectly in visualisation.
Fixed a bug in the Scatter Chart widget tooltip, so that only the description of the actual point is shown when hovering the mouse over a point, instead of multiple points.
Functions
Fixed an issue where match() would sometimes give errors when ignoreCase=true and events contained Latin-1 encoded characters.
Fixed an issue where NaN values could cause groupBy() queries to fail.
Fixed a bug where the selfJoin() function would not apply the postfilter parameter.
Other
Fixed an issue where nothing was displayed on the average ingest chart in case only one datapoint is present.
It is now possible for a user to use the same personal invite token after the user has been transferred to another organization.
When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.
Fixed a regression causing a reflective method lookup to fail when Humio is running on a Java version prior to 13.
When a host is removed from global, a job tries to clean up any references to it from other places in global, such as segments. Fixed a bug in this job that meant it didn't clean up references on segments that were tombstoned but not yet gone from global. This could block cleanup of those segments.
Fix an issue that could cause event redaction tasks to fail to complete, if a segment having events redacted was deleted due to retention.
Fix an issue causing a content-length check for bucket uploads to fail when encryption was enabled. The content-length check is not normally enabled, so this should only affect clusters that have disabled ETag-based validation.
Known Issues
Security
OIDC known issue preventing login:
if you are running LogScale self-hosted, this version will not work in an environment where you are Authenticating with OpenID Connect and you use an HTTP Proxy, but you do not want to call OpenIDConnect through the HTTP Proxy.
Please do not upgrade to this version if this is the case — upgrade to Falcon LogScale 1.63.4 LTS (2023-02-01) or Falcon LogScale 1.70.1 LTS (2023-02-01) instead, where this issue has been fixed.
Falcon LogScale 1.63.0 GA (2022-10-25)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.63.0 | GA | 2022-10-25 | Cloud | 2023-11-30 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and updates.
New features and improvements
GraphQL API
Added enableEventForwarder and disableEventForwarder mutations to enable/disable event forwarders.
Log Collector
The LogScale Collector download page has moved into the new top-level tab Falcon Log Collector, under Manage your Fleet (Cloud-only).
Humio Log Collector is now Falcon LogScale Collector.
New FleetOverview functionality for the LogScale Collector 1.2.0 is available.
Functions
The holtwinters() query function will be deprecated with the release of future version 1.68. From then, it cannot be expected to work in alerts, and it will be removed entirely with the release of version 1.72.
The base64Decode() query function now accepts non-canonical encodings.
The parseCsv() function has improved its performance, in particular in terms of memory pressure.
Other
Close all segments a node is working on when shutting down. This should help the node start from a later point in Kafka after reboots.
Fixed in this release
Other
Fixed an issue with validations when creating a new Ingest Listener as Netflow/UDP.
The form validation for Ingest Listener will now clearly tell the user that the parser needs to be selected when you change between different protocols.
Fixed a race condition where the segment top offset wasn't removed when a datasource went idle. This could result in event redaction not running for such segments.
Falcon LogScale 1.62.0 GA (2022-10-18)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.62.0 | GA | 2022-10-18 | Cloud | 2023-11-30 | No | 1.30.0 | No |
Available for download two days after release.
Updates.
New features and improvements
UI Changes
Change Humio logo to Falcon LogScale on login and signup pages.
Parsing JSON arrays in drill-down context menus no longer adds a trailing dot to the prefix field name.
Following its name change, mentions of Humio have been changed to Falcon LogScale.
Add Falcon LogScale announcement on login and signup pages.
Functions
Introduced new valid array syntax in array:contains() and array:regex() functions:
Changed the expected format of the array parameter.
Changed these functions to no longer be experimental.
Other
Add a script in the tarball distribution's bin directory to check the execution environment, checking common permission issues and other requirements for an environment suitable for running LogScale.
Added a new ingest endpoint for receiving metrics and traces via OpenTelemetry OTLP/http. See Ingesting with OpenTelemetry for all the details.
Humio Server 1.61.0 GA (2022-10-11)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.61.0 | GA | 2022-10-11 | Cloud | 2023-11-30 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and updates.
New features and improvements
Falcon Data Replicator
Added the new fileDownloadParallelism setting for FDR feeds to download files from the same SQS message in parallel. See Adjust Polling Nodes Per Feed for all the details.
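The effect of a parallelism setting like fileDownloadParallelism can be sketched with a thread pool (hypothetical code; the real FDR poller and its S3 download call are internal to LogScale):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Iterable, List

def fetch_all(keys: Iterable[str], fetch: Callable[[str], bytes],
              parallelism: int = 4) -> List[bytes]:
    """Download every file referenced by one SQS message concurrently,
    preserving input order. 'fetch' stands in for the real S3 download."""
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(fetch, keys))
```

Since the downloads are I/O bound, raising the parallelism mostly trades memory and connection count for throughput on large SQS messages.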
UI Changes
Interactions on JSON data now enabled for JSON arrays in the Event List.
Functions
QueryAPI — Added staticMetaData property to QueryJobStartedResult. At the moment it only contains the property executionMode, which can be used to communicate hints about the way the backend executes the query to the front-end.
QueryAPI — executionModeHint renamed to executionMode.
Fixed in this release
Functions
Fixed a bug where the selfJoin() function would not apply the postfilter parameter.
Other
Fixed an issue where nothing was displayed on the average ingest chart in case only one datapoint is present.
Fixed a regression causing a reflective method lookup to fail when Humio is running on a Java version prior to 13.
Fix an issue that could cause event redaction tasks to fail to complete, if a segment having events redacted was deleted due to retention.
Humio Server 1.60.0 GA (2022-10-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.60.0 | GA | 2022-10-04 | Cloud | 2023-11-30 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and updates.
New features and improvements
UI Changes
Contextual drill-down menus for field interactions have been introduced, see Field Interactions. In particular:
Fields in the Inspection Panel are now provided with context menu items, replacing the former buttons; see updates at Inspecting Events.
The Fields Panel on the left-hand side of the User Interface is now provided with context menu items, replacing the former drill-down buttons in the field details flyout (shown when clicking a field in the fields menu). See updates at Displaying Fields.
Fields that have JSON, URL, and Timestamp content will have a drill-down option which parses the field as a LogScale field. Parsing JSON will automatically use the field name as the prefix for the new field name.
Fields containing numbers (currently JSON only) will have additional drill-down options.
GraphQL API
Added new createDashboardFromTemplateV2 mutation with input parameters aligned with the rest of the create from template mutations.
Dashboards and Widgets
JSON in formatted columns of Event List widgets now has fields underlined on hover, and they are clickable. This allows drill-downs and copying values easily.
Other
New background task that runs at startup. It verifies the checksums present in local segment files, traversing the most recently updated segment files on the local disk, using the timestamps they have when Humio starts. If a file has an invalid checksum, it will be renamed to crc-error.X, where X is the ID of the segment, and an error will be logged as well.
Use the latest version of Java 17 in Docker images.
It is now possible to expand multiple bell notifications.
Fixed in this release
Dashboards and Widgets
Fixed a bug where query result containing no valid results was handled incorrectly in visualisation.
Functions
Fixed an issue where NaN values could cause groupBy() queries to fail.
Humio Server 1.59.0 GA (2022-09-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.59.0 | GA | 2022-09-27 | Cloud | 2023-11-30 | No | 1.30.0 | No |
Available for download two days after release.
Updates.
New features and improvements
UI Changes
The Single Value widget has updated properties:
New design for the toggle switch: it is now bigger and has a green/gray color profile instead of blue/gray.
The color profile of the displayed value by trend is now customizable.
Automation and Alerts
Added two new message templates to actions, {query_start_s} and {query_end_s}. See Message Templates and Variables for details.
Humio Server 1.58.0 GA (2022-09-20)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.58.0 | GA | 2022-09-20 | Cloud | 2023-11-30 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and updates.
New features and improvements
UI Changes
Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.
Functions
Improved the format() function:
Fixed an issue where the format() function would output the wrong number of left-padded zeros for decimal conversions.
Formatting large positive numbers as hex no longer causes a loss of bits for integers less than 2^63.
Formatting negative numbers as hex no longer produces unintelligible strings.
Fixed an issue where adding the # flag would not display the correct formatting string.
Fixed an issue where specifying the time/date modifier N would fail to parse.
Fixed an issue where supplying multiple fields required you to specify the index of the last field as an argument specifier.
Added a length specifier to allow for outputting fields as 32-bit integers instead of 64-bit.
Using the type specifier %F now tries to format the specified field as a floating point.
See the format() reference documentation page for all of the above updates to the supported formatting syntax.
Other
Add an additional validation check when uploading files to S3-like bucket storage. Humio will now perform a HEAD request for the file's final location in the bucket to verify that the upload succeeded.
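The HEAD-based check can be pictured as: after the PUT, issue a HEAD for the object's final key and compare what the bucket reports against what was uploaded. The criteria below (status 200 plus a matching Content-Length) are assumptions for illustration, not LogScale's exact logic:

```python
from typing import Optional

def upload_verified(head_status: int, expected_length: int,
                    reported_length: Optional[int]) -> bool:
    """Treat an upload as verified only if the HEAD on the final key
    succeeded and the object length matches the bytes we sent.
    Hypothetical helper; a real client would issue the HEAD itself."""
    return head_status == 200 and reported_length == expected_length
```

A missing object (404) or a length mismatch would both cause the upload to be retried rather than trusted.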
Empty datasource directories will now be removed from the local file system while starting the server.
Fixed in this release
UI Changes
Changed a missing @timestamp field to give a warning instead of an error in the functions tail(), head(), bucket(), and timeChart().
Other
Fix an issue causing a content-length check for bucket uploads to fail when encryption was enabled. The content-length check is not normally enabled, so this should only affect clusters that have disabled ETag-based validation.
Fix a regression introduced in 1.46.0 that can cause Humio to fail to properly replay data from Kafka when a node is restarted.
Humio Server 1.57.0 GA (2022-09-13)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.57.0 | GA | 2022-09-13 | Cloud | 2023-11-30 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and updates.
Removed
Items that have been removed as of this release.
Installation and Deployment
Deprecated feature removal: the file-based backup feature was deprecated in 1.42.0 and is now removed from Humio. The following configs are no longer supported and will do nothing if set:
The DELETE_BACKUP_AFTER_MILLIS config is still supported, as it is used for configuring the delay between a file being marked for deletion in Humio and that file being removed from bucket storage.
New features and improvements
UI Changes
Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.
Other
When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.
Added use of the HTTP Proxy Client Configuration, if configured, in a lot of places.
Created a new test function for event forwarders, which takes an event forwarder configuration as input and tests whether it is possible to connect to the Kafka server. The existing test function, which takes an ID as input and tests an existing event forwarder by ID, is now marked as deprecated.
Fixed in this release
Dashboards and Widgets
Fixed a bug in the Scatter Chart widget tooltip, so that when hovering the mouse over a point, the tooltip shows the description of that point only, instead of multiple points.
Functions
Fixed an issue where match() would sometimes give errors when ignoreCase=true and events contained Latin-1 encoded characters.
Other
It is now possible for a user to use the same personal invite token after the user has been transferred to another organization.
When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.
Fixed an issue where the HTTP threads (Akka pool) could get blocked while sending ingest requests to Kafka, which could result in Humio HTTP endpoints not responding.
Humio Server 1.56.4 LTS (2022-12-21)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.56.4 | LTS | 2022-12-21 | Cloud | 2023-09-30 | No | 1.30.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 2afb7131fc25ec5161869b348421a51d |
SHA1 | 5c93d5bd0b065b2de49571f9285ab087c2b7b6fd |
SHA256 | ca1b48a745b492dd2a580ff2efcd37a20440787dc150c65ae51ad3547bbc5528 |
SHA512 | c24ff5be15fd46b712de76d574d4b9e49464e3679cd8b5fe707442337fac6fc828cc9173663a3e2f3271a3ad6545c4b87c849ce75db27cfb5ed05795f998011f |
Docker Image | SHA256 Checksum |
---|---|
humio | c44efa344016b1e3750fbaa0c48aaf82a1a70400c7e5f84c230396143523fc94 |
humio-core | 0c5cc7251f855759999c61a259fc3b3dae5180b43f97296fe4ff0709c872af6b |
kafka | 270d083cd4a0e1947e9f151396404bd77c3e6581dd3615e04814b9b4e0108743 |
zookeeper | 116ef60d3affef2415dfb3b25b6460b67bceebedc4b11070039aa3ba2dbcbb31 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.56.4/server-1.56.4.tar.gz
These notes include entries from the following previous releases: 1.56.2, 1.56.3
Bug fixes and updates.
New features and improvements
Security
The version of Jackson has been upgraded to address the CVE-2022-42003 vulnerability.
Falcon Data Replicator
The feature flag for FDR feeds has been removed. FDR feeds are now generally available.
UI Changes
The event list's column header menus have been redesigned to be simpler:
You can now click the border between column headers in the event list to fit the column to its content.
The Event List column Format Panel has been updated to make it easier to manage columns.
See Formatting Columns.
It is now possible to interact directly with the JSON properties and values in the EventList.
In the Event List you can assign data types to a column field. You can now make the setting the default for a field, and the setting is remembered whenever the field is added to the Event List, e.g. from the fields panel on the Search page. The button for assigning a default data type to a field can be found in the Data type dropdown menu in the column headers of the event list widget. See Field Data Types.
It is now possible to scroll to the selected event on the Search page.
Add UI for enabling and disabling social logins on the identity providers page.
The Log line format type in the Event List will now render fully expanded JSON when a JSON structure starts with a square bracket or curly bracket followed by a newline.
Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.
Documentation
Fixed a broken link to the documentation for Message Templates and Variables when editing alerts and scheduled searches.
Automation and Alerts
When creating a new Action, the name you have entered now persists when you change the Action Type, instead of being cleared.
When you create or edit an action it will now show a warning dialog if you have unsaved changes.
A major change has been made to how alert queries are run in order to better reuse live queries when nodes are restarted in a Humio cluster. Find more details at Alerts.
With the new implementation for running alerts, alerts will now start faster after a node has been restarted, making it easier for alerts with a small search interval to be able to alert on events during the downtime.
GraphQL API
Deprecated the defaultSharedTimeIsLive input field on the updateDashboard GraphQL mutation, in favor of updateFrequency.
Configuration
New dynamic configuration MinimumHumioVersion, with default value 0.0.0, that allows setting a minimum Humio version that this cluster will accept starting on. This protects against inadvertently rolling back too far after enabling a feature that has an implied minimum supported version.
On cloud: added a configuration on dynamic identity providers to control whether users are allowed to be lazily created.
Added environment variable ENABLE_SANDBOXES to make it possible to enable and disable sandbox repositories.
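A minimum-version floor like MinimumHumioVersion amounts to a simple component-wise version comparison; a minimal sketch (the function name is illustrative, not Humio's code):

```python
def meets_minimum(version: str, minimum: str) -> bool:
    """Compare dotted versions component-wise, e.g. '1.56.4' >= '1.42.0'."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(minimum)

# With the default minimum of 0.0.0 every version is accepted; a raised
# floor rejects a node starting on an older build after a rollback.
print(meets_minimum("1.56.4", "0.0.0"))   # True
print(meets_minimum("1.42.0", "1.56.0"))  # False
```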
Dashboards and Widgets
Implemented support for widgets with a fixed time interval on dashboards.
Queries
When searching for queries using the Query Monitor in Cluster Administration, you can now filter queries based on internal and external query IDs.
Functions
Improved the warning message shown when using groupBy() with limit=max and the limit is exceeded.
The query functions selectFromMin() and selectFromMax() are now generally available for use.
BREAKING CHANGE: Changes to the serialization format of the Intermediate Language representation of queries.
Description: The serialization format used to serialize the intermediate language representation of queries has changed to a JSON format. This has multiple consequences for on-prem customers. During upgrades to this version and rollbacks from this version, you can expect the following:
Queries can initially be slower than usual as the query cache clears itself.
Queries may cause deserialization errors if they are run during an upgrade while two or more nodes have different versions. It is recommended to block all queries during upgrades and downgrades to and from this version, and to upgrade all nodes at the same time.
Other
If a view is not found, Humio will now try to fix up the cache on all cluster nodes.
It is now possible to select an entire permissions group when configuring permissions for a role.
Added the possibility of creating a role that grants permissions on the system and organization levels from the UI.
Updated the flow of creating and editing roles in the Understanding Your Organization pages.
In the dialog for entering a name, when creating a new entity (Alerts, Actions, Scheduled Searches, Parsing Data), hitting Enter without filling out the name field will now show an error and will not let you go on to the next page.
Permit the first character in the field name of a field being turned into a tag to be anything. If the first character does not match [a-zA-Z], it is stripped from the resulting tag name. This does not alter the set of allowed names for tags, but allows the field names being turned into tags to have any character as the leading one, e.g. permitting field names such as &path and *path to turn into the tag #path.
Allow any root user and any user with the PatchGlobal permission to use the global patch API. Previously this required using the server-local special bootstrap root token, which was valid only on the local node and thus hard to use via a load balancer.
Added support for writing H in place of minutes in the cron schedule of scheduled searches; see Cron Schedule Templates for details.
Added a new system permission, PatchGlobal, enabling access to the global patch API.
Reduced memory usage for queries that include noResultUntilDone: true in their inputs. This reduces memory usage in queries that export an aggregate result via the Query API, as well as in the inner queries of joins and in queries from scheduled searches.
When saving a parser, Humio now validates that the fields designated as tag fields have names that are valid as tag field names. Since packages with invalid parsers cannot be installed, if you have an invalid parser in a package, you will need to edit it to keep being able to install it.
Added an option to make the token hashing output in JSON format. See the tokenhashing usage described at Hashed Root Access Token.
When configuring SAML and OIDC for an organization, users with the ManageOrganizations permission can now enable/disable whether the IDP is Default and Humio managed.
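The tag-name rule above can be sketched in Python (an illustrative helper, not Humio's implementation):

```python
import re

def field_to_tag(field_name: str) -> str:
    """Turn a field name into a tag name, stripping a non-[a-zA-Z] lead."""
    # Any leading character is permitted in the field name, but if it does
    # not match [a-zA-Z] it is dropped from the resulting tag name.
    if field_name and not re.match(r"[a-zA-Z]", field_name):
        field_name = field_name[1:]
    return "#" + field_name

print(field_to_tag("&path"))  # #path
print(field_to_tag("*path"))  # #path
print(field_to_tag("path"))   # #path
```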
Fixed in this release
Security
Update Netty to address CVE-2022-41915.
Update Scala to address CVE-2022-36944.
Falcon Data Replicator
Fixed a bug where a dropdown for choosing a parser was not visible in a dialog when creating a new FDR feed.
Removed the deprecated feature flag FdrFeeds.
UI Changes
GraphQL API
Fixed an error when querying for actions in GraphQL on a deleted view.
Marked all feature flags as preview in GraphQL, which means that once they are no longer needed, they will be removed without being deprecated first.
Dashboards and Widgets
Fixed an issue where word wrap did not work in the Inspect Panel.
Fixed a bug where certain queries would make it seem that all widgets were incompatible, even though the table view still works.
Importing a dashboard with Shared time enabled and Live disabled would import the dashboard with Live enabled. Likewise, when creating a new dashboard from a template, Live would be on.
The button on the dashboard correctly applies the typed filter again.
The Single Value color threshold list could get into a state where you could not type threshold values into the four text fields.
Functions
Other
Fixed an issue where the sessions of a user were not revoked when the user was deleted.
Fixed a bug in the decryption code used when decrypting downloaded files from bucket storage when version-for-bucket-writes=3. The bug prevented decrypting files larger than 2 GB.
Fixed a bug where a placeholder would appear for the region selector on the login pages, even though the selector itself would not be shown since it has no configured regions.
It is no longer possible to have an upload file action with a path in the file name. This would result in an unusable file being created.
Fixed an issue where some segments could stall the background process implementing event redaction. This could then result in segments not being merged. The visible symptom would be segments with the topOffset attribute being -1, and MiniSegmentMergeLatencyLoggerJob logging that some segments are not being merged.
We have removed the @host field from the humio-activity logs and the #host tag from the humio-audit log, as we can no longer provide meaningful values for these. The @host field in the humio-metrics logs will remain, but its value will be changed to the vhost id (an integer number).
Fixed an issue where queries could fail when the requests within the cluster were more than 8 MB each.
Fixed an issue where deleting events from a mini-segment could result in the merge of those mini-segments into the resulting target segment never being executed.
Fixed an issue where the HTTP threads (Akka pool) could get blocked while sending ingest requests to Kafka, which could result in Humio HTTP endpoints not responding.
Fixed an issue with tags in Event Forwarding, so that it is now possible to filter on tags using event forwarding rules, and the tags are present in the forwarded events.
Fixed an issue where LogScale could log secrets to the debug log when configured to use LDAP or when configured to use SSL for Kafka.
Fix a regression introduced in 1.46.0 that can cause Humio to fail to properly replay data from Kafka when a node is restarted.
Packages
Previously, parsing packages was very strict, failing when unsupported files were detected. This is no longer the case: unsupported files are now ignored and will not stop the package from installing.
Humio Server 1.56.3 LTS (2022-10-05)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.56.3 | LTS | 2022-10-05 | Cloud | 2023-09-30 | No | 1.30.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | f58e3b5944609cb3331a54277b4bf9d6 |
SHA1 | 48c6bf3339693489e073cbac67ba799fbe48661f |
SHA256 | 0273a18b354b2844a48175b16fd88f02b1522281a94850a3cc6815430d9efea5 |
SHA512 | 9ba1ef1ffc2f9d5d1a366b11f724bf24489e9f97c456104399b31d02b4da2903b6c1ac3903ecab862b305e530c19d12400748657cf096f35838ebab978b045e2 |
Docker Image | SHA256 Checksum |
---|---|
humio | 6de3bd848774503be64bc0ff6301afa5544bf379e58a8f485703d139ff13c2e1 |
humio-core | 7a15c633a82db8246a9d4a2e51bd3f802f11f7e08c53d88cb0f79a4886fd12ab |
kafka | e12d028592c8f92fc23413fc5c9eef903690d3f164b571d05cba3f6319101ceb |
zookeeper | f2e3390dd8552af9830c711f39c409f2ef668a457d377b38610e77d03f42cb2c |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.56.3/server-1.56.3.tar.gz
These notes include entries from the following previous releases: 1.56.2
Bug fixes and updates.
New features and improvements
Falcon Data Replicator
The feature flag for FDR feeds has been removed. FDR feeds are now generally available.
UI Changes
The event list's column header menus have been redesigned to be simpler:
You can now click the border between column headers in the event list to fit the column to its content.
The Event List column Format Panel has been updated to make it easier to manage columns.
See Formatting Columns.
It is now possible to interact directly with the JSON properties and values in the EventList.
In the Event List you can assign data types to a column field. You can now make the setting the default for a field, and the setting is remembered whenever the field is added to the Event List, e.g. from the fields panel on the Search page. The button for assigning a default data type to a field can be found in the Data type dropdown menu in the column headers of the event list widget. See Field Data Types.
It is now possible to scroll to the selected event on the Search page.
Add UI for enabling and disabling social logins on the identity providers page.
The Log line format type in the Event List will now render fully expanded JSON when a JSON structure starts with a square bracket or curly bracket followed by a newline.
Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.
Documentation
Fixed a broken link to the documentation for Message Templates and Variables when editing alerts and scheduled searches.
Automation and Alerts
When creating a new Action, the name you have entered now persists when you change the Action Type, instead of being cleared.
When you create or edit an action it will now show a warning dialog if you have unsaved changes.
A major change has been made to how alert queries are run in order to better reuse live queries when nodes are restarted in a Humio cluster. Find more details at Alerts.
With the new implementation for running alerts, alerts will now start faster after a node has been restarted, making it easier for alerts with a small search interval to be able to alert on events during the downtime.
GraphQL API
Deprecated the defaultSharedTimeIsLive input field on the updateDashboard GraphQL mutation, in favor of updateFrequency.
Configuration
New dynamic configuration MinimumHumioVersion, with default value 0.0.0, that allows setting a minimum Humio version that this cluster will accept starting on. This protects against inadvertently rolling back too far after enabling a feature that has an implied minimum supported version.
On cloud: added a configuration on dynamic identity providers to control whether users are allowed to be lazily created.
Added environment variable ENABLE_SANDBOXES to make it possible to enable and disable sandbox repositories.
Dashboards and Widgets
Implemented support for widgets with a fixed time interval on dashboards.
Queries
When searching for queries using the Query Monitor in Cluster Administration, you can now filter queries based on internal and external query IDs.
Functions
Improved the warning message shown when using groupBy() with limit=max and the limit is exceeded.
The query functions selectFromMin() and selectFromMax() are now generally available for use.
BREAKING CHANGE: Changes to the serialization format of the Intermediate Language representation of queries.
Description: The serialization format used to serialize the intermediate language representation of queries has changed to a JSON format. This has multiple consequences for on-prem customers. During upgrades to this version and rollbacks from this version, you can expect the following:
Queries can initially be slower than usual as the query cache clears itself.
Queries may cause deserialization errors if they are run during an upgrade while two or more nodes have different versions. It is recommended to block all queries during upgrades and downgrades to and from this version, and to upgrade all nodes at the same time.
Other
If a view is not found, Humio will now try to fix up the cache on all cluster nodes.
It is now possible to select an entire permissions group when configuring permissions for a role.
Added the possibility of creating a role that grants permissions on the system and organization levels from the UI.
Updated the flow of creating and editing roles in the Understanding Your Organization pages.
In the dialog for entering a name, when creating a new entity (Alerts, Actions, Scheduled Searches, Parsing Data), hitting Enter without filling out the name field will now show an error and will not let you go on to the next page.
Permit the first character in the field name of a field being turned into a tag to be anything. If the first character does not match [a-zA-Z], it is stripped from the resulting tag name. This does not alter the set of allowed names for tags, but allows the field names being turned into tags to have any character as the leading one, e.g. permitting field names such as &path and *path to turn into the tag #path.
Allow any root user and any user with the PatchGlobal permission to use the global patch API. Previously this required using the server-local special bootstrap root token, which was valid only on the local node and thus hard to use via a load balancer.
Added support for writing H in place of minutes in the cron schedule of scheduled searches; see Cron Schedule Templates for details.
Added a new system permission, PatchGlobal, enabling access to the global patch API.
Reduced memory usage for queries that include noResultUntilDone: true in their inputs. This reduces memory usage in queries that export an aggregate result via the Query API, as well as in the inner queries of joins and in queries from scheduled searches.
When saving a parser, Humio now validates that the fields designated as tag fields have names that are valid as tag field names. Since packages with invalid parsers cannot be installed, if you have an invalid parser in a package, you will need to edit it to keep being able to install it.
Added an option to make the token hashing output in JSON format. See the tokenhashing usage described at Hashed Root Access Token.
When configuring SAML and OIDC for an organization, users with the ManageOrganizations permission can now enable/disable whether the IDP is Default and Humio managed.
Fixed in this release
Security
Update Scala to address CVE-2022-36944.
Falcon Data Replicator
Fixed a bug where a dropdown for choosing a parser was not visible in a dialog when creating a new FDR feed.
Removed the deprecated feature flag FdrFeeds.
UI Changes
GraphQL API
Fixed an error when querying for actions in GraphQL on a deleted view.
Marked all feature flags as preview in GraphQL, which means that once they are no longer needed, they will be removed without being deprecated first.
Dashboards and Widgets
Fixed an issue where word wrap did not work in the Inspect Panel.
Fixed a bug where certain queries would make it seem that all widgets were incompatible, even though the table view still works.
Importing a dashboard with Shared time enabled and Live disabled would import the dashboard with Live enabled. Likewise, when creating a new dashboard from a template, Live would be on.
The button on the dashboard correctly applies the typed filter again.
The Single Value color threshold list could get into a state where you could not type threshold values into the four text fields.
Functions
Other
Fixed an issue where the sessions of a user were not revoked when the user was deleted.
Fixed a bug where a placeholder would appear for the region selector on the login pages, even though the selector itself would not be shown since it has no configured regions.
It is no longer possible to have an upload file action with a path in the file name. This would result in an unusable file being created.
Fixed an issue where some segments could stall the background process implementing event redaction. This could then result in segments not being merged. The visible symptom would be segments with the topOffset attribute being -1, and MiniSegmentMergeLatencyLoggerJob logging that some segments are not being merged.
We have removed the @host field from the humio-activity logs and the #host tag from the humio-audit log, as we can no longer provide meaningful values for these. The @host field in the humio-metrics logs will remain, but its value will be changed to the vhost id (an integer number).
Fixed an issue where queries could fail when the requests within the cluster were more than 8 MB each.
Fixed an issue where deleting events from a mini-segment could result in the merge of those mini-segments into the resulting target segment never being executed.
Fixed an issue where the HTTP threads (Akka pool) could get blocked while sending ingest requests to Kafka, which could result in Humio HTTP endpoints not responding.
Fixed an issue with tags in Event Forwarding, so that it is now possible to filter on tags using event forwarding rules, and the tags are present in the forwarded events.
Fix a regression introduced in 1.46.0 that can cause Humio to fail to properly replay data from Kafka when a node is restarted.
Packages
Previously, parsing packages was very strict, failing when unsupported files were detected. This is no longer the case: unsupported files are now ignored and will not stop the package from installing.
Humio Server 1.56.2 LTS (2022-09-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.56.2 | LTS | 2022-09-26 | Cloud | 2023-09-30 | No | 1.30.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 954f0d64fe9405b757c522a34c28c1cc |
SHA1 | 14a5afe64517155bc6f7dd1da5d8a94ed42155a0 |
SHA256 | cd02dbf3ced3c551e13b1a39d621949770f4ade27b1c5c7f233791b339fd5aa5 |
SHA512 | a26172900cb29b89702bf2b61020850d6a24e739de046fc486e5b49b2b06296757086f00984a22d3950d70a0967105408acf8ce003bf5b8e7e2db9f8a0ba3b64 |
Docker Image | SHA256 Checksum |
---|---|
humio | 3d47b95b292f61d31e491ed7682d540675b62dda8693ed0a23b39a6dd55fdbba |
humio-core | b13a27d6d39436469033fb8f69f5a6945283a71468ea2248c2902cb38c842501 |
kafka | eecc82f9fb8ad2cb7e890650ff8e20538905a33fa53da4272c0582b68d11937e |
zookeeper | 0ab597706043240b1591e412a0b85b03737f583254148cd33bf5443039b368ad |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.56.2/server-1.56.2.tar.gz
Bug fixes and updates.
New features and improvements
Falcon Data Replicator
The feature flag for FDR feeds has been removed. FDR feeds are now generally available.
UI Changes
The event list's column header menus have been redesigned to be simpler:
You can now click the border between column headers in the event list to fit the column to its content.
The Event List column Format Panel has been updated to make it easier to manage columns.
See Formatting Columns.
It is now possible to interact directly with the JSON properties and values in the EventList.
In the Event List you can assign data types to a column field. You can now make the setting the default for a field, and the setting is remembered whenever the field is added to the Event List, e.g. from the fields panel on the Search page. The button for assigning a default data type to a field can be found in the Data type dropdown menu in the column headers of the event list widget. See Field Data Types.
It is now possible to scroll to the selected event on the Search page.
Add UI for enabling and disabling social logins on the identity providers page.
The Log line format type in the Event List will now render fully expanded JSON when a JSON structure starts with a square bracket or curly bracket followed by a newline.
Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.
Documentation
Fixed a broken link to the documentation for Message Templates and Variables when editing alerts and scheduled searches.
Automation and Alerts
When creating a new Action, the name you have entered now persists when you change the Action Type, instead of being cleared.
When you create or edit an action it will now show a warning dialog if you have unsaved changes.
A major change has been made to how alert queries are run in order to better reuse live queries when nodes are restarted in a Humio cluster. Find more details at Alerts.
With the new implementation for running alerts, alerts will now start faster after a node has been restarted, making it easier for alerts with a small search interval to be able to alert on events during the downtime.
GraphQL API
Deprecated the defaultSharedTimeIsLive input field on the updateDashboard GraphQL mutation, in favor of updateFrequency.
Configuration
New dynamic configuration MinimumHumioVersion, with default value 0.0.0, that allows setting a minimum Humio version that this cluster will accept starting on. This protects against inadvertently rolling back too far after enabling a feature that has an implied minimum supported version.
On cloud: added a configuration on dynamic identity providers to control whether users are allowed to be lazily created.
Added environment variable ENABLE_SANDBOXES to make it possible to enable and disable sandbox repositories.
Dashboards and Widgets
Implemented support for widgets with a fixed time interval on dashboards.
Queries
When searching for queries using the Query Monitor in Cluster Administration, you can now filter queries based on internal and external query IDs.
Functions
Improved the warning message shown when using groupBy() with limit=max and the limit is exceeded.
The query functions selectFromMin() and selectFromMax() are now generally available for use.
BREAKING CHANGE: Changes to the serialization format of the Intermediate Language representation of queries.
Description: The serialization format used to serialize the intermediate language representation of queries has changed to a JSON format. This has multiple consequences for on-prem customers. During upgrades to this version and rollbacks from this version, you can expect the following:
Queries can initially be slower than usual as the query cache clears itself.
Queries may cause deserialization errors if they are run during an upgrade while two or more nodes have different versions. It is recommended to block all queries during upgrades and downgrades to and from this version, and to upgrade all nodes at the same time.
Other
If a view is not found, Humio will now try to fix up the cache on all cluster nodes.
It is now possible to select an entire permissions group when configuring permissions for a role.
Added the possibility of creating a role that grants permissions on the system and organization levels from the UI.
Updated the flow of creating and editing roles in the Understanding Your Organization pages.
In the dialog for entering a name, when creating a new entity (Alerts, Actions, Scheduled Searches, Parsing Data), hitting Enter without filling out the name field will now show an error and will not let you go on to the next page.
Permit the first character in the field name of a field being turned into a tag to be anything. If the first character does not match [a-zA-Z], it is stripped from the resulting tag name. This does not alter the set of allowed names for tags, but allows the field names being turned into tags to have any character as the leading one, e.g. permitting field names such as &path and *path to turn into the tag #path.
Allow any root user and any user with the PatchGlobal permission to use the global patch API. Previously this required using the server-local special bootstrap root token, which was valid only on the local node and thus hard to use via a load balancer.
Added support for writing H in place of minutes in the cron schedule of scheduled searches; see Cron Schedule Templates for details.
Added a new system permission, PatchGlobal, enabling access to the global patch API.
Reduced memory usage for queries that include noResultUntilDone: true in their inputs. This reduces memory usage in queries that export an aggregate result via the Query API, as well as in the inner queries of joins and in queries from scheduled searches.
When saving a parser, Humio now validates that the fields designated as tag fields have names that are valid as tag field names. Since packages with invalid parsers cannot be installed, if you have an invalid parser in a package, you will need to edit it to keep being able to install it.
Added an option to make the token hashing output in JSON format. See the tokenhashing usage described at Hashed Root Access Token.
When configuring SAML and OIDC for an organization, users with the ManageOrganizations permission can now enable/disable whether the IDP is Default and Humio managed.
Fixed in this release
Falcon Data Replicator
Fixed a bug where a dropdown for choosing a parser was not visible in a dialog when creating a new FDR feed.
Removed the deprecated feature flag `FdrFeeds`.
UI Changes
GraphQL API
Fixed an error when querying for actions in GraphQL on a deleted view.
Marked all feature flags as preview in GraphQL, which means that once they are no longer needed, they will be removed without being deprecated first.
Dashboards and Widgets
Fixed an issue where word wrap did not work in the Inspect Panel.
Fixed a bug where certain queries would make it seem that all widgets were incompatible, even though the table view still works.
Importing a dashboard with Shared time enabled and Live disabled would import the dashboard with Live enabled. Likewise, when creating a new dashboard from a template, Live would be on.
The button on the dashboard correctly applies the typed filter again.
The Single Value color threshold list could get into a state where you could not type threshold values into the four text fields.
Functions
Other
Fixed an issue where the sessions of a user were not revoked when the user was deleted.
Fixed a bug where a placeholder would appear for the region selector on the login pages, even though the selector itself would not be shown since there are no configured regions.
It is no longer possible to have an upload file action with a path in the file name. This would result in an unusable file being created.
Fixed an issue where some segments could stall the background process implementing event redaction. This could then result in segments not being merged. The visible symptom would be segments with the `topOffset` attribute being `-1`, and `MiniSegmentMergeLatencyLoggerJob` logging that some segments are not being merged.
We have removed the `@host` field from the `humio-activity` logs and the `#host` tag from the `humio-audit` log, as we can no longer provide meaningful values for these. The `@host` field in the `humio-metrics` logs will remain, but its value will be changed to the `vhost id` (an integer number).
Fixed an issue where queries could fail when the requests within the cluster were more than 8 MB each.
Fixed an issue where delete events from a mini-segment could result in the merge of those mini-segments into the resulting target segment never being executed.
Fixed an issue where the HTTP threads (Akka pool) could get blocked while sending ingest requests to Kafka, which could result in Humio HTTP endpoints not responding.
Fixed an issue with tags in Event Forwarding, so that it is now possible to filter on tags using event forwarding rules, and the tags are present in the forwarded events.
Fixed a regression introduced in 1.46.0 that could cause Humio to fail to properly replay data from Kafka when a node is restarted.
Packages
Previously, parsing packages was very strict, failing when detecting unsupported files. This is no longer the case: unsupported files will now be ignored and will not stop the package from installing.
Humio Server 1.56.1 GA (2022-09-20)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.56.1 | GA | 2022-09-20 | Cloud | 2023-09-30 | No | 1.30.0 | No |
Available for download two days after release.
Update.
New features and improvements
UI Changes
Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.
Humio Server 1.56.0 GA (2022-09-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.56.0 | GA | 2022-09-06 | Cloud | 2023-09-30 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and updates.
New features and improvements
UI Changes
The event list's column header menus have been redesigned to be simpler:
You can now click the border between column headers in the event list to fit the column to its content.
The Event List column Format Panel has been updated to make it easier to manage columns.
See Formatting Columns.
It is now possible to interact directly with the JSON properties and values in the EventList.
In the Event List you can assign data types to a column field. You can now make that setting the default for a field, and the setting is remembered whenever the field is added to the Event List, e.g. from the fields panel on the Search page. The button for assigning a default data type to a field can be found in the Data type dropdown menu in the column headers of the event list widget. See Field Data Types.
Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.
Dashboards and Widgets
Implemented support for widgets with a fixed time interval on dashboards.
Functions
BREAKING CHANGE: Changes to the serialization format of the Intermediate Language representation of queries.
Description: The serialization format used to serialize the intermediate language representation of queries has changed to a JSON format. This has multiple consequences for on-prem customers. During upgrades to this version and rollbacks from this version you can expect the following:
Queries can be slower than usual initially as the query cache clears itself.
Queries may cause deserialization errors if they are run during upgrade and two or more nodes have different versions. It is recommended to block all queries upon upgrade and downgrade to and from this version and have all nodes upgrade at the same time.
Other
Added the possibility of creating a role that grants permissions on the system and organization levels from the UI.
Updated the flow of creating and editing roles in the Understanding Your Organization pages.
Fixed in this release
Falcon Data Replicator
Removed the deprecated feature flag `FdrFeeds`.
GraphQL API
Marked all feature flags as preview in GraphQL, which means that once they are no longer needed, they will be removed without being deprecated first.
Dashboards and Widgets
Fixed a bug where certain queries would make it seem that all widgets were incompatible, even though the table view still works.
Importing a dashboard with Shared time enabled and Live disabled would import the dashboard with Live enabled. Likewise, when creating a new dashboard from a template, Live would be on.
Other
It is no longer possible to have an upload file action with a path in the file name. This would result in an unusable file being created.
Fixed an issue where some segments could stall the background process implementing event redaction. This could then result in segments not being merged. The visible symptom would be segments with the `topOffset` attribute being `-1`, and `MiniSegmentMergeLatencyLoggerJob` logging that some segments are not being merged.
Fixed an issue with tags in Event Forwarding, so that it is now possible to filter on tags using event forwarding rules, and the tags are present in the forwarded events.
Humio Server 1.55.0 GA (2022-08-30)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.55.0 | GA | 2022-08-30 | Cloud | 2023-09-30 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and updates.
New features and improvements
UI Changes
It is now possible to scroll to the selected event on the Search page.
Automation and Alerts
When creating new Actions, the name will now be kept when you change the Action Type instead of being cleared. This also applies when you change the name while creating a new Action.
When you create or edit an action it will now show a warning dialog if you have unsaved changes.
Functions
Query functions `selectFromMin()` and `selectFromMax()` are now generally available for use.
Other
It is now possible to select an entire permissions group when configuring permissions for a role.
In the dialog for entering a name, when creating a new entity (Alerts, Actions, Scheduled Searches, Parsing Data), hitting Enter without filling out the name field will now show an error and will not let you go on to the next page.
Fixed in this release
Other
Fixed an issue where the sessions of a user were not revoked when the user was deleted.
Humio Server 1.54.0 GA (2022-08-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.54.0 | GA | 2022-08-23 | Cloud | 2023-09-30 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and updates.
New features and improvements
UI Changes
The Log line format type in the Event List will now render fully expanded JSON when a JSON structure starts with a square bracket or curly bracket followed by a newline.
Configuration
Added environment variable `ENABLE_SANDBOXES` to make it possible to enable and disable sandbox repositories.
Other
Added an option to output token hashing in JSON format. See the tokenhashing usage described at Hashed Root Access Token.
When configuring SAML and OIDC for an organization, users with the `ManageOrganizations` permission can now enable/disable whether the IDP is Default and Humio managed.
Fixed in this release
Functions
Other
Fixed an issue where queries could fail when the requests within the cluster were more than 8 MB each.
Packages
Previously, parsing packages was very strict, failing when detecting unsupported files. This is no longer the case: unsupported files will now be ignored and will not stop the package from installing.
Humio Server 1.53.0 GA (2022-08-16)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.53.0 | GA | 2022-08-16 | Cloud | 2023-09-30 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and updates.
New features and improvements
UI Changes
Add UI for enabling and disabling social logins on the identity providers page.
Queries
When searching for queries using the Query Monitor in Cluster Administration you can now filter queries based on internal and external query IDs.
Other
Reduced memory usage for queries that include `noResultUntilDone: true` in their inputs. This reduces memory usage in queries that export an aggregate result via the Query API, as well as the inner queries in joins and queries from scheduled searches.
Fixed in this release
Dashboards and Widgets
Fixed an issue where word wrap did not work in the Inspect Panel.
The button on the dashboard correctly applies the typed filter again.
The Single Value color threshold list could get into a state where you could not type threshold values into the four text fields.
Other
We have removed the `@host` field from the `humio-activity` logs and the `#host` tag from the `humio-audit` log, as we can no longer provide meaningful values for these. The `@host` field in the `humio-metrics` logs will remain, but its value will be changed to the `vhost id` (an integer number).
Fixed an issue where delete events from a mini-segment could result in the merge of those mini-segments into the resulting target segment never being executed.
Humio Server 1.52.0 GA (2022-08-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.52.0 | GA | 2022-08-09 | Cloud | 2023-09-30 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and updates.
New features and improvements
Falcon Data Replicator
The feature flag for FDR feeds has been removed. FDR feeds are now generally available.
Documentation
Fixed a broken link to the documentation for Message Templates and Variables when editing alerts and scheduled searches.
Automation and Alerts
A major change has been made to how alert queries are run in order to better reuse live queries when nodes are restarted in a Humio cluster. Find more details at Alerts.
With the new implementation for running alerts, alerts will now start faster after a node has been restarted, making it easier for alerts with a small search interval to be able to alert on events during the downtime.
GraphQL API
Deprecates the `defaultSharedTimeIsLive` input field on the updateDashboard GraphQL mutation, in favor of `updateFrequency`.
Configuration
New dynamic configuration `MinimumHumioVersion`, default value `0.0.0`, that allows setting a minimum Humio version that this cluster will accept starting on. This protects against inadvertently rolling back too far after some other feature with an implied minimum supported version has been turned on.
On cloud: added a configuration on dynamic identity providers to configure whether users are allowed to be lazily created.
Functions
Other
In case a view is not found, we will try to fix up the cache on all cluster nodes.
Permit the first character in the field name of a field being turned into a tag to be anything. If the first character does not match `[a-zA-Z]`, it is stripped from the resulting tag name. This does not alter the set of allowed names for tags, but allows the field names being turned into tags to have any character as the leading one, e.g. permitting `&path` and `*path` as field names to turn into the tag `#path`.
Allow any root user and any user with the PatchGlobal permission to use the global patch API. Previously this required the server-local special bootstrap root token, which was valid only on the local node and thus hard to use via a load balancer.
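The leading-character rule above can be sketched in a few lines. This is an illustrative sketch only; `field_to_tag_name` is a hypothetical helper name, not LogScale's actual implementation:

```python
import re

def field_to_tag_name(field_name: str) -> str:
    """Turn a field name into a tag name, stripping a leading character
    that does not match [a-zA-Z]. Hypothetical sketch of the rule
    described in the release note above, not LogScale's actual code."""
    name = field_name
    if name and not re.match(r"[a-zA-Z]", name[0]):
        name = name[1:]  # strip the disallowed leading character
    return "#" + name

# Examples from the note above: &path and *path both become the tag #path.
assert field_to_tag_name("&path") == "#path"
assert field_to_tag_name("*path") == "#path"
```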
Added support for writing `H` in place of minutes in the cron schedule of scheduled searches; see Cron Schedule Templates for details.
Added new system permission, PatchGlobal, enabling access to the global patch API.
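One common interpretation of an `H` template (as in Jenkins-style hashed cron schedules) is a stable minute derived from a hash of the schedule's identity, which spreads many schedules evenly across the hour instead of firing them all at minute 0. Whether LogScale derives the minute the same way is an assumption; this is only a sketch:

```python
import hashlib

def hashed_minute(schedule_id: str) -> int:
    """Resolve 'H' in the minute position to a stable minute (0-59)
    derived from a hash of the schedule's identity. Hypothetical
    sketch, not necessarily LogScale's actual derivation."""
    digest = hashlib.sha256(schedule_id.encode("utf-8")).digest()
    return digest[0] % 60

# The same schedule identity always resolves to the same minute.
assert hashed_minute("my-scheduled-search") == hashed_minute("my-scheduled-search")
```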
When saving a parser, validate that the fields designated as tag fields have names that are valid as tag field names. Since packages with invalid parsers cannot be installed, if you have an invalid parser in a package, you will need to edit it to keep being able to install it.
Fixed in this release
Falcon Data Replicator
Fixed a bug where a dropdown for choosing a parser was not visible in a dialog when creating a new FDR feed.
UI Changes
GraphQL API
Fixed an error when querying for actions in GraphQL on a deleted view.
Other
Fixed a bug where a placeholder would appear for the region selector on the login pages, even though the selector itself would not be shown since there are no configured regions.
Humio Server 1.51.3 LTS (2022-12-21)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.51.3 | LTS | 2022-12-21 | Cloud | 2023-08-31 | No | 1.30.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 905f743a0266a5a5263dab94b23bbbb5 |
SHA1 | e4e32ca0fe460e7d708418d8b42bf30939298706 |
SHA256 | d81ad776a869a90b199e405e51f71ba0861a5034045d0b17611564a689370af5 |
SHA512 | 78f53b9ed42816638064fe3dc03d86e1667f13a60b45a13e32d55428ca1ecdcf88966d536224a999d95583c61390141b0d14fcb9643db80329f8a839da570e41 |
Docker Image | SHA256 Checksum |
---|---|
humio | 3400bf4a1e7304b5e904406d3c0ef8baa7d3feeb4a25d8b5607fc8a7fbdb2c60 |
humio-core | 7604f6e444a43be68d510023e34e2ebda393c48516f29baafd561876a5fa3c72 |
kafka | 1528fc8436b88d07d81e232ac9631a49e8d86ebe1911bf80477911262a3fe6fe |
zookeeper | 55f5e094f8da5d9b4767b1b342b78dce32c67dc6e9112b049415dea79aa98444 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.51.3/server-1.51.3.tar.gz
These notes include entries from the following previous releases: 1.51.0, 1.51.1, 1.51.2
Bug fixes and updates.
Removed
Items that have been removed as of this release.
API
The deprecated REST API for actions has been removed, except for the endpoint for testing an action.
The deprecated REST API for parsers has been removed.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated the `enabledFeatures` query. Use the new `featureFlags` query instead.
New features and improvements
Security
The version of Jackson has been upgraded to address CVE-2022-42003 vulnerability.
Falcon Data Replicator
FDR polling is now turned on by default. Whether FDR polling should be turned on or off on a node can be configured using the `ENABLE_FDR_POLLING_ON_NODE` configuration variable.
If an S3 file is found to be incorrectly formatted during FDR ingest, it will not be ingested completely, but an attempt is made to ingest the remaining S3 files of the SQS message.
If an S3 file cannot be found during FDR ingest, it will not be ingested, but an attempt is made to ingest the remaining S3 files of the SQS message.
Added environment variable `FDR_USE_PROXY`, which makes the FDR job use the proxy settings specified with the `HTTP_PROXY_*` environment variables.
UI Changes
The design of the Time Selector has been updated, and it now features a button on the dashboard page. See Time Interval Settings.
Field columns now support multiple formatting options. See Formatting Columns for details.
Add missing accessibility features to the login page.
In lists of users, with user avatars containing user initials, the current user would sometimes appear to have an opening parenthesis as their last initial.
The Live checkbox is now no longer checked automatically when changing the value of the time window in the Time Selector. See Changing Time Interval for details.
If Humio fails to start because the cluster is being upgraded, a dedicated message will show when launching the UI.
The Save As... button is now always displayed on the Search page, see it described at Saving Searches.
Improved keyboard accessibility for creating repositories and views.
New styling of errors on search and dashboard pages.
Adds an icon and a hint to a disabled side navigation menu item that tells the user the reason for it being disabled.
Toggle switches anywhere in the UI can now be reached using the tab key and operated using the keyboard.
When editing an email action in the UI and adding multiple recipients, it is now possible to add a space after the comma in the comma-separated list of recipients.
Documentation
All documentation links have been updated after the documentation site has been restructured. Please contact support if you experience any broken links.
Automation and Alerts
Fixed a bug where an alert with name longer than 50 characters could not be edited.
GraphQL API
Added preview fields `isClusterBeingUpdated` and `minimumNodeVersion` to the GraphQL Cluster object type.
Added a new dynamic configuration flag `QueryResultRowCountLimit` that globally limits how many results (events) a query can return. This flag can be set by administrators through GraphQL. See Limits & Standards for more details.
The GraphQL API mutation updateDashboard has been updated to take a new argument updateFrequency, which can currently only be `NEVER` or `REALTIME`, corresponding respectively to "dashboard where queries are never updated after first completion" and "dashboard where query results are updated indefinitely".
Expose a new GraphQL type with feature flag descriptions and whether they are experimental.
Added a GraphQL mutation for testing an action. It is still in preview, but it will replace the equivalent REST endpoint soon.
Improved error messaging of GraphQL queries and mutations for alerts, scheduled searches and actions in cases where a given repository or view cannot be found.
Configuration
Adds a new metric for measuring the merge latency, which is defined as the latency between the last mini-segment being written in a sequence with the same merge target, and those mini-segments being merged. The metric name is `segment-merge-latency-ms`.
Detect the need for a higher autoshard count by monitoring ingest request flow in the cluster. The number of autoshards for each datasource is dynamically increased to keep the flow on each resulting shard below approximately 2 MB/s. A new dynamic configuration, `TargetMaxRateForDatasource`, sets the target maximum rate of ingest for each shard of a datasource. Default value is 2000000 (2 MB).
. Default value is 2000000 (2 MB).Added a new environment variable
GLOB_MATCH_LIMIT
which sets the maximum number of rows for csv_file inmatch(..., file=csv_file, glob=true)
function. PreviouslyMAX_STATE_SIZE
was used to determine this limit. The default value of this variable is 20000. If you've changed the value ofMAX_STATE_SIZE
, we recommend that you also changeGLOB_MATCH_LIMIT
to the same value for a seamless upgrade.Default value of configuration variable
S3_ARCHIVING_WORKERCOUNT
raised from1
to(vCPU/4)
.Added a new dynamic configuration
GroupDefaultLimit
. This can be done through GraphQL. See Limits & Standards for details. If you've changed the value ofMAX_STATE_LIMIT
, we recommend that you also changeGroupDefaultLimit
andGroupMaxLimit
to the same value for a seamless upgrade, seegroupBy()
for details.Introduced new dynamic configuration
LiveQueryMemoryLimit
. It can be set using GraphQL. See Limits & Standards for details.Introduced new dynamic configuration
JoinRowLimit
. It can be set using GraphQL and can be used as an alternative to the environment variableMAX_JOIN_LIMIT
. If theJoinRowLimit
is set, then its value will be used instead ofMAX_JOIN_LIMIT
. If it is not set, thenMAX_JOIN_LIMIT
will be used.Introduced new dynamic configuration
StateRowLimit
. It can be set using GraphQL. See Limits & Standards for details.Improve the error message if Humio is configured to use bucket storage, but the credentials for the bucket are not configured.
Change default value for configuration `AUTOSHARDING_MAX` from 16 to 128.
Add environment variable `EULA_URL` to specify the URL for terms and conditions.
Added a link to the humio-activity repository for debugging IDP configurations to the page for setting up the same.
Bucket storage now has support for a new format for the keys (file names) for the files placed in the bucket. When the new format is applied, the listing of files only happens for the prefixes "tmp/" and "globalsnapshots/". This helps products such as "HCP". The new format is applied only to buckets created after the dynamic configuration `BucketStorageKeySchemeVersion` has been set to "2". Existing clusters can start using the new format for new files by setting this dynamic configuration. The change will take effect after restarting the cluster. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.
Introduced new dynamic configuration `GroupMaxLimit`. It can be set using GraphQL. See Limits & Standards for details.
Support for KMS on S3 buckets for Bucket Storage. Specify the full ARN of the key. The `key_id` is persisted in the internal `BucketEntity`, so that a later change of the ID of the key to use for uploads will make Humio still refer to the old key ID when downloading files uploaded using the previous key. Setting a new value for the target key results in a fresh internal bucket entity to track which files used KMS and which did not. For simplicity it is recommended to not mix KMS and non-KMS configurations on the same S3 bucket.
New configuration variable `S3_STORAGE_KMS_KEY_ARN` that specifies the KMS key to use.
New configuration variable `S3_STORAGE_2_KMS_KEY_ARN` for the 2nd bucket key.
New configuration variable `S3_RECOVER_FROM_KMS_KEY_ARN` for the recovery bucket key.
New file format for files uploaded to bucket storage that allows files larger than 2GB to be written to bucket storage. This may be turned on by setting the dynamic configuration `BucketStorageWriteVersion` to `3`. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.
New configuration `BUCKET_STORAGE_SSE_COMPATIBLE` that makes bucket storage not verify checksums of raw objects after uploading to S3. This option is turned on automatically if KMS is enabled (see `S3_STORAGE_KMS_KEY_ARN`), but is also available directly for use with other S3-compatible providers where even verifying content length does not work.
Mini segments usually get merged if their event timestamps span more than `MAX_HOURS_SEGMENT_OPEN`. Mini segments created as part of backfilling did not follow this rule, but will now get merged if their ingest timestamps span more than `MAX_HOURS_SEGMENT_OPEN`.
Adds a new logger job that logs the age of an unmerged miniSegment if the age exceeds the threshold set by the env variable `MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING`. The default value of `MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING` is 2 x `MAX_HOURS_SEGMENT_OPEN`. `MAX_HOURS_SEGMENT_OPEN` defaults to 24 hours. The error log produced looks like: `Oldest unmerged miniSegment is older than the threshold thresholdMs={value} miniSegmentAgeMs={value} segment={value}`.
Introduced new dynamic configuration `QueryMemoryLimit`. It can be set using GraphQL. See also `LiveQueryMemoryLimit` for live queries. For more details, see Limits & Standards.
Dashboards and Widgets
Applied stylistic changes for the Inspect Panel used in Widget Editor.
Dashboards can now be configured to not update after the initial search has completed. This mode is mainly meant to be used when a dashboard is interactive and not for wall-mounted monitors that should update continually. The feature can be accessed from the Dashboard properties panel when a dashboard is put in edit-mode. See Working in Edit Mode.
Bar Chart widget:
The Y-axis can now start at smaller values than 1 for logarithmic scales, when the data contain small enough values.
It now has an Auto setting for the Input Data Format property; see Wide or Long Input Format for details.
Now works with bucket query results.
Added empty states for all widget types that will be rendered when there are no results.
When importing an existing dashboard with a static Shared time, recent changes in the time selection would make those dashboards live.
Introducing the Heat Map widget that visualizes aggregated data as a colorised grid.
The Pie Chart widget now uses the first column for the series as a fallback option.
The Dashboard page now displays the current cluster status.
Note widget:
Default background color is now Auto.
Introduced the text color configuration option.
Sorting of Pie Chart widget categories, descending by value. Categories grouped as Others will always be last.
The widget legend column width is now based on the custom series title (if specified) instead of the original series name.
The Normalize option for the World Map widget has been replaced by a third magnitude mode named None, which results in fixed size and opacity for all marks.
Table widgets will now break lines for newline characters in columns.
Better handling of dashboard connection issues during restarts and upgrades.
Single Value widget:
Missing buckets are now shown as gaps on the sparkline.
Isolated data points are now visualized as dots on the sparkline.
Pie Chart widget now uses the first column for the series as a fallback option.
Single Value widget new configuration: deprecated field `use-colorised-thresholds` in favor of `color-method`.
Single Value widget Editor: the configuration option Enable Thresholds is being replaced by an option called Method under the Colors section.
Log Collector
The Log Collector download page has been enabled for on-prem deployments.
Functions
Added validation to the `field` and `key` parameters of the `join()` function, so empty lists will be rejected with a meaningful error message.
The `groupBy()` function now accepts `max` as value for the `limit` parameter, which sets the limit to the largest allowed value (as configured by the dynamic configuration `GroupMaxLimit`).
Improved the phrasing of the warning shown when `groupBy()` exceeds the max or default limit.
Added validation to the `field` parameter of the `kvParse()` function, so empty lists will be rejected with a meaningful error message.
Other
Users will no longer have access to the audit log or the search-all view by default. Access can be granted with permissions.
Bump the version of the Monaco code editor.
Streaming queries that fail to validate now return a message of why validation failed.
Fix a bug causing Humio's digest coordinator to allow nodes to take over digest without catching up to the current leader. This could cause the new leader to replay more data from Kafka than necessary.
Fixed an issue where query auto-completion sometimes wouldn't show the documentation for the suggested functions.
Adds a new metric for the temp disk usage. The metric name is `temp-disk-usage-bytes` and denotes how many bytes are used.
Added a log message with the maximum state size seen by the live part of live queries.
Include the requester in logs from QuerySessions when a live query is restarted or cancelled.
The audit log system repository on Cloud has been replaced with a view, so that dashboards etc. can be created on top of audit log data.
Make `BucketStorageUploadJob` only log at info level rather than error if a segment upload fails because the segment has been removed from the host. This can happen if node X tries to upload a segment, but node Y beats it to the punch. Node X may then choose to remove its copy before the upload completes.
When unregistering a node from a cluster, return a validation error if it is still alive. Hosts should be shut down before attempting to remove them from the cluster. This validation can be skipped using the same `accept-data-loss` parameter that also disables other validations for the unregistration endpoint.
Added detection and handling of all queries being blocked during Humio upgrades.
Added a log of the approximate query result size before transmission to the frontend, captured by the `approximateResultBeforeSerialization` key.
Add a flag for whether a feature is experimental.
Added a log line for when a query exceeds its allotted memory quota.
The referrer meta tag for Humio has been changed from no-referrer to same-origin.
Compute next set of Prometheus metrics only in a single thread concurrently. If more requests arrive, then the next request gets the previous response.
Make a number of improvements to the digest partition coordinator. The coordinator now tries harder to avoid assigning digest to nodes that are not caught up on fetching segments from the other nodes. It also does a better job unassigning digest from dead nodes in edge cases.
Fix an unhandled IO exception from `TempDirUsageJob`. The consequence of the uncaught exception was only noise in the error log.
Added a new action type that creates a CSV file from the query result and uploads it to Humio to be used with the `match()` query function. See Action Type: Upload File.
Java in the docker images no longer has the `cap_net_bind_service` capability, and thus Humio cannot bind directly to privileged ports when running as a non-root user.
Add a warning when a multitenancy user is changing data retention on an unlimited repository.
Improved performance of NDJSON format in S3 Archiving.
Fix a bug that could cause Humio to spuriously log errors warning about segments not being merged for datasources doing backfilling.
Humio now logs digest partition assignments regularly. The logs can be found using the query `class=*DigestLeadershipLoggerJob*`.
All feature flags now contain a textual description of what features are hidden behind the flag.
Adds a logger job for cluster management stats; it logs the stats every 2 minutes, which makes them searchable in Humio. The logs belong to the class `c.h.c.ClusterManagementStatsLoggerJob`; logs for all segments contain `globalSegmentStats`, and logs about singular segments start with `segmentStats`.
Remove remains of default groups and roles. The concept was replaced with UserRoles.
Fixed in this release
Security
Update Netty to address CVE-2022-24823.
Update Netty to address CVE-2022-41915.
Bump javax.el to address CVE-2021-28170.
Update Scala to address CVE-2022-36944.
Falcon Data Replicator
FDR Ingest will no longer fail on events that are larger than the maximum allowed event size. Instead, such messages will be truncated.
UI Changes
Prevent the UI showing errors for smaller connection issues while restarting.
Websocket connections are now kept open when transitioning pages, and are used more efficiently for syntax highlighting.
Fixed an issue where some warnings would show twice.
Intermittent network issues are no longer reported immediately as errors in the UI.
Cloud: Updated the layout of the license key page.
Fix the dropdown menus closing too early on the home page.
Fixed a bug where the "=" and "/=" buttons did not appear on cells in the event list where they should.
When viewing the events behind e.g. a Time Chart, the events will now only display with the
@timestamp
and
@rawstring
columns.
GraphQL API
Fix the
assets
GraphQL query in organizations with views that are not 1-to-1 linked.
Configuration
Fixed a bug that could result in merging small ("undersized") segments even if the resulting segment would then have a wider than desired time span. The goal is to not produce segments that span more than 10% of the repository's time-based retention setting. If no time-based retention is configured on the repository, then 3 times the value of the configuration variable
MAX_HOURS_SEGMENT_OPEN
is applied as the limit. For default settings, that results in 72 hours.

Fixed an issue where event forwarding still showed as beta.
Fixed an issue where deleting events from a mini-segment could result in the merge of those mini-segments into the resulting target segment never being executed.
Index in block needs reading from blockwriter before adding each item.
Fixed a bug where the @id field of events in live queries was off by one.
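The time-span limit described in the undersized-segment merge fix above can be sketched numerically. The helper below is a hypothetical Python illustration, assuming default settings; it is not LogScale's actual implementation:

```python
# Hypothetical sketch of the merge time-span limit described above.
# Names and structure are illustrative, not LogScale's real code.
def max_merge_span_hours(retention_hours=None, max_hours_segment_open=24):
    """Upper bound on the time span of a merged segment, in hours."""
    if retention_hours is not None:
        # Merged segments should span at most 10% of time-based retention.
        return retention_hours * 0.10
    # Without time-based retention: 3x MAX_HOURS_SEGMENT_OPEN.
    return 3 * max_hours_segment_open

# Default settings: 3 * 24 = 72 hours, matching the entry above.
print(max_merge_span_hours())                     # 72
print(max_merge_span_hours(retention_hours=720))  # 72.0 for 30-day retention
```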
Dashboards and Widgets
Fixed a bug where certain queries would make it seem that all widgets were incompatible, even though the table view still works.
Importing a dashboard with Shared time enabled and Live disabled would import the dashboard with Live enabled. Likewise, when creating a new dashboard from a template, Live would be on.
The
button on the dashboard correctly applies the typed filter again.

The theme toggle on a shared dashboard was moved to the header panel and no longer overlaps with any widgets.
The
Time Chart
widget regression line is no longer affected by the interpolation setting.
Functions
Fixed a bug where using
eval
as an argument to a function would result in a confusing error message.

Fixed a bug where
ioc:lookup()
would sometimes give incorrect results when negated.

Revised some of the error messages and warnings regarding
join()
and
selfJoin()
.

Fixed a recent bug which caused the category links from
groupBy()
-groups to be lost when a subsequent
sort()
was used, and also made grouping-based charts (bar, pie, heat map) unusable in such cases.

Fixed a bug related to query result metadata for some functions when used as the last aggregate function in a query.
Fixed a bug where the
writeJson()
function would write any field starting with a case-insensitive
inf
or
infinity
prefix as a null value in the resulting JSON.
Other
Make streaming queries search segments newest-to-oldest rather than oldest-to-newest. Streaming queries do not ensure the order of exported events anyway, and searching newest-to-oldest is more efficient.
Fix a bug where changing a role for a user under a repository would trigger infinite network requests.
Centralise decision for names of files in bucket, allow more than one variant.
Improved hover messages for strings.
Fixed an issue where query auto-completion would sometimes delete existing parentheses.
If a segment is deleted or otherwise disappears from global while Humio is attempting to upload it to bucket storage, the upload will now be dropped with an info-level log, rather than requeued with an error log.
Fixes the placement of a confirmation dialog when attempting to change retention.
Fixed a bug in decryption code used when decrypting downloaded files from bucket storage when
version-for-bucket-writes=3
. The bug made it impossible to decrypt files larger than 2GB.

Humio will now clean up its tmp directories by deleting all "humiotmp" directories in the data directory when terminating gracefully.
Fix a regression in the launcher script causing
JVM_LOG_DIR
to not be evaluated relative to the Humio base install path. All paths in the launcher script should now be relative to the base install path, which is the directory containing the bin folder.

Fix a bug that could cause merge targets to be cached indefinitely if the associated minis had their mergeTarget unset. The effect was a minor memory leak.
Fix a bug that could cause Humio to attempt to merge mini-segments from one datasource into a segment in another datasource, causing an error to be thrown.
When configuring thread priorities, Humio will no longer attempt to call the native setpriority function. It will instead only call the Java API for setting thread priority.
Fixed an issue for ephemeral disk based installs where segment files could stay longer on local disks than they were required to, in cases where some nodes listed in the cluster were not alive for extended periods of time.
Fixed an issue where JSON parsing on ingest and in the query language was inefficient for large JSON objects.
Fix performance issue for users with access to many views.
Improve file path handling in DiskSpaceJob to eliminate edge cases where the job might not have been able to tell if a file was on primary or secondary storage.
Fix typo in
Unregisters node
text on the cluster admin UI.

Fixed an issue where event forwarder properties were not properly validated.
Reduced the timeout used when testing event forwarders in order to get a better error when timeouts happen.
Fix a bug that could cause a
NumberFormatException
to be thrown from
ZooKeeperStatsClient
.

Fixed an issue where some segments could stall the background process implementing event redaction. This could then result in segments not being merged. The visible symptom would be segments with
topOffset
attribute being
-1
, and
MiniSegmentMergeLatencyLoggerJob
logging that some segments are not being merged.

Fix a bug causing digesters to continue digesting even if the local disk is full. The digester will now pause digesting and log an error if this occurs.
Fix response entities not being discarded in error cases for the
proxyqueryjobs
endpoint, which could cause warnings in the log.

Update
org.json:json
to address a vulnerability that could cause stack overflows.

Fix an issue causing the event forwarding feature to incorrectly reject topic names that contained a dash
(-)
.

Fix an issue that could rarely cause exceptions to be thrown from
Segments.originalBytesWritten
, causing noise in the log.

Fix an issue causing Humio to create a large number of temporary directories in the data directory.
Bump woodstox to address SNYK-JAVA-COMFASTERXMLWOODSTOX-2928754.
Fixed an issue where queries could fail when the requests within the cluster were more than 8 MB each.
Some error messages wrongly pointed to the beginning of the query.
Kafka upgraded to 3.2.0 in the Docker images and in the Humio dependencies.
Fixed an issue where LogScale could log secrets to the debug log when configured to use LDAP or when configured to use SSL for Kafka.
Fix a regression introduced in 1.46.0 that can cause Humio to fail to properly replay data from Kafka when a node is restarted.
Fixed an issue where strings like
Nana
and
Information
could be interpreted as
NaN
(not-a-number) and
infinity
, respectively.

Fixed a bug where multiline comments weren't always highlighted correctly.
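Several fixes above concern strings being misread as special float values (the writeJson() inf prefix, and "Nana"/"Information" read as NaN/infinity). A hypothetical Python reproduction of the underlying pitfall, prefix-based sniffing, purely for illustration and not LogScale's actual code:

```python
# Hypothetical reproduction of the parsing pitfall behind the fixes above:
# matching special float values by case-insensitive prefix misreads
# ordinary strings. Illustrative only, not LogScale's implementation.
def naive_is_special_float(s: str) -> bool:
    lower = s.lower()
    return lower.startswith("nan") or lower.startswith("inf")

print(naive_is_special_float("Nana"))         # True: misread as NaN
print(naive_is_special_float("Information"))  # True: misread as infinity

# Requiring an exact token match avoids the misclassification:
def is_special_float(s: str) -> bool:
    return s.lower() in {"nan", "inf", "infinity", "-inf", "-infinity"}

print(is_special_float("Information"))  # False
```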
Humio Server 1.51.2 LTS (2022-10-05)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.51.2 | LTS | 2022-10-05 | Cloud | 2023-08-31 | No | 1.30.0 | No |
JAR Checksum | Value |
---|---|
MD5 | e21bc8fda9250669b15d7d61521501f7 |
SHA1 | 50669930b6f27b097221b7e9a12cc9b061d70e52 |
SHA256 | aec3efe7a376ada047e813aad87cd68c43a8ea0de1cfb863190b77265b3f8d32 |
SHA512 | 1b9cec3f814d207bcc49ce9c43bcfcd139f7ddfaedd293c9a20fdfee2898aa0ff5fb5c82374eef2371e31753e331e15f8a6646a0e23b326885a1105fbd6081d4 |
Docker Image | SHA256 Checksum |
---|---|
humio | a63f552480e3e1ccaa22a095ff7b98efcbfc01d9e66fa7f1b40b45242226fc4e |
humio-core | af30f2076612b8e04a8c55172b3ff6cbd41d09b9be64b0267d9f8bd6d3b9efdc |
kafka | 723d25e8ba4bfb7247c4d39021155681aefab20f584c2d7db786941aa9d1522d |
zookeeper | 5878dbf9e32a0913da59d0b2e8dd13c5c7ba6043ba933a8b23d368f0d1241315 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.51.2/server-1.51.2.tar.gz
These notes include entries from the following previous releases: 1.51.0, 1.51.1
Bug fixes and updates.
Removed
Items that have been removed as of this release.
API
The deprecated REST API for actions has been removed, except for the endpoint for testing an action.
The deprecated REST API for parsers has been removed.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated enabledFeatures query. Use the new featureFlags query instead.
New features and improvements
Falcon Data Replicator
FDR polling is now turned on by default. Whether FDR polling should be turned on or off on a node can be configured using the
ENABLE_FDR_POLLING_ON_NODE
configuration variable.

If an S3 file is found to be incorrectly formatted during FDR ingest, it will not be ingested completely, but an attempt is made to ingest the remaining S3 files of the SQS message.
If an S3 file cannot be found during FDR ingest, it will not be ingested, but an attempt is made to ingest the remaining S3 files of the SQS message.
Added environment variable
FDR_USE_PROXY
which makes the FDR job use the proxy settings specified with the
HTTP_PROXY_*
environment variables.
UI Changes
The design of the Time Selector has been updated, and it now features a button on the dashboard page. See Time Interval Settings.
Field columns now support multiple formatting options. See Formatting Columns for details.
Add missing accessibility features to the login page.
In lists of users with avatars showing user initials, the current user would sometimes appear to have an opening parenthesis as their last initial.
The Live checkbox is now no longer checked automatically when changing the value of the time window in the Time Selector. See Changing Time Interval for details.
If Humio fails to start because the cluster is being upgraded, a dedicated message will show when launching the UI.
The Save As... button is now always displayed on the Search page, see it described at Saving Searches.
Improved keyboard accessibility for creating repositories and views.
New styling of errors on search and dashboard pages.
Adds an icon and a hint to a disabled side navigation menu item that tells the user the reason for it being disabled.
Toggle switches anywhere in the UI can now be reached with the Tab key and operated using the keyboard.
When editing an email action in the UI and adding multiple recipients, it is now possible to add a space after the comma in the comma-separated list of recipients.
Documentation
All documentation links have been updated after the restructuring of the documentation site. Please contact support if you experience any broken links.
Automation and Alerts
Fixed a bug where an alert with name longer than 50 characters could not be edited.
GraphQL API
Added preview fields
isClusterBeingUpdated
and
minimumNodeVersion
to the GraphQL Cluster object type.

Added a new dynamic configuration flag
QueryResultRowCountLimit
that globally limits how many results (events) a query can return. This flag can be set by administrators through GraphQL. See Limits & Standards for more details.

The GQL API mutation updateDashboard has been updated to take a new argument updateFrequency which can currently only be
NEVER
or
REALTIME
, which correspond respectively to "dashboards whose queries are never updated after first completion" and "dashboards whose query results are updated indefinitely".

Expose a new GraphQL type with feature flag descriptions and whether they are experimental.
Added a GraphQL mutation for testing an action. It is still in preview, but it will replace the equivalent REST endpoint soon.
Improved error messaging of GraphQL queries and mutations for alerts, scheduled searches and actions in cases where a given repository or view cannot be found.
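Several entries above mention dynamic configurations that administrators can set through GraphQL. The sketch below builds such a request payload in Python; the setDynamicConfig mutation name and input shape are assumptions for illustration, so consult the GraphQL schema for the exact fields:

```python
import json

# Hedged sketch: setting the QueryResultRowCountLimit dynamic configuration
# over the GraphQL API. Mutation name and input shape are assumptions,
# not confirmed by these release notes.
mutation = """
mutation {
  setDynamicConfig(input: {config: QueryResultRowCountLimit, value: "100000"}) {
    __typename
  }
}
"""

# POST this payload to the /graphql endpoint with an API token.
payload = json.dumps({"query": mutation})
print("QueryResultRowCountLimit" in payload)  # True
```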
Configuration
Adds a new metric for measuring the merge latency, which is defined as the latency between the last mini-segment being written in a sequence with the same merge target, and those mini-segments being merged. The metric name is
segment-merge-latency-ms
.

Detect the need for a higher autoshard count by monitoring ingest request flow in the cluster. The number of autoshards for each datasource is increased dynamically to keep the flow on each resulting shard below approximately 2 MB/s. A new dynamic configuration sets the target maximum ingest rate for each shard of a datasource:
TargetMaxRateForDatasource
. Default value is 2000000 (2 MB).

Added a new environment variable
GLOB_MATCH_LIMIT
which sets the maximum number of rows for csv_file in the
match(..., file=csv_file, glob=true)
function. Previously,
MAX_STATE_SIZE
was used to determine this limit. The default value of this variable is 20000. If you've changed the value of
MAX_STATE_SIZE
, we recommend that you also change
GLOB_MATCH_LIMIT
to the same value for a seamless upgrade.

Default value of configuration variable
S3_ARCHIVING_WORKERCOUNT
raised from
1
to
(vCPU/4)
.

Added a new dynamic configuration
GroupDefaultLimit
. This can be done through GraphQL. See Limits & Standards for details. If you've changed the value of
MAX_STATE_LIMIT
, we recommend that you also change
GroupDefaultLimit
and
GroupMaxLimit
to the same value for a seamless upgrade; see
groupBy()
for details.

Introduced new dynamic configuration
LiveQueryMemoryLimit
. It can be set using GraphQL. See Limits & Standards for details.

Introduced new dynamic configuration
JoinRowLimit
. It can be set using GraphQL and can be used as an alternative to the environment variable
MAX_JOIN_LIMIT
. If
JoinRowLimit
is set, then its value will be used instead of
MAX_JOIN_LIMIT
. If it is not set, then
MAX_JOIN_LIMIT
will be used.

Introduced new dynamic configuration
StateRowLimit
. It can be set using GraphQL. See Limits & Standards for details.

Improve the error message if Humio is configured to use bucket storage, but the credentials for the bucket are not configured.
Change default value for configuration
AUTOSHARDING_MAX
from 16 to 128.

Add environment variable
EULA_URL
to specify the URL for terms and conditions.

Added a link to the humio-activity repository, for debugging IDP configurations, to the IDP setup page.
Bucket storage now has support for a new format for the keys (file names) for the files placed in the bucket. When the new format is applied, the listing of files only happens for the prefixes "tmp/" and "globalsnapshots/". This helps products such as "HCP". The new format is applied only to buckets created after the dynamic configuration
BucketStorageKeySchemeVersion
has been set to "2". Existing clusters can start using the new format for new files by setting this dynamic configuration. The change will take effect after restarting the cluster. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.

Introduced new dynamic configuration
GroupMaxLimit
. It can be set using GraphQL. See Limits & Standards for details.

Support for KMS on S3 buckets for Bucket Storage. Specify the full ARN of the key. The
key_id
is persisted in the internal
BucketEntity
so that a later change of the ID of the key to use for uploads will make Humio still refer to the old key ID when downloading files uploaded using the previous key. Setting a new value for the target key results in a fresh internal bucket entity to track which files used KMS and which did not. For simplicity, it is recommended not to mix KMS and non-KMS configurations on the same S3 bucket.

New configuration variable
S3_STORAGE_KMS_KEY_ARN
that specifies the KMS key to use.

New configuration variable
S3_STORAGE_2_KMS_KEY_ARN
for the 2nd bucket key.

New configuration variable
S3_RECOVER_FROM_KMS_KEY_ARN
for recovery bucket key.
New file format for files uploaded to bucket storage that allows files larger than 2GB to be written to bucket storage. This may be turned on by setting the dynamic configuration
BucketStorageWriteVersion
to
3
. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.

New configuration
BUCKET_STORAGE_SSE_COMPATIBLE
that makes bucket storage not verify checksums of raw objects after uploading to S3. This option is turned on automatically if KMS is enabled (see
S3_STORAGE_KMS_KEY_ARN
) but is also available directly for use with other S3-compatible providers where even verifying content length does not work.

Mini segments usually get merged if their event timestamps span more than
MAX_HOURS_SEGMENT_OPEN
. Mini segments created as part of backfilling did not follow this rule, but will now get merged if their ingest timestamps span more than
MAX_HOURS_SEGMENT_OPEN
.

Adds a new logger job that logs the age of an unmerged miniSegment if the age exceeds the threshold set by the environment variable
MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING
. The default value of
MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING
is 2 x
MAX_HOURS_SEGMENT_OPEN
.
MAX_HOURS_SEGMENT_OPEN
defaults to 24 hours. The error log produced looks like:
Oldest unmerged miniSegment is older than the threshold thresholdMs={value} miniSegmentAgeMs={value} segment={value}
.

Introduced new dynamic configuration
QueryMemoryLimit
. It can be set using GraphQL. See also
LiveQueryMemoryLimit
for live queries. For more details, see Limits & Standards.
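The autoshard sizing rule from the Configuration entries above (keep per-shard flow below the TargetMaxRateForDatasource of 2 MB/s, capped by AUTOSHARDING_MAX) can be sketched as arithmetic. The helper below is hypothetical and illustrative, not LogScale's actual implementation:

```python
import math

# Hypothetical sketch of the autoshard sizing rule described above.
def autoshard_count(ingest_bytes_per_sec,
                    target_max_rate=2_000_000,  # TargetMaxRateForDatasource default (2 MB)
                    autosharding_max=128):      # AUTOSHARDING_MAX default in this release
    """Shards needed to keep per-shard flow below the target rate."""
    shards = max(1, math.ceil(ingest_bytes_per_sec / target_max_rate))
    return min(shards, autosharding_max)

print(autoshard_count(7_500_000))      # 4: a 7.5 MB/s datasource needs 4 shards
print(autoshard_count(500_000))        # 1: already below the 2 MB/s target
print(autoshard_count(1_000_000_000))  # 128: capped by AUTOSHARDING_MAX
```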
Dashboards and Widgets
Applied stylistic changes for the Inspect Panel used in Widget Editor.
Dashboards can now be configured to not update after the initial search has completed. This mode is mainly meant to be used when a dashboard is interactive and not for wall-mounted monitors that should update continually. The feature can be accessed from the Dashboard properties panel when a dashboard is put in edit-mode. See Working in Edit Mode.
Bar Chart
widget:

The Y-axis can now start at values smaller than 1 for logarithmic scales, when the data contain small enough values.
It now has an
Auto
setting for the Input Data Format property; see Wide or Long Input Format for details.

Now works with bucket query results.
Added empty states for all widget types that will be rendered when there are no results.
When importing an existing dashboard with a static Shared time, recent changes in the time selection would make those dashboards live.
Introducing the
Heat Map
widget that visualizes aggregated data as a colorised grid.

The
Pie Chart
widget now uses the first column for the series as a fallback option.

The Dashboard page now displays the current cluster status.
Note
widget:

Default background color is now
Auto
.Introduced the text color configuration option.
Sorting of
Pie Chart
widget categories, descending by value. Categories grouped as
Others
will always be last.

The widget legend column width is now based on the custom series title (if specified) instead of the original series name.
The
Normalize
option for the
World Map
widget has been replaced by a third magnitude mode named
None
, which results in fixed size and opacity for all marks.

Table
widgets will now break lines for newline characters in columns.

Better handling of dashboard connection issues during restarts and upgrades.
Single Value
widget:

Missing buckets are now shown as gaps on the sparkline.
Isolated data points are now visualized as dots on the sparkline.
Pie Chart
widget now uses the first column for the series as a fallback option.

Single Value
widget new configuration: deprecated field
use-colorised-thresholds
in favor of
color-method
.

Single Value
widget Editor: the configuration option
Enable Thresholds
is being replaced by an option called
Method
under the Colors section.
Log Collector
The Log Collector download page has been enabled for on-prem deployments.
Functions
Added validation to the
field
and
key
parameters of the
join()
function, so empty lists will be rejected with a meaningful error message.

The
groupBy()
function now accepts
max
as the value for the
limit
parameter, which sets the limit to the largest allowed value (as configured by the dynamic configuration
GroupMaxLimit
).

Improved the phrasing of the warning shown when
groupBy()
exceeds the max or default limit.

Added validation to the
field
parameter of the
kvParse()
function, so empty lists will be rejected with a meaningful error message.
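The groupBy() limit behaviour above (limit=max resolving to the configured GroupMaxLimit, with GroupDefaultLimit applying otherwise) can be sketched as a small resolver. This is a hypothetical illustration; the default values shown are placeholders, not documented defaults:

```python
# Hypothetical illustration of how groupBy()'s limit parameter resolves
# against the GroupDefaultLimit / GroupMaxLimit dynamic configurations.
# The numeric defaults below are placeholder assumptions.
def resolve_group_limit(limit=None,
                        group_default_limit=20_000,
                        group_max_limit=1_000_000):
    if limit is None:
        return group_default_limit     # no limit given: default applies
    if limit == "max":
        return group_max_limit         # limit=max: largest allowed value
    return min(limit, group_max_limit)  # explicit limits never exceed the max

print(resolve_group_limit())        # 20000
print(resolve_group_limit("max"))   # 1000000
print(resolve_group_limit(50_000))  # 50000
```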
Other
Users no longer have access to the audit log or the search-all view by default. Access can be granted with permissions.
Bump the version of the Monaco code editor.
Streaming queries that fail to validate now return a message of why validation failed.
Fix a bug causing Humio's digest coordinator to allow nodes to take over digest without catching up to the current leader. This could cause the new leader to replay more data from Kafka than necessary.
Fixed an issue where query auto-completion sometimes wouldn't show the documentation for the suggested functions.
Adds a new metric for the temp disk usage. The metric name is
temp-disk-usage-bytes
and denotes how many bytes are used.

Added a log message with the maximum state size seen by the live part of live queries.
Include the requester in logs from QuerySessions when a live query is restarted or cancelled.
The audit log system repository on Cloud has been replaced with a view, so that dashboards etc. can be created on top of audit log data.
Make
BucketStorageUploadJob
only log at info level rather than error if a segment upload fails because the segment has been removed from the host. This can happen if node X tries to upload a segment, but node Y beats it to the punch. Node X may then choose to remove its copy before the upload completes.

When unregistering a node from a cluster, return a validation error if it is still alive. Hosts should be shut down before attempting to remove them from the cluster. This validation can be skipped using the same
accept-data-loss
parameter that also disables other validations for the unregistration endpoint.

Added detection and handling of all queries being blocked during Humio upgrades.
Added a log of the approximate query result size before transmission to the frontend, captured by the
approximateResultBeforeSerialization
key.

Added a flag indicating whether a feature is experimental.
Added a log line for when a query exceeds its allotted memory quota.
The referrer meta tag for Humio has been changed from no-referrer to same-origin.
Compute the next set of Prometheus metrics in only a single thread at a time. If more requests arrive, the next request gets the previous response.
Make a number of improvements to the digest partition coordinator. The coordinator now tries harder to avoid assigning digest to nodes that are not caught up on fetching segments from the other nodes. It also does a better job unassigning digest from dead nodes in edge cases.
Fix an unhandled IO exception from
TempDirUsageJob
. The consequence of the uncaught exception was only noise in the error log.

Added a new action type that creates a CSV file from the query result and uploads it to Humio to be used with the
match()
query function. See Action Type: Upload File.

Java in the Docker images no longer has the
cap_net_bind_service
capability, and thus Humio cannot bind directly to privileged ports when running as a non-root user.

Added a warning when a multitenancy user changes data retention on an unlimited repository.
Improved performance of NDJSON format in S3 Archiving.
Fix a bug that could cause Humio to spuriously log errors warning about segments not being merged for datasources doing backfilling.
Humio now logs digest partition assignments regularly. The logs can be found using the query
class=*DigestLeadershipLoggerJob*
.

All feature flags now contain a textual description of what features are hidden behind the flag.
Adds a logger job for cluster management stats that logs the stats every 2 minutes, which makes them searchable in Humio.
The logs belong to the class
c.h.c.ClusterManagementStatsLoggerJob
; logs covering all segments contain
globalSegmentStats
, and logs about individual segments start with
segmentStats
.

Removed the remains of default groups and roles. The concept was replaced with UserRoles.
Fixed in this release
Security
Update Netty to address CVE-2022-24823.
Bump javax.el to address CVE-2021-28170.
Update Scala to address CVE-2022-36944.
Falcon Data Replicator
FDR Ingest will no longer fail on events that are larger than the maximum allowed event size. Instead, such messages will be truncated.
UI Changes
Prevent the UI from showing errors for minor connection issues while restarting.
Websocket connections are now kept open when transitioning pages, and are used more efficiently for syntax highlighting.
Fixed an issue where some warnings would show twice.
Intermittent network issues are no longer reported immediately as errors in the UI.
Cloud: Updated the layout of the license key page.
Fix the dropdown menus closing too early on the home page.
Fixed a bug where the "=" and "/=" buttons did not appear on cells in the event list where they should.
When viewing the events behind e.g. a Time Chart, the events will now only display with the
@timestamp
and
@rawstring
columns.
GraphQL API
Fix the
assets
GraphQL query in organizations with views that are not 1-to-1 linked.
Configuration
Fixed a bug that could result in merging small ("undersized") segments even if the resulting segment would then have a wider than desired time span. The goal is to not produce segments that span more than 10% of the repository's time-based retention setting. If no time-based retention is configured on the repository, then 3 times the value of the configuration variable
MAX_HOURS_SEGMENT_OPEN
is applied as the limit. For default settings, that results in 72 hours.

Fixed an issue where event forwarding still showed as beta.
Fixed an issue where deleting events from a mini-segment could result in the merge of those mini-segments into the resulting target segment never being executed.
Index in block needs reading from blockwriter before adding each item.
Fixed a bug where the @id field of events in live queries was off by one.
Dashboards and Widgets
Fixed a bug where certain queries would make it seem that all widgets were incompatible, even though the table view still works.
Importing a dashboard with Shared time enabled and Live disabled would import the dashboard with Live enabled. Likewise, when creating a new dashboard from a template, Live would be on.
The
button on the dashboard correctly applies the typed filter again.

The theme toggle on a shared dashboard was moved to the header panel and no longer overlaps with any widgets.
The
Time Chart
widget regression line is no longer affected by the interpolation setting.
Functions
Fixed a bug where using
eval
as an argument to a function would result in a confusing error message.

Fixed a bug where
ioc:lookup()
would sometimes give incorrect results when negated.

Revised some of the error messages and warnings regarding
join()
and
selfJoin()
.

Fixed a recent bug which caused the category links from
groupBy()
-groups to be lost when a subsequent
sort()
was used, and also made grouping-based charts (bar, pie, heat map) unusable in such cases.

Fixed a bug related to query result metadata for some functions when used as the last aggregate function in a query.
Fixed a bug where the
writeJson()
function would write any field starting with a case-insensitive
inf
or
infinity
prefix as a null value in the resulting JSON.
Other
Make streaming queries search segments newest-to-oldest rather than oldest-to-newest. Streaming queries do not ensure the order of exported events anyway, and searching newest-to-oldest is more efficient.
Fix a bug where changing a role for a user under a repository would trigger infinite network requests.
Centralise decision for names of files in bucket, allow more than one variant.
Improved hover messages for strings.
Fixed an issue where query auto-completion would sometimes delete existing parentheses.
If a segment is deleted or otherwise disappears from global while Humio is attempting to upload it to bucket storage, the upload will now be dropped with an info-level log, rather than requeued with an error log.
Fixes the placement of a confirmation dialog when attempting to change retention.
Humio will now clean up its tmp directories by deleting all "humiotmp" directories in the data directory when terminating gracefully.
Fix a regression in the launcher script causing
JVM_LOG_DIR
to not be evaluated relative to the Humio base install path. All paths in the launcher script should now be relative to the base install path, which is the directory containing the bin folder.

Fix a bug that could cause merge targets to be cached indefinitely if the associated minis had their mergeTarget unset. The effect was a minor memory leak.
Fix a bug that could cause Humio to attempt to merge mini-segments from one datasource into a segment in another datasource, causing an error to be thrown.
When configuring thread priorities, Humio will no longer attempt to call the native setpriority function. It will instead only call the Java API for setting thread priority.
Fixed an issue for ephemeral disk based installs where segment files could stay longer on local disks than they were required to, in cases where some nodes listed in the cluster were not alive for extended periods of time.
Fixed an issue where JSON parsing on ingest and in the query language was inefficient for large JSON objects.
Fix performance issue for users with access to many views.
Improve file path handling in DiskSpaceJob to eliminate edge cases where the job might not have been able to tell if a file was on primary or secondary storage.
Fix typo in
Unregisters node
text on the cluster admin UI.

Fixed an issue where event forwarder properties were not properly validated.
Reduced the timeout used when testing event forwarders in order to get a better error when timeouts happen.
Fix a bug that could cause a
NumberFormatException
to be thrown from
ZooKeeperStatsClient
.

Fixed an issue where some segments could stall the background process implementing event redaction. This could then result in segments not being merged. The visible symptom would be segments with
topOffset
attribute being
-1
, and
MiniSegmentMergeLatencyLoggerJob
logging that some segments are not being merged.

Fix a bug causing digesters to continue digesting even if the local disk is full. The digester will now pause digesting and log an error if this occurs.
Fix response entities not being discarded in error cases for the
proxyqueryjobs
endpoint, which could cause warnings in the log.

Update
org.json:json
to address a vulnerability that could cause stack overflows.

Fix an issue causing the event forwarding feature to incorrectly reject topic names that contained a dash
(-)
.

Fix an issue that could rarely cause exceptions to be thrown from
Segments.originalBytesWritten
, causing noise in the log.

Fix an issue causing Humio to create a large number of temporary directories in the data directory.
Bump woodstox to address SNYK-JAVA-COMFASTERXMLWOODSTOX-2928754.
Fixed an issue where queries could fail when the requests within the cluster were more than 8 MB each.
Some error messages wrongly pointed to the beginning of the query.
Kafka upgraded to 3.2.0 in the Docker images and in the Humio dependencies.
Fix a regression introduced in 1.46.0 that can cause Humio to fail to properly replay data from Kafka when a node is restarted.
Fixed an issue where strings like
Nana
and
Information
could be interpreted as
NaN
(not-a-number) and
infinity
, respectively.

Fixed a bug where multiline comments weren't always highlighted correctly.
Humio Server 1.51.1 LTS (2022-08-29)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.51.1 | LTS | 2022-08-29 | Cloud | 2023-08-31 | No | 1.30.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 6410ab777ec6d98066a3492f4a6b68af |
SHA1 | 2b55844a4c4d5d613f091a7a753977e6b51477f4 |
SHA256 | a8beb23cfa0b00aebbd4a4a95665deedcce868c0be1de5c5383601fd55ba60f9 |
SHA512 | c37d2ee75d3b7ac4259fd453c24fa8ada18de360837ba475cf552747ac1b0b07dfce618ced2708e51ee9d66c5ff1a36081bb57a2531922930c385846ac80d73c |
Docker Image | SHA256 Checksum |
---|---|
humio | 10c21dbc2eba33d4e401b0559ae0ecacfd1f80e9184b946164e674068380d286 |
humio-core | f232ce1b182b74bb534249fbf8eba1ab41544242353e1fe4d21f69a0e6a7c190 |
kafka | 98fe1b8f3c6caadb6efb6de3e5be3215e9987f235c1134ac0e14eb8705d1d2d8 |
zookeeper | 97c571a338e94b9ecaf66c6e9625c593dff68fe62c04c1b2a1ffd44bf10d39ba |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.51.1/server-1.51.1.tar.gz
These notes include entries from the following previous releases: 1.51.0
Bug fix.
Removed
Items that have been removed as of this release.
API
The deprecated REST API for actions has been removed, except for the endpoint for testing an action.
The deprecated REST API for parsers has been removed.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated enabledFeatures query. Use the new featureFlags query instead.
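A minimal sketch of moving to the replacement query follows; the exact return fields depend on your cluster's GraphQL schema, so treat the field selection as an assumption to verify.

```graphql
# Hypothetical sketch — confirm the shape against the live GraphQL schema.
query {
  featureFlags
}
```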
New features and improvements
Falcon Data Replicator
FDR polling is now turned on by default. Whether FDR polling should be turned on or off on a node can be configured using the `ENABLE_FDR_POLLING_ON_NODE` configuration variable.
If an S3 file is found to be incorrectly formatted during FDR ingest, it will not be ingested completely, but an attempt is made to ingest the remaining S3 files of the SQS message.
If an S3 file cannot be found during FDR ingest, it will not be ingested, but an attempt is made to ingest the remaining S3 files of the SQS message.
Added environment variable `FDR_USE_PROXY`, which makes the FDR job use the proxy settings specified with the `HTTP_PROXY_*` environment variables.
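As a sketch, enabling proxying for the FDR job might look like the following environment configuration. The specific `HTTP_PROXY_HOST`/`HTTP_PROXY_PORT` names are assumptions for illustration; check which `HTTP_PROXY_*` variables your deployment supports.

```shell
# Hypothetical example: route FDR polling through an HTTP proxy.
# The HTTP_PROXY_* variable names below are assumed for illustration.
export FDR_USE_PROXY=true
export HTTP_PROXY_HOST=proxy.internal.example
export HTTP_PROXY_PORT=8080
```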
UI Changes
The design of the Time Selector has been updated, and it now features a button on the dashboard page. See Time Interval Settings.
Field columns now support multiple formatting options. See Formatting Columns for details.
Add missing accessibility features to the login page.
In lists of users, with user avatars containing user initials, the current user would sometimes appear to have an opening parenthesis as their last initial.
The Live checkbox is now no longer checked automatically when changing the value of the time window in the Time Selector. See Changing Time Interval for details.
If Humio fails to start because the cluster is being upgraded, a dedicated message will show when launching the UI.
The Save As... button is now always displayed on the Search page, see it described at Saving Searches.
Improved keyboard accessibility for creating repositories and views.
New styling of errors on search and dashboard pages.
Adds an icon and a hint to a disabled side navigation menu item that tells the user the reason for it being disabled.
Toggle switches anywhere in the UI can now be reached using the tab key and operated using the keyboard.
When editing an email action in the UI and adding multiple recipients, it is now possible to add a space after the comma in the comma-separated list of recipients.
Documentation
All documentation links have been updated after the documentation site has been restructured. Please contact support if you experience any broken links.
Automation and Alerts
Fixed a bug where an alert with name longer than 50 characters could not be edited.
GraphQL API
Added preview fields `isClusterBeingUpdated` and `minimumNodeVersion` to the GraphQL Cluster object type.
Added a new dynamic configuration flag `QueryResultRowCountLimit` that globally limits how many results (events) a query can return. This flag can be set by administrators through GraphQL. See Limits & Standards for more details.
The GQL API mutation updateDashboard has been updated to take a new argument updateFrequency, which can currently only be `NEVER` or `REALTIME`; these correspond respectively to "dashboard where queries are never updated after first completion" and "dashboard where query results are updated indefinitely".
Expose a new GraphQL type with feature flag descriptions and whether they are experimental.
Added a GraphQL mutation for testing an action. It is still in preview, but it will replace the equivalent REST endpoint soon.
Improved error messaging of GraphQL queries and mutations for alerts, scheduled searches and actions in cases where a given repository or view cannot be found.
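Dynamic configuration flags such as `QueryResultRowCountLimit` are set through GraphQL. A hedged sketch follows, assuming a `setDynamicConfig` mutation of roughly this shape; verify the mutation name and input object against your cluster's schema before use.

```graphql
mutation {
  # Hypothetical shape — confirm against the live GraphQL schema.
  setDynamicConfig(input: {
    config: QueryResultRowCountLimit,
    value: "100000"
  })
}
```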
Configuration
Adds a new metric for measuring the merge latency, which is defined as the latency between the last mini-segment being written in a sequence with the same merge target, and those mini-segments being merged. The metric name is `segment-merge-latency-ms`.
Detect need for higher autoshard count by monitoring ingest request flow in the cluster. Dynamically increase the number of autoshards for each datasource to keep flow on each resulting shard below approximately 2 MB/s. New dynamic configuration for this that sets the target maximum rate of ingest for each shard of a datasource: `TargetMaxRateForDatasource`. Default value is 2000000 (2 MB).
Added a new environment variable `GLOB_MATCH_LIMIT`, which sets the maximum number of rows for csv_file in the `match(..., file=csv_file, glob=true)` function. Previously `MAX_STATE_SIZE` was used to determine this limit. The default value of this variable is 20000. If you've changed the value of `MAX_STATE_SIZE`, we recommend that you also change `GLOB_MATCH_LIMIT` to the same value for a seamless upgrade.
Default value of configuration variable
`S3_ARCHIVING_WORKERCOUNT` raised from `1` to `(vCPU/4)`.
Added a new dynamic configuration `GroupDefaultLimit`. This can be done through GraphQL. See Limits & Standards for details. If you've changed the value of `MAX_STATE_LIMIT`, we recommend that you also change `GroupDefaultLimit` and `GroupMaxLimit` to the same value for a seamless upgrade; see `groupBy()` for details.
Introduced new dynamic configuration `LiveQueryMemoryLimit`. It can be set using GraphQL. See Limits & Standards for details.
Introduced new dynamic configuration `JoinRowLimit`. It can be set using GraphQL and can be used as an alternative to the environment variable `MAX_JOIN_LIMIT`. If `JoinRowLimit` is set, then its value will be used instead of `MAX_JOIN_LIMIT`. If it is not set, then `MAX_JOIN_LIMIT` will be used.
Introduced new dynamic configuration
`StateRowLimit`. It can be set using GraphQL. See Limits & Standards for details.
Improve the error message if Humio is configured to use bucket storage, but the credentials for the bucket are not configured.
Change default value for configuration `AUTOSHARDING_MAX` from 16 to 128.
Add environment variable `EULA_URL` to specify the URL for terms and conditions.
Added a link to the humio-activity repository for debugging IDP configurations to the page for setting up the same.
Bucket storage now has support for a new format for the keys (file names) for the files placed in the bucket. When the new format is applied, the listing of files only happens for the prefixes "tmp/" and "globalsnapshots/". This helps products such as "HCP". The new format is applied only to buckets created after the dynamic configuration `BucketStorageKeySchemeVersion` has been set to "2". Existing clusters can start using the new format for new files by setting this dynamic configuration. The change will take effect after restarting the cluster. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.
Introduced new dynamic configuration `GroupMaxLimit`. It can be set using GraphQL. See Limits & Standards for details.
Support for KMS on an S3 bucket for Bucket Storage. Specify the full ARN of the key. The `key_id` is persisted in the internal `BucketEntity` so that a later change of the ID of the key to use for uploads will make Humio still refer to the old key ID when downloading files uploaded using the previous key. Setting a new value for the target key results in a fresh internal bucket entity to track which files used KMS and which did not. For simplicity it is recommended not to mix KMS and non-KMS configurations on the same S3 bucket.
New configuration variable `S3_STORAGE_KMS_KEY_ARN` that specifies the KMS key to use.
New configuration variable `S3_STORAGE_2_KMS_KEY_ARN` for the second bucket key.
New configuration variable `S3_RECOVER_FROM_KMS_KEY_ARN` for the recovery bucket key.
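Taken together, a KMS-enabled bucket storage setup might be configured as below. The ARNs are placeholders for illustration, not real keys.

```shell
# Hypothetical example — placeholder ARNs, not real keys.
export S3_STORAGE_KMS_KEY_ARN="arn:aws:kms:us-east-1:111122223333:key/example-primary"
export S3_STORAGE_2_KMS_KEY_ARN="arn:aws:kms:us-east-1:111122223333:key/example-secondary"
export S3_RECOVER_FROM_KMS_KEY_ARN="arn:aws:kms:us-east-1:111122223333:key/example-recovery"
```

As the note above recommends, avoid mixing KMS and non-KMS configurations on the same S3 bucket.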
New file format for files uploaded to bucket storage that allows files larger than 2 GB to be written to bucket storage. This may be turned on by setting the dynamic configuration `BucketStorageWriteVersion` to `3`. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.
New configuration `BUCKET_STORAGE_SSE_COMPATIBLE` that makes bucket storage not verify checksums of raw objects after uploading to S3. This option is turned on automatically if KMS is enabled (see `S3_STORAGE_KMS_KEY_ARN`), but is available directly here for use with other S3-compatible providers where even verifying content length does not work.
Mini segments usually get merged if their event timestamps span more than `MAX_HOURS_SEGMENT_OPEN`. Mini segments created as part of backfilling did not follow this rule, but will now get merged if their ingest timestamps span more than `MAX_HOURS_SEGMENT_OPEN`.
Adds a new logger job that logs the age of an unmerged mini-segment if the age exceeds the threshold set by the environment variable `MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING`. The default value of `MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING` is 2 x `MAX_HOURS_SEGMENT_OPEN`. `MAX_HOURS_SEGMENT_OPEN` defaults to 24 hours. The error log produced looks like: `Oldest unmerged miniSegment is older than the threshold thresholdMs={value} miniSegmentAgeMs={value} segment={value}`.
Introduced new dynamic configuration `QueryMemoryLimit`. It can be set using GraphQL. See also `LiveQueryMemoryLimit` for live queries. For more details, see Limits & Standards.
Dashboards and Widgets
Applied stylistic changes for the Inspect Panel used in Widget Editor.
Dashboards can now be configured to not update after the initial search has completed. This mode is mainly meant to be used when a dashboard is interactive and not for wall-mounted monitors that should update continually. The feature can be accessed from the Dashboard properties panel when a dashboard is put in edit-mode. See Working in Edit Mode.
`Bar Chart` widget:
The Y-axis can now start at values smaller than 1 for logarithmic scales, when the data contain small enough values.
It now has an `Auto` setting for the Input Data Format property; see Wide or Long Input Format for details.
It now works with bucket query results.
Added empty states for all widget types that will be rendered when there are no results.
When importing an existing dashboard with a static Shared time, recent changes in the time selection would make those dashboards live.
Introducing the `Heat Map` widget that visualizes aggregated data as a colorised grid.
The `Pie Chart` widget now uses the first column for the series as a fallback option.
The Dashboard page now displays the current cluster status.
`Note` widget:
Default background color is now `Auto`.
Introduced the text color configuration option.
Sorting of `Pie Chart` widget categories, descending by value. Categories grouped as `Others` will always be last.
The widget legend column width is now based on the custom series title (if specified) instead of the original series name.
The `Normalize` option for the `World Map` widget has been replaced by a third magnitude mode named `None`, which results in fixed size and opacity for all marks.
`Table` widgets will now break lines for newline characters in columns.
Better handling of dashboard connection issues during restarts and upgrades.
`Single Value` widget:
Missing buckets are now shown as gaps on the sparkline.
Isolated data points are now visualized as dots on the sparkline.
The `Pie Chart` widget now uses the first column for the series as a fallback option.
`Single Value` widget new configuration: deprecated field `use-colorised-thresholds` in favor of `color-method`.
`Single Value` widget Editor: the configuration option `Enable Thresholds` is being replaced by an option called `Method` under the Colors section.
Log Collector
The Log Collector download page has been enabled for on-prem deployments.
Functions
Added validation to the `field` and `key` parameters of the `join()` function, so empty lists will be rejected with a meaningful error message.
The `groupBy()` function now accepts `max` as a value for the `limit` parameter, which sets the limit to the largest allowed value (as configured by the dynamic configuration `GroupMaxLimit`).
Improved the phrasing of the warning shown when `groupBy()` exceeds the max or default limit.
Added validation to the `field` parameter of the `kvParse()` function, so empty lists will be rejected with a meaningful error message.
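For illustration, the new `limit=max` value for `groupBy()` can be used as below; the `src_ip` field name is hypothetical.

```
groupBy(field=src_ip, function=count(), limit=max)
```

With `limit=max`, the effective limit follows whatever `GroupMaxLimit` is set to on the cluster.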
Other
Users will no longer have access to the audit log or the search-all view by default. Access can be granted with permissions.
Bump the version of the Monaco code editor.
Streaming queries that fail to validate now return a message of why validation failed.
Fix a bug causing Humio's digest coordinator to allow nodes to take over digest without catching up to the current leader. This could cause the new leader to replay more data from Kafka than necessary.
Fixed an issue where query auto-completion sometimes wouldn't show the documentation for the suggested functions.
Adds a new metric for the temp disk usage. The metric name is `temp-disk-usage-bytes` and denotes how many bytes are used.
Added a log message with the maximum state size seen by the live part of live queries.
Include the requester in logs from QuerySessions when a live query is restarted or cancelled.
The audit log system repository on Cloud has been replaced with a view, so that dashboards etc. can be created on top of audit log data.
Make `BucketStorageUploadJob` only log at info level rather than error if a segment upload fails because the segment has been removed from the host. This can happen if node X tries to upload a segment, but node Y beats it to the punch. Node X may then choose to remove its copy before the upload completes.
When unregistering a node from a cluster, return a validation error if it is still alive. Hosts should be shut down before attempting to remove them from the cluster. This validation can be skipped using the same `accept-data-loss` parameter that also disables other validations for the unregistration endpoint.
Added detection and handling of all queries being blocked during Humio upgrades.
Added a log of the approximate query result size before transmission to the frontend, captured by the `approximateResultBeforeSerialization` key.
Add a flag for whether a feature is experimental.
The referrer meta tag for Humio has been changed from no-referrer to same-origin.
Compute next set of Prometheus metrics only in a single thread concurrently. If more requests arrive, then the next request gets the previous response.
Make a number of improvements to the digest partition coordinator. The coordinator now tries harder to avoid assigning digest to nodes that are not caught up on fetching segments from the other nodes. It also does a better job unassigning digest from dead nodes in edge cases.
Fix an unhandled IO exception from `TempDirUsageJob`. The consequence of the uncaught exception was only noise in the error log.
Added a new action type that creates a CSV file from the query result and uploads it to Humio to be used with the `match()` query function. See Action Type: Upload File.
Java in the Docker images no longer has the `cap_net_bind_service` capability, and thus Humio cannot bind directly to privileged ports when running as a non-root user.
Add a warning when a multitenancy user is changing data retention on an unlimited repository.
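A CSV uploaded by the new Upload File action type can then be referenced from the query language via `match()`; a hedged sketch with hypothetical file and field names:

```
match(file="blocked-ips.csv", field=src_ip)
```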
Improved performance of NDJSON format in S3 Archiving.
Fix a bug that could cause Humio to spuriously log errors warning about segments not being merged for datasources doing backfilling.
Humio now logs digest partition assignments regularly. The logs can be found using the query `class=*DigestLeadershipLoggerJob*`.
All feature flags now contain a textual description of what features are hidden behind the flag.
Adds a logger job for cluster management stats; it logs the stats every 2 minutes, which makes them searchable in Humio. The logs belong to the class `c.h.c.ClusterManagementStatsLoggerJob`; logs for all segments contain `globalSegmentStats`, and logs about singular segments start with `segmentStats`.
Remove remains of default groups and roles. The concept was replaced with UserRoles.
Fixed in this release
Security
Update Netty to address CVE-2022-24823.
Bump javax.el to address CVE-2021-28170.
Falcon Data Replicator
FDR Ingest will no longer fail on events that are larger than the maximum allowed event size. Instead, such messages will be truncated.
UI Changes
Prevent the UI showing errors for smaller connection issues while restarting.
Websocket connections are now kept open when transitioning pages, and are used more efficiently for syntax highlighting.
Fixed an issue where some warnings would show twice.
Intermittent network issues are no longer reported immediately as an error in the UI.
Cloud: Updated the layout for the license key page.
Fix the dropdown menus closing too early on the home page.
Fixed a bug where the "=" and "/=" buttons did not appear on cells in the event list where they should.
When viewing the events behind e.g. a Time Chart, the events will now only display with the `@timestamp` and `@rawstring` columns.
GraphQL API
Fix the `assets` GraphQL query in organizations with views that are not 1-to-1 linked.
Configuration
Fixed a bug that could result in merging small ("undersized") segments even if the resulting segment would then have a wider than desired time span. The goal is to not produce segments that span more than 10% of the time-based retention setting for the repository. If no time-based retention is configured on the repository, then 3 times the value of configuration variable `MAX_HOURS_SEGMENT_OPEN` is applied as the limit. For default settings, that results in 72 hours.
Fixed an issue where event forwarding still showed as beta.
Fixed an issue where deleting events from a mini-segment could result in the merge of those mini-segments into the resulting target segment never being executed.
Index in block needs reading from blockwriter before adding each item.
Fixed a bug where the @id field of events in live queries was off by one.
Dashboards and Widgets
The button on the dashboard correctly applies the typed filter again.
The theme toggle on a shared dashboard was moved to the header panel and no longer overlaps with any widgets.
The `Time Chart` widget regression line is no longer affected by the interpolation setting.
Functions
Fixed a bug where using `eval` as an argument to a function would result in a confusing error message.
Fixed a bug where `ioc:lookup()` would sometimes give incorrect results when negated.
Revised some of the error messages and warnings regarding `join()` and `selfJoin()`.
Fixed a recent bug which caused the category links from `groupBy()` groups to be lost when a subsequent `sort()` was used, and which also made grouping-based charts (bar, pie, heat map) unusable in such cases.
Fixed a bug related to query result metadata for some functions when used as the last aggregate function in a query.
Fixed a bug where the `writeJson()` function would write any field starting with a case-insensitive `inf` or `infinity` prefix as a null value in the resulting JSON.
Other
Make streaming queries search segments newest-to-oldest rather than oldest-to-newest. Streaming queries do not ensure the order of exported events anyway, and searching newest-to-oldest is more efficient.
Fix a bug where changing a role for a user under a repository would trigger infinite network requests.
Centralised the decision for names of files in the bucket, allowing more than one variant.
Improved hover messages for strings.
Fixed an issue where query auto-completion would sometimes delete existing parentheses.
If a segment is deleted or otherwise disappears from global while Humio is attempting to upload it to bucket storage, the upload will now be dropped with an info-level log, rather than requeued with an error log.
Fixes the placement of a confirmation dialog when attempting to change retention.
Humio will now clean up its tmp directories by deleting all "humiotmp" directories in the data directory when terminating gracefully.
Fix a regression in the launcher script causing `JVM_LOG_DIR` to not be evaluated relative to the Humio base install path. All paths in the launcher script should now be relative to the base install path, which is the directory containing the bin folder.
Fix a bug that could cause Humio to attempt to merge mini-segments from one datasource into a segment in another datasource, causing an error to be thrown.
When configuring thread priorities, Humio will no longer attempt to call the native setpriority function. It will instead only call the Java API for setting thread priority.
Fixed an issue for ephemeral disk based installs where segment files could stay longer on local disks than they were required to, in cases where some nodes listed in the cluster were not alive for extended periods of time.
Fixed an issue where JSON parsing on ingest and in the query language was inefficient for large JSON objects.
Fix performance issue for users with access to many views.
Improve file path handling in DiskSpaceJob to eliminate edge cases where the job might not have been able to tell if a file was on primary or secondary storage.
Fix typo in `Unregisters node` text on the cluster admin UI.
Fixed an issue where event forwarder properties were not properly validated.
Reduced the timeout used when testing event forwarders in order to get a better error when timeouts happen.
Fix a bug that could cause a `NumberFormatException` to be thrown from `ZooKeeperStatsClient`.
Fix a bug causing digesters to continue digesting even if the local disk is full. The digester will now pause digesting and log an error if this occurs.
Fix response entities not being discarded in error cases for the `proxyqueryjobs` endpoint, which could cause warnings in the log.
Update `org.json:json` to address a vulnerability that could cause stack overflows.
Fix an issue causing the event forwarding feature to incorrectly reject topic names that contained a dash (`-`).
Fix an issue that could rarely cause exceptions to be thrown from `Segments.originalBytesWritten`, causing noise in the log.
Fix an issue causing Humio to create a large number of temporary directories in the data directory.
Bump woodstox to address SNYK-JAVA-COMFASTERXMLWOODSTOX-2928754.
Fixed an issue where queries could fail when the requests within the cluster were more than 8 MB each.
Some error messages wrongly pointed to the beginning of the query.
Kafka has been upgraded to 3.2.0 in the Docker images and in the Humio dependencies.
Fixed an issue where strings like `Nana` and `Information` could be interpreted as `NaN` (not-a-number) and `infinity`, respectively.
Fixed a bug where multiline comments weren't always highlighted correctly.
Humio Server 1.51.0 LTS (2022-08-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.51.0 | LTS | 2022-08-15 | Cloud | 2023-08-31 | No | 1.30.0 | No |
JAR Checksum | Value |
---|---|
MD5 | af0041ec27647291086073dc83628bc2 |
SHA1 | 210e924a2863c3b28d08659241c0a283482c12b8 |
SHA256 | fddcb2184d76d5cfefb2f6a02705f66764d8972fb2e664ef7115353344d05680 |
SHA512 | 2b230a836e18d13a282ea28be23130897876e6cbe0bfa30a71a3795a7704255cbdf1c19eadc094cd970e58e0234c540a6b469266765d736fc6435c0a01bbda1e |
Docker Image | SHA256 Checksum |
---|---|
humio | c5db5fac0b03adf9039c31ef4ba69c49356b230b01e0eab0b16c477768dd52af |
humio-core | 36dbe1d90534d2ca72bdadb4992397aa60d2315565ce7a3c7c272a48617cf759 |
kafka | 92de8c0d092fe5c04cc10e076a9bb183c8f3136b9813f3ae74b423ae090cb0d8 |
zookeeper | 85620182a8c5f91426e0d138ca21947975c99a16fbb00f9e7d0c0ca7a8e94d2a |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.51.0/server-1.51.0.tar.gz
Bug fixes and updates.
Removed
Items that have been removed as of this release.
API
The deprecated REST API for actions has been removed, except for the endpoint for testing an action.
The deprecated REST API for parsers has been removed.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated enabledFeatures query. Use the new featureFlags query instead.
New features and improvements
Falcon Data Replicator
FDR polling is now turned on by default. Whether FDR polling should be turned on or off on a node can be configured using the `ENABLE_FDR_POLLING_ON_NODE` configuration variable.
If an S3 file is found to be incorrectly formatted during FDR ingest, it will not be ingested completely, but an attempt is made to ingest the remaining S3 files of the SQS message.
If an S3 file cannot be found during FDR ingest, it will not be ingested, but an attempt is made to ingest the remaining S3 files of the SQS message.
Added environment variable `FDR_USE_PROXY`, which makes the FDR job use the proxy settings specified with the `HTTP_PROXY_*` environment variables.
UI Changes
The design of the Time Selector has been updated, and it now features a button on the dashboard page. See Time Interval Settings.
Field columns now support multiple formatting options. See Formatting Columns for details.
Add missing accessibility features to the login page.
In lists of users, with user avatars containing user initials, the current user would sometimes appear to have an opening parenthesis as their last initial.
The Live checkbox is now no longer checked automatically when changing the value of the time window in the Time Selector. See Changing Time Interval for details.
If Humio fails to start because the cluster is being upgraded, a dedicated message will show when launching the UI.
The Save As... button is now always displayed on the Search page, see it described at Saving Searches.
Improved keyboard accessibility for creating repositories and views.
New styling of errors on search and dashboard pages.
Adds an icon and a hint to a disabled side navigation menu item that tells the user the reason for it being disabled.
Toggle switches anywhere in the UI can now be reached using the tab key and operated using the keyboard.
When editing an email action in the UI and adding multiple recipients, it is now possible to add a space after the comma in the comma-separated list of recipients.
Documentation
All documentation links have been updated after the documentation site has been restructured. Please contact support if you experience any broken links.
Automation and Alerts
Fixed a bug where an alert with name longer than 50 characters could not be edited.
GraphQL API
Added preview fields `isClusterBeingUpdated` and `minimumNodeVersion` to the GraphQL Cluster object type.
Added a new dynamic configuration flag `QueryResultRowCountLimit` that globally limits how many results (events) a query can return. This flag can be set by administrators through GraphQL. See Limits & Standards for more details.
The GQL API mutation updateDashboard has been updated to take a new argument updateFrequency, which can currently only be `NEVER` or `REALTIME`; these correspond respectively to "dashboard where queries are never updated after first completion" and "dashboard where query results are updated indefinitely".
Expose a new GraphQL type with feature flag descriptions and whether they are experimental.
Added a GraphQL mutation for testing an action. It is still in preview, but it will replace the equivalent REST endpoint soon.
Improved error messaging of GraphQL queries and mutations for alerts, scheduled searches and actions in cases where a given repository or view cannot be found.
Configuration
Adds a new metric for measuring the merge latency, which is defined as the latency between the last mini-segment being written in a sequence with the same merge target, and those mini-segments being merged. The metric name is `segment-merge-latency-ms`.
Detect need for higher autoshard count by monitoring ingest request flow in the cluster. Dynamically increase the number of autoshards for each datasource to keep flow on each resulting shard below approximately 2 MB/s. New dynamic configuration for this that sets the target maximum rate of ingest for each shard of a datasource: `TargetMaxRateForDatasource`. Default value is 2000000 (2 MB).
Added a new environment variable `GLOB_MATCH_LIMIT`, which sets the maximum number of rows for csv_file in the `match(..., file=csv_file, glob=true)` function. Previously `MAX_STATE_SIZE` was used to determine this limit. The default value of this variable is 20000. If you've changed the value of `MAX_STATE_SIZE`, we recommend that you also change `GLOB_MATCH_LIMIT` to the same value for a seamless upgrade.
Default value of configuration variable
`S3_ARCHIVING_WORKERCOUNT` raised from `1` to `(vCPU/4)`.
Added a new dynamic configuration `GroupDefaultLimit`. This can be done through GraphQL. See Limits & Standards for details. If you've changed the value of `MAX_STATE_LIMIT`, we recommend that you also change `GroupDefaultLimit` and `GroupMaxLimit` to the same value for a seamless upgrade; see `groupBy()` for details.
Introduced new dynamic configuration `LiveQueryMemoryLimit`. It can be set using GraphQL. See Limits & Standards for details.
Introduced new dynamic configuration `JoinRowLimit`. It can be set using GraphQL and can be used as an alternative to the environment variable `MAX_JOIN_LIMIT`. If `JoinRowLimit` is set, then its value will be used instead of `MAX_JOIN_LIMIT`. If it is not set, then `MAX_JOIN_LIMIT` will be used.
Introduced new dynamic configuration
StateRowLimit
. It can be set using GraphQL. See Limits & Standards for details.Improve the error message if Humio is configured to use bucket storage, but the credentials for the bucket are not configured.
Change default value for configuration
AUTOSHARDING_MAX
from 16 to 128.Add environment variable
EULA_URL
to specificy url for terms and conditions.Added a link to humio-activity repository for debugging IDP configurations to the page for setting up the same.
Bucket storage now has support for a new format for the keys (file names) for the files placed in the bucket. When the new format is applied, the listing of files only happens for the prefixes "tmp/" and "globalsnapshots/". This helps products such a "HCP". The new format is applied only to buckets created after the dynamic configuration
BucketStorageKeySchemeVersion
has been set to "2". Existing cluster can start using the new format for new files by setting this dynamic configuration. The change will take effect after restarting the cluster. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.Introduced new dynamic configuration
GroupMaxLimit
. It can be set using GraphQL. See Limits & Standards for details.Support for KMS on S3 bucket for Bucket Storage. Specify full ARN of the key. The
key_id
is persisted in the internalBucketEntity
so that a later change of the ID of the key to use for uploads will make Humio still refer the old keyID when downloading files uploaded using the previous key. Setting a new value for the target key results in a fresh internal bucket entity to track which files used kms and which did not. For simplicity it is recommended to not mix KMS and non-KMS configurations on the same S3 bucket.New configuration variable
S3_STORAGE_KMS_KEY_ARN
that specifies the KMS key to use.New configuration variable
S3_STORAGE_2_KMS_KEY_ARN
for 2nd bucket key.New configuration variable
S3_RECOVER_FROM_KMS_KEY_ARN
for recovery bucket key.
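Taken together, a bucket-storage KMS setup using these variables might look like the following sketch (the ARNs are placeholder values for illustration, not real keys):

```shell
# Hypothetical ARNs for illustration; substitute your own KMS keys.
export S3_STORAGE_KMS_KEY_ARN="arn:aws:kms:us-east-1:111122223333:key/primary-example"
export S3_STORAGE_2_KMS_KEY_ARN="arn:aws:kms:us-east-1:111122223333:key/secondary-example"
export S3_RECOVER_FROM_KMS_KEY_ARN="arn:aws:kms:us-east-1:111122223333:key/recovery-example"
```

Per the note above, keys set this way apply to new uploads; previously uploaded files are still read with the key they were written under.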
New file format for files uploaded to bucket storage that allows files larger than 2GB to be written to bucket storage. This may be turned on by setting the dynamic configuration `BucketStorageWriteVersion` to `3`. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.

New configuration `BUCKET_STORAGE_SSE_COMPATIBLE` that makes bucket storage not verify checksums of raw objects after uploading to S3. This option is turned on automatically if KMS is enabled (see `S3_STORAGE_KMS_KEY_ARN`) but is also available directly for use with other S3-compatible providers where even verifying the content length does not work.

Mini segments usually get merged if their event timestamps span more than `MAX_HOURS_SEGMENT_OPEN`. Mini segments created as part of backfilling did not follow this rule, but will now get merged if their ingest timestamps span more than `MAX_HOURS_SEGMENT_OPEN`.

Adds a new logger job that logs the age of an unmerged mini segment if the age exceeds the threshold set by the environment variable `MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING`. The default value of `MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING` is 2 x `MAX_HOURS_SEGMENT_OPEN`; `MAX_HOURS_SEGMENT_OPEN` defaults to 24 hours. The error log produced looks like: `Oldest unmerged miniSegment is older than the threshold thresholdMs={value} miniSegmentAgeMs={value} segment={value}`.

Introduced new dynamic configuration `QueryMemoryLimit`. It can be set using GraphQL. See also `LiveQueryMemoryLimit` for live queries. For more details, see Limits & Standards.
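Dynamic configurations like the ones above are set through the GraphQL API. A sketch, assuming the `setDynamicConfig` mutation in LogScale's schema (verify the exact input shape and the value format against your cluster's GraphQL schema before use):

```graphql
mutation {
  setDynamicConfig(input: { config: QueryMemoryLimit, value: "104857600" })
}
```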
Dashboards and Widgets
Applied stylistic changes for the Inspect Panel used in Widget Editor.
Dashboards can now be configured to not update after the initial search has completed. This mode is mainly meant to be used when a dashboard is interactive and not for wall-mounted monitors that should update continually. The feature can be accessed from the Dashboard properties panel when a dashboard is put in edit-mode. See Working in Edit Mode.
`Bar Chart` widget:
- The Y-axis can now start at values smaller than 1 for logarithmic scales, when the data contain small enough values.
- It now has an `Auto` setting for the Input Data Format property, see Wide or Long Input Format for details.
- Now works with bucket query results.
Added empty states for all widget types that will be rendered when there are no results.
When importing an existing dashboard with a static Shared time, recent changes in the time selection would make those dashboards live.
Introducing the `Heat Map` widget that visualizes aggregated data as a colorised grid.

The `Pie Chart` widget now uses the first column for the series as a fallback option.

The Dashboard page now displays the current cluster status.
`Note` widget:
- Default background color is now `Auto`.
- Introduced the text color configuration option.
Sorting of `Pie Chart` widget categories, descending by value. Categories grouped as `Others` will always be last.

The widget legend column width is now based on the custom series title (if specified) instead of the original series name.
The `Normalize` option for the `World Map` widget has been replaced by a third magnitude mode named `None`, which results in fixed size and opacity for all marks.

`Table` widgets will now break lines for newline characters in columns.

Better handling of dashboard connection issues during restarts and upgrades.

`Single Value` widget:
- Missing buckets are now shown as gaps on the sparkline.
- Isolated data points are now visualized as dots on the sparkline.
`Single Value` widget new configuration: deprecated field `use-colorised-thresholds` in favor of `color-method`.

`Single Value` widget Editor: the configuration option `Enable Thresholds` is being replaced by an option called `Method` under the Colors section.
Log Collector
The Log Collector download page has been enabled for on-prem deployments.
Functions
Added validation to the `field` and `key` parameters of the `join()` function, so empty lists will be rejected with a meaningful error message.

The `groupBy()` function now accepts `max` as the value for the `limit` parameter, which sets the limit to the largest allowed value (as configured by the dynamic configuration `GroupMaxLimit`).

Improved the phrasing of the warning shown when `groupBy()` exceeds the max or default limit.

Added validation to the `field` parameter of the `kvParse()` function, so empty lists will be rejected with a meaningful error message.
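For instance, the new `limit=max` value can be used like this (a sketch in the LogScale query language; the field name is hypothetical):

```
groupBy(field=statusCode, function=count(), limit=max)
```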
Other
All users no longer have access to the audit log or the search-all view by default. Access can be granted with permissions.
Bump the version of the Monaco code editor.
Streaming queries that fail to validate now return a message of why validation failed.
Fix a bug causing Humio's digest coordinator to allow nodes to take over digest without catching up to the current leader. This could cause the new leader to replay more data from Kafka than necessary.
Fixed an issue where query auto-completion sometimes wouldn't show the documentation for the suggested functions.
Adds a new metric for the temp disk usage. The metric name is `temp-disk-usage-bytes` and denotes how many bytes are used.

Added a log message with the maximum state size seen by the live part of live queries.
Include the requester in logs from QuerySessions when a live query is restarted or cancelled.
The audit log system repository on Cloud has been replaced with a view, so that dashboards etc. can be created on top of audit log data.
Make `BucketStorageUploadJob` only log at info level rather than error if a segment upload fails because the segment has been removed from the host. This can happen if node X tries to upload a segment, but node Y beats it to the punch. Node X may then choose to remove its copy before the upload completes.

When unregistering a node from a cluster, return a validation error if it is still alive. Hosts should be shut down before attempting to remove them from the cluster. This validation can be skipped using the same `accept-data-loss` parameter that also disables other validations for the unregistration endpoint.
Added a log of the approximate query result size before transmission to the frontend, captured by the `approximateResultBeforeSerialization` key.

Added a flag indicating whether a feature is experimental.
Added a log line for when a query exceeds its allotted memory quota.
The referrer meta tag for Humio has been changed from no-referrer to same-origin.
Compute the next set of Prometheus metrics in only a single thread at a time. If more requests arrive while a computation is running, the next request gets the previous response.
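The single-flight pattern described in the entry above can be sketched as follows (illustrative only, not LogScale's actual implementation; the class and method names are made up):

```python
import threading

class MetricsSingleFlight:
    """Only one thread computes a fresh metrics snapshot at a time; requests
    arriving while a computation is in flight get the previous snapshot."""

    def __init__(self, compute):
        self._compute = compute        # callable producing the metrics text
        self._busy = threading.Lock()  # held while a computation is running
        self._last = ""                # previous response, served when busy

    def get(self):
        # Try to become the single computing thread; do not block if busy.
        if self._busy.acquire(blocking=False):
            try:
                self._last = self._compute()
            finally:
                self._busy.release()
        return self._last
```

A request arriving while `_busy` is held simply returns `_last`, so slow metric computations never pile up behind each other.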
Make a number of improvements to the digest partition coordinator. The coordinator now tries harder to avoid assigning digest to nodes that are not caught up on fetching segments from the other nodes. It also does a better job unassigning digest from dead nodes in edge cases.
Fix an unhandled IO exception from `TempDirUsageJob`. The consequence of the uncaught exception was only noise in the error log.

Added a new action type that creates a CSV file from the query result and uploads it to Humio to be used with the `match()` query function. See Action Type: Upload File.

Java in the Docker images no longer has the `cap_net_bind_service` capability, and thus Humio cannot bind directly to privileged ports when running as a non-root user.

Added a warning when a multitenancy user changes data retention on an unlimited repository.
Improved performance of NDJSON format in S3 Archiving.
Fix a bug that could cause Humio to spuriously log errors warning about segments not being merged for datasources doing backfilling.
Humio now logs digest partition assignments regularly. The logs can be found using the query `class=*DigestLeadershipLoggerJob*`.

All feature flags now contain a textual description of what features are hidden behind the flag.

Adds a logger job for cluster management stats that logs the stats every 2 minutes, which makes them searchable in Humio. The logs belong to the class `c.h.c.ClusterManagementStatsLoggerJob`; logs for all segments contain `globalSegmentStats`, while logs about individual segments start with `segmentStats`.

Removed the remains of default groups and roles. The concept was replaced with UserRoles.
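A search along these lines should surface the stats entries described above (a sketch in the LogScale query language; the class name is quoted from the entry):

```
class = "c.h.c.ClusterManagementStatsLoggerJob"
```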
Fixed in this release
Security
Update Netty to address CVE-2022-24823.
Bump javax.el to address CVE-2021-28170.
Falcon Data Replicator
FDR Ingest will no longer fail on events that are larger than the maximum allowed event size. Instead, such messages will be truncated.
UI Changes
Prevent the UI from showing errors for minor connection issues while restarting.
Websocket connections are now kept open when transitioning pages, and are used more efficiently for syntax highlighting.
Fixed an issue where some warnings would show twice.
Intermittent network issues are no longer reported immediately as an error in the UI.
Cloud: Updated the layout for license key page.
Fix the dropdown menus closing too early on the home page.
Fixed a bug where the "=" and "/=" buttons did not appear on cells in the event list where they should.
When viewing the events behind e.g. a Time Chart, the events will now only display the `@timestamp` and `@rawstring` columns.
GraphQL API
Fix the `assets` GraphQL query in organizations with views that are not 1-to-1 linked.
Configuration
Fixed a bug that could result in merging small ("undersized") segments even if the resulting segment would then have a wider than desired time span. The goal is to not produce segments that span more than 10% of the time-based retention setting for the repository. If no time-based retention is configured on the repository, then 3 times the value of the configuration variable `MAX_HOURS_SEGMENT_OPEN` is applied as the limit. For default settings, that results in 72 hours.

Fixed an issue where event forwarding still showed as beta.
Fixed an issue where deleting events from a mini-segment could result in the merge of those mini-segments into the resulting target segment never being executed.
Index in block needs reading from blockwriter before adding each item.
Fixed a bug where the @id field of events in live queries was off by one.
Dashboards and Widgets
The theme toggle on a shared dashboard was moved to the header panel and no longer overlaps with any widgets.
The `Time Chart` widget regression line is no longer affected by the interpolation setting.
Functions
Fixed a bug where using `eval` as an argument to a function would result in a confusing error message.

Fixed a bug where `ioc:lookup()` would sometimes give incorrect results when negated.

Revised some of the error messages and warnings regarding `join()` and `selfJoin()`.

Fixed a bug where the `writeJson()` function would write any field starting with a case-insensitive `inf` or `infinity` prefix as a null value in the resulting JSON.
Other
Make streaming queries search segments newest-to-oldest rather than oldest-to-newest. Streaming queries do not ensure the order of exported events anyway, and searching newest-to-oldest is more efficient.
Fix a bug where changing a role for a user under a repository would trigger infinite network requests.
Centralise the decision about names of files in the bucket, allowing more than one variant.
Improved hover messages for strings.
Fixed an issue where query auto-completion would sometimes delete existing parentheses.
If a segment is deleted or otherwise disappears from global while Humio is attempting to upload it to bucket storage, the upload will now be dropped with an info-level log, rather than requeued with an error log.
Fixes the placement of a confirmation dialog when attempting to change retention.
Humio will now clean up its tmp directories by deleting all "humiotmp" directories in the data directory when terminating gracefully.
Fix a regression in the launcher script causing `JVM_LOG_DIR` to not be evaluated relative to the Humio base install path. All paths in the launcher script should now be relative to the base install path, which is the directory containing the bin folder.
Fix a bug that could cause Humio to attempt to merge mini-segments from one datasource into a segment in another datasource, causing an error to be thrown.
When configuring thread priorities, Humio will no longer attempt to call the native setpriority function. It will instead only call the Java API for setting thread priority.
Fixed an issue where JSON parsing on ingest and in the query language was inefficient for large JSON objects.
Fix performance issue for users with access to many views.
Improve file path handling in DiskSpaceJob to eliminate edge cases where the job might not have been able to tell if a file was on primary or secondary storage.
Fix typo in `Unregisters node` text on the cluster admin UI.

Fixed an issue where event forwarder properties were not properly validated.
Reduced the timeout used when testing event forwarders in order to get a better error when timeouts happen.
Fix a bug that could cause a `NumberFormatException` to be thrown from `ZooKeeperStatsClient`.
.Fix a bug causing digesters to continue digesting even if the local disk is full. The digester will now pause digesting and error log if this occurs.
Fix response entities not being discarded in error cases for the `proxyqueryjobs` endpoint, which could cause warnings in the log.

Update `org.json:json` to address a vulnerability that could cause stack overflows.

Fix an issue causing the event forwarding feature to incorrectly reject topic names that contained a dash (`-`).

Fix an issue that could rarely cause exceptions to be thrown from `Segments.originalBytesWritten`, causing noise in the log.

Fix an issue causing Humio to create a large number of temporary directories in the data directory.
Bump woodstox to address SNYK-JAVA-COMFASTERXMLWOODSTOX-2928754.
Some error messages wrongly pointed to the beginning of the query.
Kafka is upgraded to 3.2.0 in the Docker images and in the Humio dependencies.
Fixed an issue where strings like `Nana` and `Information` could be interpreted as `NaN` (not-a-number) and `infinity`, respectively.

Fixed a bug where multiline comments weren't always highlighted correctly.
Humio Server 1.50.0 GA (2022-08-02)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.50.0 | GA | 2022-08-02 | Cloud | 2023-08-31 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and an updated dependency, released to cloud only.
New features and improvements
UI Changes
The design of the Time Selector has been updated, and it now features a button on the dashboard page. See Time Interval Settings.
Adds an icon and a hint to a disabled side navigation menu item that tells the user the reason for it being disabled.
When editing an email action in the UI and adding multiple recipients, it is now possible to add a space after the comma in the comma-separated list of recipients.
Documentation
All documentation links have been updated after the restructuring of the documentation site. Please contact support if you experience any broken links.
GraphQL API
The GraphQL API mutation updateDashboard has been updated to take a new argument updateFrequency, which can currently only be `NEVER` or `REALTIME`. These correspond, respectively, to "dashboard where queries are never updated after first completion" and "dashboard where query results are updated indefinitely".
Dashboards and Widgets
Dashboards can now be configured to not update after the initial search has completed. This mode is mainly meant to be used when a dashboard is interactive and not for wall-mounted monitors that should update continually. The feature can be accessed from the Dashboard properties panel when a dashboard is put in edit-mode. See Working in Edit Mode.
Functions
Added validation to the `field` parameter of the `top()` function, so empty lists will be rejected with a meaningful error message.

Added validation to the `field` and `key` parameters of the `join()` function, so empty lists will be rejected with a meaningful error message.

Improved the phrasing of the warning shown when `groupBy()` exceeds the max or default limit.

Added validation to the `field` parameter of the `kvParse()` function, so empty lists will be rejected with a meaningful error message.
Other
Streaming queries that fail to validate now return a message of why validation failed.
Fixed an issue where query auto-completion sometimes wouldn't show the documentation for the suggested functions.
Added a new action type that creates a CSV file from the query result and uploads it to Humio to be used with the `match()` query function. See Action Type: Upload File.

Humio now logs digest partition assignments regularly. The logs can be found using the query `class=*DigestLeadershipLoggerJob*`.
Fixed in this release
GraphQL API
Fix the `assets` GraphQL query in organizations with views that are not 1-to-1 linked.
Configuration
Fixed an issue where validation of `+/- Infinity` as integer arguments would crash.

Fixed an issue where event forwarding still showed as beta.
Functions
Fixed an issue where `join()` would not produce the correct results when `mode=left` was set.
Other
Fixed an issue where query auto-completion would sometimes delete existing parentheses.
Fixed an issue where JSON parsing on ingest and in the query language was inefficient for large JSON objects.
Fix performance issue for users with access to many views.
Fix an issue that could rarely cause exceptions to be thrown from `Segments.originalBytesWritten`, causing noise in the log.

Fix an issue causing Humio to create a large number of temporary directories in the data directory.
Humio Server 1.49.1 GA (2022-07-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.49.1 | GA | 2022-07-26 | Cloud | 2023-08-31 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and an updated dependency, released to cloud only.
Removed
Items that have been removed as of this release.
API
The deprecated REST API for parsers has been removed.
New features and improvements
UI Changes
The Save As... button is now always displayed on the Search page, see it described at Saving Searches.
Automation and Alerts
Fixed a bug where an alert with name longer than 50 characters could not be edited.
Functions
Other
Make `BucketStorageUploadJob` only log at info level rather than error if a segment upload fails because the segment has been removed from the host. This can happen if node X tries to upload a segment, but node Y beats it to the punch. Node X may then choose to remove its copy before the upload completes.

Fix an unhandled IO exception from `TempDirUsageJob`. The consequence of the uncaught exception was only noise in the error log.

Java in the Docker images no longer has the `cap_net_bind_service` capability, and thus Humio cannot bind directly to privileged ports when running as a non-root user.
Packages
Parser installation will now be ignored when installing a package into a system repository.
Fixed in this release
UI Changes
Fixed an issue where some warnings would show twice.
Functions
Revised some of the error messages and warnings regarding `join()` and `selfJoin()`.
Other
Fix a bug that could cause a `NumberFormatException` to be thrown from `ZooKeeperStatsClient`.

Fixed an issue where strings like `Nana` and `Information` could be interpreted as `NaN` (not-a-number) and `infinity`, respectively.
Humio Server 1.49.0 Not Released (2022-07-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.49.0 | Not Released | 2022-07-26 | Internal Only | 2023-07-31 | No | 1.30.0 | No |
Available for download two days after release.
Not released.
Humio Server 1.48.1 GA (2022-07-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.48.1 | GA | 2022-07-19 | Cloud | 2023-08-31 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and an updated dependency, released to cloud only.
Removed
Items that have been removed as of this release.
Installation and Deployment
Remove the following feature flags and their usage: `EnterpriseLogin`, `OidcDynamicIdpProviders`, `UsagePage`, `RequestToActivity`, `CommunityNewDemoData`.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated the `enabledFeatures` query. Use the new `featureFlags` query instead.
New features and improvements
UI Changes
Add missing accessibility features to the login page.
The Live checkbox is now no longer checked automatically when changing the value of the time window in the Time Selector. See Changing Time Interval for details.
Updated styling on the login pages.
GraphQL API
Expose a new GraphQL type with feature flag descriptions and whether they are experimental.
Other
Include the requester in logs from QuerySessions when a live query is restarted or cancelled.
Added detection and handling of all queries being blocked during Humio upgrades.
Added a flag indicating whether a feature is experimental.
All feature flags now contain a textual description of what features are hidden behind the flag.
Fixed in this release
UI Changes
When viewing the events behind e.g. a Time Chart, the events will now only display the `@timestamp` and `@rawstring` columns.
Dashboards and Widgets
The theme toggle on a shared dashboard was moved to the header panel and no longer overlaps with any widgets.
Other
Fixes the placement of a confirmation dialog when attempting to change retention.
Fix response entities not being discarded in error cases for the `proxyqueryjobs` endpoint, which could cause warnings in the log.
Humio Server 1.48.0 Not Released (2022-07-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.48.0 | Not Released | 2022-07-19 | Internal Only | 2023-07-31 | No | 1.30.0 | No |
Available for download two days after release.
Not released.
Humio Server 1.47.1 GA (2022-07-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.47.1 | GA | 2022-07-12 | Cloud | 2023-08-31 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and an updated dependency, released to cloud only.
New features and improvements
Falcon Data Replicator
FDR polling is now turned on by default. Whether FDR polling should be turned on or off on a node can be configured using the `ENABLE_FDR_POLLING_ON_NODE` configuration variable.
UI Changes
If Humio fails to start because the cluster is being upgraded, a dedicated message will show when launching the UI.
GraphQL API
Added preview fields `isClusterBeingUpdated` and `minimumNodeVersion` to the GraphQL Cluster object type.

Added a new dynamic configuration flag `QueryResultRowCountLimit` that globally limits how many results (events) a query can return. This flag can be set by administrators through GraphQL. See Limits & Standards for more details.
Configuration
Added a new dynamic configuration `GroupDefaultLimit`. It can be set through GraphQL. See Limits & Standards for details. If you've changed the value of `MAX_STATE_LIMIT`, we recommend that you also change `GroupDefaultLimit` and `GroupMaxLimit` to the same value for a seamless upgrade; see `groupBy()` for details.

Introduced new dynamic configuration `LiveQueryMemoryLimit`. It can be set using GraphQL. See Limits & Standards for details.

Introduced new dynamic configuration `JoinRowLimit`. It can be set using GraphQL and can be used as an alternative to the environment variable `MAX_JOIN_LIMIT`. If `JoinRowLimit` is set, then its value will be used instead of `MAX_JOIN_LIMIT`. If it is not set, then `MAX_JOIN_LIMIT` will be used.

Introduced new dynamic configuration `StateRowLimit`. It can be set using GraphQL. See Limits & Standards for details.

Introduced new dynamic configuration `GroupMaxLimit`. It can be set using GraphQL. See Limits & Standards for details.

Support for KMS on S3 buckets for Bucket Storage. Specify the full ARN of the key. The `key_id` is persisted in the internal `BucketEntity`, so that a later change of the key ID used for uploads still makes Humio refer to the old key ID when downloading files uploaded with the previous key. Setting a new value for the target key results in a fresh internal bucket entity that tracks which files used KMS and which did not. For simplicity, it is recommended not to mix KMS and non-KMS configurations on the same S3 bucket.

- New configuration variable `S3_STORAGE_KMS_KEY_ARN` that specifies the KMS key to use.
- New configuration variable `S3_STORAGE_2_KMS_KEY_ARN` for the 2nd bucket key.
- New configuration variable `S3_RECOVER_FROM_KMS_KEY_ARN` for the recovery bucket key.

New configuration `BUCKET_STORAGE_SSE_COMPATIBLE` that makes bucket storage not verify checksums of raw objects after uploading to S3. This option is turned on automatically if KMS is enabled (see `S3_STORAGE_KMS_KEY_ARN`) but is also available directly for use with other S3-compatible providers where even verifying the content length does not work.

Mini segments usually get merged if their event timestamps span more than `MAX_HOURS_SEGMENT_OPEN`. Mini segments created as part of backfilling did not follow this rule, but will now get merged if their ingest timestamps span more than `MAX_HOURS_SEGMENT_OPEN`.

Introduced new dynamic configuration `QueryMemoryLimit`. It can be set using GraphQL. See also `LiveQueryMemoryLimit` for live queries. For more details, see Limits & Standards.
Dashboards and Widgets
Applied stylistic changes for the Inspect Panel used in Widget Editor.
`Table` widgets will now break lines for newline characters in columns.
Other
All users no longer have access to the audit log or the search-all view by default. Access can be granted with permissions.
The audit log system repository on Cloud has been replaced with a view, so that dashboards etc. can be created on top of audit log data.
Improved performance of validation of keys in tags.
The referrer meta tag for Humio has been changed from no-referrer to same-origin.
Compute next set of Prometheus metrics only in a single thread concurrently. If more requests arrive, then the next request gets the previous response.
Fix a bug that could cause Humio to spuriously log errors warning about segments not being merged for datasources doing backfilling.
Fixed in this release
Functions
Fixed a bug where using `eval` as an argument to a function would result in a confusing error message.
Other
Fix typo in `Unregisters node` text on the cluster admin UI.
Humio Server 1.47.0 Not Released (2022-07-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.47.0 | Not Released | 2022-07-12 | Internal Only | 2023-07-31 | No | 1.30.0 | No |
Available for download two days after release.
Not released.
Humio Server 1.46.0 GA (2022-07-05)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.46.0 | GA | 2022-07-05 | Cloud | 2023-08-31 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and an updated dependency, released to cloud only.
New features and improvements
UI Changes
Fixed an issue where, in lists of users with user avatars containing user initials, the current user would sometimes appear to have an opening parenthesis as their last initial.
New styling of errors on search and dashboard pages.
GraphQL API
Improved error messaging of GraphQL queries and mutations for alerts, scheduled searches and actions in cases where a given repository or view cannot be found.
Dashboards and Widgets
Added empty states for all widget types that will be rendered when there are no results.
Introducing the `Heat Map` widget that visualizes aggregated data as a colorised grid.

Sorting of `Pie Chart` widget categories, descending by value. Categories grouped as `Others` will always be last.

The widget legend column width is now based on the custom series title (if specified) instead of the original series name.
Better handling of dashboard connection issues during restarts and upgrades.
Other
Fix a bug causing Humio's digest coordinator to allow nodes to take over digest without catching up to the current leader. This could cause the new leader to replay more data from Kafka than necessary.
When unregistering a node from a cluster, return a validation error if it is still alive. Hosts should be shut down before attempting to remove them from the cluster. This validation can be skipped using the same `accept-data-loss` parameter that also disables other validations for the unregistration endpoint.
Removed the remains of default groups and roles. The concept was replaced with UserRoles.
Fixed in this release
Falcon Data Replicator
FDR Ingest will no longer fail on events that are larger than the maximum allowed event size. Instead, such messages will be truncated.
UI Changes
Intermittent network issues are no longer reported immediately as an error in the UI.
Configuration
Fixed a bug that could result in merging small ("undersized") segments even if the resulting segment would then have a wider than desired time span. The goal is to not produce segments that span more than 10% of the time-based retention setting for the repository. If no time-based retention is configured on the repository, then 3 times the value of the configuration variable `MAX_HOURS_SEGMENT_OPEN` is applied as the limit. For default settings, that results in 72 hours.
Functions
Fixed a bug where the `writeJson()` function would write any field starting with a case-insensitive `inf` or `infinity` prefix as a null value in the resulting JSON.
Other
Kafka is upgraded to 3.2.0 in the Docker images and in the Humio dependencies.
Humio Server 1.45.0 GA (2022-06-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.45.0 | GA | 2022-06-28 | Cloud | 2023-08-31 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and an updated dependency, released to cloud only.
New features and improvements
Configuration
Adds a new metric for measuring the merge latency, which is defined as the latency between the last mini-segment being written in a sequence with the same merge target, and those mini-segments being merged. The metric name is `segment-merge-latency-ms`.

Adds a new logger job that logs the age of an unmerged mini segment if the age exceeds the threshold set by the environment variable `MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING`. The default value of `MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING` is 2 x `MAX_HOURS_SEGMENT_OPEN`; `MAX_HOURS_SEGMENT_OPEN` defaults to 24 hours. The error log produced looks like: `Oldest unmerged miniSegment is older than the threshold thresholdMs={value} miniSegmentAgeMs={value} segment={value}`.
.
Dashboards and Widgets
Other
Bump the version of the Monaco code editor.
Added a log message with the maximum state size seen by the live part of live queries.
Fixed in this release
UI Changes
Websocket connections are now kept open when transitioning pages, and are used more efficiently for syntax highlighting.
Fix the dropdown menus closing too early on the home page.
Dashboards and Widgets
The
Time Chart
widget regression line is no longer affected by the interpolation setting.
Other
Make streaming queries search segments newest-to-oldest rather than oldest-to-newest. Streaming queries do not ensure the order of exported events anyway, and searching newest-to-oldest is more efficient.
Fix a bug causing digesters to continue digesting even if the local disk is full. The digester will now pause digesting and error log if this occurs.
Bump woodstox to address SNYK-JAVA-COMFASTERXMLWOODSTOX-2928754.
Some error messages wrongly pointed to the beginning of the query.
Humio Server 1.44.0 GA (2022-06-21)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.44.0 | GA | 2022-06-21 | Cloud | 2023-08-31 | No | 1.30.0 | No |
Available for download two days after release.
Bug fixes and an updated dependency, released to cloud only.
Removed
Items that have been removed as of this release.
API
The deprecated REST API for actions has been removed, except for the endpoint for testing an action.
New features and improvements
UI Changes
Improved keyboard accessibility for creating repositories and views.
Toggle switches anywhere in the UI can now be focused using the Tab key and operated using the keyboard.
Configuration
Default value of the configuration variable
S3_ARCHIVING_WORKERCOUNT
raised from 1 to (vCPU/4).
Introduced the new dynamic configuration
StateRowLimit
. It can be set using GraphQL.
Introduced the new dynamic configuration flag
JoinRowLimit
. It can be set using GraphQL. The flag can be used as an alternative to the environment variable
MAX_JOIN_LIMIT
. If the
JoinRowLimit
flag is set, then its value will be used instead of
MAX_JOIN_LIMIT
. If it is not set, then
MAX_JOIN_LIMIT
will be used.
Introduced the new dynamic configuration
QueryMemoryLimit
. It can be set using GraphQL. The flag replaces the environment variable
MAX_MEMORY_FOR_REDUCE
, so if you have changed the value of
MAX_MEMORY_FOR_REDUCE
, please use
QueryMemoryLimit
instead. See Limits & Standards for more details.
Added a link to the humio-activity repository, for debugging IDP configurations, to the IDP setup page.
Added a new environment variable
GROUPBY_DEFAULT_LIMIT
which sets the default value for the
limit
parameter of
groupBy()
. See the
groupBy()
documentation for details.
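The precedence between the dynamic configuration and the environment variable described above can be sketched as a small resolution function. This is an illustrative sketch of the stated rule (dynamic `JoinRowLimit` wins when set, else `MAX_JOIN_LIMIT`), not LogScale's implementation:

```python
from typing import Optional

def effective_join_row_limit(dynamic_join_row_limit: Optional[int],
                             env: dict) -> Optional[int]:
    """Return the limit in effect: JoinRowLimit overrides MAX_JOIN_LIMIT."""
    if dynamic_join_row_limit is not None:
        return dynamic_join_row_limit
    raw = env.get("MAX_JOIN_LIMIT")
    return int(raw) if raw is not None else None
```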
Dashboards and Widgets
Bar Chart
widget:
The Y-axis can now start at smaller values than 1 for logarithmic scales, when the data contain small enough values.
It now has an
Auto
setting for the Input Data Format property, see Wide or Long Input Format for details.
Now works with bucket query results.
The dashboards page now displays the current cluster status.
The
Normalize
option for the
World Map
widget has been replaced by a third magnitude mode named
None
, which results in fixed size and opacity for all marks.
Single Value
widget:
Missing buckets are now shown as gaps on the sparkline.
Isolated data points are now visualized as dots on the sparkline.
Log Collector
The Log Collector download page has been enabled for on-prem deployments.
Other
Adds a new metric for the temp disk usage. The metric name is
temp-disk-usage-bytes
and denotes how many bytes are used.
Added a log of the approximate query result size before transmission to the frontend, captured by the
approximateResultBeforeSerialization
key.
Added a warning when a multitenancy user is changing data retention on an unlimited repository.
Improved performance of the NDJSON format in S3 Archiving.
Adds a logger job for cluster management stats. It logs the stats every 2 minutes, which makes them searchable in Humio. The logs belong to the class
c.h.c.ClusterManagementStatsLoggerJob
; logs covering all segments contain
globalSegmentStats
, while logs about individual segments start with
segmentStats
.
Fixed in this release
Security
Update Netty to address CVE-2022-24823.
Bump javax.el to address CVE-2021-28170.
Other
Fix a bug where changing a role for a user under a repository would trigger infinite network requests.
If a segment is deleted or otherwise disappears from global while Humio is attempting to upload it to bucket storage, the upload will now be dropped with an info-level log, rather than requeued with an error log.
Improve file path handling in DiskSpaceJob to eliminate edge cases where the job might not have been able to tell if a file was on primary or secondary storage.
Update
org.json:json
to address a vulnerability that could cause stack overflows.
Fix an issue causing the event forwarding feature to incorrectly reject topic names that contained a dash
(-)
.
Fixed a bug where multiline comments weren't always highlighted correctly.
Humio Server 1.42.2 LTS (2022-10-05)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.42.2 | LTS | 2022-10-05 | Cloud | 2023-06-30 | No | 1.30.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | c129f5c1d5d5cf469abdfd604267b6d6 |
SHA1 | 737393d0e02fc6a3e5599a20d3b7f640d0ce3347 |
SHA256 | e283e268d4d268a21f0e7fe2f6ff08582a96cb6d37dd28b020670eb3600a8e86 |
SHA512 | 6cc3f4ad41b46d71ff67db7a4720e1a027583938ec1158b1fa45de8943030caf70e4b7b50dea9c42c92b3bb6d9ab335783a9b5f12dd9b79e240e4eadceac17ba |
Docker Image | SHA256 Checksum |
---|---|
humio | 4a5399bb43705c9e2f95d745fd87766552d9f1ef74d459feb3e10244b958d37e |
humio-core | 070dff4cd2a472994ad4999d36b98c1592fb6be703c9f2ae590f278dc47339e4 |
kafka | a5447202b222b70d6569e03668056e6059621894d3287f6d92f0e4ecbfb50a34 |
zookeeper | 6487f4600d004f1ea797a86bef307781ac8840b438c28c8405ffdfd334205f99 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.42.2/server-1.42.2.tar.gz
These notes include entries from the following previous releases: 1.42.0, 1.42.1
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The Feature Flag, CookieAuthServerSide, has been deprecated as cookie authentication is now enabled by default. Instead, the configuration field
ENABLE_BEARER_TOKEN_AUTHORIZATION
has been introduced.
The local disk based backup feature described at Making Back-Ups is deprecated, and is planned for removal in September 2022. We have found that restoring backups using this feature is difficult in practice, it is not commonly used, and the backup/restore functionality is covered by the bucket storage feature as well. For these reasons, we are deprecating this feature in favour of bucket storage.
The
DELETE_BACKUP_AFTER_MILLIS
configuration parameter, which controls the delay between data being deleted in Humio and removed from backup, will be retained, since it controls a similar delay for bucket storage. Customers using local disk based backups should migrate to using bucket storage instead. Systems not wishing to use a cloud bucket storage solution can keep backup support by instead installing an on-prem S3- or GCS-compatible solution, such as MinIO.
New features and improvements
Falcon Data Replicator
Added the
fdr-message-count
metric, which contains the approximate number of messages on an FDR feed's SQS queue.
Added the
fdr-invisible-message-count
metric, which contains the approximate number of invisible messages on an FDR feed's SQS queue.
Improved error logging when an FDR feed fails to download data from an S3 bucket. It now clearly states when a download failed because the S3 bucket is located in a different region than the SQS queue.
UI Changes
The Format Panel is now available for changing the style of the data displayed in the Event list — see Changing the Data Display.
The button is now always displayed on the Search page.
Both the
Scatter Chart
and the
Bar Chart
widgets now support automatically adding/toggling axis and legend titles based on the mapped data.
The Fields Panel now enables you to fetch fields beyond those from the last 200 events — see Adding and Removing Fields.
Configuration
Improve the error message if Humio is configured to use bucket storage, but the credentials for the bucket are not configured.
Dashboards and Widgets
The
Single Value
widget is now available. Construct a query which returns any single value, or use the
timeChart()
query function to create a single-value widget instance with sparkline and trend indicators.
The
Gauge
widget is being deprecated in favour of the
Single Value
widget. Configurations of the former widget are compatible with the latter. This means that persisted configurations of the Gauge widget (url / dashboard widgets / saved queries / recent queries) are still valid, but are visualized using the Single Value widget instead.
Log Collector
The Humio Log Collector can now be downloaded from the Organizational Settings page; see the Log Collector Documentation for a complete list of supported log formats and operating systems.
Functions
ioc:lookup()
would sometimes give incorrect results when negated.
worldMap()
accepts more magnitude functions, anonymous functions and the
percentile()
function.
worldMap()
will warn about licensing issues with the IP database.
sankey()
now accepts more weight functions such as anonymous functions and the
percentile()
function.
Other
Fixed an issue where Humio's ZooKeeper monitoring page would show X/0 followers in sync.
Fixed an issue that if download of IOCs took more than an hour, Humio would indefinitely start a new download every hour which would eventually fail.
Fixed an underlying bug causing
addToExistingJob did not find the existing job
to be error logged unnecessarily. Humio may decide to fetch a segment from bucket storage for querying. If this decision is made right as the query is cancelled, Humio could log the message above. With this fix, Humio will instead skip downloading the segment, and not log the error.
Ensured that errors during view tombstone removal are logged and don't prevent the RetentionJob from performing other cleanup tasks.
Email actions can now add the result set as a
CSV
attachment.
When cleaning up a deleted data space, don't error log if two nodes race to delete the data space metadata from global.
Logging to the humio-activity repository is now also done for events in sandbox repositories.
Specifying a versionless packageId will load the newest version of that package.
Fixed an issue where a scheduled search could trigger actions multiple times for the same time period if actions took a long time to finish.
Fixed in this release
Security
Update Scala to address CVE-2022-36944.
Other
Compute the next set of Prometheus metrics in only a single thread at a time. If more requests arrive while a computation is in progress, they receive the previously computed response.
Fix performance issue for users with access to many views.
Updated dependencies to woodstox to fix a vulnerability.
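The single-threaded metrics computation above is a common "single-flight" pattern. The sketch below is an illustrative reconstruction (the class and its names are hypothetical, not Humio's code): at most one thread computes a fresh scrape, and concurrent callers are served the previous response:

```python
import threading

class SingleFlightMetrics:
    def __init__(self, compute):
        self._compute = compute          # expensive metrics computation
        self._lock = threading.Lock()
        self._last_response = ""

    def scrape(self) -> str:
        # Only one thread at a time may recompute the metrics.
        if self._lock.acquire(blocking=False):
            try:
                self._last_response = self._compute()
            finally:
                self._lock.release()
            return self._last_response
        # Another thread is computing: serve the previous response.
        return self._last_response
```

This bounds the cost of concurrent scrapes at one computation, at the price of occasionally serving slightly stale metrics.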
Humio Server 1.42.1 LTS (2022-07-18)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.42.1 | LTS | 2022-07-18 | Cloud | 2023-06-30 | No | 1.30.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 0d84ae2bac065c74f033b3f510812cf6 |
SHA1 | aa21f410e29c6189b56526c42d4414398544f3e3 |
SHA256 | 6b59d546a2f271d499b857b290f6c283b90282a5d9a213bc037652367f258c4d |
SHA512 | 688242a795f9526828841faabc34e59e048439a1c748842d0a0fdf8eb8dffa99ce2f5281fa1c067dbbbac3db0494af006f46d3917de4f85c73372d536a102d97 |
Docker Image | SHA256 Checksum |
---|---|
humio | d83d0ea9247637ce91d2b84123659858624fe329583e1c6fe6900bd4eaa4acae |
humio-core | 14ec97ed7bf950b0e0669627babebfb3e5ca67b38cc40d116254567ee4e9199d |
kafka | e9a4ee28381265a1b80071257996003a9bea0e29234a6394477d8859cb546e68 |
zookeeper | b377e734d662fe80b4dbc6b355965694464e437a2e0855272b9652ac9225b050 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.42.1/server-1.42.1.tar.gz
These notes include entries from the following previous releases: 1.42.0
Bug fixes and an updated dependency.
Deprecation
Items that have been deprecated and may be removed in a future release.
The Feature Flag, CookieAuthServerSide, has been deprecated as cookie authentication is now enabled by default. Instead, the configuration field
ENABLE_BEARER_TOKEN_AUTHORIZATION
has been introduced.
The local disk based backup feature described at Making Back-Ups is deprecated, and is planned for removal in September 2022. We have found that restoring backups using this feature is difficult in practice, it is not commonly used, and the backup/restore functionality is covered by the bucket storage feature as well. For these reasons, we are deprecating this feature in favour of bucket storage.
The
DELETE_BACKUP_AFTER_MILLIS
configuration parameter, which controls the delay between data being deleted in Humio and removed from backup, will be retained, since it controls a similar delay for bucket storage. Customers using local disk based backups should migrate to using bucket storage instead. Systems not wishing to use a cloud bucket storage solution can keep backup support by instead installing an on-prem S3- or GCS-compatible solution, such as MinIO.
New features and improvements
Falcon Data Replicator
Added the
fdr-message-count
metric, which contains the approximate number of messages on an FDR feed's SQS queue.
Added the
fdr-invisible-message-count
metric, which contains the approximate number of invisible messages on an FDR feed's SQS queue.
Improved error logging when an FDR feed fails to download data from an S3 bucket. It now clearly states when a download failed because the S3 bucket is located in a different region than the SQS queue.
UI Changes
The Format Panel is now available for changing the style of the data displayed in the Event list — see Changing the Data Display.
The button is now always displayed on the Search page.
Both the
Scatter Chart
and the
Bar Chart
widgets now support automatically adding/toggling axis and legend titles based on the mapped data.
The Fields Panel now enables you to fetch fields beyond those from the last 200 events — see Adding and Removing Fields.
Configuration
Improve the error message if Humio is configured to use bucket storage, but the credentials for the bucket are not configured.
Dashboards and Widgets
The
Single Value
widget is now available. Construct a query which returns any single value, or use the
timeChart()
query function to create a single-value widget instance with sparkline and trend indicators.
The
Gauge
widget is being deprecated in favour of the
Single Value
widget. Configurations of the former widget are compatible with the latter. This means that persisted configurations of the Gauge widget (url / dashboard widgets / saved queries / recent queries) are still valid, but are visualized using the Single Value widget instead.
Log Collector
The Humio Log Collector can now be downloaded from the Organizational Settings page; see the Log Collector Documentation for a complete list of supported log formats and operating systems.
Functions
ioc:lookup()
would sometimes give incorrect results when negated.
worldMap()
accepts more magnitude functions, anonymous functions and the
percentile()
function.
worldMap()
will warn about licensing issues with the IP database.
sankey()
now accepts more weight functions such as anonymous functions and the
percentile()
function.
Other
Fixed an issue where Humio's ZooKeeper monitoring page would show X/0 followers in sync.
Fixed an issue that if download of IOCs took more than an hour, Humio would indefinitely start a new download every hour which would eventually fail.
Fixed an underlying bug causing
addToExistingJob did not find the existing job
to be error logged unnecessarily. Humio may decide to fetch a segment from bucket storage for querying. If this decision is made right as the query is cancelled, Humio could log the message above. With this fix, Humio will instead skip downloading the segment, and not log the error.
Ensured that errors during view tombstone removal are logged and don't prevent the RetentionJob from performing other cleanup tasks.
Email actions can now add the result set as a
CSV
attachment.
When cleaning up a deleted data space, don't error log if two nodes race to delete the data space metadata from global.
Logging to the humio-activity repository is now also done for events in sandbox repositories.
Specifying a versionless packageId will load the newest version of that package.
Fixed an issue where a scheduled search could trigger actions multiple times for the same time period if actions took a long time to finish.
Fixed in this release
Other
Compute the next set of Prometheus metrics in only a single thread at a time. If more requests arrive while a computation is in progress, they receive the previously computed response.
Updated dependencies to woodstox to fix a vulnerability.
Humio Server 1.42.0 LTS (2022-06-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.42.0 | LTS | 2022-06-17 | Cloud | 2023-06-30 | No | 1.30.0 | Yes |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 6f0306d98c1f4931a083fe9211841ea1 |
SHA1 | afee8f0c7c705b0c6b54c5a3321cf7ed6bbfc6f2 |
SHA256 | 0167bd6ae46368db74434133b9881380fd6c389c9ea35ac5612d9a643366d3ac |
SHA512 | e66acd012641e19d6c8c7613a56302c66138902912eb3aaa2157fef4199b6e77b20367be49bd90f2b430573b6317bd7214600436edfaf52cbffc108c0c414fd6 |
Docker Image | SHA256 Checksum |
---|---|
humio | 37cb34cccd8e0b749308f96bc49cd67c92ca6af537e5c01cec0ecf4d26d2712f |
humio-core | 958fc373d5cf0da75861981cbf6fb3cead3c41388480d72b4bb7ad85734bc0cd |
kafka | aea9942f9b789b058984dc348fa91657c8be210ad9d6f85ed8e0fdd6f09e408d |
zookeeper | d52cd8f48f55cf09db7426a746b536face085b8b919ed1cdc1ce57f7e74d450e |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.42.0/server-1.42.0.tar.gz
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The Feature Flag, CookieAuthServerSide, has been deprecated as cookie authentication is now enabled by default. Instead, the configuration field
ENABLE_BEARER_TOKEN_AUTHORIZATION
has been introduced.
The local disk based backup feature described at Making Back-Ups is deprecated, and is planned for removal in September 2022. We have found that restoring backups using this feature is difficult in practice, it is not commonly used, and the backup/restore functionality is covered by the bucket storage feature as well. For these reasons, we are deprecating this feature in favour of bucket storage.
The
DELETE_BACKUP_AFTER_MILLIS
configuration parameter, which controls the delay between data being deleted in Humio and removed from backup, will be retained, since it controls a similar delay for bucket storage. Customers using local disk based backups should migrate to using bucket storage instead. Systems not wishing to use a cloud bucket storage solution can keep backup support by instead installing an on-prem S3- or GCS-compatible solution, such as MinIO.
New features and improvements
Falcon Data Replicator
Added the
fdr-message-count
metric, which contains the approximate number of messages on an FDR feed's SQS queue.
Added the
fdr-invisible-message-count
metric, which contains the approximate number of invisible messages on an FDR feed's SQS queue.
Improved error logging when an FDR feed fails to download data from an S3 bucket. It now clearly states when a download failed because the S3 bucket is located in a different region than the SQS queue.
UI Changes
The Format Panel is now available for changing the style of the data displayed in the Event list — see Changing the Data Display.
Both the
Scatter Chart
and the
Bar Chart
widgets now support automatically adding/toggling axis and legend titles based on the mapped data.
The Fields Panel now enables you to fetch fields beyond those from the last 200 events — see Adding and Removing Fields.
Configuration
Improve the error message if Humio is configured to use bucket storage, but the credentials for the bucket are not configured.
Dashboards and Widgets
The
Single Value
widget is now available. Construct a query which returns any single value, or use the
timeChart()
query function to create a single-value widget instance with sparkline and trend indicators.
The
Gauge
widget is being deprecated in favour of the
Single Value
widget. Configurations of the former widget are compatible with the latter. This means that persisted configurations of the Gauge widget (url / dashboard widgets / saved queries / recent queries) are still valid, but are visualized using the Single Value widget instead.
Log Collector
The Humio Log Collector can now be downloaded from the Organizational Settings page; see the Log Collector Documentation for a complete list of supported log formats and operating systems.
Functions
ioc:lookup()
would sometimes give incorrect results when negated.
worldMap()
accepts more magnitude functions, anonymous functions and the
percentile()
function.
worldMap()
will warn about licensing issues with the IP database.
sankey()
now accepts more weight functions such as anonymous functions and the
percentile()
function.
Other
Fixed an issue where Humio's ZooKeeper monitoring page would show X/0 followers in sync.
Fixed an issue that if download of IOCs took more than an hour, Humio would indefinitely start a new download every hour which would eventually fail.
Fixed an underlying bug causing
addToExistingJob did not find the existing job
to be error logged unnecessarily. Humio may decide to fetch a segment from bucket storage for querying. If this decision is made right as the query is cancelled, Humio could log the message above. With this fix, Humio will instead skip downloading the segment, and not log the error.
Ensured that errors during view tombstone removal are logged and don't prevent the RetentionJob from performing other cleanup tasks.
Email actions can now add the result set as a
CSV
attachment.
When cleaning up a deleted data space, don't error log if two nodes race to delete the data space metadata from global.
Logging to the humio-activity repository is now also done for events in sandbox repositories.
Specifying a versionless packageId will load the newest version of that package.
Fixed an issue where a scheduled search could trigger actions multiple times for the same time period if actions took a long time to finish.
Humio Server 1.40.0 LTS (2022-05-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.40.0 | LTS | 2022-05-12 | Cloud | 2023-05-31 | No | 1.30.0 | Yes |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 8a733e1201103eeef32e63b0bf4c8977 |
SHA1 | 5b217fb48f1b5684330ec70fc5d20d322b0a75f8 |
SHA256 | 8838d422459feb6a56d1f15578c581fec7983165635fb4e74f312c2cc4da8046 |
SHA512 | 94bb617a37475918313decc3bf56696890c90d3e3f91de78dccb9431fee0b1bba8d90f60f0d591f5acbea7c6e09c5cb57ddf95fba088a34efc92a0899ac4aef9 |
Docker Image | SHA256 Checksum |
---|---|
humio | 7c9b77b32fc84e31ecc57461ae3e8bfac9b584fb6fb3af0b909bd7e05903d0d8 |
humio-core | 9326081840d3f852df54702c9d5e72ea492d49c55aab51ed83b1b234439c4ec7 |
kafka | 344a06f56ada7ea9af2c7c5d146fa07f6fda87be750a7283e6f753189b42a0b5 |
zookeeper | 42cdbca9d0ce73516a27beda618390a40db3e086580ce3d6ab2779c1952980ee |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.40.0/server-1.40.0.tar.gz
1.40 REQUIRES minimum version 1.30.0 of Humio to start. Clusters wishing to upgrade from older versions must upgrade to 1.30.0+ first. After running 1.40.0 or later, you cannot run versions prior to 1.30.0.
Behavior Changes
Scripts or environments which make use of these tools should be checked and updated for the new configuration:
Configuration
The
selfJoin()
query function was observed to cause memory problems, so we have set a limit on the number of output events (there was previously no bound). This limit can be adjusted with the GraphQL mutation setDynamicConfig with the configuration flag SelfJoinLimit. A value of
-1
returns
selfJoin()
to its old, unbounded behavior.
New features and improvements
Falcon Data Replicator
The static configuration variable
ENABLE_FDR_POLLING_ON_NODE
is no longer supported, as its functionality has been replaced with the dynamic configurations listed below.
Introduced dynamic configuration options for changing FDR polling behaviour at runtime. FDR polling is not enabled by default, so you should take care to set up these new configurations after upgrading, or you will risk that your FDR data isn't ingested into Humio before it is deleted from Falcon.
Using the dynamic configuration option
FdrEnable
, administrators can now turn FDR polling on/off on the entire cluster with a single update. Defaults to
false
.
Using the dynamic configuration option
FdrMaxNodes
, administrators can put a cap on how many nodes should at most simultaneously poll data from the same FDR feed. Defaults to
5
nodes.
Using the dynamic configuration option
FdrExcludedNodes
, administrators can now exclude specific nodes from polling from FDR. Defaults to the empty list, so all nodes will be used for polling.
It is now possible to test an FDR feed in the UI, which will test that Humio can connect to the SQS queue and the S3 bucket.
Fixed an issue where exceptions in FDR were not properly logged.
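The interplay of the three dynamic configurations above can be sketched as a node-selection function. The configuration names come from the notes; the selection logic itself is an assumption for illustration, not Humio's actual scheduler:

```python
def select_fdr_polling_nodes(nodes, fdr_enable=False,
                             fdr_max_nodes=5, fdr_excluded_nodes=()):
    """Which nodes may poll a given FDR feed.

    FdrEnable gates polling cluster-wide (off by default),
    FdrExcludedNodes removes specific nodes, and
    FdrMaxNodes caps how many nodes poll the same feed.
    """
    if not fdr_enable:
        return []
    excluded = set(fdr_excluded_nodes)
    eligible = [n for n in nodes if n not in excluded]
    return eligible[:fdr_max_nodes]
```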
UI Changes
Introducing the new
Scatter Chart
widget (previously known as XY):
It supports long data format (one field for the series name and one field for the y values) as well as wide format (one field per series value).
You can now visualize data in the
Scatter Chart
when queried with the
timeChart()
,
bucket()
and
groupBy()
functions, as well as the
table()
function like before.
Added style options to either truncate or show full legend labels in widgets.
Improvements to the
Sankey Diagram
widget: it now has multiple style options; show/hide the y-axis, sorting type, label position, and colors plus labels for series.
Added support in the
fieldstats()
query function for skipping events. This is used by the UI, but only in situations where we know an approximate result is acceptable and where processing all events would be too costly.
Improvements to the
Pie Chart
widget: it now has a max series setting similar to the
Time Chart
widget.
Syntax highlighting for XML, JSON and accesslog data now uses more distinguishable colors.
The
@timestamp
column is now allowed to be moved amongst the other columns in the event list.
When using a widget that is not compatible with the current data, the button now works again.
The widget dropdown can now be navigated with the keyboard.
Events with JSON data can now be collapsed and expanded in the JSON panel.
Keep empty lines in queries when exporting assets as templates or to packages.
GraphQL API
Added two new organization level permissions:
DeleteAllRepositories
and
DeleteAllViews
that allow repository and view deletion, respectively, inside an organization.
The GraphQL queries and mutations for FDR feeds are no longer in preview.
Removed the following deprecated GraphQL fields:
UserSettings.settings
,
UserSettings.isEventListOrderChangedMessageDismissed
, and
UserSettings.isNewRepoHelpDismissed
.
Changed permission token related GraphQL endpoints to use enumerations instead of strings.
It is now possible to refer to a parser by name when creating or updating an ingest listener using the GraphQL API mutations createIngestListenerV3 and updateIngestListenerV3. It is now also possible to change the repository on an ingest listener using updateIngestListenerV3. The old mutations createIngestListenerV2 and updateIngestListenerV2 have been deprecated.
Removed the deprecated clientMutationId argument from the GraphQL mutation updateSettings.
Marked experimental language features as preview in GraphQL API.
Added a GraphQL mutation deleteSearchDomainById that deletes views or repositories by ID.
It is now possible to refer to a parser by name when creating an ingest token or assigning a parser to an existing ingest token using the GraphQL API mutations addIngestTokenV3 and assignParserToIngestTokenV2. The old mutations addIngestTokenV2 and assignParserToIngestToken have been deprecated.
Added a new GraphQL mutation to rename views or repositories by ID.
Configuration
Added a new config
NATIVE_FADVICE_SUPPORT
(default
true
) to allow turning off the use of
fadvice
internally.
Amended how Humio chooses segments to download from bucket storage when prefetching. If
S3_STORAGE_PREFERRED_COPY_SOURCE
is
false
, the prefetcher will only download segments that are not already on another host. Otherwise, it will download to as many hosts as necessary to follow the configured replication factor. This should help avoid excessive bucket downloads when nodes in the cluster have lots of empty disk space.
Validate block CRCs before uploading segment files to bucket storage. Can be disabled by setting
VALIDATE_BLOCK_CRCS_BEFORE_UPLOAD
to
false
.
Added a new config
NATIVE_FALLOCATE_SUPPORT
(default
true
) to allow turning off the use of
fallocate
and
ftruncate
internally.
Require that the
{S3/GCS}_STORAGE
config must be set before
{S3/GCS}_STORAGE_2
is set.
Added a new configuration variable
BUCKET_STORAGE_TRUST_POLICY
for the dual-bucket use case. This setting configures which bucket is considered the "trusted" bucket when two buckets are configured, which impacts when Humio considers data to be safely replicated. Supported values are
Primary
for trusting the primary bucket,
Secondary
for trusting the secondary bucket,
TrustEither
for considering data safely replicated if it is in either bucket, and
RequireBoth
for considering data safely replicated only if it is in both buckets. This config replaces the
BUCKET_STORAGE_2_TRUSTED
configuration;
true
in the old configuration equates to
Secondary
in the new configuration. The default value of the new configuration is
Secondary
.
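The four `BUCKET_STORAGE_TRUST_POLICY` values above map to a simple predicate. The policy names are from the note; the function itself is a hypothetical illustration of the stated semantics:

```python
def is_safely_replicated(policy: str, in_primary: bool, in_secondary: bool) -> bool:
    """When does Humio consider data safely replicated under each policy?"""
    if policy == "Primary":
        return in_primary
    if policy == "Secondary":      # default; matches old BUCKET_STORAGE_2_TRUSTED=true
        return in_secondary
    if policy == "TrustEither":
        return in_primary or in_secondary
    if policy == "RequireBoth":
        return in_primary and in_secondary
    raise ValueError(f"unknown policy: {policy}")
```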
Dashboards and Widgets
Improvements to the
Time Chart
widget:
It now has an option to show the underlying data points, which makes it possible to inspect the behaviour of the different interpolation methods.
Trend lines can now be added in the chart.
Introducing the
Single Value
widget. Construct a query which returns any single value, or use the
timeChart()
query function to create a single-value widget instance with sparkline and trend indicators.
Improvements to the
Bar Chart
widget:
Added style options to name the x and y axes.
Added option for interpreting the resulting query data as either wide or long format data.
Added option to set a max label length for the x-axis, instead of the bottom padding option. With auto-padding and this style option, it is easier to fit the wanted information in the view.
It is now possible to configure bar charts to have a logarithmic y-axis.
Introduced the stacked bar charts option.
It no longer has an artificial minimum height for bars, as this may distort at-a-glance interpretations of the chart.
It no longer has sorting by default, which means that the order will be identical to the query result. You can now sort the x-axis of the bar chart by using the
sort()
query function, if sort by series in the style options is not set.
It now has a max series setting similar to the
Time Chart
widget.
Functions
The
findTimestamp()
function now supports date formats like
23FEB2022
, that is: day of month, literal month and year without any separators in between. Other formats still require separators between the parts.
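For comparison, the separator-free format above can be parsed with a pattern like `%d%b%Y`; Python's `strptime` matches month names case-insensitively, so it accepts `23FEB2022`. This is shown purely for illustration — `findTimestamp()` has its own format detection:

```python
from datetime import datetime

# Day of month, literal month and year with no separators.
ts = datetime.strptime("23FEB2022", "%d%b%Y")
```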
Other
Fixed an ingest bug where, under some circumstances, we would reverse the order of events in a batch.
Fixed bugs related to repository deletes.
It is now possible to create a view with the same name as a deleted view.
Fixed an ingest bug where if multiple types of errors occurred for an event we would only add error fields describing one of them. Now we always report all errors.
Added a new system-level permission allowing changing the user name of a user.
Fixed an issue where `OrganizationStatsUpdaterJob` would repeatedly post the error `com.humio.entities.organization.OrganizationSingleModeNotSupported: Not supported when using organizations in single mode` when the cluster was configured for only one organization.
Fixed an issue where query cancellation could in rare cases cause the query scheduler to throw exceptions.
Fixed how relative time is displayed.
Ingest listeners are now only stopped, not deleted, when a user deletes a repository. If the repository is restored, the ingest listener will be restarted automatically. When it is no longer possible to restore the repository, the ingest listener will be deleted.
Added support for restoring deleted repositories and views when using bucket storage. See Delete a Repository or View.
Humio is now more strict during a Kafka reset to avoid global desyncs. Only one node will be allowed to boot on the new epoch, remaining nodes won't be allowed to use their snapshots, and will need to fetch a fresh global snapshot from that node.
If the query scheduler attempts to read a broken segment file, it may be able to fetch a new copy from bucket storage in some cases. Humio will now only allow this if it can be guaranteed that no events from the broken segment have been added to the query result. Otherwise the query will receive a warning.
Fixed an ingest bug where we might discard `@timezone` and `@error` fields in events with too many fields. Now we always retain those and only discard other fields.
Fixed a bug with UTF-8 serialization of 4-byte codepoints (emojis etc.).
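The 4-byte codepoints in question are characters outside the Basic Multilingual Plane; a quick Python illustration of why they need special handling:

```python
# Codepoints above U+FFFF (e.g. emoji) encode to four bytes in UTF-8,
# unlike most other characters, which use one to three bytes.
ch = "\U0001F600"  # GRINNING FACE emoji
encoded = ch.encode("utf-8")
print(len(encoded))  # 4
```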
When Humio detects multiple datasources for the same set of tags, it will now deduplicate them by selecting one source to keep and marking the others replaced.
Added `humio-token-hashing.sh` to the Humio bin directory. This invokes a utility for generating root tokens.
Added more visibility on organization limits when changing the retention settings on a repository.
Fixed an issue where links in alerts from OpsGenie actions were not clickable.
Added `humio-decrypt-bucket-file.sh` to the Humio bin directory. This invokes a utility for decrypting files downloaded from bucket storage.
Fixed an ingest bug where sometimes we wouldn't turn event fields into tags if we fell back to using the key-value parser. Now we always turn fields into tags.
It is no longer possible to create ingest listeners on system repositories using the APIs. Previously, it was only prohibited in the UI.
Fixed a caching-related issue with `groupBy()` in live queries that would briefly cause inconsistent results.
The webhook action now also includes the Message Body Template for `PATCH` and `DELETE` requests, if it is not empty.
Fixed a race condition between nodes creating the merge result for the same target segment while also transferring it among the nodes concurrently. If a query read the file during that race, an in-memory cache of the file header might hold contents that did not match the local file, resulting in `Broken segment` warnings in queries.
Added a feature that allows deletion of repositories and views on cloud.
When calculating the starting offset in Kafka for digest, Humio will now trust that if a segment in global is listed as being in bucket storage, that segment is actually present in bucket storage. Humio no longer double checks by asking bucket storage directly.
Fixed an issue where download of IOCs from another node in the cluster could start before the previous download had finished, resulting in too many open connections between nodes in the cluster.
Fixed an issue where Filebeat 8.1 would not be compatible unless `output.elasticsearch.allow_older_versions` was set to `true`.
Renamed the Humio tarball distribution to `humio-1.39.0.tar.gz` instead of `humio-release-1.39.0.tar.gz`. The file now contains a directory named `humio-1.39.0` instead of `humio-release-1.39.0`.
Updating alert labels using the addAlertLabel and removeAlertLabel mutations now requires the `ChangeTriggersAndActions` permission.
Fixed an issue where the UI would not detect parameters in a query when using saved queries from a package.
Made changes to Humio's tracking of bucket storage downloads. This should avoid some rare cases where downloads could get stuck.
Reduced the amount of time Humio will spend during shutdown waiting for in-progress data to flush to disk to 60 seconds from 150 seconds.
Fixed an issue that could cause creation of two datasources for the same tag set if messages with the same tags happened to arrive on different Kafka partitions.
During ingest, if an event has too many fields we now sort the fields lexicographically and remove fields from the end. Previously, there was no system to which fields were retained; it was effectively random.
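As a hedged sketch of the truncation rule described above (field names sorted lexicographically, dropping from the end; the function name and limit are illustrative, not LogScale's actual code):

```python
def truncate_fields(fields: dict, max_fields: int) -> dict:
    """Keep only the lexicographically smallest field names."""
    keep = sorted(fields)[:max_fields]
    return {name: fields[name] for name in keep}

# Fields 'a' and 'b' survive; 'c' is removed from the end of the sorted order.
print(truncate_fields({"b": 2, "a": 1, "c": 3}, max_fields=2))  # {'a': 1, 'b': 2}
```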
Adding and removing queries from the query blocklist is now audit logged as two separate audit log event types, `query-blocklist-add` and `query-blocklist-remove`, rather than the single event type `blocklist`.
Improved the phrasing of some error messages.
Fixed a bug where accessing a `csv` file with records spanning multiple lines would fail with an exception.
The REST API for ingest listeners has been deprecated.
Improved distribution of new autosharded datasources.
Fixed an issue where an exception in rare cases could cause ingest requests to fail intermittently.
The query scheduler improperly handled regex limits being hit: this should result in a warning on the query, but in some cases it was instead handled by retrying the segment read.
Fixed an issue where the set-replication-defaults config endpoint could attempt to assign storage to nodes configured not to store segments.
Fixed an issue where some errors showed wrong positions in the search page query field.
It is no longer possible to delete a parser that is used by an ingest listener. You must first assign another parser to the ingest listener.
Fixed an issue where audit logging of alerts, scheduled searches and actions residing on views would yield incomplete or missing audit logs.
Fixed an issue where `NetFlow` parsing would crash if it received an options data record.
It is now validated that the parser supplied when creating or updating an ingest listener exists.
Fixed an ingest bug where, when truncating an event with too many fields, we wouldn't count error fields, leading to the event still being larger than the maximum size.
Fixed an issue where Filebeat 8.0 would not be compatible unless `setup.ilm.enabled` was set to `false`.
Create, update, and delete operations on ingest listeners are now always audit logged. Previously, they were only logged when performed through the REST API. Also, the audit log format has been updated to be similar to the format of other assets. Look for events with the `type` field set to `ingestlistener.create`, `ingestlistener.update`, and `ingestlistener.delete`.
Fixed an issue when using bucket storage alongside secondary storage, where Humio would download files to the secondary storage but register them as present in the primary. It will now download and register them as present on the secondary storage.
Fixed a duplicate Change triggers and actions entry in the view permission token page.
Fixed an issue that could cause an exception to be thrown in the ingest code if digest assignment changed while a local segment file being written was still empty.
Improved performance of formatting action messages, when the query result for an alert or scheduled search contains large events.
Improved distribution onto partitions of tag combinations (datasources) that are affected by auto sharding, resulting in fewer collisions.
Improved the flow of creating a blocked query.
Humio will now periodically log node configs to the debug log, in addition to the existing log of config on node boot. These logs will come from `com.humio.jobs.ConfigLoggerJob`.
When shared dashboards are disabled or become inaccessible because of IP filters, they will now be completely unreachable, and any dashboards already open will show an informative error message.
It is no longer possible to use experimental functions in Alerts, Parsers, and Event Forwarding. They are now only available on the search page.
The webhook action has been updated to only allow the following HTTP verbs: `GET`, `HEAD`, `POST`, `PUT`, `PATCH`, `DELETE`, and `OPTIONS`.
Added a feature that allows regular users with delete permissions on cloud to rename views and repositories.
Fixed an issue where non-default log formats such as `log4j2-json-stdout.xml` that log to `STDOUT` were not fully in control of their output stream, as log entries of level `ERROR` were also printed directly to `stderr` from within the code. The default log4j2 configuration now includes a Console appender that prints errors to `stdout`, achieving the same result, while allowing the other formats to fully control their output stream.
Fixed an issue that could cause the query scheduler to erroneously retry searching a bucketed segment.
When logging Kafka consumer and producer metrics, Humio will now log repeated metrics like `records-lag-max` once per partition, with the partition specified in the `partition` field.
Automatic system removals of queries expired from the blocklist are now audit logged as well.
Humio Server 1.38.2 LTS (2022-06-13)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.38.2 | LTS | 2022-06-13 | Cloud | 2023-03-31 | No | 1.26.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | c00de7af24633422654b3d16a764d753 |
SHA1 | 6c3043c27e5c7b3c7353f0f0e5e3e5b876319314 |
SHA256 | 6770856e5dcf19eee5384eecc40617fed15d92e2c2872ff7432f63bf43867c4a |
SHA512 | 7660d46ca1700336ce6fd2390e4b6de5b15963693499a6313513ff63ee9cedb6f99e1f595465ae3b1c420d0ba9b4626abe0ba81450f3bd7e92d528b0b1f51c6d |
Docker Image | SHA256 Checksum |
---|---|
humio | 1290b8e98ae3553b092879867a433077a1eb7ff626c989d01d14f71218644072 |
humio-core | 4a9c5aa23b7842034e34dda4aaffcb12f6f23a8e1c2cf04933b44a8cdafbe74d |
kafka | 04d14b84378ac9f6787af1b8477a0d2cbd6979da283bde177470d1af85a01637 |
zookeeper | 46aa8bbcaa2c64cfa7afdb6e07df0783658b4983a6727ca6670989364e687477 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.38.2/server-1.38.2.tar.gz
These notes include entries from the following previous releases: 1.38.0, 1.38.1
Updated dependencies with security and weakness fixes.
New features and improvements
Falcon Data Replicator
Improved performance of FDRJob.
UI Changes
Minor UX improvements (i.e. accessibility) on the queries panel.
On the time, bar, and pie charts you can hold the `ALT`/`OPTION` key to display long legend titles.
When changing focus inside a dialog with the keyboard, the focus will no longer move outside the dialog while it is open.
Added a quick-fix for unknown escape sequences in the search field.
When using the table visualisation in dark mode, empty table cells are now clearly discernible.
First row entry in the statistics table on the repo page is now a table header and added hidden content to the empty table header in the new view page.
Added a warning for unknown escape sequences in the search field.
Hover information in the search field is shown despite an overlapping warning.
Reworked the hover message layout and changed the hover information on text (in the search field).
Better accessibility for queries panel. You can now tab to focus individual queries, and open a details panel. From here you can also access all actions in the details panel by tabbing.
Added a quick-fix to convert non-ASCII quotes to ASCII quotes in the search field.
Fixed a bug where the Package Marketplace would redirect to unsupported package versions on older Humio instances.
Hover over parameter names and arguments in the search field now includes the default value.
The Cluster Nodes table has been redesigned to allow for easier overview and copying the version-number.
Fixed an issue where queries with `tail()` would behave in an unexpected manner when an event is focused.
The bar and pie charts now support holding the SHIFT key to display unformatted numeric values.
Visually hidden clipboard field is now hidden for assistive technologies/keyboard users.
The search page now has focus states on the Language Syntax, Event List Widget and buttons.
Pop-ups and drop-downs will now close automatically when focus leaves them.
GraphQL API
The `PERMISSION_MODEL_MODE` config option has been removed. All GraphQL-related schema has also been removed.
Fixed a bug in the response from calling the installPackageFromZip GraphQL mutation. Previously, the response type exposed a deprecated clientmutationid that could not be selected. Also, if form fields are missing, they are now properly reported in the response.
Deprecates the ReadContents view action, in favor of ReadEvents. This also means ReadEvents has been undeprecated, as we have slightly changed how we consider read rights, and want the action names to match this.
Configuration
The property `inter.broker.protocol.version` in `kafka.properties` now defaults to 2.4 if not specified. Users upgrading Kafka can either set `inter.broker.protocol.version` manually in `kafka.properties`, or pass `DEFAULT_INTER_BROKER_PROTOCOL_VERSION` as an environment variable to Docker when launching the container. Please follow Kafka's upgrade guidelines when upgrading a Kafka cluster to avoid data loss: https://kafka.apache.org/documentation/#upgrade_3_1_0
Reduced the default value of `INGESTQUEUE_COMPRESSION_LEVEL`, the ingest queue compression level, from 1 to 0. This reduces time spent compressing before inserting into the ingest queue by roughly 4x, at the expense of a 10-20% increase in the size required in Kafka for the ingest queue topic.
Added new configuration `NATIVE_FALLOCATE_SUPPORT` (default `true`) to allow turning off the use of `fallocate` and `ftruncate` internally.
Added config `RDNS_DEFAULT_SERVER` for specifying which DNS server is the default for the `rdns` query function.
Added new settings for how uploads to bucket storage are validated. In case validation with etags is not available, content length can be used instead.
When Kafka topic configuration is managed by Humio (default true), `max.message.bytes` is set on the topics to the value of the config `TOPIC_MAX_MESSAGE_BYTES`; the default is 8388608 (8 MB) and the minimum value is 2 MB.
Added new configuration `NATIVE_FADVICE_SUPPORT` (default `true`) to allow turning off the use of fadvice internally.
Added config `IP_FILTER_RDNS` for specifying which IP addresses can be queried using the `rdns` query function.
Added config `IP_FILTER_RDNS_SERVER` for specifying which DNS servers are allowed in the `rdns()` query function.
Added the config `CORS_ALLOWED_ORIGINS`, a comma-separated list of CORS allowed origins; the default allows all origins.
Fixed a bug where `TLS_KEYSTORE_TYPE` and `TLS_TRUSTSTORE_TYPE` would only recognize lower-case values.
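As a sketch of how a few of the environment-variable configs above might be set for a node — every value here is illustrative, not a recommendation:

```shell
# Illustrative settings only -- adjust for your cluster.
export INGESTQUEUE_COMPRESSION_LEVEL=0   # new default in this release
export TOPIC_MAX_MESSAGE_BYTES=8388608   # 8 MB default; minimum is 2 MB
export NATIVE_FALLOCATE_SUPPORT=true     # set to false to disable fallocate/ftruncate
export CORS_ALLOWED_ORIGINS="https://logscale.example.com"  # hypothetical origin
```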
Functions
Fixed an issue where `tail()` could produce results inconsistent with other query functions when used in a live query.
Other
Fixed an issue with epoch and offsets not always being stripped from segments.
Ensure only a cluster leader that still holds cluster leadership can force digesters to release partition leadership. This could cause spurious reboots in clusters where leadership was under contention.
For HTTP Event Collector (HEC) the input field sourcetype is now also stored in @sourcetype.
Published new versions of the Humio Kafka Docker containers for Kafka 3.1.0.
Added a new system-level permission that allows changing usernames of users.
During identity provider configuration, it's possible to fetch SAML configuration from an endpoint.
Improved off-heap memory handling. Humio now typically uses only 1 GB on systems with 32 vCPUs, down from typically 16 GB. This leaves more memory for other processes and page cache for data.
Fixed a compatibility issue with LogStash 7.16+ and 8.0.0 when using the Elasticsearch output plugin.
Improved the performance of deletes from global.
Do not run the Global snapshot consistency check on stateless ingest nodes.
Fixed an issue where users could be shown an in-development feature on the client when running a local installation of Humio.
Fixed a bug in the Sankey chart such that it now updates on updated query results.
Added tombstoning to uploaded files, which helps with avoiding data loss.
Allow cluster managers access to settings for personal sandboxes and to block and kill queries in them.
Fixed an issue where top(max) could throw an exception when given values large enough to be represented as positive infinity.
Fixed an issue where live queries would sometimes double-count parts of the historic data.
Warn at startup if `CORES` > `AvailableProcessorCount` as seen by the JVM.
Fixed a bug where the button on the Fields panel would do nothing.
Fixed an issue where queries of the form `#someTagField != someValue ...` would sometimes produce incorrect results.
Fixed a bug where providing a bad view/repository name when blocking queries would block the query in all views and repositories.
Fixed a compatibility issue with FileBeat 8.0.0.
Fixed several issues where users could add invalid query filters via the context button after selecting text in the Event List.
Fixed an ingest bug where under some circumstances we would reverse the order of events in a batch.
During Digest startup, abort fetching segments from other nodes if the assigned partition set changes while fetching.
Fixed an issue where negated functions could lose their negation.
Fixed an issue where `percentile()` would crash on inputs larger than ~1.76e308.
Previously a package could be updated with another package with the same name and version, but with different content. This is no longer allowed, and any attempt to do so will be rejected and fail.
The Kafka client has been upgraded to 3.1.0 from 2.8.1. 3.1.0 enables the idempotent producer by default, which implies acks=all. Clusters that set acknowledgements to a different number via `EXTRA_KAFKA_CONFIGS_FILE` should update their config to also specify `enable.idempotence=false`.
LSP warnings no longer crash queries.
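For clusters that intentionally run with a different acknowledgement setting, the override could look like the following Kafka client properties fragment (file name and `acks` value are illustrative; the file is referenced via `EXTRA_KAFKA_CONFIGS_FILE`):

```
# extra-kafka-configs.properties -- illustrative only
acks=1
enable.idempotence=false
```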
Ensure a digester can only acquire or release partition leadership if no other digester has leadership of that partition. This could cause spurious reboots if digester leadership became contended.
Fixed in this release
Security
Updated dependencies to fix the vulnerability CVE-2021-22573.
Summary
Updated JavaScript dependencies to fix vulnerabilities.
Updated JavaScript dependencies to fix vulnerabilities.
Updated Jackson dependencies to fix a vulnerability.
Other
Use latest version of Java 1.13 on Docker image.
Use latest version of Alpine on Docker image.
Humio Server 1.38.1 LTS (2022-04-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.38.1 | LTS | 2022-04-27 | Cloud | 2023-03-31 | No | 1.26.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | fcaf2c288bd1f7c0fbc63deca59aa53d |
SHA1 | aec8d78eb49ee4954f2e542f614c89595da0ec39 |
SHA256 | d31c13fcecfd380562faddb7ba4662484aa2254f1f0f54591241341870c7ae83 |
SHA512 | 98134b241d67029255018c24032897623e91a3f54627c53e1c801bd4528b9ea13ab48abe328f2b76f40048bf4cb7dd45480a9398f6bd8dea4edb5db1d296b3da |
Docker Image | SHA256 Checksum |
---|---|
humio | 2d33f1706e053414996c9ac5d3533a15b637d3b321ad91362ac1a2fb4e54d722 |
humio-core | 7c1a2c853c4d7c348cfe948340a0f4101eb92a9ce365943897bf6c34cf393312 |
kafka | e3fc06aca9f5df2e628fb6a2b78c39aaf1c1b682728eb74bbd87d018aa83b49a |
zookeeper | 73746eff47c136fc01bf5898c4fecf43d459d3f103ccd29e10edf2ce7c79255d |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.38.1/server-1.38.1.tar.gz
These notes include entries from the following previous releases: 1.38.0
Updated dependencies with security and weakness fixes.
New features and improvements
Falcon Data Replicator
Improved performance of FDRJob.
UI Changes
Minor UX improvements (i.e. accessibility) on the queries panel.
On the time, bar, and pie charts you can hold the `ALT`/`OPTION` key to display long legend titles.
When changing focus inside a dialog with the keyboard, the focus will no longer move outside the dialog while it is open.
Added a quick-fix for unknown escape sequences in the search field.
When using the table visualisation in dark mode, empty table cells are now clearly discernible.
First row entry in the statistics table on the repo page is now a table header and added hidden content to the empty table header in the new view page.
Added a warning for unknown escape sequences in the search field.
Hover information in the search field is shown despite an overlapping warning.
Reworked the hover message layout and changed the hover information on text (in the search field).
Better accessibility for queries panel. You can now tab to focus individual queries, and open a details panel. From here you can also access all actions in the details panel by tabbing.
Added a quick-fix to convert non-ASCII quotes to ASCII quotes in the search field.
Fixed a bug where the Package Marketplace would redirect to unsupported package versions on older Humio instances.
Hover over parameter names and arguments in the search field now includes the default value.
The Cluster Nodes table has been redesigned to allow for easier overview and copying the version-number.
Fixed an issue where queries with `tail()` would behave in an unexpected manner when an event is focused.
The bar and pie charts now support holding the SHIFT key to display unformatted numeric values.
Visually hidden clipboard field is now hidden for assistive technologies/keyboard users.
The search page now has focus states on the Language Syntax, Event List Widget and buttons.
Pop-ups and drop-downs will now close automatically when focus leaves them.
GraphQL API
The `PERMISSION_MODEL_MODE` config option has been removed. All GraphQL-related schema has also been removed.
Fixed a bug in the response from calling the installPackageFromZip GraphQL mutation. Previously, the response type exposed a deprecated clientmutationid that could not be selected. Also, if form fields are missing, they are now properly reported in the response.
Deprecates the ReadContents view action, in favor of ReadEvents. This also means ReadEvents has been undeprecated, as we have slightly changed how we consider read rights, and want the action names to match this.
Configuration
The property `inter.broker.protocol.version` in `kafka.properties` now defaults to 2.4 if not specified. Users upgrading Kafka can either set `inter.broker.protocol.version` manually in `kafka.properties`, or pass `DEFAULT_INTER_BROKER_PROTOCOL_VERSION` as an environment variable to Docker when launching the container. Please follow Kafka's upgrade guidelines when upgrading a Kafka cluster to avoid data loss: https://kafka.apache.org/documentation/#upgrade_3_1_0
Reduced the default value of `INGESTQUEUE_COMPRESSION_LEVEL`, the ingest queue compression level, from 1 to 0. This reduces time spent compressing before inserting into the ingest queue by roughly 4x, at the expense of a 10-20% increase in the size required in Kafka for the ingest queue topic.
Added new configuration `NATIVE_FALLOCATE_SUPPORT` (default `true`) to allow turning off the use of `fallocate` and `ftruncate` internally.
Added config `RDNS_DEFAULT_SERVER` for specifying which DNS server is the default for the `rdns` query function.
Added new settings for how uploads to bucket storage are validated. In case validation with etags is not available, content length can be used instead.
When Kafka topic configuration is managed by Humio (default true), `max.message.bytes` is set on the topics to the value of the config `TOPIC_MAX_MESSAGE_BYTES`; the default is 8388608 (8 MB) and the minimum value is 2 MB.
Added new configuration `NATIVE_FADVICE_SUPPORT` (default `true`) to allow turning off the use of fadvice internally.
Added config `IP_FILTER_RDNS` for specifying which IP addresses can be queried using the `rdns` query function.
Added config `IP_FILTER_RDNS_SERVER` for specifying which DNS servers are allowed in the `rdns()` query function.
Added the config `CORS_ALLOWED_ORIGINS`, a comma-separated list of CORS allowed origins; the default allows all origins.
Fixed a bug where `TLS_KEYSTORE_TYPE` and `TLS_TRUSTSTORE_TYPE` would only recognize lower-case values.
Functions
Fixed an issue where `tail()` could produce results inconsistent with other query functions when used in a live query.
Other
Fixed an issue with epoch and offsets not always being stripped from segments.
Ensure only a cluster leader that still holds cluster leadership can force digesters to release partition leadership. This could cause spurious reboots in clusters where leadership was under contention.
For HTTP Event Collector (HEC) the input field sourcetype is now also stored in @sourcetype.
Published new versions of the Humio Kafka Docker containers for Kafka 3.1.0.
Added a new system-level permission that allows changing usernames of users.
During identity provider configuration, it's possible to fetch SAML configuration from an endpoint.
Improved off-heap memory handling. Humio now typically uses only 1 GB on systems with 32 vCPUs, down from typically 16 GB. This leaves more memory for other processes and page cache for data.
Fixed a compatibility issue with LogStash 7.16+ and 8.0.0 when using the Elasticsearch output plugin.
Improved the performance of deletes from global.
Do not run the Global snapshot consistency check on stateless ingest nodes.
Fixed an issue where users could be shown an in-development feature on the client when running a local installation of Humio.
Fixed a bug in the Sankey chart such that it now updates on updated query results.
Added tombstoning to uploaded files, which helps with avoiding data loss.
Allow cluster managers access to settings for personal sandboxes and to block and kill queries in them.
Fixed an issue where top(max) could throw an exception when given values large enough to be represented as positive infinity.
Fixed an issue where live queries would sometimes double-count parts of the historic data.
Warn at startup if `CORES` > `AvailableProcessorCount` as seen by the JVM.
Fixed a bug where the button on the Fields panel would do nothing.
Fixed an issue where queries of the form `#someTagField != someValue ...` would sometimes produce incorrect results.
Fixed a bug where providing a bad view/repository name when blocking queries would block the query in all views and repositories.
Fixed a compatibility issue with FileBeat 8.0.0.
Fixed several issues where users could add invalid query filters via the context button after selecting text in the Event List.
Fixed an ingest bug where under some circumstances we would reverse the order of events in a batch.
During Digest startup, abort fetching segments from other nodes if the assigned partition set changes while fetching.
Fixed an issue where negated functions could lose their negation.
Fixed an issue where `percentile()` would crash on inputs larger than ~1.76e308.
Previously a package could be updated with another package with the same name and version, but with different content. This is no longer allowed, and any attempt to do so will be rejected and fail.
The Kafka client has been upgraded to 3.1.0 from 2.8.1. 3.1.0 enables the idempotent producer by default, which implies acks=all. Clusters that set acknowledgements to a different number via `EXTRA_KAFKA_CONFIGS_FILE` should update their config to also specify `enable.idempotence=false`.
LSP warnings no longer crash queries.
Ensure a digester can only acquire or release partition leadership if no other digester has leadership of that partition. This could cause spurious reboots if digester leadership became contended.
Fixed in this release
Summary
Updated JavaScript dependencies to fix vulnerabilities.
Updated Jackson dependencies to fix a vulnerability.
Other
Use latest version of Java 1.13 on Docker image.
Use latest version of Alpine on Docker image.
Humio Server 1.38.0 LTS (2022-03-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.38.0 | LTS | 2022-03-15 | Cloud | 2023-03-31 | No | 1.26.0 | Yes |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 83afa7f6d2c55efbb88387474efd1264 |
SHA1 | 5587c3365fcfb22e1a45ea96b9503497b705125e |
SHA256 | 357591952ed12d0d9f93b084cff4b0ff6848d7a464071ff27edee9f921e23174 |
SHA512 | 7a88658b07d69f6a2fe151da8fe94ebfe0a1259ffa48c45df6154dcce4f6453fac8f78087cefdf869fb6322fa4e31a01f830d9db998293ebeabb0f2c8e3e5cfb |
Docker Image | SHA256 Checksum |
---|---|
humio | 9cd0f6a91b150bb51f9407451bd70c40dc49f730f76840e061a6570008d88453 |
humio-core | cff83e3a3ea8c455040ca31d4e8071a0f1c80be894fdde4cad768e69e1c449e5 |
kafka | d960ac292f781baa54c1388f4bbc94c77c1ae06b94a082a214777844f1435120 |
zookeeper | b2ab322170bfab4d221f6f5e4a4cca5751b2d97bcf75d451de4d24a00c775206 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.38.0/server-1.38.0.tar.gz
Humio can now poll and ingest data from the Falcon platform's Falcon Data Replicator (FDR) service. This feature can be used as an alternative to the standalone fdr2humio project. See Ingesting FDR Data into a Repository for more information.
New features and improvements
Falcon Data Replicator
Improved performance of FDRJob.
UI Changes
Minor UX improvements (i.e. accessibility) on the queries panel.
On the time, bar, and pie charts you can hold the `ALT`/`OPTION` key to display long legend titles.
When changing focus inside a dialog with the keyboard, the focus will no longer move outside the dialog while it is open.
Added a quick-fix for unknown escape sequences in the search field.
When using the table visualisation in dark mode, empty table cells are now clearly discernible.
First row entry in the statistics table on the repo page is now a table header and added hidden content to the empty table header in the new view page.
Added a warning for unknown escape sequences in the search field.
Hover information in the search field is shown despite an overlapping warning.
Reworked the hover message layout and changed the hover information on text (in the search field).
Better accessibility for queries panel. You can now tab to focus individual queries, and open a details panel. From here you can also access all actions in the details panel by tabbing.
Added a quick-fix to convert non-ASCII quotes to ASCII quotes in the search field.
Fixed a bug where the Package Marketplace would redirect to unsupported package versions on older Humio instances.
Hover over parameter names and arguments in the search field now includes the default value.
The Cluster Nodes table has been redesigned to allow for easier overview and copying the version-number.
Fixed an issue where queries with `tail()` would behave in an unexpected manner when an event is focused.
The bar and pie charts now support holding the SHIFT key to display unformatted numeric values.
Visually hidden clipboard field is now hidden for assistive technologies/keyboard users.
The search page now has focus states on the Language Syntax, Event List Widget and buttons.
Pop-ups and drop-downs will now close automatically when focus leaves them.
GraphQL API
The `PERMISSION_MODEL_MODE` config option has been removed. All GraphQL-related schema has also been removed.
Fixed a bug in the response from calling the installPackageFromZip GraphQL mutation. Previously, the response type exposed a deprecated clientmutationid that could not be selected. Also, if form fields are missing, they are now properly reported in the response.
Deprecates the ReadContents view action, in favor of ReadEvents. This also means ReadEvents has been undeprecated, as we have slightly changed how we consider read rights, and want the action names to match this.
Configuration
The property inter.broker.protocol.version in kafka.properties now defaults to 2.4 if not specified. Users upgrading Kafka can either set inter.broker.protocol.version manually in kafka.properties, or pass DEFAULT_INTER_BROKER_PROTOCOL_VERSION as an environment variable to Docker when launching the container. Please follow Kafka's upgrade guidelines (https://kafka.apache.org/documentation/#upgrade_3_1_0) when upgrading a Kafka cluster to avoid data loss.
Reduced the default value of INGESTQUEUE_COMPRESSION_LEVEL, the ingest queue compression level, from 1 to 0. This reduces time spent compressing before inserting into the ingest queue by roughly 4x, at the expense of a 10-20% increase in the size required in Kafka for the ingest queue topic.
Added new configuration NATIVE_FALLOCATE_SUPPORT (default true) to allow turning off the internal use of fallocate and ftruncate.
Added config RDNS_DEFAULT_SERVER for specifying which DNS server is the default for the rdns() query function.
Added new settings for how uploads to bucket storage are validated. In cases where validation with etags is not available, content length can be used instead.
When Kafka topic configuration is managed by Humio (default true), max.message.bytes is set on the topics to the value of the config TOPIC_MAX_MESSAGE_BYTES; the default is 8388608 (8 MB) and the minimum value is 2 MB.
Added new configuration NATIVE_FADVICE_SUPPORT (default true) to allow turning off the internal use of fadvice.
Added config IP_FILTER_RDNS for specifying which IP addresses can be queried using the rdns() query function.
Added config IP_FILTER_RDNS_SERVER for specifying which DNS servers are allowed in the rdns() query function.
Added the config CORS_ALLOWED_ORIGINS, a comma-separated list of allowed CORS origins; the default allows all origins.
Fixed a bug where TLS_KEYSTORE_TYPE and TLS_TRUSTSTORE_TYPE would only recognize lower-case values.
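As a hedged illustration of the Kafka-related settings above, the sketch below shows how they might be applied. The property and variable names come from these notes; the file path and the idea of appending to it are assumptions, and the values shown are simply the documented defaults.

```shell
# Pin the inter-broker protocol during a staged Kafka upgrade, either in
# kafka.properties (path is an illustrative assumption):
#   inter.broker.protocol.version=2.4

# ...or via the Docker environment when launching the bundled Kafka container:
export DEFAULT_INTER_BROKER_PROTOCOL_VERSION=2.4

# Per-topic message cap applied when Humio manages topic configuration
# (bytes; 8 MB is the documented default, 2 MB the minimum):
export TOPIC_MAX_MESSAGE_BYTES=8388608
```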
Functions
Fixed an issue where tail() could produce results inconsistent with other query functions when used in a live query.
Other
Fixed an issue with epoch and offsets not always being stripped from segments.
Ensure only a cluster leader that still holds cluster leadership can force digesters to release partition leadership. This could cause spurious reboots in clusters where leadership was under contention.
For HTTP Event Collector (HEC) the input field sourcetype is now also stored in @sourcetype.
Published new versions of the Humio Kafka Docker containers for Kafka 3.1.0.
Added a new system-level permission that allows changing usernames of users.
During identity provider configuration, it's possible to fetch SAML configuration from an endpoint.
Improved off-heap memory handling. Humio now typically uses only 1 GB on systems with 32 vCPUs, down from typically 16 GB. This leaves more memory for other processes and page cache for data.
Fixed a compatibility issue with LogStash 7.16+ and 8.0.0 when using the Elasticsearch output plugin.
Improved the performance of deletes from global.
Do not run the Global snapshot consistency check on stateless ingest nodes.
Fixed an issue where users could be shown an in-development feature on the client when running a local installation of Humio.
Fixed a bug in the Sankey chart such that it now updates on updated query results.
Added tombstoning to uploaded files, which helps with avoiding data loss.
Allow cluster managers access to settings for personal sandboxes and to block and kill queries in them.
Fixed an issue where top(max) could throw an exception when given values large enough to be represented as positive infinity.
Fixed an issue where live queries would sometimes double-count parts of the historic data.
Warn at startup if CORES > AvailableProcessorCount as seen by the JVM.
Fixed a bug where the button on the Fields panel would do nothing.
Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.
Fixed a bug where providing a bad view/repository name when blocking queries would block the query in all views and repositories.
Fixed a compatibility issue with FileBeat 8.0.0.
Fixed several issues where users could add invalid query filters via the context button after selecting text in the Event List.
Fixed an ingest bug where, under some circumstances, the order of events in a batch would be reversed.
During Digest startup, abort fetching segments from other nodes if the assigned partition set changes while fetching.
Fixed an issue where negated functions could lose their negation.
Fixed an issue where percentile() would crash on inputs larger than ~1.76e308.
Previously, a package could be updated with another package with the same name and version, but with different content. This is no longer allowed, and any attempt to do so will be rejected and fail.
The Kafka client has been upgraded from 2.8.1 to 3.1.0. Version 3.1.0 enables the idempotent producer by default, which implies acks=all. Clusters that set acknowledgements to a different value via EXTRA_KAFKA_CONFIGS_FILE should update their config to also specify enable.idempotence=false.
LSP warnings no longer crash queries.
Ensure a digester can only acquire or release partition leadership if no other digester has leadership of that partition. This could cause spurious reboots if digester leadership became contended.
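For clusters affected by the Kafka client upgrade above, an override might look like the following sketch. Only the variable and property names come from these notes; the file location and the acks value of 1 are illustrative assumptions.

```shell
# Sketch: opting out of the idempotent-producer default after the Kafka
# 3.1.0 client upgrade. Writes a small extra-configs file and points
# EXTRA_KAFKA_CONFIGS_FILE at it.
cat > kafka-extra.properties <<'EOF'
acks=1
enable.idempotence=false
EOF
export EXTRA_KAFKA_CONFIGS_FILE="$PWD/kafka-extra.properties"
```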
Humio Server 1.37.1 GA (2022-02-25)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.37.1 | GA | 2022-02-25 | Cloud | 2023-03-31 | No | 1.26.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | e46117f38ef25afde602a458e5266b4b |
SHA1 | 3b0f5a30a7918a5f08ae8492bf855fadfc36fb90 |
SHA256 | 014420dfb22f8521a687eac1179c528afe0477fbb9284dc5381d9f91fd5329d1 |
SHA512 | ee13258be363f1b2103b864517ea4b9de9d6a78dcee0c4fcba95af2ea873dacebb298f6410f12cf68ecde3d17210256f85713ada7f5988726dc4440f5fbeee21 |
Docker Image | SHA256 Checksum |
---|---|
humio | e131f664f67e5ce98a397431b1f1a9d3c134d47dd24a24ed8ad76ae9e6d79eb0 |
humio-core | e0de42791dd3e83c2aa9bea871cb6e7a62bf51c835de2142adfd51aa863625a3 |
kafka | 9ab50e070047406cd6316fa2017d15ca24caa6e7a9b20ba741114b9cc1c3feec |
zookeeper | 47aa1a18a57652f156406021dc61f8bb1042bc2bd23c89e715e14a8be4fb07f9 |
Minor fixes and improvements.
New features and improvements
Falcon Data Replicator
Improved performance of FDRJob.
Other
Added a new system-level permission that allows changing usernames of users.
Improved off-heap memory handling. Humio now typically uses only 1 GB on systems with 32 vCPUs, down from typically 16 GB. This leaves more memory for other processes and page cache for data.
Fixed in this release
Other
Fixed an issue where users could be shown an in-development feature on the client when running a local installation of Humio.
Fixed an issue where QueryFunctionValidator failed with a scala.MatchError.
Fixed an issue where some queries using regex would use an unbounded regex engine.
Humio Server 1.37.0 GA (2022-02-14)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.37.0 | GA | 2022-02-14 | Cloud | 2023-03-31 | No | 1.26.0 | Yes |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | f5c25eee6d9efb0ddf9d86ca74c84a3c |
SHA1 | 6026cc511279e9089bc49d8e5a0dd320e5397712 |
SHA256 | 097886ea8a6d2eece7980c46ba0b1002b7f7edf6d68109f91374b002a61e4975 |
SHA512 | 7c8fb7d2c53c5aab60ddf54250e47d41289cd9635005d3a765742298a20eb7f35426955a7388caf48fbc044747e917cfdd93003ce252c42358d947a504f73c29 |
Docker Image | SHA256 Checksum |
---|---|
humio | c27d93d4fd87d253117b1442a7391ea3cd28366a960727482b27b58706211435 |
humio-core | f43d4a206d9386d601798029df5c02d9b1e1b35bb6ae0f666aa6d8d36bcc60e3 |
kafka | 291d25337324ab70a3490f9b06ed2860215f37b84ced203b26cfa5d10877ce42 |
zookeeper | 606ebc2ffcd28bc36d7d091ad649743f22de21b891f7a75c9048343e019372ef |
Humio can now poll and ingest data from the Falcon platform's Falcon Data Replicator (FDR) service. This feature can be used as an alternative to the standalone fdr2humio project. See Ingesting FDR Data into a Repository for more information.
New features and improvements
UI Changes
Reworked the hover message layout and changed the hover information on text (in the search field).
Hover over parameter names and arguments in the search field now includes the default value.
On the time, bar and pie charts you can hold the ALT/OPTION key to display long legend titles.
Added a quick-fix for unknown escape sequences in the search field.
The bar and pie charts now support holding the SHIFT key to display unformatted numeric values.
The first row entry in the statistics table on the repo page is now a table header, and hidden content has been added to the empty table header on the new view page.
The Cluster Nodes table has been redesigned to allow for easier overview and copying the version-number.
The search page now has focus states on the Language Syntax, Event List Widget and buttons.
When using the table visualisation in dark mode, empty table cells are now clearly discernible.
Better accessibility for queries panel. You can now tab to focus individual queries, and open a details panel. From here you can also access all actions in the details panel by tabbing.
Visually hidden clipboard field is now hidden for assistive technologies/keyboard users.
Added a warning for unknown escape sequences in the search field.
Minor UX improvements (i.e. accessibility) on the queries panel.
Added a quick-fix to convert non-ASCII quotes to ASCII quotes in the search field.
Hover information in the search field is shown despite an overlapping warning.
Pop-ups and drop-downs will now close automatically when focus leaves them.
When changing focus inside a dialog with the keyboard, the focus will no longer move outside the dialog while it is open.
GraphQL API
Deprecated the ReadContents view action in favor of ReadEvents. This also means ReadEvents has been undeprecated, as we have slightly changed how we consider read rights and want the action names to match this.
Fixed a bug in the response from calling the installPackageFromZip GraphQL mutation. Previously, the response type exposed a deprecated clientmutationid that could not be selected. Missing form fields are now properly reported in the response.
Configuration
Fixed a bug where TLS_KEYSTORE_TYPE and TLS_TRUSTSTORE_TYPE would only recognize lower-case values.
Added config RDNS_DEFAULT_SERVER for specifying which DNS server is the default for the rdns() query function.
Added config IP_FILTER_RDNS for specifying which IP addresses can be queried using the rdns() query function.
Added new settings for how uploads to bucket storage are validated. In cases where validation with etags is not available, content length can be used instead.
Added config IP_FILTER_RDNS_SERVER for specifying which DNS servers are allowed in the rdns() query function.
Reduced the default value of INGESTQUEUE_COMPRESSION_LEVEL, the ingest queue compression level, from 1 to 0. This reduces time spent compressing before inserting into the ingest queue by roughly 4x, at the expense of a 10-20% increase in the size required in Kafka for the ingest queue topic.
The PERMISSION_MODEL_MODE configuration option has been removed. All GraphQL-related schema has also been removed.
The property inter.broker.protocol.version in kafka.properties now defaults to 2.4 if not specified. Users upgrading Kafka can either set inter.broker.protocol.version manually in kafka.properties, or pass DEFAULT_INTER_BROKER_PROTOCOL_VERSION as an environment variable to Docker when launching the container. Please follow Kafka's upgrade guidelines (https://kafka.apache.org/documentation/#upgrade_3_1_0) when upgrading a Kafka cluster to avoid data loss.
When Kafka topic configuration is managed by Humio (default true), max.message.bytes is set on the topics to the value of the config TOPIC_MAX_MESSAGE_BYTES; the default is 8388608 (8 MB) and the minimum value is 2 MB.
Added the config CORS_ALLOWED_ORIGINS, a comma-separated list of allowed CORS origins; the default allows all origins.
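A hedged sketch of how the rdns and CORS settings above might be combined. Only the variable names come from these notes; the addresses, origins, and value formats shown are assumptions.

```shell
# Illustrative values only -- replace with your own servers and origins.
export RDNS_DEFAULT_SERVER=10.0.0.53          # default server for rdns()
export IP_FILTER_RDNS=10.0.0.0/8              # addresses rdns() may look up
export IP_FILTER_RDNS_SERVER=10.0.0.53        # DNS servers rdns() may use
# Restrict CORS to two origins instead of the permissive default:
export CORS_ALLOWED_ORIGINS=https://app.example.com,https://ops.example.com
```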
Other
Improved the performance of deletes from global.
Published new versions of the Humio Kafka Docker containers for Kafka 3.1.0.
Ensure only a cluster leader that still holds cluster leadership can force digesters to release partition leadership. This could cause spurious reboots in clusters where leadership was under contention.
Allow cluster managers access to settings for personal sandboxes and to block and kill queries in them.
Added tombstoning to uploaded files, which helps with avoiding data loss.
Do not run the Global snapshot consistency check on stateless ingest nodes.
The Kafka client has been upgraded from 2.8.1 to 3.1.0. Version 3.1.0 enables the idempotent producer by default, which implies acks=all. Clusters that set acks to a different value via EXTRA_KAFKA_CONFIGS_FILE should update their config to also specify enable.idempotence=false.
During Digest startup, abort fetching segments from other nodes if the assigned partition set changes while fetching.
Ensure a digester can only acquire or release partition leadership if no other digester has leadership of that partition. This could cause spurious reboots if digester leadership became contended.
During identity provider configuration, it's possible to fetch SAML configuration from an endpoint.
Fixed in this release
UI Changes
Fixed an issue where live queries would sometimes double-count parts of the historic data.
Fixed a bug where the button on the Fields panel would do nothing.
Fixed a bug where the Package Marketplace would redirect to unsupported package versions on older Humio instances.
Previously, a package could be updated with another package with the same name and version, but with different content. This is no longer allowed, and any attempt to do so will be rejected and fail.
Fixed a compatibility issue with FileBeat 8.0.0.
Fixed several issues where users could add invalid query filters via the context button after selecting text in the Event List.
For HTTP Event Collector (HEC) the input field sourcetype is now also stored in @sourcetype.
Fixed an issue where tail() could produce results inconsistent with other query functions when used in a live query.
Fixed an issue with epoch and offsets not always being stripped from segments.
LSP warnings no longer crash queries.
Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.
Fixed an issue where negated functions could lose their negation.
Fixed an issue where top(max) could throw an exception when given values large enough to be represented as positive infinity.
Fixed an issue where queries with tail() would behave in an unexpected manner when an event is focused.
Fixed a bug where providing a bad view/repository name when blocking queries would block the query in all views and repositories.
Fixed a bug in the Sankey chart such that it now updates on updated query results.
Fixed a compatibility issue with LogStash 7.16+ and 8.0.0 when using the Elasticsearch output plugin.
Fixed an issue where percentile() would crash on inputs larger than ~1.76e308.
Warn at startup if CORES > AvailableProcessorCount as seen by the JVM.
Humio Server 1.36.4 LTS (2022-06-13)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.36.4 | LTS | 2022-06-13 | Cloud | 2023-01-31 | No | 1.26.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | ec11e6c40052ff89c3f7a2a2b1fddc95 |
SHA1 | fafb29f0c7fe9638ad0b136ae0e7d1402a750912 |
SHA256 | 3b8496c25ac0704d8e42f9261ec93877c7c9148f1f85d8bf0b5a6eaaf83e1820 |
SHA512 | 205f646e1a700d38c95916a69830ab2b4f14b7839d8eaefa61cbdaf4404759e3ed8f6b9279b0050008cdaacddad6222cb8a91f730753b84a1fc029e4a5207ab4 |
Docker Image | SHA256 Checksum |
---|---|
humio | 5c7189dfae8441322eea8eddbd25962dfc7e68f26eae3c7516e5a46f5b6313f0 |
humio-core | 57ab8aaa9ce90575fd037da08f2be15df18286e74480594f08dd3c5ae81f39fa |
kafka | 4a03a4e0cb9aab9a0c3f626c8b49daa64dafb54eafb8ae67298dd86b45624c33 |
zookeeper | 24ba0f7cdcaca8d841dba4ba2cdc7348e82c6c44c1ffff5537ba13d066c7c180 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.36.4/server-1.36.4.tar.gz
These notes include entries from the following previous releases: 1.36.0, 1.36.1, 1.36.2, 1.36.3
Updated dependencies to fix security vulnerabilities and weaknesses.
New features and improvements
UI Changes
New feature to select text in the search page event list and include/exclude that in the search query.
Improved dark mode toggle button's accessibility.
Disabled the option to create a view if the user does not have the Connect a view permission on any repository. This is more intuitive than getting an empty dropdown of repositories to choose from.
Improved accessibility when choosing a theme.
Allow more dialogs in the UI to be closed with the Esc key.
Added the ability to resize the search page query field by dragging or fitting to the query.
Time Selector is now accessible by keyboard.
Hovering over text within a query now shows the result of interpreting escape characters.
New dialogs for creation of parsers and dashboards.
GraphQL API
Improved the error messages when the GraphQL queries SearchDomain.alert, SearchDomain.action, and SearchDomain.savedQuery do not find the entity with the given ID.
Configuration
Added the config CORS_ALLOWED_ORIGINS, a comma-separated list of allowed CORS origins; the default allows all origins.
Added INITIAL_FEATURE_FLAGS, which lets you enable or disable feature flags on startup. For instance, setting INITIAL_FEATURE_FLAGS=+UserRoles,-UsagePage enables UserRoles and disables UsagePage.
Made ZOOKEEPER_URL optional. When not set, the zookeeper-status-logger job does not run, and the cluster administration page does not display information about a ZooKeeper cluster.
New configuration BUCKET_STORAGE_MULTIPLE_ENDPOINTS and many configurations using STORAGE_2 as prefix. See Bucket Storage.
When using ZOOKEEPER_URL_FOR_NODE_UUID for assignment of node IDs to Humio nodes, if the value of ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid) does not match the contents of the local UUID file, a fresh node uuid is acquired.
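The startup configuration entries above can be sketched as environment settings. The flag names are the ones quoted in these notes; the ZooKeeper address is a placeholder assumption.

```shell
# Enable one feature flag and disable another at boot:
export INITIAL_FEATURE_FLAGS=+UserRoles,-UsagePage
# ZOOKEEPER_URL is optional as of this release; when set, the
# zookeeper-status-logger job runs and the cluster admin page shows
# ZooKeeper information (address below is illustrative):
export ZOOKEEPER_URL=zk1.example.com:2181
```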
Functions
Added a job which will periodically run a query and record how long it took. By default the query is count().
Added a limit parameter to the fieldstats() function. This parameter limits the number of fields to include in the result.
Other
Added option to specify an IP Filter for which addresses hostname verification should not be made.
Added granular IP Filter support for shared dashboards (BETA - API only).
Added analytics on query language feature use to the audit log under the field queryParserMetrics.
Allow the query scheduler to enqueue segments and aux files for download from bucket storage more regularly. This should ensure that queries fetching small aux files can more reliably keep the download job busy.
Removed caching of API calls to prevent caching of potentially sensitive data.
Added warning logs when errors are rendered to browser during OAuth flows.
Added exceptions to the Humio logs from AlertJob and ScheduledSearchJob.
Added the ability to override the max auto shard count for a specific repository.
Improved the default permissions section on the group page by leaving its view expanded once the user cancels an update.
Allow the same view name across organizations.
Improved caching of UI static assets.
Improved the error message when an ingest request times out.
Added a job that scans segments which are waiting to be archived; this value is recorded in the metric s3-archiving-latency-max.
Improved Humio's detection of Kafka resets. The Kafka cluster id is now loaded once on boot; if it changes after that, the node will crash.
Improved usability of the groups page.
Fixed in this release
Security
Updated JavaScript dependencies to fix vulnerabilities related to CVE-2021-22573.
Summary
Use latest version of Java 1.13 on Docker image.
Use latest version of Alpine on Docker image.
Read the hashfilter in chunks to avoid huge off-heap buffers.
Updated Jackson dependencies to fix a vulnerability.
Performance improvements of IngestPartitionCoordinator.
Updated JavaScript dependencies to fix vulnerabilities.
Improved the performance of deletes from global.
Improved off-heap memory handling. Humio now typically uses only 1 GB on systems with 32 vCPUs, down from typically 16 GB. This leaves more memory for other processes and page cache for data.
Downgrade to Java 1.13 on Docker image to fix rare cases of JVM crashes.
UI Changes
For HTTP Event Collector (HEC) the input field sourcetype is now also stored in @sourcetype.
Removed script-src: unsafe-eval from the content security policy.
Removed a spurious warning log when requesting a non-existent hash file from S3.
The action message templates {events_str} and {query_result_summary} always evaluate to the same string. To reflect this, the UI has been updated so that these templates are combined into the same item in the template overview for Email, Slack and Webhook actions.
Fixed an issue where the SegmentMoverJob could delete the local copy of a segment if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.
The query endpoint API now supports languageVersion for specifying Humio query language versions.
Fixed a compatibility issue with Filebeat 7.16.0.
Make writes to Kafka's chatter topic block in a similar manner as writes to global.
Fixed an issue where top would fail if the sum of the values exceeded 2^63-1. Exceeding sums are now pegged to 2^63-1.
When bootstrapping a new cluster, set the cluster version in global right away. Since nodes will not boot on a snapshot that doesn't specify a cluster version, it is important that this field exists in all snapshots.
Reenable a feature to make Humio delete local copies of bucketed segments, even if they are involved in a query.
Fixed an issue where repeating queries could cause other queries to fail.
Fixed an issue in the Table widget. It will no longer insert 0-values for missing fields in integer columns. Empty fields will be shown consistently, independent of the column data type.
The /hec endpoint no longer responds to OPTIONS requests saying it supports GET requests. It doesn't and never has.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
Make Humio handle missing aux files a little faster when downloading segments from bucket storage.
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed an issue in the Export to file dialog on the search page. It is now possible to export fields with spaces.
The repository/.../query endpoint now returns a status code of 400 (BadRequest) when given an invalid query in some cases where previously it returned 503 (ServiceUnavailable).
Fixed an issue where the Humio query URLs sent by actions would land users on the search page in editing mode for the alert or scheduled search that had triggered. Now, they still land on the search page, but not in editing mode.
Fixed a race condition that could cause digesters to calculate two different offsets during startup when determining where to start consuming, and which partially written segments to discard, which could lead to data loss when partially written segments were replayed from Kafka.
Queries on views no longer restart when the ordering of the view's connections is changed.
Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.
Code completion in the query editor now also works on the right-hand side of :=.
Fixed an issue where MaxMind databases would only update if a license was present at startup and not if it was added later.
Fixed session() such that it works when events arrive out of time order.
Fixed an issue that repeatedly tried to restart live queries from a given user upon the deletion of the user.
Fixed an issue where live queries would sometimes double-count parts of the historic data.
When interacting with the REST API for files, errors now have detailed error messages.
Fixed an issue where, if a custom parser was overriding a built-in parser, then the custom parser could accidentally be overwritten by creating a new parser with the same name.
From the alerts overview and the scheduled searches overview, it is now possible to clear the error status on an alert or a scheduled search.
Errors on alerts are now cleared more granularly. Errors when starting the alert query are cleared as soon as the query is successfully started, errors from polling the query are cleared when the query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.
Reduce noise in the log when the bucket storage upload job attempts to upload a file that is deleted concurrently.
Errors on scheduled searches are now cleared more granularly. Errors when starting a query are cleared as soon as another query is successfully started, errors from polling a query are cleared when a query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.
No longer allow organization- and system-level ingest tokens to ingest into sandbox and system repos.
Reenable a feature to make Humio fetch and check hash files from bucket storage before fetching the segments.
No longer allow requests to /hec to specify organizations by name. We now only accept IDs.
SAML and OIDC only: during signout, Humio background tabs will be redirected to a signout landing page instead of to the login page.
Humio now tries to avoid interrupting threads during shutdown, instead allowing them to finish their work. This should reduce log noise when shutting down.
The AlertJob and ScheduledSearchJob now only log validation errors from running the queries as warnings; previously, some of these were logged as errors.
Fixed an issue where nodes could request partitions from the query partitioning table that were not present.
When starting ingest, Humio checks that the computed starting position in Kafka is below the Kafka end offset. Ensure that the end offset is requested after the starting position is computed, not before. This might prevent a very rare spurious boot failure.
Fixed Humio always reading and discarding an already processed message from the ingest queue on boot.
Fixed a number of instability issues in the query scheduler. The scheduler should now more reliably ensure that each query either completes, or is cancelled.
Bumped the Humio Docker containers to Java 17. If you manually set any --add-opens flags in your JVM config, you should remove them. The container should set the right flags automatically.
Fixed an issue where the digest coordinator could consider a host to be alive if the coordinator hadn't seen any timestamps from that host.
When creating the ingest and chatter topics, reduce the desired max.message.bytes to what the Kafka cluster allows, if that is lower than our desired values.
Queries
Query partition tables updates are now rejected if written by a node that is no longer the cluster leader.
Other
Fixed a race condition between nodes creating the merge result for the same target segment, and also transferring it among the nodes concurrently. If a query read the file during that race condition, an in-memory cache of the file header might hold contents that did not match the local file, resulting in "Broken segment" warnings in queries.
Fixed an ingest bug where, under some circumstances, the order of events in a batch would be reversed.
Humio Server 1.36.3 LTS (2022-04-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.36.3 | LTS | 2022-04-27 | Cloud | 2023-01-31 | No | 1.26.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 868ce151355a55e1793513eee4af54e2 |
SHA1 | a3e57e2fdbadcbfde4625d46084f25d84c5c2514 |
SHA256 | 59792217be1bf5d12ba47e1049df45778c6cefcabbc57d6f7126b5bc98f093c0 |
SHA512 | a93f931491b8c77050adde3c1aa983d3c949ad6708f8be932cfb901ea6e69a03cdd06a62394627f80e593afaa354f2f30edf596a23300f5c3b16faee80afb09c |
Docker Image | SHA256 Checksum |
---|---|
humio | be443af484a4131b93cdee2e7a5449fcfa8320a344d2f81f7b432dea89365a65 |
humio-core | 1f3a36d2f6a0d97bc9d4462072e3fafa2355c5bc8352e1bc47a526332c1ec01c |
kafka | ef6819d75f850b26a886b67e4e80f2d623eab268c4d2b4f85f7f47aa8daf9ecb |
zookeeper | a0edb279463029310c1f43555045297f54882082a70753916394a38b56c3e5be |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.36.3/server-1.36.3.tar.gz
These notes include entries from the following previous releases: 1.36.0, 1.36.1, 1.36.2
Updated dependencies to fix security vulnerabilities and weaknesses.
New features and improvements
UI Changes
New feature to select text in the search page event list and include/exclude that in the search query.
Improved dark mode toggle button's accessibility.
Disabled the option to create a view if the user does not have the Connect a view permission on any repository. This is more intuitive than getting an empty dropdown of repositories to choose from.
Improved accessibility when choosing a theme.
Allow more dialogs in the UI to be closed with the Esc key.
Added the ability to resize the search page query field by dragging or fitting to the query.
Time Selector is now accessible by keyboard.
Hovering over text within a query now shows the result of interpreting escape characters.
New dialogs for creation of parsers and dashboards.
GraphQL API
Improved the error messages when the GraphQL queries SearchDomain.alert, SearchDomain.action, and SearchDomain.savedQuery do not find the entity with the given ID.
Configuration
Added the config CORS_ALLOWED_ORIGINS, a comma-separated list of allowed CORS origins; the default allows all origins.
Added INITIAL_FEATURE_FLAGS, which lets you enable or disable feature flags on startup. For instance, setting INITIAL_FEATURE_FLAGS=+UserRoles,-UsagePage enables UserRoles and disables UsagePage.
Made ZOOKEEPER_URL optional. When not set, the zookeeper-status-logger job does not run, and the cluster administration page does not display information about a ZooKeeper cluster.
New configuration BUCKET_STORAGE_MULTIPLE_ENDPOINTS and many configurations using STORAGE_2 as prefix. See Bucket Storage.
When using ZOOKEEPER_URL_FOR_NODE_UUID for assignment of node IDs to Humio nodes, if the value of ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid) does not match the contents of the local UUID file, a fresh node uuid is acquired.
Functions
Added a job which will periodically run a query and record how long it took. By default the query is count().
Added a limit parameter to the fieldstats() function. This parameter limits the number of fields to include in the result.
Other
Added option to specify an IP Filter for which addresses hostname verification should not be made.
Added granular IP Filter support for shared dashboards (BETA - API only).
Added analytics on query language feature use to the audit log under the field queryParserMetrics.
Allow the query scheduler to enqueue segments and aux files for download from bucket storage more regularly. This should ensure that queries fetching small aux files can more reliably keep the download job busy.
Removed caching of API calls to prevent caching of potentially sensitive data.
Added warning logs when errors are rendered to browser during OAuth flows.
Added exceptions to the Humio logs from AlertJob and ScheduledSearchJob.
Added the ability to override the max auto shard count for a specific repository.
Improved the default permissions section on the group page by leaving its view expanded once the user cancels an update.
Allow the same view name across organizations.
Improved caching of UI static assets.
Improved the error message when an ingest request times out.
Added a job that scans segments which are waiting to be archived; the latency is recorded in the metric `s3-archiving-latency-max`.
Improved Humio's detection of Kafka resets. We now load the Kafka cluster ID once on boot. If it changes after that, the node will crash.
Improved usability of the groups page.
Fixed in this release
Summary
Use the latest version of Java 13 on the Docker image.
Use the latest version of Alpine on the Docker image.
Read the hashfilter in chunks to avoid huge off-heap buffers.
Updated Jackson dependencies to fix a vulnerability.
Performance improvements to IngestPartitionCoordinator.
Updated JavaScript dependencies to fix vulnerabilities.
Improve the performance of deletes from global.
Improved off-heap memory handling. Humio now typically uses only 1 GB on systems with 32 vCPUs, down from typically 16 GB. This leaves more memory for other processes and page cache for data.
Downgrade to Java 13 on the Docker image to fix rare cases of JVM crashes.
UI Changes
For HTTP Event Collector (HEC), the input field `sourcetype` is now also stored in `@sourcetype`.
Remove `script-src: unsafe-eval` from the content security policy.
Removed a spurious warning log when requesting a non-existent `hash` file from S3.
The action message templates `{events_str}` and `{query_result_summary}` always evaluate to the same string. To reflect this, the UI has been updated so that these templates are combined into the same item in the template overview for Email, Slack and Webhook actions.
Fixed an issue where the `SegmentMoverJob` could delete the local copy of a segment if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.
The query endpoint API now supports `languageVersion` for specifying Humio query language versions.
Fixed a compatibility issue with Filebeat 7.16.0.
Make writes to Kafka's chatter topic block in a similar manner as writes to global.
Fixed an issue where `top` would fail if the sum of the values exceeded 2^63-1. Exceeding sums are now pegged to 2^63-1.
When bootstrapping a new cluster, set the cluster version in global right away. Since nodes will not boot on a snapshot that doesn't specify a cluster version, it is important that this field exists in all snapshots.
Reenable a feature to make Humio delete local copies of bucketed segments, even if they are involved in a query.
Fixed an issue where repeating queries could cause other queries to fail.
Fixed an issue in the `Table` widget. It will no longer insert 0-values for missing fields in integer columns. Empty fields will be shown consistently, independent of the column data type.
The `/hec` endpoint no longer responds to `OPTIONS` requests saying it supports `GET` requests. It doesn't and never has.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
Make Humio handle missing `aux` files a little faster when downloading segments from bucket storage.
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed an issue in the Export to file dialog on the search page. It is now possible to export fields with spaces.
The `repository/.../query` endpoint now returns a status code of 400 (BadRequest) when given an invalid query in some cases where it previously returned 503 (ServiceUnavailable).
Fixed an issue where the Humio query URLs sent by actions would land users on the search page in editing mode for the alert or scheduled search that had triggered. Now, they still land on the search page, but not in editing mode.
Fixed a race condition that could cause digesters to calculate two different offsets during startup when determining where to start consuming, and which partially written segments to discard, which could lead to data loss when partially written segments were replayed from Kafka.
Queries on views no longer restart when the ordering of the view's connections is changed.
Fixed an issue where queries of the form `#someTagField != someValue ...` would sometimes produce incorrect results.
Code completion in the query editor now also works on the right-hand side of `:=`.
Fixed an issue where MaxMind databases would only update if a license was present at startup and not if it was added later.
Fixed `session()` such that it works when events arrive out of time order.
Fixed an issue that repeatedly tried to restart live queries from a given user upon the deletion of the user.
Fixed an issue where live queries would sometimes double-count parts of the historic data.
When interacting with the REST API for files, errors now have detailed error messages.
Fixed an issue where, if a custom parser was overriding a built-in parser, then the custom parser could accidentally be overwritten by creating a new parser with the same name.
From the alerts overview and the scheduled searches overview, it is now possible to clear the error status on an alert or a scheduled search.
Errors on alerts are now cleared more granularly. Errors when starting the alert query are cleared as soon as the query is successfully started, errors from polling the query are cleared when the query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.
Reduce noise in the log when the bucket storage upload job attempts to upload a file that is deleted concurrently.
Errors on scheduled searches are now cleared more granularly. Errors when starting a query are cleared as soon as another query is successfully started, errors from polling a query are cleared when a query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.
No longer allow organization- and system-level ingest tokens to ingest into sandbox and system repos.
Reenable a feature to make Humio fetch and check `hash` files from bucket storage before fetching the segments.
No longer allow requests to `/hec` to specify organizations by name. We now only accept IDs.
SAML and OIDC only - During signout, Humio background tabs will be redirected to a signout landing page instead of to the login page.
Humio now tries to avoid interrupting threads during shutdown, instead allowing them to finish their work. This should reduce log noise when shutting down.
The `AlertJob` and `ScheduledSearchJob` now only log validation errors from running the queries as warnings; previously, some of these were logged as errors.
Fixed an issue where nodes could request partitions from the query partitioning table that were not present.
When starting ingest, Humio checks that the computed starting position in Kafka is below the Kafka end offset. Ensure that the end offset is requested after the starting position is computed, not before. This might prevent a very rare spurious boot failure.
Fixed Humio always reading and discarding an already processed message from the ingest queue on boot.
Fixed a number of instability issues in the query scheduler. The scheduler should now more reliably ensure that each query either completes, or is cancelled.
Bumped the Humio Docker containers to Java 17. If you manually set any `--add-opens` flags in your JVM config, you should remove them. The container should set the right flags automatically.
Fixed an issue where the digest coordinator could consider a host to be alive if the coordinator hadn't seen any timestamps from that host.
When creating the ingest and chatter topics, reduce the desired `max.message.bytes` to what the Kafka cluster allows, if that is lower than our desired values.
Queries
Query partition table updates are now rejected if written by a node that is no longer the cluster leader.
Other
Fixed a race condition between nodes creating the merge result for the same target segment, and also transferring it among the nodes concurrently. If a query read the file during that race condition, an in-memory cache of the file header might hold contents that did not match the local file, resulting in "Broken segment" warnings in queries.
Fixed an ingest bug where, under some circumstances, the order of events in a batch would be reversed.
Humio Server 1.36.2 LTS (2022-03-01)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.36.2 | LTS | 2022-03-01 | Cloud | 2023-01-31 | No | 1.26.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 4949fefd4012efb53b92595c364a4c7a |
SHA1 | 5350c5df3d0d43a5c08c2766003b28c959c912b1 |
SHA256 | 2fac7cb6e1216790893ec0759f697736a9ca2605313bcef53b7af54ee72257d5 |
SHA512 | 31fcdae45f52a263d5340b961a6e0feb98ea1fbb4ced0388216add576db9747995150112dee88d9ec3ea49cfc1367da1ca57bff13399d8d665fe9cdb80e9a1d8 |
Docker Image | SHA256 Checksum |
---|---|
humio | 4a80cd68778a6d5ccd734982c46e0355f0acf5c68ad564dd67cdb6e949750970 |
humio-core | 7a3e262871585d8a43df8c4c4dbcf706327cade2f683ffaa9a543afccc3e8687 |
kafka | 6e362a4827f9a41d37b19bdfa63f3cad7f77d2786f6b0366e410474de99113ce |
zookeeper | deac05cfaa4762eaf079afa3063cf9271eff7080157d0e785a4c345d280f5d24 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.36.2/server-1.36.2.tar.gz
These notes include entries from the following previous releases: 1.36.0, 1.36.1
Performance and stability improvements.
New features and improvements
UI Changes
New feature to select text in the search page event list and include/exclude that in the search query.
Improved dark mode toggle button's accessibility.
Disable the option to create a view if the user does not have the Connect a view permission on any repository. This is more intuitive than getting an empty dropdown of repositories to choose from.
Improved accessibility when choosing a theme.
Allow more dialogs in the UI to be closed with the `Esc` key.
Added ability to resize the search page query field by dragging or fitting to the query.
Time Selector is now accessible by keyboard.
Hovering over text within a query now shows the result of interpreting escape characters.
New dialogs for creation of parsers and dashboards.
GraphQL API
Improved the error messages when the GraphQL queries SearchDomain.alert, SearchDomain.action, and SearchDomain.savedQuery do not find the entity with the given ID.
Configuration
Added the config `CORS_ALLOWED_ORIGINS`, a comma-separated list of CORS allowed origins; the default allows all origins.
Added `INITIAL_FEATURE_FLAGS`, which lets you enable or disable feature flags on startup. For instance, setting `INITIAL_FEATURE_FLAGS=+UserRoles,-UsagePage` enables `UserRoles` and disables `UsagePage`.
Make `ZOOKEEPER_URL` optional. When not set, the zookeeper-status-logger job does not run, and the cluster administration page does not display information about a ZooKeeper cluster.
New configuration `BUCKET_STORAGE_MULTIPLE_ENDPOINTS` and many configurations using `STORAGE_2` as prefix. See Bucket Storage.
When using `ZOOKEEPER_URL_FOR_NODE_UUID` for assignment of node IDs to Humio nodes, and the value of `ZOOKEEPER_PREFIX_FOR_NODE_UUID` (default `/humio_autouuid`) does not match the contents of the local `UUID` file, a fresh node `uuid` is acquired.
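As a brief sketch of how these settings might be supplied together (the variable names and value syntax are taken from the notes above; the origin values are invented for illustration):

```shell
# Illustrative values only; the variable names come from the notes above.

# Restrict CORS to specific origins (the default allows all origins).
export CORS_ALLOWED_ORIGINS="https://app.example.com,https://ops.example.com"

# Enable the UserRoles feature flag and disable UsagePage at startup.
export INITIAL_FEATURE_FLAGS="+UserRoles,-UsagePage"

# ZOOKEEPER_URL may now be omitted entirely; without it, the
# zookeeper-status-logger job does not run.
```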
Functions
Added a job which will periodically run a query and record how long it took. By default the query is `count()`.
Added a limit parameter to the `fieldstats()` function. This parameter limits the number of fields to include in the result.
Other
Added option to specify an IP Filter for which addresses hostname verification should not be made.
Added granular IP Filter support for shared dashboards (BETA - API only).
Added analytics on query language feature use to the audit log under the field `queryParserMetrics`.
Allow the query scheduler to enqueue segments and `aux` files for download from bucket storage more regularly. This should ensure that queries fetching small `aux` files can more reliably keep the download job busy.
Remove caching of API calls to prevent caching of potentially sensitive data.
Added warning logs when errors are rendered to browser during OAuth flows.
Added exceptions to the Humio logs from `AlertJob` and `ScheduledSearchJob`.
Added the ability to override the max auto shard count for a specific repository.
Improved the default permissions view on the group page by leaving it expanded once the user cancels an update.
Allow the same view name across organizations.
Improved caching of UI static assets.
Improved the error message when an ingest request times out.
Added a job that scans segments which are waiting to be archived; the latency is recorded in the metric `s3-archiving-latency-max`.
Improved Humio's detection of Kafka resets. We now load the Kafka cluster ID once on boot. If it changes after that, the node will crash.
Improved usability of the groups page.
Fixed in this release
Summary
Read the hashfilter in chunks to avoid huge off-heap buffers.
Performance improvements to IngestPartitionCoordinator.
Improve the performance of deletes from global.
Improved off-heap memory handling. Humio now typically uses only 1 GB on systems with 32 vCPUs, down from typically 16 GB. This leaves more memory for other processes and page cache for data.
Downgrade to Java 13 on the Docker image to fix rare cases of JVM crashes.
UI Changes
For HTTP Event Collector (HEC), the input field `sourcetype` is now also stored in `@sourcetype`.
Remove `script-src: unsafe-eval` from the content security policy.
Removed a spurious warning log when requesting a non-existent `hash` file from S3.
The action message templates `{events_str}` and `{query_result_summary}` always evaluate to the same string. To reflect this, the UI has been updated so that these templates are combined into the same item in the template overview for Email, Slack and Webhook actions.
Fixed an issue where the `SegmentMoverJob` could delete the local copy of a segment if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.
The query endpoint API now supports `languageVersion` for specifying Humio query language versions.
Fixed a compatibility issue with Filebeat 7.16.0.
Make writes to Kafka's chatter topic block in a similar manner as writes to global.
Fixed an issue where `top` would fail if the sum of the values exceeded 2^63-1. Exceeding sums are now pegged to 2^63-1.
When bootstrapping a new cluster, set the cluster version in global right away. Since nodes will not boot on a snapshot that doesn't specify a cluster version, it is important that this field exists in all snapshots.
Reenable a feature to make Humio delete local copies of bucketed segments, even if they are involved in a query.
Fixed an issue where repeating queries could cause other queries to fail.
Fixed an issue in the `Table` widget. It will no longer insert 0-values for missing fields in integer columns. Empty fields will be shown consistently, independent of the column data type.
The `/hec` endpoint no longer responds to `OPTIONS` requests saying it supports `GET` requests. It doesn't and never has.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
Make Humio handle missing `aux` files a little faster when downloading segments from bucket storage.
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed an issue in the Export to file dialog on the search page. It is now possible to export fields with spaces.
The `repository/.../query` endpoint now returns a status code of 400 (BadRequest) when given an invalid query in some cases where it previously returned 503 (ServiceUnavailable).
Fixed an issue where the Humio query URLs sent by actions would land users on the search page in editing mode for the alert or scheduled search that had triggered. Now, they still land on the search page, but not in editing mode.
Fixed a race condition that could cause digesters to calculate two different offsets during startup when determining where to start consuming, and which partially written segments to discard, which could lead to data loss when partially written segments were replayed from Kafka.
Queries on views no longer restart when the ordering of the view's connections is changed.
Fixed an issue where queries of the form `#someTagField != someValue ...` would sometimes produce incorrect results.
Code completion in the query editor now also works on the right-hand side of `:=`.
Fixed an issue where MaxMind databases would only update if a license was present at startup and not if it was added later.
Fixed `session()` such that it works when events arrive out of time order.
Fixed an issue that repeatedly tried to restart live queries from a given user upon the deletion of the user.
Fixed an issue where live queries would sometimes double-count parts of the historic data.
When interacting with the REST API for files, errors now have detailed error messages.
Fixed an issue where, if a custom parser was overriding a built-in parser, then the custom parser could accidentally be overwritten by creating a new parser with the same name.
From the alerts overview and the scheduled searches overview, it is now possible to clear the error status on an alert or a scheduled search.
Errors on alerts are now cleared more granularly. Errors when starting the alert query are cleared as soon as the query is successfully started, errors from polling the query are cleared when the query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.
Reduce noise in the log when the bucket storage upload job attempts to upload a file that is deleted concurrently.
Errors on scheduled searches are now cleared more granularly. Errors when starting a query are cleared as soon as another query is successfully started, errors from polling a query are cleared when a query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.
No longer allow organization- and system-level ingest tokens to ingest into sandbox and system repos.
Reenable a feature to make Humio fetch and check `hash` files from bucket storage before fetching the segments.
No longer allow requests to `/hec` to specify organizations by name. We now only accept IDs.
SAML and OIDC only - During signout, Humio background tabs will be redirected to a signout landing page instead of to the login page.
Humio now tries to avoid interrupting threads during shutdown, instead allowing them to finish their work. This should reduce log noise when shutting down.
The `AlertJob` and `ScheduledSearchJob` now only log validation errors from running the queries as warnings; previously, some of these were logged as errors.
Fixed an issue where nodes could request partitions from the query partitioning table that were not present.
When starting ingest, Humio checks that the computed starting position in Kafka is below the Kafka end offset. Ensure that the end offset is requested after the starting position is computed, not before. This might prevent a very rare spurious boot failure.
Fixed Humio always reading and discarding an already processed message from the ingest queue on boot.
Fixed a number of instability issues in the query scheduler. The scheduler should now more reliably ensure that each query either completes, or is cancelled.
Bumped the Humio Docker containers to Java 17. If you manually set any `--add-opens` flags in your JVM config, you should remove them. The container should set the right flags automatically.
Fixed an issue where the digest coordinator could consider a host to be alive if the coordinator hadn't seen any timestamps from that host.
When creating the ingest and chatter topics, reduce the desired `max.message.bytes` to what the Kafka cluster allows, if that is lower than our desired values.
Queries
Query partition table updates are now rejected if written by a node that is no longer the cluster leader.
Other
Fixed a race condition between nodes creating the merge result for the same target segment, and also transferring it among the nodes concurrently. If a query read the file during that race condition, an in-memory cache of the file header might hold contents that did not match the local file, resulting in "Broken segment" warnings in queries.
Humio Server 1.36.1 LTS (2022-02-14)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.36.1 | LTS | 2022-02-14 | Cloud | 2023-01-31 | No | 1.26.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 0aee460ed84a4acda917ce6cd61c433d |
SHA1 | 9ea746229c3f1f0f780d356b7ce4b6f6e1bbb78d |
SHA256 | 56a3a2e7ba88e1f5a019184a0d68e83f25b319d54e3aaee3eba4a37f0ef8950c |
SHA512 | aa3117230e866ae621c3f72e1517a9629f45520ea2fa00b6dc8c2ad3a6a1589ffbde36f38b3b1ec0ad927f89da4a9df7f3e6b59f1e844796553b5675608304e9 |
Docker Image | SHA256 Checksum |
---|---|
humio | 2671bacddfe88f34eaba196332b08659e79eab40c2ce26a0e6c2f7e58a78a213 |
humio-core | ecfa5dd303d12725f754b4afabc44196ba37df739f04c30432401eb3149874ee |
kafka | 0d50f27e33bf61f8b76f482fa7b2c4237227af7e429b1080e65daa877c65b457 |
zookeeper | 62a511127a0eaa4ba0f5a29c421063d4c6822371ba6ee16e96ae1b930ca0e7ce |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.36.1/server-1.36.1.tar.gz
These notes include entries from the following previous releases: 1.36.0
Performance and stability improvements.
New features and improvements
UI Changes
New feature to select text in the search page event list and include/exclude that in the search query.
Improved dark mode toggle button's accessibility.
Disable the option to create a view if the user does not have the Connect a view permission on any repository. This is more intuitive than getting an empty dropdown of repositories to choose from.
Improved accessibility when choosing a theme.
Allow more dialogs in the UI to be closed with the `Esc` key.
Added ability to resize the search page query field by dragging or fitting to the query.
Time Selector is now accessible by keyboard.
Hovering over text within a query now shows the result of interpreting escape characters.
New dialogs for creation of parsers and dashboards.
GraphQL API
Improved the error messages when the GraphQL queries SearchDomain.alert, SearchDomain.action, and SearchDomain.savedQuery do not find the entity with the given ID.
Configuration
Added the config `CORS_ALLOWED_ORIGINS`, a comma-separated list of CORS allowed origins; the default allows all origins.
Added `INITIAL_FEATURE_FLAGS`, which lets you enable or disable feature flags on startup. For instance, setting `INITIAL_FEATURE_FLAGS=+UserRoles,-UsagePage` enables `UserRoles` and disables `UsagePage`.
Make `ZOOKEEPER_URL` optional. When not set, the zookeeper-status-logger job does not run, and the cluster administration page does not display information about a ZooKeeper cluster.
New configuration `BUCKET_STORAGE_MULTIPLE_ENDPOINTS` and many configurations using `STORAGE_2` as prefix. See Bucket Storage.
When using `ZOOKEEPER_URL_FOR_NODE_UUID` for assignment of node IDs to Humio nodes, and the value of `ZOOKEEPER_PREFIX_FOR_NODE_UUID` (default `/humio_autouuid`) does not match the contents of the local `UUID` file, a fresh node `uuid` is acquired.
Functions
Added a job which will periodically run a query and record how long it took. By default the query is `count()`.
Added a limit parameter to the `fieldstats()` function. This parameter limits the number of fields to include in the result.
Other
Added option to specify an IP Filter for which addresses hostname verification should not be made.
Added granular IP Filter support for shared dashboards (BETA - API only).
Added analytics on query language feature use to the audit log under the field `queryParserMetrics`.
Allow the query scheduler to enqueue segments and `aux` files for download from bucket storage more regularly. This should ensure that queries fetching small `aux` files can more reliably keep the download job busy.
Remove caching of API calls to prevent caching of potentially sensitive data.
Added warning logs when errors are rendered to browser during OAuth flows.
Added exceptions to the Humio logs from `AlertJob` and `ScheduledSearchJob`.
Added the ability to override the max auto shard count for a specific repository.
Improved the default permissions view on the group page by leaving it expanded once the user cancels an update.
Allow the same view name across organizations.
Improved caching of UI static assets.
Improved the error message when an ingest request times out.
Added a job that scans segments which are waiting to be archived; the latency is recorded in the metric `s3-archiving-latency-max`.
Improved Humio's detection of Kafka resets. We now load the Kafka cluster ID once on boot. If it changes after that, the node will crash.
Improved usability of the groups page.
Fixed in this release
Summary
Read the hashfilter in chunks to avoid huge off-heap buffers.
Performance improvements to IngestPartitionCoordinator.
Improve the performance of deletes from global.
Downgrade to Java 13 on the Docker image to fix rare cases of JVM crashes.
UI Changes
For HTTP Event Collector (HEC), the input field `sourcetype` is now also stored in `@sourcetype`.
Remove `script-src: unsafe-eval` from the content security policy.
Removed a spurious warning log when requesting a non-existent `hash` file from S3.
The action message templates `{events_str}` and `{query_result_summary}` always evaluate to the same string. To reflect this, the UI has been updated so that these templates are combined into the same item in the template overview for Email, Slack and Webhook actions.
Fixed an issue where the `SegmentMoverJob` could delete the local copy of a segment if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.
The query endpoint API now supports `languageVersion` for specifying Humio query language versions.
Fixed a compatibility issue with Filebeat 7.16.0.
Make writes to Kafka's chatter topic block in a similar manner as writes to global.
Fixed an issue where `top` would fail if the sum of the values exceeded 2^63-1. Exceeding sums are now pegged to 2^63-1.
When bootstrapping a new cluster, set the cluster version in global right away. Since nodes will not boot on a snapshot that doesn't specify a cluster version, it is important that this field exists in all snapshots.
Reenable a feature to make Humio delete local copies of bucketed segments, even if they are involved in a query.
Fixed an issue where repeating queries could cause other queries to fail.
Fixed an issue in the `Table` widget. It will no longer insert 0-values for missing fields in integer columns. Empty fields will be shown consistently, independent of the column data type.
The `/hec` endpoint no longer responds to `OPTIONS` requests saying it supports `GET` requests. It doesn't and never has.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
Make Humio handle missing `aux` files a little faster when downloading segments from bucket storage.
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed an issue in the Export to file dialog on the search page. It is now possible to export fields with spaces.
The `repository/.../query` endpoint now returns a status code of 400 (BadRequest) when given an invalid query in some cases where it previously returned 503 (ServiceUnavailable).
Fixed an issue where the Humio query URLs sent by actions would land users on the search page in editing mode for the alert or scheduled search that had triggered. Now, they still land on the search page, but not in editing mode.
Fixed a race condition that could cause digesters to calculate two different offsets during startup when determining where to start consuming, and which partially written segments to discard, which could lead to data loss when partially written segments were replayed from Kafka.
Queries on views no longer restart when the ordering of the view's connections is changed.
Fixed an issue where queries of the form `#someTagField != someValue ...` would sometimes produce incorrect results.
Code completion in the query editor now also works on the right-hand side of `:=`.
Fixed an issue where MaxMind databases would only update if a license was present at startup and not if it was added later.
Fixed `session()` such that it works when events arrive out of time order.
Fixed an issue that repeatedly tried to restart live queries from a given user upon the deletion of the user.
Fixed an issue where live queries would sometimes double-count parts of the historic data.
When interacting with the REST API for files, errors now have detailed error messages.
Fixed an issue where, if a custom parser was overriding a built-in parser, then the custom parser could accidentally be overwritten by creating a new parser with the same name.
From the alerts overview and the scheduled searches overview, it is now possible to clear the error status on an alert or a scheduled search.
Errors on alerts are now cleared more granularly. Errors when starting the alert query are cleared as soon as the query is successfully started, errors from polling the query are cleared when the query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.
Reduce noise in the log when the bucket storage upload job attempts to upload a file that is deleted concurrently.
Errors on scheduled searches are now cleared more granularly. Errors when starting a query are cleared as soon as another query is successfully started, errors from polling a query are cleared when a query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.
No longer allow organization- and system-level ingest tokens to ingest into sandbox and system repos.
Reenable a feature to make Humio fetch and check `hash` files from bucket storage before fetching the segments.
No longer allow requests to `/hec` to specify organizations by name. We now only accept IDs.
SAML and OIDC only - During signout, Humio background tabs will be redirected to a signout landing page instead of to the login page.
Humio now tries to avoid interrupting threads during shutdown, instead allowing them to finish their work. This should reduce log noise when shutting down.
The `AlertJob` and `ScheduledSearchJob` now only log validation errors from running the queries as warnings; previously, some of these were logged as errors.
Fixed an issue where nodes could request partitions from the query partitioning table that were not present.
When starting ingest, Humio checks that the computed starting position in Kafka is below the Kafka end offset. Ensure that the end offset is requested after the starting position is computed, not before. This might prevent a very rare spurious boot failure.
Fixed Humio always reading and discarding an already processed message from the ingest queue on boot.
Fixed a number of instability issues in the query scheduler. The scheduler should now more reliably ensure that each query either completes, or is cancelled.
Bumped the Humio Docker containers to Java 17. If you manually set any `--add-opens` flags in your JVM config, you should remove them. The container should set the right flags automatically.
Fixed an issue where the digest coordinator could consider a host to be alive if the coordinator hadn't seen any timestamps from that host.
When creating the ingest and chatter topics, reduce the desired `max.message.bytes` to what the Kafka cluster allows, if that is lower than our desired values.
Queries
Query partition table updates are now rejected if written by a node that is no longer the cluster leader.
Humio Server 1.36.0 LTS (2022-01-31)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.36.0 | LTS | 2022-01-31 | Cloud | 2023-01-31 | No | 1.26.0 | Yes |
JAR Checksum | Value |
---|---|
MD5 | 87cdf81b183bf1a065596181b873d813 |
SHA1 | 8425487e7e3b566ed2eae42cac4cbb6010eb1825 |
SHA256 | 76af04b53a689411f4048e743e0491cb437b99503630a87cf5aba2eb0281231f |
SHA512 | 9dca89bf4097b1c4949c458b9010fff12edd25248c2f0f2e18835466e852d6c5fa6981c8763125b769c5a6673bd00a1a385bfb22f7241aeeadf9e826ef208c99 |
Docker Image | SHA256 Checksum |
---|---|
humio-core | 2641650964190056ac10ad0225b712c3a01d844bf2c5f517663187d45adf846c |
kafka | 99e3a00c93308aa92a8363c65644748d6ace602c1c6e425dcfc32be12432dee7 |
zookeeper | 45c911346e3b58501e1a1b264c178debd33edd692cd901dd9e87cbcd2f93e60a |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.36.0/server-1.36.0.tar.gz
Beta: Bucket storage support for dual targets
Support for dual targets to allow using one as the preferred
download and the other to trust for durability. One example of
this is to save on cost (on traffic) by using a local bucket
implementation, such as
MinIO, in the local
datacenter as the preferred bucket storage target, while using a
remote Amazon S3
bucket as the trusted bucket for durability. If the local
MinIO bucket is lost (or just
not responding for a while) the Humio cluster still works using
the AWS S3 bucket with no
reconfiguration or restart required. Configuration of the second
bucket is via configuration entries similar to the existing
STORAGE
keys, but using
the prefix STORAGE_2
for
the extra bucket.
When using dual targets, bucket storage backends may need different proxy configurations for each backend. The new configuration `BUCKET_STORAGE_MULTIPLE_ENDPOINTS` (default `false`) controls whether the proxy configuration in the environment is applied to all bucket storage backends. When set to `true`, each bucket preserves the active proxy/endpoint configuration, and a change to those will trigger creation of a fresh internally persisted bucket storage access configuration.
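As a sketch, a dual-target setup along these lines might look like the following environment configuration. The exact key names under the `STORAGE_2` prefix are assumptions modeled on the existing `STORAGE` keys; consult the Bucket Storage documentation for the authoritative list.

```shell
# Preferred (local) bucket: MinIO in the local datacenter.
# Key names below are illustrative assumptions modeled on the existing STORAGE keys.
S3_STORAGE_BUCKET=humio-local
S3_STORAGE_REGION=us-east-1
S3_STORAGE_ENDPOINT_BASE=http://minio.local:9000

# Trusted (remote) bucket for durability: Amazon S3, configured via the STORAGE_2 prefix.
S3_STORAGE_2_BUCKET=humio-durable
S3_STORAGE_2_REGION=eu-west-1

# Apply proxy/endpoint configuration per backend rather than globally.
BUCKET_STORAGE_MULTIPLE_ENDPOINTS=true
```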
New features and improvements
UI Changes
New feature to select text in the search page event list and include/exclude that in the search query.
Improved dark mode toggle button's accessibility.
Disable the option to create a view if the user does not have the Connect a view permission on any repository. This is more intuitive than getting an empty dropdown of repositories to choose from.
Improved accessibility when choosing a theme.
Allow more dialogs in the UI to be closed with the `Esc` key.
Added ability to resize search page query field by dragging or fitting to query.
Time Selector is now accessible by keyboard.
Hovering over text within a query now shows the result of interpreting escape characters.
New dialogs for creation of parsers and dashboards.
GraphQL API
Improved the error messages when the GraphQL queries SearchDomain.alert, SearchDomain.action, and SearchDomain.savedQuery do not find the entity with the given ID.
Configuration
Added the config `CORS_ALLOWED_ORIGINS`, a comma-separated list of allowed CORS origins; the default allows all origins.
Added `INITIAL_FEATURE_FLAGS`, which lets you enable/disable feature flags on startup. For instance, setting `INITIAL_FEATURE_FLAGS=+UserRoles,-UsagePage` enables `UserRoles` and disables `UsagePage`.
Make `ZOOKEEPER_URL` optional. When not set, the zookeeper-status-logger job does not run, and the cluster administration page does not display information about a ZooKeeper cluster.
New configuration `BUCKET_STORAGE_MULTIPLE_ENDPOINTS` and many configurations using `STORAGE_2` as prefix. See Bucket Storage.
When using `ZOOKEEPER_URL_FOR_NODE_UUID` for assignment of node ID to Humio nodes, and the value of `ZOOKEEPER_PREFIX_FOR_NODE_UUID` (default `/humio_autouuid`) does not match the contents of the local `UUID` file, acquire a fresh node `uuid`.
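For example, a deployment that wants to restrict CORS to a couple of known origins (the origin values here are hypothetical) could set:

```shell
# Comma-separated allow-list; when unset, all origins are allowed.
CORS_ALLOWED_ORIGINS=https://app.example.com,https://dashboards.example.com
```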
Functions
Added a job which will periodically run a query and record how long it took. By default the query is `count()`.
Added a `limit` parameter to the `fieldstats()` function. This parameter limits the number of fields to include in the result.
Other
Added an option to specify an IP Filter of addresses for which hostname verification should not be made.
Added granular IP Filter support for shared dashboards (BETA - API only).
Added analytics on query language feature use to the audit-log under the field `queryParserMetrics`.
Allow the query scheduler to enqueue segments and `aux` files for download from bucket storage more regularly. This should ensure that queries fetching small `aux` files can more reliably keep the download job busy.
Remove caching of API calls to prevent caching of potentially sensitive data.
Added warning logs when errors are rendered to browser during OAuth flows.
Added exceptions to the Humio logs from `AlertJob` and `ScheduledSearchJob`.
Added ability to override max auto shard count for a specific repository.
Improved the default permissions view on the group page by leaving it expanded once the user cancels an update.
Allow the same view name across organizations.
Improved caching of UI static assets.
Improved the error message when an ingest request times out.
Added a job that scans segments which are waiting to be archived; this value is recorded in the metric `s3-archiving-latency-max`.
Improved Humio's detection of Kafka resets. We now load the Kafka cluster id once on boot. If it changes after that, the node will crash.
Improved usability of the groups page.
Fixed in this release
UI Changes
For HTTP Event Collector (HEC) the input field `sourcetype` is now also stored in `@sourcetype`.
Remove `script-src: unsafe-eval` from content security policy.
Removed a spurious warning log when requesting a non-existent `hash` file from S3.
The action message templates `{events_str}` and `{query_result_summary}` always evaluate to the same string. To reflect this, the UI has been updated so that these templates are combined into the same item in the template overview for Email, Slack and Webhook actions.
Fixed an issue where the `SegmentMoverJob` could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.
The query endpoint API now supports languageVersion for specifying Humio query language versions.
Fixed a compatibility issue with Filebeat 7.16.0.
Make writes to Kafka's chatter topic block in a similar manner as writes to global.
Fixed an issue where `top` would fail if the sum of the values exceeded 2^63-1. Exceeding sums are now pegged to 2^63-1.
When bootstrapping a new cluster, set the cluster version in global right away. Since nodes will not boot on a snapshot that doesn't specify a cluster version, it is important that this field exists in all snapshots.
Reenable a feature to make Humio delete local copies of bucketed segments, even if they are involved in a query.
Fixed an issue where repeating queries could cause other queries to fail.
Fixed an issue in the Table widget. It will no longer insert 0-values for missing fields in integer columns. Empty fields will be shown consistently, independent of the column data type.
The `/hec` endpoint no longer responds to `OPTIONS` requests saying it supports `GET` requests. It doesn't and never has.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
Make Humio handle missing `aux` files a little faster when downloading segments from bucket storage.
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed an issue in the Export to file dialog on the search page. It is now possible to export fields with spaces.
The `repository/.../query` endpoint now returns a status code of `400 (BadRequest)` when given an invalid query in some cases where previously it returned `503 (ServiceUnavailable)`.
Fixed an issue where the Humio query URLs sent by actions would land users on the search page in editing mode for the alert or scheduled search that had triggered. Now, they still land on the search page, but not in editing mode.
Fixed a race condition that could cause digesters to calculate two different offsets during startup when determining where to start consuming, and which partially written segments to discard, which could lead to data loss when partially written segments were replayed from Kafka.
Queries on views no longer restart when the ordering of the view's connections is changed.
Fixed an issue where queries of the form `#someTagField != someValue ...` would sometimes produce incorrect results.
Code completion in the query editor now also works on the right hand side of `:=`.
Fixed an issue where MaxMind databases would only update if a license was present at startup and not if it was added later.
Fixed `session()` such that it works when events arrive out of time order.
Fixed an issue that repeatedly tried to restart live queries from a given user upon the deletion of the user.
Fixed an issue where live queries would sometimes double-count parts of the historic data.
When interacting with the REST API for files, errors now have detailed error messages.
Fixed an issue where, if a custom parser was overriding a built-in parser, then the custom parser could accidentally be overwritten by creating a new parser with the same name.
From the alerts overview and the scheduled searches overview, it is now possible to clear the error status on an alert or a scheduled search.
Errors on alerts are now cleared more granularly. Errors when starting the alert query are cleared as soon as the query is successfully started, errors from polling the query are cleared when the query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.
Reduce noise in the log when the bucket storage upload job attempts to upload a file that is deleted concurrently.
Errors on scheduled searches are now cleared more granularly. Errors when starting a query are cleared as soon as another query is successfully started, errors from polling a query are cleared when a query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.
No longer allow organization- and system-level ingest tokens to ingest into sandbox and system repos.
Reenable a feature to make Humio fetch and check `hash` files from bucket storage before fetching the segments.
No longer allow requests to `/hec` to specify organizations by name. We now only accept IDs.
SAML and OIDC only - During signout, Humio background tabs will be redirected to a signout landing page instead of to the login page.
Humio now tries to avoid interrupting threads during shutdown, instead allowing them to finish their work. This should reduce log noise when shutting down.
The `AlertJob` and `ScheduledSearchJob` now only log validation errors from running the queries as warnings; previously, some of these were logged as errors.
Fixed an issue where nodes could request partitions from the query partitioning table that were not present.
When starting ingest, Humio checks that the computed starting position in Kafka is below the Kafka end offset. Ensure that the end offset is requested after the starting position is computed, not before. This might prevent a very rare spurious boot failure.
Fixed Humio always reading and discarding an already processed message from the ingest queue on boot.
Fixed a number of instability issues in the query scheduler. The scheduler should now more reliably ensure that each query either completes, or is cancelled.
Bumped the Humio Docker containers to Java 17. If you manually set any `--add-opens` flags in your JVM config, you should remove them. The container should set the right flags automatically.
Fixed an issue where the digest coordinator could consider a host to be alive if the coordinator hadn't seen any timestamps from that host.
When creating the ingest and chatter topics, reduce the desired `max.message.bytes` to what the Kafka cluster allows, if that is lower than our desired values.
Queries
Query partition table updates are now rejected if written by a node that is no longer the cluster leader.
Humio Server 1.35.0 GA (2022-01-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.35.0 | GA | 2022-01-17 | Cloud | 2023-01-31 | No | 1.26.0 | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 83b164ea22cbc8a347ee22a4e49d87fb |
SHA1 | cde673dc3026cf8455e980a7f1ffc17de2a92072 |
SHA256 | a43b1f86fe2f610eadaacfcff50264de578be56959125d24d27e51e1eacc403d |
SHA512 | a0897d2b8acb1ad888c1f0128bc46a9f3a425cc87adc40192ea3089533f247ce0ddff23971b1298d58cdb9f0e93f1bb24ae3392587de3a313fa4bbd89a32747b |
Docker Image | SHA256 Checksum |
---|---|
humio | 2f6d1b42b5d2d519bd0152bf6e28952a8f5d9e711bfc6c8c50fb58d917d033f0 |
humio-core | 56193b2add6ece05561e058ffcd1706989b9633cb1da857b26582df7d7bf210a |
kafka | 149bce1bfa2e9c3e8eb6ff3d8c9416fad058b392134cab5958bf405f2553d264 |
zookeeper | 9393097554e28403372ef38f87eab200292a41f75d8d38a8d102f926e7919eae |
Beta: Bucket storage support for dual targets
Support for dual targets to allow using one as the preferred download target and the other as the trusted target for durability. One example of this is to save on cost (on traffic) by using a local bucket implementation, such as MinIO, in the local datacenter as the preferred bucket storage target, while using a remote Amazon S3 bucket as the trusted bucket for durability. If the local MinIO bucket is lost (or just not responding for a while) the Humio cluster still works using the AWS S3 bucket with no reconfiguration or restart required. Configuration of the second bucket is via configuration entries similar to the existing `STORAGE` keys, but using the prefix `STORAGE_2` for the extra bucket.
When using dual targets, bucket storage backends may need different proxy configurations for each backend. The new configuration `BUCKET_STORAGE_MULTIPLE_ENDPOINTS` (default `false`) controls whether the proxy configuration in the environment is applied to all bucket storage backends. When set to `true`, each bucket preserves the active proxy/endpoint configuration, and a change to those will trigger creation of a fresh internally persisted bucket storage access configuration.
New features and improvements
UI Changes
Added ability to resize search page query field by dragging or fitting to query.
Allow more dialogs in the UI to be closed with the `Esc` key.
New dialogs for creation of parsers and dashboards.
Improved accessibility when choosing a theme.
New feature to select text in the search page event list and include/exclude that in the search query.
Time Selector is now accessible by keyboard.
Improved dark mode toggle button's accessibility.
Disable the option to create a view if the user does not have the Connect a view permission on any repository. This is more intuitive than getting an empty dropdown of repositories to choose from.
Hovering over text within a query now shows the result of interpreting escape characters.
GraphQL API
Improved the error messages when the GraphQL queries SearchDomain.alert, SearchDomain.action, and SearchDomain.savedQuery do not find the entity with the given ID.
Configuration
Make `ZOOKEEPER_URL` optional. When not set, the zookeeper-status-logger job does not run, and the cluster administration page does not display information about a ZooKeeper cluster.
Added `INITIAL_FEATURE_FLAGS`, which lets you enable/disable feature flags on startup. For instance, setting `INITIAL_FEATURE_FLAGS=+UserRoles,-UsagePage` enables `UserRoles` and disables `UsagePage`.
When using `ZOOKEEPER_URL_FOR_NODE_UUID` for assignment of node ID to Humio nodes, and the value of `ZOOKEEPER_PREFIX_FOR_NODE_UUID` (default `/humio_autouuid`) does not match the contents of the local `UUID` file, acquire a fresh node `uuid`.
New configuration `BUCKET_STORAGE_MULTIPLE_ENDPOINTS` and many configurations using `STORAGE_2` as prefix. See Bucket Storage.
Reduced the default value of `INGESTQUEUE_COMPRESSION_LEVEL`, the ingest queue compression level, from 1 to 0. This reduces time spent compressing before inserting into the ingest queue by roughly 4x, at the expense of a 10-20% increase in the size required in Kafka for the ingest queue topic.
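Operators who prefer the old trade-off (a smaller ingest queue topic in Kafka at the cost of more CPU spent compressing) can set the previous default explicitly:

```shell
# Restore the pre-1.35.0 ingest queue compression level.
INGESTQUEUE_COMPRESSION_LEVEL=1
```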
Functions
Added a `limit` parameter to the `fieldstats()` function. This parameter limits the number of fields to include in the result.
Other
Allow the same view name across organizations.
Improved usability of the groups page.
Added warning logs when errors are rendered to browser during OAuth flows.
Allow the query scheduler to enqueue segments and `aux` files for download from bucket storage more regularly. This should ensure that queries fetching small `aux` files can more reliably keep the download job busy.
Improved the default permissions view on the group page by leaving it expanded once the user cancels an update.
Improved the error message when an ingest request times out.
Added granular IP Filter support for shared dashboards (BETA - API only).
Added ability to override max auto shard count for a specific repository.
Added exceptions to the Humio logs from `AlertJob` and `ScheduledSearchJob`.
Added a job that scans segments which are waiting to be archived; this value is recorded in the metric `s3-archiving-latency-max`.
Improved Humio's detection of Kafka resets. We now load the Kafka cluster id once on boot. If it changes after that, the node will crash.
Added a job which will periodically run a query and record how long it took. By default the query is `count()`.
Added an option to specify an IP Filter of addresses for which hostname verification should not be made.
Added analytics on query language feature use to the audit-log under the field `queryParserMetrics`.
Fixed in this release
UI Changes
Fixed the `session()` function such that it works when events arrive out of time order.
Fixed an issue in the Export to file dialog on the search page. It is now possible to export fields with spaces.
Fixed a compatibility issue with Filebeat 7.16.0.
Fixed a number of instability issues in the query scheduler. The scheduler should now more reliably ensure that each query either completes, or is cancelled.
Fixed an issue where the digest coordinator could consider a host to be alive if the coordinator hadn't seen any timestamps from that host.
Fixed an issue where live queries would sometimes double-count parts of the historic data.
Fixed an issue where the Humio query URLs sent by actions would land users on the search page in editing mode for the alert or scheduled search that had triggered. Now, they still land on the search page, but not in editing mode.
Remove `script-src: unsafe-eval` from content security policy.
Removed a spurious warning log when requesting a non-existent `hash` file from S3.
Errors on scheduled searches are now cleared more granularly. Errors when starting a query are cleared as soon as another query is successfully started, errors from polling a query are cleared when a query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.
Queries on views no longer restart when the ordering of the view's connections is changed.
When starting ingest, Humio checks that the computed starting position in Kafka is below the Kafka end offset. Ensure that the end offset is requested after the starting position is computed, not before. This might prevent a very rare spurious boot failure.
Errors on alerts are now cleared more granularly. Errors when starting the alert query are cleared as soon as the query is successfully started, errors from polling the query are cleared when the query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.
Fixed an issue where, if a custom parser was overriding a built-in parser, then the custom parser could accidentally be overwritten by creating a new parser with the same name.
The `/hec` endpoint no longer responds to `OPTIONS` requests saying it supports `GET` requests. It doesn't and never has.
Humio now tries to avoid interrupting threads during shutdown, instead allowing them to finish their work. This should reduce log noise when shutting down.
Reduce noise in the log when the bucket storage upload job attempts to upload a file that is deleted concurrently.
The action message templates `{events_str}` and `{query_result_summary}` always evaluate to the same string. To reflect this, the UI has been updated so that these templates are combined into the same item in the template overview for Email, Slack and Webhook actions.
Fixed an issue where nodes could request partitions from the query partitioning table that were not present.
Make writes to Kafka's chatter topic block in a similar manner as writes to global.
Fixed an issue where repeating queries could cause other queries to fail.
From the alerts overview and the scheduled searches overview, it is now possible to clear the error status on an alert or a scheduled search.
Fixed an issue in the Table widget. It will no longer insert 0-values for missing fields in integer columns. Empty fields will be shown consistently, independent of the column data type.
Bumped the Humio Docker containers to Java 17. If you manually set any `--add-opens` flags in your JVM config, you should remove them. The container should set the right flags automatically.
The `AlertJob` and `ScheduledSearchJob` now only log validation errors from running the queries as warnings; previously, some of these were logged as errors.
SAML and OIDC only - During signout, Humio background tabs will be redirected to a signout landing page instead of to the login page.
Fixed an issue that repeatedly tried to restart live queries from a given user upon the deletion of the user.
No longer allow requests to `/hec` to specify organizations by name. We now only accept IDs.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
Fixed a race condition that could cause digesters to calculate two different offsets during startup when determining where to start consuming, and which partially written segments to discard, which could lead to data loss when partially written segments were replayed from Kafka.
When bootstrapping a new cluster, set the cluster version in global right away. Since nodes will not boot on a snapshot that doesn't specify a cluster version, it is important that this field exists in all snapshots.
Fixed Humio always reading and discarding an already processed message from the ingest queue on boot.
For HTTP Event Collector (HEC) the input field `sourcetype` is now also stored in `@sourcetype`.
Fixed an issue where MaxMind databases would only update if a license was present at startup and not if it was added later.
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Reenable a feature to make Humio delete local copies of bucketed segments, even if they are involved in a query.
Code completion in the query editor now also works on the right hand side of `:=`.
No longer allow organization- and system-level ingest tokens to ingest into sandbox and system repos.
The query endpoint API now supports languageVersion for specifying Humio query language versions.
Fixed an issue where the `SegmentMoverJob` could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.
The `repository/.../query` endpoint now returns a status code of `400 (BadRequest)` when given an invalid query in some cases where previously it returned `503 (ServiceUnavailable)`.
Reenable a feature to make Humio fetch and check `hash` files from bucket storage before fetching the segments.
When creating the ingest and chatter topics, reduce the desired `max.message.bytes` to what the Kafka cluster allows, if that is lower than our desired values.
Make Humio handle missing `aux` files a little faster when downloading segments from bucket storage.
Fixed an issue where `top` would fail if the sum of the values exceeded 2^63-1. Exceeding sums are now pegged to 2^63-1.
Queries
Query partition table updates are now rejected if written by a node that is no longer the cluster leader.
Humio Server 1.34.3 LTS (2022-03-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.34.3 | LTS | 2022-03-09 | Cloud | 2022-12-31 | No | 1.26.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 7184f2582bed56c0b29ffb2d39f508a5 |
SHA1 | 9d7beb007f9ec9e7603bcbf164b3006dce0932ef |
SHA256 | ac95c6538dc88b6af31f0c55aaa46740503860ea90ad41d330593402915d885b |
SHA512 | d4738190dfd9b100ac896969e70a541f1d0242fb822f312142636cb5055009f13f0dbca6193f6cb8ce89d36d9f3f8b9ebec5a684570a4f9500bd8ffc3379ee6a |
Docker Image | SHA256 Checksum |
---|---|
humio | 955acd05d628a4da69a546ea58134a786c77349bfc21e10631b3f5b2934140ab |
humio-core | 64547f071a29e5df7eb945f369b28cf6d365b1590f577163fd18fe8b76ce63ce |
kafka | b3318e88aa3aaaae09d88fae2bd37882d1a41738983a1ef783f9f07c30c73101 |
zookeeper | 049489e9fcda7ca0ea11d96e45d8dde8d9a6582e0b9740b6865a952ec13332e5 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.34.3/server-1.34.3.tar.gz
These notes include entries from the following previous releases: 1.34.0, 1.34.1, 1.34.2
Performance improvements of Ingest and internal caching.
New features and improvements
UI Changes
Added autofocus to the first field when opening a dialog using the save as functionality from the Search page.
Added buttons for stopping all queries, streaming queries, and historical queries from inside the query monitor.
Allow resize of columns in the event list by mouse.
Disable actions if permissions are handled externally.
Added maximum width to tabs on the Group page, so they do not keep expanding forever.
Validation error messages are now more precise and have improved formatting.
The overall look of message boxes in Humio has been updated.
Updated the links for Privacy Notice and Terms and Conditions.
Dark mode is officially deemed stable enough to be out of beta.
GraphQL API
The GraphQL field isEventForwardingEnabled on the HumioMetadata type is deprecated, as it is no longer in use internally. If you rely on this, please let us know.
Added three GraphQL mutations for stopping queries: stopAllQueries, stopStreamingQueries, and stopHistoricalQueries.
Added GraphQL mutation clearRecentQueries which a user can run to clear their recent queries in a specific view or repository.
Added a 2-phase migration that allows old user API tokens to continue working, and cleans secrets out of global after a 30-day period.
Changed the old personal user token implementation to be hash-based.
Renamed the deleteEvents related GraphQL mutations and queries to redactEvents. The redactEvents API is intended for redacting sensitive data from a repository, not for bulk deletion of events. We think the new name invites fewer misunderstandings.
Configuration
When checking if the ViewAction.EventForwarding action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if the event forwarding is not enabled on the server.
Functions
Significantly improved performance of the query functions `drop()` and `rename()`.
Added query function `math:arctan2()` to the query language.
Added the `communityId()` function for calculating hashes of network flow tuples according to the Community ID Spec.
The `kvParse()` query function can now parse unquoted empty values using the new parameter separatorPadding to specify if your data has whitespace around the key-value separator (typically `=`). The default is "Unknown", which will leave the functionality of the function unchanged.
Added a `minSpan` parameter to `timeChart()` and `bucket()`, which can be used to specify a minimum span when using a short time interval.
Refactored query functions `join()`, `selfJoin()`, and `selfJoinFilter()` into user-visible and internal implementations.
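As an illustrative sketch (the interval value is hypothetical), the new `minSpan` parameter keeps buckets from becoming too narrow on short searches:

```
// Ensure buckets are at least one minute wide, even for short search intervals.
bucket(minSpan=1m, function=count())
```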
Other
It is now possible to create actions, alerts, scheduled searches, and parsers from YAML template files.
Added new metric: bucket-storage-upload-latency-max. It shows how long the oldest event pending upload to bucket storage has been waiting.
It is now possible to ingest logs into Humio using LogStash v.7.13 and upwards.
Added a precondition that ensures that the number of ingest partitions cannot be reduced.
Added validation and a clearer error message for queries with a time span of 0.
Added metric for the number of currently running streaming queries.
Improved shutdown logic slightly, helping prevent thread pools from getting stuck or logging spurious errors during shutdown.
Made the transfer coordinator display clearer errors instead of an internal server error for multinode clusters.
Added Australian states to the States dropdown.
New metric: ingest-request-delay. Histogram of time ingest requests spent delayed due to exceeding the limit on concurrent processing of ingest (milliseconds).
Improved handling of multiple nodes attempting to create views with the same names at the same time, as might happen when bootstrapping a cluster.
Improved the error reporting when installing, updating or exporting a package fails.
Create, update, and delete of dashboards is now audit logged.
Reword regular expression related error messages.
Added management API to put hosts in maintenance mode.
Improved error messages when an invalid regular expression is used in replace.
Retention based on compressed size will no longer account for segment replication.
Query validation has been improved to include several errors which used to first show up after submitting the search.
Prepopulate email field when invited user is filling in a form with this information.
Node roles can now be assigned/removed at runtime.
Improved partition layout auto-balancing algorithm.
Added support in the humio event collector for organization- and system-wide ingest tokens and the ability to use a parser from a different repo than the one being ingested into.
A compressed segment with a size of 1GB will now always count for retention as 1 GB. Previously, a compressed segment with a size of 1GB might count for more than 1GB when calculating retention, if that segment had more replicas than configured. The effect on the retention policy was that if you had configured retention of .0GB compressed bytes, Humio might retain less than .0GB of compressed data if any of those segments had too many replicas.
Added checksum verification within hash filter files on read.
Query editor: improved code completion of function names.
Minor optimization when using groupBy with a single field.
Added "export as yaml" function to the list pages of parsers, actions and scheduled searches.
Reduce limit on number of datasources for sandbox repositories created when a user is created to .0 by default.
Fixed in this release
Security
Updated dependencies to Jawn for CVE-2022-21653.
Updated dependencies to nanoid for CVE-2021-23566.
Updated dependencies to Netty to fix CVE-2021-43797.
Updated dependencies to node-fetch for CVE-2022-0235.
Updated dependencies to Akka for CVE-2021-42697.
Updated dependencies to follow-redirects for CVE-2022-0155.
Updated dependencies to log4j 2.17.1 to fix CVE-2021-44832 and CVE-2021-45105.
Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-44228 and CVE-2021-45046).
Summary
Performance improvements of Ingest and internal caching.
Updated dependencies to Jackson to fix a weakness.
Fixed an issue with epoch and offsets not always being stripped from segments.
Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.
Fixed an issue where live queries would sometimes double-count parts of the historic data.
Automation and Alerts
Alerts and scheduled searches are no longer run on cloud for organizations with an expired trial license, and on-prem for any expired license.
Fixed an issue where an alert would not be throttled until after its actions had completed, which could make the alert trigger multiple times shortly after each other if an action was slow. Now, the alert is throttled as soon as it triggers.
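The corrected ordering can be sketched in Python (a schematic model of throttle-on-trigger, not Humio's implementation; all names are illustrative):

```python
class ThrottledAlert:
    """Schematic model: record the trigger BEFORE running actions,
    so a slow action cannot let extra triggers slip through."""

    def __init__(self, throttle_seconds: float):
        self.throttle_seconds = throttle_seconds
        self.last_triggered = float("-inf")

    def trigger(self, now: float, run_actions) -> bool:
        if now - self.last_triggered < self.throttle_seconds:
            return False  # still inside the throttle window
        self.last_triggered = now  # throttle immediately on trigger
        run_actions()              # however slow this is, we are already throttled
        return True
```

With a 60-second throttle, a second trigger one second after the first is suppressed even while the first action is still running.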
Alerts and scheduled searches are now enabled per default when created. The check disabling these entities if no actions are attached has been replaced with a warning, which informs a user that even though the entity is enabled, nothing will trigger since no actions are attached.
Functions
Fixed a bug in the validation of the bits parameter of hashMatch() and hashRewrite().
Other
Fixed an issue where the segment merger could mishandle errors during merge.
Fixed an issue where an on-prem trial license would use user count limits from cloud.
Use a fresh (random) name for the tmp folder below the datadir to ensure that it is a proper subdir of the datadir and not a mount point.
Fixed styling issue on the search page where long errors would overflow the screen.
Fixed a bug where only part of the Users page was loading when navigating from the All organizations page.
When an alert query encounters a warning and Humio is not configured to trigger alerts despite warnings (ALERT_DESPITE_WARNINGS=true), the warning text will now be shown as an error message on the alert in the UI.
Fixed an issue where certain problems would highlight the first word in a query.
Addressed an issue causing Humio to sometimes error log an ArrayIndexOutOfBoundsException during shutdown.
Fixed incorrect results when searching through saved queries and recent queries.
Fixed an issue where streaming (exporting) query results in JSON format could include extra "," characters within the output.
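The class of bug fixed above is the classic separator-placement mistake in streamed JSON. A minimal sketch of the correct pattern (illustrative only, not Humio's exporter):

```python
import io
import json

def stream_json_array(events, out) -> None:
    """Write events as one JSON array, emitting ',' only BETWEEN items."""
    out.write("[")
    for i, event in enumerate(events):
        if i:
            out.write(",")  # separator before every item except the first
        out.write(json.dumps(event))
    out.write("]")
```

Writing the separator conditionally before each item (rather than after) avoids both trailing and doubled commas, so the streamed output always parses.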
Fixed a bug where shared lookup files could not be downloaded from the UI.
Fixed a bug with the cache not being populated between restarts on single node clusters.
Fixed an issue where an error message was displayed when adding a group to a repository or view while the user was not the organization owner or root.
Prevent unauthorized analytics requests being sent.
Fixed an issue where error messages would show wrong input.
The field vhost in internal Humio logging is now reserved for denoting the host logging the message. Other uses of vhost now use the field hostId.
Removed error query param from URL when entering Humio.
Fixed an issue that in rare cases would cause login errors.
No longer return the "Query Plan" in responses, but return a hash in the new field hashedQueryOnView instead. The plan could leak information not otherwise visible to the user, such as query prefixes being applied.
Fixed some widgets on dashboards reporting errors while waiting for data to load.
Fixed an issue where the web client could start queries from the beginning of time when scrolling backwards through events in the UI.
Changes to the state of IOC access on organizations are now reflected in the audit log.
Fixed an issue where a scheduled search failed and was retried, if it had multiple actions and at least one action was unknown to Humio. Now, the unknown action is logged, but the scheduled search completes successfully and continues to the next scheduled run.
When a digester fails to start, rather than restarting the JVM as previous releases did, keep retrying to start, assuming the issue is transient, such as data for a single ingest partition being unavailable for a short while. While in this situation, the process reports the ingest latency metric for the affected partitions as the uptime of the JVM process. The idea is to signal that data is not flowing on those partitions, so that a monitored metric can raise an alarm. Lacking a proper latency value in this situation, a growing non-zero metric is better than a metric stuck at zero.
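The fallback behavior described above can be modeled roughly as follows (a sketch under assumed semantics; the function name and signature are illustrative, not Humio's code):

```python
def ingest_latency_seconds(last_event_time, now, process_start):
    """Report ingest latency for a partition.

    When no data is flowing (latency unknown), fall back to process
    uptime so the metric keeps growing and can trip an alarm,
    instead of sitting at zero.
    """
    if last_event_time is None:
        return now - process_start
    return now - last_event_time
```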
Fixed an issue where missing undersized segments in a datasource might cause Humio to repeatedly transfer undersized segments between nodes.
When creating or updating an action, the backend now verifies that the host URL associated with the action is prefixed with either http:// or https://. This affects actions of the types Webhook, OpsGenie, Single-Channel Slack, and VictorOps.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
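The host-URL validation described above amounts to a prefix check, sketched here (illustrative only; the actual backend check may differ, e.g. in case handling):

```python
def has_http_scheme(url: str) -> bool:
    """Accept only host URLs that explicitly start with http:// or https://."""
    return url.lower().startswith(("http://", "https://"))
```

For example, has_http_scheme("https://hooks.example.com/x") passes, while a bare hostname or an ftp:// URL is rejected.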
Fixed an edge case where Humio might create multiple copies of the same datasource when the number of Kafka partitions is changed. The fix ensures only one copy will be created.
Changed field type for zip codes.
Fixed a number of stability issues with the event redaction job.
Fixed an issue where the segment merger would write that the current node had a segment slightly before registering that segment in the local node.
Fixed an issue where clicking on the counters of parsed events on the Parsers page would open an empty search page, except for built-in parsers. Now, it correctly shows the latest parsed events for all parsers (except package parsers).
Changed default package type to "application" on the export package wizard.
Fixed an issue where sort() would cause events to be read in a non-optimal order for the entire query.
Fixed an issue where a dashboard installed with a YAML file could be slightly different than what was specified in the file.
Fixed an issue where a failing event forwarder would be cached indefinitely and could negatively impact Humio performance.
Fixed an issue where comments spanning multiple lines wouldn't be colored correctly.
Browser storage is now cleared when initiating while unauthenticated.
Fixed an issue where OIDC without a discovery endpoint would fail to configure if OIDC_TOKEN_ENDPOINT_AUTH_METHOD was not set.
Remove the ability to create ingest tokens and ingest listeners on system repositories.
When checking if the ViewAction.ChangeS3ArchivingSettings action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a view, as the action only makes sense on repositories.
Fixed an issue with sandbox renaming that could leave a renamed sandbox in a bad state.
When checking if the ViewAction.ChangeRepoConnections action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a repository, as the action only makes sense on views.
Fixed an issue causing Humio running on Java 16+ to return incorrect search results when the input query contains Unicode surrogate pairs (e.g. when searching for an emoji).
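For context on the surrogate-pair case above: characters outside the Basic Multilingual Plane occupy two UTF-16 code units, which is what trips up naive string handling. Illustrated in Python:

```python
s = "\U0001F600"  # an emoji, outside the BMP

assert len(s) == 1                       # one Unicode code point
assert len(s.encode("utf-16-le")) == 4   # but two 16-bit UTF-16 code units
assert len(s.encode("utf-8")) == 4       # and four UTF-8 bytes
```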
Fixed a compatibility issue with Filebeat 7.16.0.
Fixed a bug where invalid UTF-16 characters could not be ingested. They are now converted to \ufffd (the Unicode replacement character).
Fixed an issue where series() failed to serialize its state properly.
When performing jobs triggered via the Redact Events API, Humio could restart queries for unrelated views until the delete job completed. This has been improved, so only views affected by the delete will be impacted.
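The invalid-character conversion mentioned above behaves like Python's "replace" decode error handler, which substitutes U+FFFD for each un-decodable byte:

```python
raw = b"valid \xff\xfe bytes"  # contains two invalid UTF-8 bytes

# Each invalid byte becomes the Unicode replacement character U+FFFD.
text = raw.decode("utf-8", errors="replace")
```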
Crash the node if any of a number of critical threads die. This should help prevent zombie nodes.
Temporary fix for an issue where a live query does not have bucket() or timeChart() as its first aggregator, but uses one of them later in the query as a second aggregator. As a temporary fix, such queries will fail; a more proper fix will follow in later releases.
Changes to the state of backend feature flags are now reflected in the audit log.
Fixed an issue where some regexes could not be used.
Fixed an issue in the interactive tutorial.
Support Java 17.
Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.
Fixed an issue where a digest node could be unable to rejoin the cluster after being shut down if all other digest nodes were also down at the time.
Fixed a bug where query coordination partitions would not get updated.
Removed a spurious warning log when requesting a non-existent hash file from S3.
Include view+parser-name in thread dumps when time is spent inside a parser.
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed a bug where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there were too many datasources in the repo. This led to a crash loop when the affected node was restarted.
Fixed an issue where release notes would not close when a release is open.
Humio Server 1.34.2 LTS (2022-02-01)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.34.2 | LTS | 2022-02-01 | Cloud | 2022-12-31 | No | 1.26.0 | No |
JAR Checksum | Value |
---|---|
MD5 | ebca829a100a92c4b96f05f421c9e5d2 |
SHA1 | f9d21990e4ae6f64eac6bd6aa6d65899583d1311 |
SHA256 | 2141bd8fb2537f0242b30998938d780bd53cedbf2b04622b956923f677a56350 |
SHA512 | 02d33bdfb65bf5ab949da8ae0a1bb9f04a509564c8d4a74cc08f5033040987152abd42743b0720290304f4bd5c4e50550327117efb6eda01d3b0ed8ef357938e |
Docker Image | SHA256 Checksum |
---|---|
humio | 426715cb9faedce379d6a6add003cfcaa3fe9ce5b5ce706fab202b89eb5c8c45 |
humio-core | a927eb85f709561b659e3a566a5b0fd3e6b3be84b19db2832d70722897265cbb |
kafka | f181a5a1668d6c3f1ebe5e5216a00b393ee20eb0f1845d1a93309fdbfac8406d |
zookeeper | 361063a0a2eb0361209781f7dff9d5f0067b33806f14882fc4a0a0ff9f434b66 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.34.2/server-1.34.2.tar.gz
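To verify a download against the checksum tables above, compute the digest locally and compare; a small helper (sketch):

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 and return the hex digest,
    for comparison against the published checksum table."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()
```

Compare file_sha256("server-1.34.2.tar.gz") against the SHA256 row above before unpacking.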
These notes include entries from the following previous releases: 1.34.0, 1.34.1
Updated dependencies with security and weakness fixes.
New features and improvements
UI Changes
Added autofocus to the first field when opening a dialog using the save as functionality from the Search page.
Added buttons for stopping all queries, streaming queries, and historical queries from inside the query monitor.
Allow resize of columns in the event list by mouse.
Disable actions if permissions are handled externally.
Added maximum width to tabs on the Group page, so they do not keep expanding forever.
Validation error messages are now more precise and have improved formatting.
The overall look of message boxes in Humio has been updated.
Updated the links for Privacy Notice and Terms and Conditions.
Dark mode is officially deemed stable enough to be out of beta.
GraphQL API
The GraphQL field isEventForwardingEnabled on the HumioMetadata type is deprecated, as it is no longer in use internally. If you rely on this, please let us know.
Added three GraphQL mutations for stopping queries: stopAllQueries, stopStreamingQueries, and stopHistoricalQueries.
Added GraphQL mutation clearRecentQueries which a user can run to clear their recent queries in a specific view or repository.
Added a 2-phase migration that will allow old user API tokens to be used, and that cleans secrets out of global after a 30-day period.
Changed old personal user token implementation to hash based.
Renamed the deleteEvents related GraphQL mutations and queries to redactEvents. The redactEvents API is intended for redacting sensitive data from a repository, not for bulk deletion of events. We think the new name invites fewer misunderstandings.
Configuration
When checking if the ViewAction.EventForwarding action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if the event forwarding is not enabled on the server.
Functions
Improved performance of the query functions drop() and rename() by quite a bit.
Added query function math:arctan2() to the query language.
Added the communityId() function for calculating hashes of network flow tuples according to the Community ID Spec.
The kvParse() query function can now parse unquoted empty values using the new parameter separatorPadding to specify if your data has whitespace around the key-value separator (typically =). The default is "Unknown", which will leave the functionality of the function unchanged.
Added a minSpan parameter to timeChart() and bucket(), which can be used to specify a minimum span when using a short time interval.
Refactored query functions join(), selfJoin(), and selfJoinFilter() into user-visible and internal implementations.
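The Community ID scheme that communityId() computes can be illustrated in Python. This is a sketch of version 1 for IPv4 flows as I read the spec, not LogScale's implementation:

```python
import base64
import hashlib
import socket
import struct

def community_id_v1(saddr, daddr, proto, sport, dport, seed=0):
    """Community ID v1 for an IPv4 flow tuple (sketch; see the spec for details)."""
    src, dst = socket.inet_aton(saddr), socket.inet_aton(daddr)
    sp, dp = struct.pack("!H", sport), struct.pack("!H", dport)
    # Order the endpoints so both directions of a flow hash to the same ID.
    if (src, sp) > (dst, dp):
        src, dst, sp, dp = dst, src, dp, sp
    data = struct.pack("!H", seed) + src + dst + struct.pack("!BB", proto, 0) + sp + dp
    return "1:" + base64.b64encode(hashlib.sha1(data).digest()).decode("ascii")
```

Because the endpoints are canonically ordered before hashing, both directions of the same TCP/UDP flow yield the same ID, which is what makes the value useful for correlating network events.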
Other
It is now possible to create actions, alerts, scheduled searches, and parsers from YAML template files.
Added new metric: bucket-storage-upload-latency-max. It shows how long the oldest event pending upload to bucket storage has been waiting.
It is now possible to ingest logs into Humio using LogStash v.7.13 and upwards.
Added a precondition that ensures that the number of ingest partitions cannot be reduced.
Added validation and a clearer error message for queries with a time span of 0.
Added metric for the number of currently running streaming queries.
Improved shutdown logic slightly, helping prevent thread pools from getting stuck or logging spurious errors during shutdown.
Made the transfer coordinator display clearer errors instead of an internal server error for multinode clusters.
Added Australian states to the States dropdown.
New metric: ingest-request-delay. Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest (milliseconds).
Improved handling of multiple nodes attempting to create views with the same names at the same time, as might happen when bootstrapping a cluster.
Improved the error reporting when installing, updating or exporting a package fails.
Create, update, and delete of dashboards is now audit logged.
Reword regular expression related error messages.
Added management API to put hosts in maintenance mode.
Improved error messages when an invalid regular expression is used in replace.
Retention based on compressed size will no longer account for segment replication.
Query validation has been improved to include several errors which used to first show up after submitting search.
Prepopulate email field when invited user is filling in a form with this information.
Node roles can now be assigned/removed at runtime.
Improved partition layout auto-balancing algorithm.
Added support in the Humio event collector for organization- and system-wide ingest tokens and the ability to use a parser from a different repo than the one being ingested into.
A compressed segment with a size of 1GB will now always count for retention as 1 GB. Previously, a compressed segment with a size of 1GB might count for more than 1GB when calculating retention, if that segment had more replicas than configured. The effect on the retention policy was that if you had configured retention of .0GB compressed bytes, Humio might retain less than .0GB of compressed data if any of those segments had too many replicas.
Added checksum verification within hash filter files on read.
Query editor: improved code completion of function names.
Minor optimization when using groupBy with a single field.
Added "export as yaml" function to the list pages of parsers, actions and scheduled searches.
Reduce limit on number of datasources for sandbox repositories created when a user is created to .0 by default.
Fixed in this release
Security
Updated dependencies to Jawn for CVE-2022-21653.
Updated dependencies to nanoid for CVE-2021-23566.
Updated dependencies to Netty to fix CVE-2021-43797.
Updated dependencies to node-fetch for CVE-2022-0235.
Updated dependencies to Akka for CVE-2021-42697.
Updated dependencies to follow-redirects for CVE-2022-0155.
Updated dependencies to log4j 2.17.1 to fix CVE-2021-44832 and CVE-2021-45105.
Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-44228 and CVE-2021-45046).
Summary
Updated dependencies to Jackson to fix a weakness.
Fixed an issue with epoch and offsets not always being stripped from segments.
Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.
Fixed an issue where live queries would sometimes double-count parts of the historic data.
Automation and Alerts
Alerts and scheduled searches are no longer run on cloud for organizations with an expired trial license, and on-prem for any expired license.
Fixed an issue where an alert would not be throttled until after its actions had completed, which could make the alert trigger multiple times shortly after each other if an action was slow. Now, the alert is throttled as soon as it triggers.
Alerts and scheduled searches are now enabled per default when created. The check disabling these entities if no actions are attached has been replaced with a warning, which informs a user that even though the entity is enabled, nothing will trigger since no actions are attached.
Functions
Fixed a bug in the validation of the bits parameter of hashMatch() and hashRewrite().
Other
Fixed an issue where the segment merger could mishandle errors during merge.
Fixed an issue where an on-prem trial license would use user count limits from cloud.
Use a fresh (random) name for the tmp folder below the datadir to ensure that it is a proper subdir of the datadir and not a mount point.
Fixed styling issue on the search page where long errors would overflow the screen.
Fixed a bug where only part of the Users page was loading when navigating from the All organizations page.
When an alert query encounters a warning and Humio is not configured to trigger alerts despite warnings (ALERT_DESPITE_WARNINGS=true), the warning text will now be shown as an error message on the alert in the UI.
Fixed an issue where certain problems would highlight the first word in a query.
Addressed an issue causing Humio to sometimes error log an ArrayIndexOutOfBoundsException during shutdown.
Fixed incorrect results when searching through saved queries and recent queries.
Fixed an issue where streaming (exporting) query results in JSON format could include extra "," characters within the output.
Fixed a bug where shared lookup files could not be downloaded from the UI.
Fixed a bug with the cache not being populated between restarts on single node clusters.
Fixed an issue where an error message was displayed when adding a group to a repository or view while the user was not the organization owner or root.
Prevent unauthorized analytics requests being sent.
Fixed an issue where error messages would show wrong input.
The field vhost in internal Humio logging is now reserved for denoting the host logging the message. Other uses of vhost now use the field hostId.
Removed error query param from URL when entering Humio.
Fixed an issue that in rare cases would cause login errors.
No longer return the "Query Plan" in responses, but return a hash in the new field hashedQueryOnView instead. The plan could leak information not otherwise visible to the user, such as query prefixes being applied.
Fixed some widgets on dashboards reporting errors while waiting for data to load.
Fixed an issue where the web client could start queries from the beginning of time when scrolling backwards through events in the UI.
Changes to the state of IOC access on organizations are now reflected in the audit log.
Fixed an issue where a scheduled search failed and was retried, if it had multiple actions and at least one action was unknown to Humio. Now, the unknown action is logged, but the scheduled search completes successfully and continues to the next scheduled run.
When a digester fails to start, rather than restarting the JVM as previous releases did, keep retrying to start, assuming the issue is transient, such as data for a single ingest partition being unavailable for a short while. While in this situation, the process reports the ingest latency metric for the affected partitions as the uptime of the JVM process. The idea is to signal that data is not flowing on those partitions, so that a monitored metric can raise an alarm. Lacking a proper latency value in this situation, a growing non-zero metric is better than a metric stuck at zero.
Fixed an issue where missing undersized segments in a datasource might cause Humio to repeatedly transfer undersized segments between nodes.
When creating or updating an action, the backend now verifies that the host URL associated with the action is prefixed with either http:// or https://. This affects actions of the types Webhook, OpsGenie, Single-Channel Slack, and VictorOps.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
Fixed an edge case where Humio might create multiple copies of the same datasource when the number of Kafka partitions is changed. The fix ensures only one copy will be created.
Changed field type for zip codes.
Fixed a number of stability issues with the event redaction job.
Fixed an issue where the segment merger would write that the current node had a segment slightly before registering that segment in the local node.
Fixed an issue where clicking on the counters of parsed events on the Parsers page would open an empty search page, except for built-in parsers. Now, it correctly shows the latest parsed events for all parsers (except package parsers).
Changed default package type to "application" on the export package wizard.
Fixed an issue where sort() would cause events to be read in a non-optimal order for the entire query.
Fixed an issue where a dashboard installed with a YAML file could be slightly different than what was specified in the file.
Fixed an issue where a failing event forwarder would be cached indefinitely and could negatively impact Humio performance.
Fixed an issue where comments spanning multiple lines wouldn't be colored correctly.
Browser storage is now cleared when initiating while unauthenticated.
Fixed an issue where OIDC without a discovery endpoint would fail to configure if OIDC_TOKEN_ENDPOINT_AUTH_METHOD was not set.
Remove the ability to create ingest tokens and ingest listeners on system repositories.
When checking if the ViewAction.ChangeS3ArchivingSettings action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a view, as the action only makes sense on repositories.
Fixed an issue with sandbox renaming that could leave a renamed sandbox in a bad state.
When checking if the ViewAction.ChangeRepoConnections action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a repository, as the action only makes sense on views.
Fixed an issue causing Humio running on Java 16+ to return incorrect search results when the input query contains Unicode surrogate pairs (e.g. when searching for an emoji).
Fixed a compatibility issue with Filebeat 7.16.0.
Fixed a bug where invalid UTF-16 characters could not be ingested. They are now converted to \ufffd (the Unicode replacement character).
Fixed an issue where series() failed to serialize its state properly.
When performing jobs triggered via the Redact Events API, Humio could restart queries for unrelated views until the delete job completed. This has been improved, so only views affected by the delete will be impacted.
Crash the node if any of a number of critical threads die. This should help prevent zombie nodes.
Temporary fix for an issue where a live query does not have bucket() or timeChart() as its first aggregator, but uses one of them later in the query as a second aggregator. As a temporary fix, such queries will fail; a more proper fix will follow in later releases.
Changes to the state of backend feature flags are now reflected in the audit log.
Fixed an issue where some regexes could not be used.
Fixed an issue in the interactive tutorial.
Support Java 17.
Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.
Fixed an issue where a digest node could be unable to rejoin the cluster after being shut down if all other digest nodes were also down at the time.
Fixed a bug where query coordination partitions would not get updated.
Removed a spurious warning log when requesting a non-existent hash file from S3.
Include view+parser-name in thread dumps when time is spent inside a parser.
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed a bug where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there were too many datasources in the repo. This led to a crash loop when the affected node was restarted.
Fixed an issue where release notes would not close when a release is open.
Humio Server 1.34.1 LTS (2022-01-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.34.1 | LTS | 2022-01-06 | Cloud | 2022-12-31 | No | 1.26.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 1d6b54fc6b4f52abff6c1106a904e4d1 |
SHA1 | 67e6e0cd4e6eb59a0d88f85dc0f511aefacef1dc |
SHA256 | 23f50f620fd5e51755521aba8185b76e4ca0187206fd55e8ed5fbbe14e42578b |
SHA512 | 6ad8fe4040b7c2e80643071e05b1d305f6d66e74c4041d3a4ac56f84ec6fd3f2b63c04c421029c436f081b8ab22e8a60bf17c5330f7fe117d178c596526a344f |
Docker Image | SHA256 Checksum |
---|---|
humio | e66c6b3fcf45bd36502c877e8a4a617f9f662c5cf672e6c6c73b45445d41ad20 |
humio-core | fa7e9355cd4f41e64b404e65e33166c28fdc6c2ce5ceeea382007778f9ff64ce |
kafka | 459b3a6df8a678493bfb3266d243b867ed3ae4251464276c854afa0b2f784dac |
zookeeper | 748cfc151533e1e2efc870299bc5c2b10fda97ec0c6f522dd8c8576d9aea315e |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.34.1/server-1.34.1.tar.gz
These notes include entries from the following previous releases: 1.34.0
Updated dependencies with security and weakness fixes.
New features and improvements
UI Changes
Added autofocus to the first field when opening a dialog using the save as functionality from the Search page.
Added buttons for stopping all queries, streaming queries, and historical queries from inside the query monitor.
Allow resize of columns in the event list by mouse.
Disable actions if permissions are handled externally.
Added maximum width to tabs on the Group page, so they do not keep expanding forever.
Validation error messages are now more precise and have improved formatting.
The overall look of message boxes in Humio has been updated.
Updated the links for Privacy Notice and Terms and Conditions.
Dark mode is officially deemed stable enough to be out of beta.
GraphQL API
The GraphQL field isEventForwardingEnabled on the HumioMetadata type is deprecated, as it is no longer in use internally. If you rely on this, please let us know.
Added three GraphQL mutations for stopping queries: stopAllQueries, stopStreamingQueries, and stopHistoricalQueries.
Added GraphQL mutation clearRecentQueries which a user can run to clear their recent queries in a specific view or repository.
Added a 2-phase migration that will allow old user API tokens to be used, and that cleans secrets out of global after a 30-day period.
Changed old personal user token implementation to hash based.
Renamed the deleteEvents related GraphQL mutations and queries to redactEvents. The redactEvents API is intended for redacting sensitive data from a repository, not for bulk deletion of events. We think the new name invites fewer misunderstandings.
Configuration
When checking if the ViewAction.EventForwarding action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if the event forwarding is not enabled on the server.
Functions
Improved performance of the query functions drop() and rename() by quite a bit.
Added query function math:arctan2() to the query language.
Added the communityId() function for calculating hashes of network flow tuples according to the Community ID Spec.
The kvParse() query function can now parse unquoted empty values using the new parameter separatorPadding to specify if your data has whitespace around the key-value separator (typically =). The default is "Unknown", which will leave the functionality of the function unchanged.
Added a minSpan parameter to timeChart() and bucket(), which can be used to specify a minimum span when using a short time interval.
Refactored query functions join(), selfJoin(), and selfJoinFilter() into user-visible and internal implementations.
Other
It is now possible to create actions, alerts, scheduled searches, and parsers from YAML template files.
Added new metric: bucket-storage-upload-latency-max. It shows how long the oldest event pending upload to bucket storage has been waiting.
It is now possible to ingest logs into Humio using LogStash v.7.13 and upwards.
Added a precondition that ensures that the number of ingest partitions cannot be reduced.
Added validation and a clearer error message for queries with a time span of 0.
Added metric for the number of currently running streaming queries.
Improved shutdown logic slightly, helping prevent thread pools from getting stuck or logging spurious errors during shutdown.
Made the transfer coordinator display clearer errors instead of an internal server error for multinode clusters.
Added Australian states to the States dropdown.
New metric: ingest-request-delay. Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest (milliseconds).
Improved handling of multiple nodes attempting to create views with the same names at the same time, as might happen when bootstrapping a cluster.
Improved the error reporting when installing, updating or exporting a package fails.
Create, update, and delete of dashboards is now audit logged.
Reword regular expression related error messages.
Added management API to put hosts in maintenance mode.
Improved error messages when an invalid regular expression is used in replace.
Retention based on compressed size will no longer account for segment replication.
Query validation has been improved to include several errors which used to first show up after submitting search.
Prepopulate email field when invited user is filling in a form with this information.
Node roles can now be assigned/removed at runtime.
Improved partition layout auto-balancing algorithm.
Added support in the Humio event collector for organization- and system-wide ingest tokens and the ability to use a parser from a different repo than the one being ingested into.
A compressed segment with a size of 1GB will now always count for retention as 1 GB. Previously, a compressed segment with a size of 1GB might count for more than 1GB when calculating retention, if that segment had more replicas than configured. The effect on the retention policy was that if you had configured retention of .0GB compressed bytes, Humio might retain less than .0GB of compressed data if any of those segments had too many replicas.
Added checksum verification within hash filter files on read.
Query editor: improved code completion of function names.
Minor optimization when using groupBy with a single field.
Added "export as yaml" function to the list pages of parsers, actions and scheduled searches.
Reduce limit on number of datasources for sandbox repositories created when a user is created to .0 by default.
Fixed in this release
Security
Updated dependencies to Netty to fix CVE-2021-43797.
Updated dependencies to log4j 2.17.1 to fix CVE-2021-44832 and CVE-2021-45105.
Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-44228 and CVE-2021-45046).
Summary
Updated dependencies to Jackson to fix a weakness
Automation and Alerts
Alerts and scheduled searches are no longer run on cloud for organizations with an expired trial license, and on-prem for any expired license.
Fixed an issue where an alert would not be throttled until after its actions had completed, which could make the alert trigger multiple times shortly after each other if an action was slow. Now, the alert is throttled as soon as it triggers.
Alerts and scheduled searches are now enabled per default when created. The check disabling these entities if no actions are attached has been replaced with a warning, which informs a user that even though the entity is enabled, nothing will trigger since no actions are attached.
Functions
Fixed a bug in the validation of the `bits` parameter of `hashMatch()` and `hashRewrite()`.
Other
Fixed an issue where the segment merger could mishandle errors during merge.
Fixed an issue on on-prem trial license that would use user count limits from cloud.
Use a fresh (random) name for the tmp folder below the datadir to ensure that it is a proper subdir of the datadir and not a mount point.
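The reasoning behind this fix can be sketched with Python's standard library (an illustration only, not LogScale's actual code; `datadir` is a placeholder path): a freshly generated random name cannot collide with an existing entry, so the result is guaranteed to be a real subdirectory rather than a pre-existing mount point.

```python
import os
import tempfile

def make_private_tmp(datadir):
    # Create the data directory if needed, then a tmp folder with a
    # random, unique name beneath it. Because the name is new, the
    # path cannot be an existing mount point.
    os.makedirs(datadir, exist_ok=True)
    return tempfile.mkdtemp(prefix="tmp-", dir=datadir)
```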
Fixed styling issue on the search page where long errors would overflow the screen.
Fixed a bug where only part of the Users page was loading when navigating from the All organizations page.
When an alert query encounters a warning and Humio is not configured to trigger alerts despite warnings (`ALERT_DESPITE_WARNINGS=true`), the warning text will now be shown as an error message on the alert in the UI.
Fixed an issue where certain problems would highlight the first word in a query.
Addressed an issue causing Humio to sometimes error log an ArrayIndexOutOfBoundsException during shutdown.
Fixed incorrect results when searching through saved queries and recent queries.
Fixed an issue where streaming (exporting) query results in JSON format could include extra "," characters within the output.
Fixed a bug where shared lookup files could not be downloaded from the UI.
Fixed a bug with the cache not being populated between restarts on single node clusters.
Fixed an issue where, when adding a group to a repository or view, an error message was displayed if the user was not the organization owner or root.
Prevent unauthorized analytics requests being sent.
Fixed an issue where error messages would show wrong input.
The field vhost in internal Humio logging is now reserved for denoting the host logging the message. Other uses of vhost now use the field hostId.
Removed error query param from URL when entering Humio.
Fixed an issue that in rare cases would cause login errors.
No longer return the "Query Plan" in responses, but return a hash in the new field hashedQueryOnView instead. The plan could leak information not otherwise visible to the user, such as query prefixes being applied.
Fixed some widgets on dashboards reporting errors while waiting for data to load.
Fixed an issue where the web client could start queries from the beginning of time when scrolling backwards through events in the UI.
Changes to the state of IOC access on organizations are now reflected in the audit log.
Fixed an issue where a scheduled search failed and was retried, if it had multiple actions and at least one action was unknown to Humio. Now, the unknown action is logged, but the scheduled search completes successfully and continues to the next scheduled run.
When a digester fails to start, rather than restarting the JVM as previous releases did, keep retrying to start, assuming that the issue is transient, such as data for a single ingest partition being unavailable for a short while. While in this situation, the process reports the metric for ingest latency on the affected partitions as the uptime of the JVM process. The idea is to signal that data is not flowing on those partitions, so that a monitored metric can raise an alarm somewhere. Lacking a proper latency value in this situation, a growing non-zero metric is better than a metric stuck at zero.
Fixed an issue where missing undersized segments in a datasource might cause Humio to repeatedly transfer undersized segments between nodes.
When creating or updating an action, the backend now verifies that the host URL associated with the action is prefixed with either `http://` or `https://`. This affects actions of the type: Webhook, OpsGenie, Single-Channel Slack and VictorOps.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
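A minimal sketch of such a scheme check (illustrative only; `has_valid_scheme` is a hypothetical helper, not the backend's actual validation code):

```python
def has_valid_scheme(url: str) -> bool:
    # Accept only explicit http:// or https:// prefixes,
    # mirroring the verification described above.
    return url.lower().startswith(("http://", "https://"))
```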
Fixed an edge case where Humio might create multiple copies of the same datasource when the number of Kafka partitions is changed. The fix ensures only one copy will be created.
Changed field type for zip codes.
Fixed a number of stability issues with the event redaction job.
Fixed an issue where the segment merger would record that the current node had a segment slightly before registering that segment on the local node.
Fixed an issue where clicking on the counters of parsed events on the Parsers page would open an empty search page, except for built-in parsers. Now, it correctly shows the latest parsed events for all parsers (except package parsers).
Changed default package type to "application" on the export package wizard.
Fixed an issue where `sort()` would cause events to be read in a non-optimal order for the entire query.
Fixed an issue where a dashboard installed with a YAML file could be slightly different than what was specified in the file.
Fixed an issue where a failing event forwarder would be cached indefinitely and could negatively impact Humio performance.
Fixed an issue where comments spanning multiple lines wouldn't be colored correctly.
Browser storage is now cleared when initializing while unauthenticated.
Fixed an issue where OIDC without a discovery endpoint would fail to configure if `OIDC_TOKEN_ENDPOINT_AUTH_METHOD` was not set.
Removed the ability to create ingest tokens and ingest listeners on system repositories.
When checking if the ViewAction.ChangeS3ArchivingSettings action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a view, as the action only makes sense on repositories.
Fixed an issue on sandbox renaming, that would allow you to rename a sandbox and end up in a bad state.
When checking if the ViewAction.ChangeRepoConnections action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a repository, as the action only makes sense on views.
Fixed an issue causing Humio running on Java 16+ to return incorrect search results when the input query contains Unicode surrogate pairs (e.g. when searching for an emoji).
Fixed a compatibility issue with Filebeat 7.16.0
Fixed a bug where invalid UTF-16 characters could not be ingested. They are now converted to `\ufffd` (the Unicode replacement character U+FFFD).
Fixed an issue where `series()` failed to serialize its state properly.
When performing jobs triggered via the Redact Events API, Humio could restart queries for unrelated views until the delete job completed. This has been improved, so only views affected by the delete will be impacted.
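The replacement behavior for invalid characters mentioned above can be illustrated with Python's codec machinery (an analogy only, not LogScale's ingest code): undecodable byte sequences become U+FFFD, the Unicode replacement character.

```python
# Illustration: strict decoders reject a lone surrogate;
# "replace" substitutes U+FFFD instead of failing.
raw = b"ok \xed\xa0\x80 end"   # \xed\xa0\x80 encodes a lone UTF-16 surrogate
text = raw.decode("utf-8", errors="replace")
assert "\ufffd" in text        # invalid bytes became the replacement character
```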
Crash the node if any of a number of critical threads die. This should help prevent zombie nodes.
Temporary fix of an issue with live queries not having the first aggregator as `bucket()` or `timeChart()`, but then later in the query having those as a second aggregator. As a temporary fix, such queries will fail. In later releases, this will get fixed more properly.
Changes to the state of backend feature flags are now reflected in the audit log.
Fixed an issue where some regexes could not be used.
Fixed an issue in the interactive tutorial.
Support Java 17.
Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.
Fixed an issue where a digest node could be unable to rejoin the cluster after being shut down if all other digest nodes were also down at the time.
Fixed a bug where query coordination partitions would not get updated.
Removed a spurious warning log when requesting a non-existent hash file from S3.
Include view+parser-name in thread dumps when time is spent inside a parser.
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed a bug where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there were too many datasources in the repo. This led to a crash loop when the affected node was restarted.
Fixed an issue where release notes would not close when a release is open.
Humio Server 1.34.0 LTS (2021-12-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.34.0 | LTS | 2021-12-15 | Cloud | 2022-12-31 | No | 1.26.0 | Yes |
JAR Checksum | Value |
---|---|
MD5 | 4dc3b67c42b44e59e0ad6cc02a220f12 |
SHA1 | 099a9ae07d5595fb06c145aaa80b46ba9f74dd10 |
SHA256 | 22cd6f60f030d56bcae159b853efe6dcd76f7add127e64c37bcca58c66d1a155 |
SHA512 | 2645369a9ec989c1b40b190993ecbd4eca8be63b60318a1b1259124819446c06c3bbba92a9c22f1e194133b9da5d05b6c0346369c33e8c33a9777a2a168ae524 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.34.0/server-1.34.0.tar.gz
Humio Server 1.34 REQUIRES minimum previous version 1.26.0 of Humio to start. Clusters wishing to upgrade from older versions must upgrade to 1.26.0+ first. After running 1.34.0 or later, you cannot downgrade to versions prior to 1.26.0.
You can now use the mouse to resize columns in the event list. Previously you had to click the column header and use the "Increase / Decrease Width" buttons.
New features and improvements
UI Changes
Added autofocus to the first field when opening a dialog using the save as functionality from the Search page.
Added buttons for stopping all queries, streaming queries, and historical queries from inside the query monitor.
Allow resize of columns in the event list by mouse.
Disable actions if permissions are handled externally.
Added maximum width to tabs on the Group page, so they do not keep expanding forever.
Validation error messages are now more precise and have improved formatting.
The overall look of message boxes in Humio has been updated.
Updated the links for Privacy Notice and Terms and Conditions.
Dark mode is officially deemed stable enough to be out of beta.
GraphQL API
The GraphQL field isEventForwardingEnabled on the HumioMetadata type is deprecated, as it is no longer in use internally. If you rely on this, please let us know.
Added three GraphQL mutations for stopping queries: stopAllQueries, stopStreamingQueries, and stopHistoricalQueries.
Added GraphQL mutation clearRecentQueries which a user can run to clear their recent queries in a specific view or repository.
Added a 2-phase migration that will allow old user API tokens to be used and will clean secrets from global after a 30-day period.
Changed old personal user token implementation to hash based.
Renamed the deleteEvents related GraphQL mutations and queries to redactEvents. The redactEvents API is intended for redacting sensitive data from a repository, not for bulk deletion of events. We think the new name invites fewer misunderstandings.
Configuration
When checking if the ViewAction.EventForwarding action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if the event forwarding is not enabled on the server.
Functions
Improved performance of the query functions `drop()` and `rename()` by quite a bit.
Added query function `math:arctan2()` to the query language.
Added the `communityId()` function for calculating hashes of network flow tuples according to the Community ID Spec.
The `kvParse()` query function can now parse unquoted empty values using the new parameter separatorPadding to specify if your data has whitespace around the key-value separator (typically =). The default is "Unknown", which will leave the functionality of the function unchanged.
Added a `minSpan` parameter to `timeChart()` and `bucket()`, which can be used to specify a minimum span when using a short time interval.
Refactored query functions `join()`, `selfJoin()`, and `selfJoinFilter()` into user-visible and internal implementations.
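For reference, the computation `communityId()` performs can be sketched from the published Community ID spec (an illustrative Python sketch for IPv4 with the default seed of 0, not LogScale's implementation):

```python
import base64
import hashlib
import socket
import struct

def community_id_v1(saddr, sport, daddr, dport, proto, seed=0):
    """Community ID v1 for IPv4 TCP/UDP flows (illustrative sketch)."""
    src, dst = socket.inet_aton(saddr), socket.inet_aton(daddr)
    # Order the endpoints canonically so both directions of a flow
    # produce the same hash input.
    if (src, sport) > (dst, dport):
        src, dst, sport, dport = dst, src, dport, sport
    # seed (2B BE) + src addr + dst addr + proto + pad + src port + dst port
    data = struct.pack("!H", seed) + src + dst
    data += struct.pack("!BBHH", proto, 0, sport, dport)
    digest = hashlib.sha1(data).digest()
    return "1:" + base64.b64encode(digest).decode("ascii")
```

Because of the canonical ordering, hashing a flow and its reverse yields the same ID.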
Other
It is now possible to create actions, alerts, scheduled searches, and parsers from YAML template files.
Added new metric: bucket-storage-upload-latency-max. It shows the amount of time spent for the event that has been pending for upload to bucket storage the longest.
It is now possible to ingest logs into Humio using Logstash v7.13 and upwards.
Added a precondition that ensures that the number of ingest partitions cannot be reduced.
Added validation and a clearer error message for queries with a time span of 0.
Added metric for the number of currently running streaming queries.
Improved shutdown logic slightly, helping prevent thread pools from getting stuck or logging spurious errors during shutdown.
Made the transfer coordinator display clearer errors instead of an internal server error for multinode clusters.
Added Australian states to the States dropdown.
New metric: ingest-request-delay. Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest (milliseconds).
Improved handling of multiple nodes attempting to create views with the same names at the same time, as might happen when bootstrapping a cluster.
Improved the error reporting when installing, updating or exporting a package fails.
Create, update, and delete of dashboards is now audit logged.
Reword regular expression related error messages.
Added management API to put hosts in maintenance mode.
Improved error messages when an invalid regular expression is used in replace.
Retention based on compressed size will no longer account for segment replication.
Query validation has been improved to include several errors which used to first show up after submitting search.
Prepopulate email field when invited user is filling in a form with this information.
Node roles can now be assigned/removed at runtime.
Improved partition layout auto-balancing algorithm.
Added support in the Humio event collector for organization- and system-wide ingest tokens and the ability to use a parser from a different repo than the one being ingested into.
A compressed segment with a size of 1GB will now always count for retention as 1 GB. Previously, a compressed segment with a size of 1GB might count for more than 1GB when calculating retention, if that segment had more replicas than configured. The effect on the retention policy was that if you had configured retention of .0GB compressed bytes, Humio might retain less than .0GB of compressed data if any of those segments had too many replicas.
Added checksum verification within hash filter files on read.
Query editor: improved code completion of function names.
Minor optimization when using groupBy with a single field.
Added "export as yaml" function to the list pages of parsers, actions and scheduled searches.
Reduce limit on number of datasources for sandbox repositories created when a user is created to .0 by default.
Fixed in this release
Security
Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-44228 and CVE-2021-45046).
Automation and Alerts
Alerts and scheduled searches are no longer run on cloud for organizations with an expired trial license, and on-prem for any expired license.
Fixed an issue where an alert would not be throttled until after its actions had completed, which could make the alert trigger multiple times shortly after each other if an action was slow. Now, the alert is throttled as soon as it triggers.
Alerts and scheduled searches are now enabled per default when created. The check disabling these entities if no actions are attached has been replaced with a warning, which informs a user that even though the entity is enabled, nothing will trigger since no actions are attached.
Functions
Fixed a bug in the validation of the `bits` parameter of `hashMatch()` and `hashRewrite()`.
Other
Fixed an issue where the segment merger could mishandle errors during merge.
Fixed an issue on on-prem trial license that would use user count limits from cloud.
Use a fresh (random) name for the tmp folder below the datadir to ensure that it is a proper subdir of the datadir and not a mount point.
Fixed styling issue on the search page where long errors would overflow the screen.
Fixed a bug where only part of the Users page was loading when navigating from the All organizations page.
When an alert query encounters a warning and Humio is not configured to trigger alerts despite warnings (`ALERT_DESPITE_WARNINGS=true`), the warning text will now be shown as an error message on the alert in the UI.
Fixed an issue where certain problems would highlight the first word in a query.
Addressed an issue causing Humio to sometimes error log an ArrayIndexOutOfBoundsException during shutdown.
Fixed incorrect results when searching through saved queries and recent queries.
Fixed an issue where streaming (exporting) query results in JSON format could include extra "," characters within the output.
Fixed a bug where shared lookup files could not be downloaded from the UI.
Fixed a bug with the cache not being populated between restarts on single node clusters.
Fixed an issue where, when adding a group to a repository or view, an error message was displayed if the user was not the organization owner or root.
Prevent unauthorized analytics requests being sent.
Fixed an issue where error messages would show wrong input.
The field vhost in internal Humio logging is now reserved for denoting the host logging the message. Other uses of vhost now use the field hostId.
Removed error query param from URL when entering Humio.
Fixed an issue that in rare cases would cause login errors.
No longer return the "Query Plan" in responses, but return a hash in the new field hashedQueryOnView instead. The plan could leak information not otherwise visible to the user, such as query prefixes being applied.
Fixed some widgets on dashboards reporting errors while waiting for data to load.
Fixed an issue where the web client could start queries from the beginning of time when scrolling backwards through events in the UI.
Changes to the state of IOC access on organizations are now reflected in the audit log.
Fixed an issue where a scheduled search failed and was retried, if it had multiple actions and at least one action was unknown to Humio. Now, the unknown action is logged, but the scheduled search completes successfully and continues to the next scheduled run.
When a digester fails to start, rather than restarting the JVM as previous releases did, keep retrying to start, assuming that the issue is transient, such as data for a single ingest partition being unavailable for a short while. While in this situation, the process reports the metric for ingest latency on the affected partitions as the uptime of the JVM process. The idea is to signal that data is not flowing on those partitions, so that a monitored metric can raise an alarm somewhere. Lacking a proper latency value in this situation, a growing non-zero metric is better than a metric stuck at zero.
Fixed an issue where missing undersized segments in a datasource might cause Humio to repeatedly transfer undersized segments between nodes.
When creating or updating an action, the backend now verifies that the host URL associated with the action is prefixed with either `http://` or `https://`. This affects actions of the type: Webhook, OpsGenie, Single-Channel Slack and VictorOps.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
Fixed an edge case where Humio might create multiple copies of the same datasource when the number of Kafka partitions is changed. The fix ensures only one copy will be created.
Changed field type for zip codes.
Fixed a number of stability issues with the event redaction job.
Fixed an issue where the segment merger would record that the current node had a segment slightly before registering that segment on the local node.
Fixed an issue where clicking on the counters of parsed events on the Parsers page would open an empty search page, except for built-in parsers. Now, it correctly shows the latest parsed events for all parsers (except package parsers).
Changed default package type to "application" on the export package wizard.
Fixed an issue where `sort()` would cause events to be read in a non-optimal order for the entire query.
Fixed an issue where a dashboard installed with a YAML file could be slightly different than what was specified in the file.
Fixed an issue where a failing event forwarder would be cached indefinitely and could negatively impact Humio performance.
Fixed an issue where comments spanning multiple lines wouldn't be colored correctly.
Browser storage is now cleared when initializing while unauthenticated.
Fixed an issue where OIDC without a discovery endpoint would fail to configure if `OIDC_TOKEN_ENDPOINT_AUTH_METHOD` was not set.
Removed the ability to create ingest tokens and ingest listeners on system repositories.
When checking if the ViewAction.ChangeS3ArchivingSettings action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a view, as the action only makes sense on repositories.
Fixed an issue on sandbox renaming, that would allow you to rename a sandbox and end up in a bad state.
When checking if the ViewAction.ChangeRepoConnections action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a repository, as the action only makes sense on views.
Fixed an issue causing Humio running on Java 16+ to return incorrect search results when the input query contains Unicode surrogate pairs (e.g. when searching for an emoji).
Fixed a compatibility issue with Filebeat 7.16.0
Fixed a bug where invalid UTF-16 characters could not be ingested. They are now converted to `\ufffd` (the Unicode replacement character U+FFFD).
Fixed an issue where `series()` failed to serialize its state properly.
When performing jobs triggered via the Redact Events API, Humio could restart queries for unrelated views until the delete job completed. This has been improved, so only views affected by the delete will be impacted.
Crash the node if any of a number of critical threads die. This should help prevent zombie nodes.
Temporary fix of an issue with live queries not having the first aggregator as `bucket()` or `timeChart()`, but then later in the query having those as a second aggregator. As a temporary fix, such queries will fail. In later releases, this will get fixed more properly.
Changes to the state of backend feature flags are now reflected in the audit log.
Fixed an issue where some regexes could not be used.
Fixed an issue in the interactive tutorial.
Support Java 17.
Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.
Fixed an issue where a digest node could be unable to rejoin the cluster after being shut down if all other digest nodes were also down at the time.
Fixed a bug where query coordination partitions would not get updated.
Removed a spurious warning log when requesting a non-existent hash file from S3.
Include view+parser-name in thread dumps when time is spent inside a parser.
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed a bug where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there were too many datasources in the repo. This led to a crash loop when the affected node was restarted.
Fixed an issue where release notes would not close when a release is open.
Humio Server 1.33.3 GA (2021-12-10)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.33.3 | GA | 2021-12-10 | Cloud | 2022-12-31 | No | 1.26.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 10d2ee69d0bcdc6313fb3529912007ba |
SHA1 | 5be098e4f3f6500704e85c0763c918763b90aa34 |
SHA256 | eaefa5ce2c162fbd9d0a57d3b7c47dee828848eabdd7e96d885a812208c20a46 |
SHA512 | a9268b87f416d9b857eccb506c304ba7ca552cd6408ac13791b160e0d17c3d82e67213025ec7bca820e970676e4a24d1a1f4d20ff27e3cbc5bc30edf874e58a0 |
More security fixes related to `log4j` logging.
Fixed in this release
Security
Updated dependencies to address a critical security vulnerability for the log4j logging framework, "log4shell", (CVE-2021-44228).
Humio Server 1.33.2 GA (2021-12-10)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.33.2 | GA | 2021-12-10 | Cloud | 2022-12-31 | No | 1.26.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 393be03445adbf84f63ef93b522b85fa |
SHA1 | 36454936dfe7c3863656043a8ee1d8639c1c198e |
SHA256 | e03616ba1d8d92e80031231ad7e2b7666121e7ccf7eee10f7a816241a90706ac |
SHA512 | 5d0504538cfa1db51f7188d4f6fef03aeb5a69b466f2e8367e91208745c4bf9050f271ffdf52ed6027796487ac11106726fbf3d7c3d6e259efa49f369793aa96 |
Security fix related to `log4j` logging, and a fix for compatibility with Filebeat.
Fixed in this release
Security
Updated dependencies to address a critical security vulnerability for the log4j logging framework, "log4shell", (CVE-2021-44228).
Summary
Fixed a compatibility issue with Filebeat 7.16.0
Humio Server 1.33.1 GA (2021-11-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.33.1 | GA | 2021-11-23 | Cloud | 2022-12-31 | No | 1.26.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | d9f270df38f2ce472801c81a06fa67af |
SHA1 | c5740a06b4b86a8615767c388a7cf6d3e3a8ccf1 |
SHA256 | ed3c03b5dcb8fe2c7230bae2775346283d7cdc143854260672a5ba66f11d612b |
SHA512 | d2762a9fa648af32e85205b4d972ad15d47870fc030c111d31ca4d048ce9459ce6e50a3bce0063f0c7347a6640181b4e3c41d1273a4c82b297fd8d3c844c529a |
Critical bug fixes related to version dependencies, alert throttling, etc.; Improve Interactive Tutorial.
Fixed in this release
Summary
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Updated a dependency to a version fixing a critical bug.
Fixed an issue that in rare cases would cause login errors.
Fixed an issue in the interactive tutorial.
Automation and Alerts
Reverted from 1.33.0: Errors on alerts, which are shown in the alert overview, are now only cleared when either the alert query is updated by a user or the alert triggers. Previously, errors that occurred when actions were triggered would be removed when the alert no longer triggered. Now, they will be displayed until the actions trigger successfully. Conversely, errors that occur when running the query may now remain until the next time the alert triggers, where they would previously be removed as soon as the query ran again without errors.
Fixed an issue where an alert would not be throttled until after its actions had completed, which could make the alert trigger multiple times shortly after each other if an action was slow. Now, the alert is throttled as soon as it triggers.
Humio Server 1.33.0 GA (2021-11-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.33.0 | GA | 2021-11-15 | Cloud | 2022-12-31 | No | 1.26.0 | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 2964782694b82a39cc5b644d040fae68 |
SHA1 | b520dbabec0321b953c496b960710ba5fb6614a7 |
SHA256 | 8b8de633ca08c9592ee2441b0498de1a86aa4093a505821923be7e8501b62526 |
SHA512 | be827349bf450d51b4c42c55c13120ba735196f598eaec798a7eb11b8e1320a6ad86caefca3841a3560013311e30347dc595a96ed59d4cb5c0cf946e72208ac2 |
1.33 REQUIRES minimum version 1.26.0 of Humio to start. Clusters wishing to upgrade from older versions must upgrade to 1.26.0+ first. After running 1.33.0 or later, you cannot run versions prior to 1.26.0.
Once the release has been deployed, all existing personal API tokens will be hashed so they can still be used, but you will not be able to retrieve them again. If you want to preserve the tokens, be sure to copy them into a secrets vault before the release is deployed. The api-token field on the User type in GraphQL has been removed.
You can now use the mouse to resize columns in the event list. Previously you had to click the column header and use the "Increase / Decrease Width" buttons.
New features and improvements
UI Changes
Validation error messages are now more precise and have improved formatting.
Updated the links for Privacy Notice and Terms and Conditions.
Added buttons for stopping all queries, streaming queries, and historical queries from inside the query monitor.
The overall look of message boxes in Humio has been updated.
Added maximum width to tabs on the Group page, so they do not keep expanding forever.
Allow resize of columns in the event list by mouse.
Disable actions if permissions are handled externally.
Dark mode is officially deemed stable enough to be out of beta.
Added autofocus to the first field when opening a dialog using the save as functionality from the Search page.
GraphQL API
Added GraphQL mutation clearRecentQueries which a user can run to clear their recent queries in a specific view or repository.
Renamed the deleteEvents related GraphQL mutations and queries to redactEvents. The redactEvents API is intended for redacting sensitive data from a repository, not for bulk deletion of events. We think the new name invites fewer misunderstandings.
The GraphQL field isEventForwardingEnabled on the HumioMetadata type is deprecated, as it is no longer in use internally. If you rely on this, please let us know.
Added a 2-phase migration that will allow old user API tokens to be used and will clean secrets from global after a 30-day period.
Changed old personal user token implementation to hash based.
Added three GraphQL mutations for stopping queries: stopAllQueries, stopStreamingQueries, and stopHistoricalQueries.
Configuration
When checking if the ViewAction.EventForwarding action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if the event forwarding is not enabled on the server.
Functions
Added the query function `math:arctan2()` to the query language.
Added a `minSpan` parameter to `timeChart()` and `bucket()`, which can be used to specify a minimum span when using a short time interval.
The `kvParse()` query function can now parse unquoted empty values using the new parameter `separatorPadding`, which specifies whether your data has whitespace around the key-value separator (typically `=`). The default is `unknown`, which leaves the behavior of the function unchanged.
Improved performance of the query functions `drop()` and `rename()` considerably.
Added the `communityId()` function for calculating hashes of network flow tuples according to the Community ID Spec.
Refactored the query functions `join()`, `selfJoin()`, and `selfJoinFilter()` into user-visible and internal implementations.
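The Community ID referenced above is a published spec (version 1). The following is an illustrative Python sketch for IPv4 TCP/UDP flows with the default seed of 0 — a rough sketch of the scheme, not LogScale's implementation:

```python
import base64
import hashlib
import socket
import struct

def community_id_v1(saddr, daddr, sport, dport, proto, seed=0):
    """Sketch of Community ID v1 for IPv4 flows (illustrative only)."""
    src, dst = socket.inet_aton(saddr), socket.inet_aton(daddr)
    sp, dp = struct.pack("!H", sport), struct.pack("!H", dport)
    # Canonical ordering: both directions of a flow hash to the same ID.
    if (src, sp) > (dst, dp):
        src, dst, sp, dp = dst, src, dp, sp
    data = struct.pack("!H", seed) + src + dst + struct.pack("!BB", proto, 0) + sp + dp
    return "1:" + base64.b64encode(hashlib.sha1(data).digest()).decode("ascii")
```

The canonical ordering is the point of the scheme: packets in either direction of the same flow yield the same ID, so related events correlate on one field.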
Other
Minor optimization when using groupBy with a single field.
Added checksum verification within hash filter files on read.
Query editor: improved code completion of function names.
Added management API to put hosts in maintenance mode.
Create, update, and delete of dashboards is now audit logged.
Node roles can now be assigned/removed at runtime.
Retention based on compressed size will no longer account for segment replication.
Improved handling of multiple nodes attempting to create views with the same names at the same time, as might happen when bootstrapping a cluster.
Added validation and a clearer error message for queries with a time span of 0.
A compressed segment with a size of 1 GB will now always count for retention as 1 GB. Previously, a compressed segment with a size of 1 GB might count for more than 1 GB when calculating retention if that segment had more replicas than configured. The effect on the retention policy was that Humio might retain less compressed data than the configured number of compressed bytes if any of those segments had too many replicas.
Added support in the humio event collector for organization- and system-wide ingest tokens and the ability to use a parser from a different repo than the one being ingested into.
Added new metric: bucket-storage-upload-latency-max. It shows the amount of time spent by the event that has been pending for upload to bucket storage the longest.
Query validation has been improved to include several errors that previously only showed up after submitting a search.
Improved shutdown logic slightly, helping prevent thread pools from getting stuck or logging spurious errors during shutdown.
Improved partition layout auto-balancing algorithm.
Prepopulate email field when invited user is filling in a form with this information.
Added "export as yaml" function to the list pages of parsers, actions and scheduled searches.
Reword regular expression related error messages.
It is now possible to ingest logs into Humio using LogStash v.7.13 and upwards.
Made the transfer coordinator display clearer errors instead of an internal server error for multinode clusters.
Added metric for the number of currently running streaming queries.
Reduced the default limit on the number of datasources for sandbox repositories created when a user is created.
It is now possible to create actions, alerts, scheduled searches, and parsers from YAML template files.
Improved the error reporting when installing, updating or exporting a package fails.
New metric: ingest-request-delay. Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest (milliseconds).
Improved error messages when an invalid regular expression is used in replace.
Added Australian states to the States dropdown.
Added a precondition that ensures that the number of ingest partitions cannot be reduced.
Fixed in this release
UI Changes
When an alert query encounters a warning and Humio is not configured to trigger alerts despite warnings (`ALERT_DESPITE_WARNINGS`), the warning text will now be shown as an error message on the alert in the UI.
Automation and Alerts
Alerts and scheduled searches are now enabled per default when created. The check disabling these entities if no actions are attached has been replaced with a warning, which informs a user that even though the entity is enabled, nothing will trigger since no actions are attached.
Alerts and scheduled searches are no longer run on cloud for organizations with an expired trial license, and on-prem for any expired license.
Functions
Fixed an issue where `sort()` would cause events to be read in a non-optimal order for the entire query.
Fixed an issue where `series()` failed to serialize its state properly.
Fixed a bug in the validation of the `bits` parameter of `hashMatch()` and `hashRewrite()`.
Other
Use a fresh (random) name for the tmp folder below the datadir to ensure that it is a proper subdir of the datadir and not a mount point.
The field vhost in internal Humio logging is now reserved for denoting the host logging the message. Other uses of vhost now use the field hostId.
Support Java 17.
When performing jobs triggered via the Redact Events API, Humio could restart queries for unrelated views until the delete job completed. This has been improved, so only views affected by the delete will be impacted.
Fixed an issue where clicking on the counters of parsed events on the Parsers page would open an empty search page, except for built-in parsers. Now, it correctly shows the latest parsed events for all parsers (except package parsers).
Fixed an issue where error messages would show wrong input.
Fixed an issue on sandbox renaming, that would allow you to rename a sandbox and end up in a bad state.
Fixed an issue where a digest node could be unable to rejoin the cluster after being shut down if all other digest nodes were also down at the time.
Changed default package type to "application" on the export package wizard.
Fixed styling issue on the search page where long errors would overflow the screen.
Removed a spurious warning log when requesting a non-existent hash file from S3.
Prevent unauthorized analytics requests being sent.
Fixed a number of stability issues with the event redaction job.
Fixed an issue where the web client could start queries from the beginning of time when scrolling backwards through events in the UI.
Fixed an issue where the segment merger would write that the current node had a segment slightly before registering that segment in the local node.
Fixed a bug where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there were too many datasources in the repo. This led to a crash loop when the affected node was restarted.
Fixed an issue where a failing event forwarder would be cached indefinitely and could negatively impact Humio performance.
When checking if the ViewAction.ChangeS3ArchivingSettings action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a view, as the action only makes sense on repositories.
Errors on alerts, which are shown in the alert overview, are now only cleared when either the alert query is updated by a user or the alert triggers. Previously, errors that occurred when actions were triggered would be removed when the alert no longer triggered. Now, they will be displayed until the actions trigger successfully. On the other hand, errors that occur when running the query may now remain until the next time the alert triggers, where they would previously be removed as soon as the query run again without errors.
This change was reverted in 1.33.1.
Fixed an issue where some regexes could not be used.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
Crash the node if any of a number of critical threads die. This should help prevent zombie nodes.
Fixed incorrect results when searching through saved queries and recent queries.
Fixed a bug with the cache not being populated between restarts on single node clusters.
When creating or updating an action, the backend now verifies that the host URL associated with the action is prefixed with either `http://` or `https://`. This affects actions of the types: Webhook, OpsGenie, Single-Channel Slack, and VictorOps.
Fixed a bug where only part of the Users page was loading when navigating from the All organizations page.
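The URL-scheme check described for action host URLs can also be mirrored client-side before saving an action; a trivial sketch (the function name is hypothetical, not part of any Humio API):

```python
def has_valid_scheme(url: str) -> bool:
    """Mirror of the backend validation: the host URL must be http(s)."""
    return url.startswith(("http://", "https://"))
```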
Fixed an issue where a dashboard installed with a YAML file could be slightly different than what was specified in the file.
Fixed an issue where a scheduled search failed and was retried, if it had multiple actions and at least one action was unknown to Humio. Now, the unknown action is logged, but the scheduled search completes successfully and continues to the next scheduled run.
Fixed an edge case where Humio might create multiple copies of the same datasource when the number of Kafka partitions is changed. The fix ensures only one copy will be created.
Fixed an issue where comments spanning multiple lines wouldn't be colored correctly.
Fixed an issue where on-prem trial licenses would use user count limits from cloud.
Changed field type for zip codes.
Fixed an issue where an error message was displayed when adding a group to a repository or view if the user was not the organization owner or root.
Changes to the state of IOC access on organizations are now reflected in the audit log.
Remove the ability to create ingest tokens and ingest listeners on system repositories.
Fixed a bug where invalid UTF-16 characters could not be ingested. They are now converted to `\ufffd` (the Unicode replacement character).
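For reference, the replacement of invalid UTF-16 (lone surrogates) with U+FFFD can be sketched in Python — this mirrors the described ingest behavior, not the actual server code:

```python
def replace_lone_surrogates(s: str) -> str:
    # Lone surrogates (U+D800..U+DFFF) are invalid in well-formed text;
    # replace each with U+FFFD, the Unicode replacement character.
    return "".join("\ufffd" if 0xD800 <= ord(c) <= 0xDFFF else c for c in s)
```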
Addressed an issue causing Humio to sometimes error log an ArrayIndexOutOfBoundsException during shutdown.
Fixed an issue where missing undersized segments in a datasource might cause Humio to repeatedly transfer undersized segments between nodes.
Temporary fix for an issue with live queries that do not have `bucket()` or `timeChart()` as their first aggregator, but use one of them as a second aggregator later in the query. As a temporary fix, such queries will fail; a proper fix will follow in a later release.
Browser storage is now cleared when initializing while unauthenticated.
Fixed a bug where query coordination partitions would not get updated.
Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.
Fixed some widgets on dashboards reporting errors while waiting for data to load.
When checking if the ViewAction.ChangeRepoConnections action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a repository, as the action only makes sense on views.
No longer return the "Query Plan" in responses, but return a hash in the new field hashedQueryOnView instead. The plan could leak information not otherwise visible to the user, such as query prefixes being applied.
When a digester fails to start, rather than restarting the JVM as previous releases did, Humio keeps retrying to start, assuming the issue is transient, such as data for a single ingest partition being unavailable for a short while. While in this situation, the process reports the metric for ingest latency on the affected partitions as the uptime of the JVM process. The idea is to signal that data is not flowing on those partitions, so that a monitored metric can raise an alarm somewhere. In the absence of a proper latency value in this situation, a growing non-zero metric is better than a metric stuck at zero.
Changes to the state of backend feature flags are now reflected in the audit log.
Fixed an issue where release notes would not close when a release is open.
Fixed an issue where the segment merger could mishandle errors during merge.
Fixed an issue causing Humio running on Java 16+ to return incorrect search results when the input query contains Unicode surrogate pairs (e.g. when searching for an emoji).
Fixed a bug where shared lookup files could not be downloaded from the UI.
Removed error query param from URL when entering Humio.
Fixed an issue where OIDC without a discovery endpoint would fail to configure if `OIDC_TOKEN_ENDPOINT_AUTH_METHOD` was not set.
Fixed an issue where certain problems would highlight the first word in a query.
Include view+parser-name in thread dumps when time is spent inside a parser.
Humio Server 1.32.8 LTS (2022-03-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.32.8 | LTS | 2022-03-09 | Cloud | 2022-10-31 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 94782d07a7feda72ffc831747e6740f1 |
SHA1 | 8c1afb6e1b4fe6e5eabe30823e7705b7a03a7af5 |
SHA256 | 76eaa3581ad45443af93dd9e6f92f51142bdc082ba8af234b0ffc7f4346ef630 |
SHA512 | 6929950a72ee073402aea63a3acc016b15ae435ba5c9cf4c24b94fbf51af01c4ccfa4da223374527b597c154f0ca72a114feb0ec45bfae55fad8be17645bccf5 |
Docker Image | SHA256 Checksum |
---|---|
humio | d8f1892f444ab53d2e94419a096978da0eba8d7a995096d9b6f031162fd3bb3c |
humio-core | 49d2631113684a8fc8b3ad53e5545a661961607a3033357ee977ce50d0b49f0a |
kafka | 3bbbaec63eb2ca089800172d94e95f6bb84266dcbac3e76217f50c4169133546 |
zookeeper | 9830902ac961b08c778f38b8714dcfab6be076c1acc5366db459a8560cb6eb61 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.8/server-1.32.8.tar.gz
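A downloaded tarball can be checked against the SHA256 value listed above with a generic streaming hash, for example using Python's hashlib (the filename comes from the download link):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# sha256_of("server-1.32.8.tar.gz") should match the SHA256 listed above.
```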
These notes include entries from the following previous releases: 1.32.0, 1.32.1, 1.32.2, 1.32.3, 1.32.4, 1.32.5, 1.32.6, 1.32.7
Updated dependencies with security and weakness fixes, and improved performance.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.
Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.
New features and improvements
UI Changes
Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.
Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.
On mobile devices, the left navigation menu can be hidden and reopened on organization settings pages and repository settings pages.
Cluster management pages style updates.
Fixed some styling issues on the Query Quotas page.
The signup path was removed, together with the corresponding pages.
Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.
Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.
Identity provider pages style update.
GraphQL API
Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.
The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.
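The UTC normalization described for the DateTime type can be illustrated client-side in Python (assuming an offset such as +02:00):

```python
from datetime import datetime, timezone

# A timestamp with a +02:00 offset, now accepted by the GraphQL DateTime type.
ts = datetime.fromisoformat("2021-07-18T14:13:09.517+02:00")
utc = ts.astimezone(timezone.utc)  # the UTC equivalent stored internally
```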
When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.
Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.
The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.
The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.
Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.
Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.
Extended 'Relative' field type for schema files to include support for the value 'now'.
Configuration
Added a compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.
The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.
On a node configured with `USING_EPHEMERAL_DISKS=true`, local disk management is allowed to delete files even if a query may need them later, as the system is able to re-fetch the files from bucket storage when required. This improves the situation when active queries have, in total, requested access to more segments than the local disk can hold.
Functions
Improved performance in IP database lookups for the functions `ipLocation()`, `asn()`, and `worldMap()`.
Other
Added focus states to text field, selection and text area components.
Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.
Improved handling of local disk space relative to `LOCAL_STORAGE_MIN_AGE_DAYS`. When the local disk would overflow by respecting that config, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.
Raised the size limit on ingest requests from 8 MB to 1 GB.
Scheduled search "schedule" is explained using human-readable text such as "At 09:30 on Tuesdays".
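Rendering a crontab schedule as prose like the above can be sketched as follows — a toy example with a hypothetical function name, handling only simple numeric fields, not LogScale's actual renderer:

```python
def describe_cron(expr: str) -> str:
    """Render a simple five-field crontab expression as prose (toy example)."""
    minute, hour, _dom, _month, dow = expr.split()
    days = {"0": "Sunday", "1": "Monday", "2": "Tuesday", "3": "Wednesday",
            "4": "Thursday", "5": "Friday", "6": "Saturday"}
    text = f"At {int(hour):02d}:{int(minute):02d}"
    if dow in days:
        text += f" on {days[dow]}s"
    return text
```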
Improved search on the Users page.
Package installation error messages are now much more readable.
Limit pending ingest requests by rejecting excess invocations. Rejections are signalled as status 429 "Too Many Requests" and a Retry-After header suggesting to retry in 5 seconds. Limiting starts when queued requests exceed `INGEST_REQUEST_LIMIT_PCT` of the total heap size; the default is 5.
Warnings when running scheduled searches now show up as errors on the scheduled search overview page if `SCHEDULED_SEARCH_DESPITE_WARNINGS` is set to `false` (the default).
(the default).Added a Data subprocessors page under account.
Improved audit log for organization creation.
Added maximum width to tabs on the Group page, so they do not keep expanding forever.
Humio Docker images are now based on Alpine Linux.
New metric: "ingest-request-delay". Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest.
Added explicit distribution information for elastic bulk API for elasticsearch API compatibility.
Allow launching using JDK-16.
The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. Also, it is now possible to also test the scheduled search specific message templates using it.
Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the backfill limit.
Added loading and error states to the page where user selects to create a new repository or view.
When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.
Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.
Added support for including dashboard and alert labels when exporting a package.
Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.
You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.
Added Dark Mode for the Query Monitor page.
Fixed in this release
Security
Updated dependencies to Akka to fix CVE-2021-42697.
Updated dependencies to address a critical security vulnerability, "Log4Shell" (CVE-2021-44228), in the `log4j` logging framework.
Updated dependencies to Netty to fix CVE-2021-43797.
Fixed a compatibility issue with Filebeat 7.16.0
Updated dependencies to address a critical security vulnerability for the log4j logging framework, "log4shell", (CVE-2021-44228).
Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-45046).
Updated dependencies to log4j 2.17.1 to fix CVE-2021-44832 and CVE-2021-45105
Updated dependencies to jawn to fix CVE-2022-21653.
Summary
Fixed an issue where queries of the form `#someTagField != someValue ...` would sometimes produce incorrect results.
Performance improvements of ingest and internal caching.
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed an issue that would result in a query not completing when one of the involved segments was deleted locally while the query was running. This could happen on clusters using bucket storage with more data than fits the local disks.
Updated dependencies to Jackson to fix a weakness
Fixed an issue with epoch and offsets not always being stripped from segments.
Security fix.
Removed a spurious warning log when requesting a non-existent hash file from S3.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
Fixed issue where streaming (exporting) query results in JSON format could include extra "," characters within the output.
It is now possible to ingest logs into Humio using LogStash v.7.13 and upwards.
Updated a dependency to a version fixing a critical bug.
Documentation
Updated the examples on how to use the `match()` query function in the online documentation.
Automation and Alerts
Fixed a bug which could potentially have caused alerts to not re-fire after the throttle period for field-based throttling had passed.
Functions
Fixed an issue where `top()` with `max=` could yield the same key multiple times (for example `... | top([queryId, query], max=totalSize)`).
Fixed an issue with the `split()` function which caused incorrect (usually too few) query results in some cases where the output fields were referred to later in the query.
Other
Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.
Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.
Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.
Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.
Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the queryprefix *.
Fixed an issue that caused some errors to be hidden behind a message about "internal error".
Reworded a confusing error message when using the `top()` function with a limit parameter exceeding the limits configured with `TOP_K_MAX_MAP_SIZE_HISTORICAL` or `TOP_K_MAX_MAP_SIZE_LIVE`.
Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.
Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.
Truncate long user names on the Users page.
Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.
Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.
Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.
Introduced a check for compatibility between packages and Humio versions.
Security checks when viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.
Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.
Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.
Creating a new dashboard now opens it after creation.
Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.
The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.
Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.
Fixed an issue where, when creating a repository, a user was automatically assigned a role but could not see themselves in the roles list, and where role assignments were not counted correctly under usage when editing roles.
Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.
Fixed a bug where queries that triggered an error while executing due to the input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll, where it should get a 400.
Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.
Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.
Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.
Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.
When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.
Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.
Split package export page into dialog with multiple steps.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.
Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.
The DiskSpaceJob no longer initializes based on the segment last-modified timestamp unless no access-order snapshot is stored locally. If a snapshot is present, we trust that.
Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Cloning an asset now redirects you to the edit page for the asset for all assets.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.
Fixed thread safety for a variable involved in fetching from bucket storage for queries.
Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.
Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.
Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.
Updated dependencies with security fixes.
The simple and advanced permission model has been merged, thus allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and all other features that were previously only available in advanced permissions mode.
Humio Server 1.32.7 LTS (2022-01-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.32.7 | LTS | 2022-01-06 | Cloud | 2022-10-31 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 30080b4b84bb4aba37eb11023019bd78 |
SHA1 | c53f4ac43beee249a68c90cae62d0b768b1284d7 |
SHA256 | 192b0554f6cbdfd5853cc90d8a418497df800da85bc0795238fc359aae8fa309 |
SHA512 | 0119341a2d041b0143a849881279274b7aef6589e8f90730907401562d03c631065bdb6f3e567b9cdffe31e572c570414f9386ef12e39ffaefdb4a8a5d70f479 |
Docker Image | SHA256 Checksum |
---|---|
humio | c286be555ee8e333acaa87ff010a8b6ecda18cd57c57b2925513b300792c021c |
humio-core | c1efbc9ae9dd76ccbaa674e63ff0029625724c1bb82f3b759cf692f65b3861fc |
kafka | e0d67223acc40bc46d9205286655b28e5c5adfdacbd86441e57e4939113805ee |
zookeeper | c5a7a9ab2dc6be82defe163499b943b727697a3e5d06fc8ead128240e596beb9 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.7/server-1.32.7.tar.gz
These notes include entries from the following previous releases: 1.32.0, 1.32.1, 1.32.2, 1.32.3, 1.32.4, 1.32.5, 1.32.6
Updated dependencies with security and weakness fixes.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.
Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.
New features and improvements
UI Changes
Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.
Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.
On mobile devices, the left navigation menu can be hidden and reopened on organization settings pages and repository settings pages.
Cluster management pages style updates.
Fixed some styling issues on the Query Quotas page.
The signup path was removed, together with the corresponding pages.
Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.
Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.
Identity provider pages style update.
GraphQL API
Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.
The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.
When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.
Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.
The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.
The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.
Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.
Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.
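A minimal sketch of invoking this mutation over HTTP; the argument names and variable shapes below are illustrative assumptions, not the confirmed schema:

```python
import json

# Sketch of calling the cancelDeleteEvents mutation via the GraphQL HTTP
# endpoint. The argument names ($repo, $token) are hypothetical examples.
mutation = """
mutation CancelDelete($repo: String!, $token: Int!) {
  cancelDeleteEvents(repositoryName: $repo, token: $token)
}
"""

payload = json.dumps({
    "query": mutation,
    "variables": {"repo": "my-repo", "token": 42},
})
# POST `payload` to the /graphql endpoint with an Authorization header.
# Remember: cancellation is best-effort, and already-deleted events are
# not restored.
```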
Extended 'Relative' field type for schema files to include support for the value 'now'.
Configuration
Added a compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.
The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.
On a node configured as
USING_EPHEMERAL_DISKS=true
allow local disk management to delete files even if a query may need them later, as the system is able to re-fetch the files from bucket storage when required. This improves the situation when active queries have, in total, requested access to more segments than the local disk can hold.
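The configuration options above might be set as environment variables, for example (values are illustrative, not defaults):

```shell
# Illustrative environment configuration; values are examples only.
S3_STORAGE_IBM_COMPAT=true    # IBM Cloud Object Storage compatibility mode
USING_EPHEMERAL_DISKS=true    # local segment files may be evicted and re-fetched
```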
Functions
Improved performance in IP database lookups for the functions
ipLocation()
,asn()
andworldMap()
.
Other
Added focus states to text field, selection and text area components.
Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.
Improved handling of local disk space relative to
LOCAL_STORAGE_MIN_AGE_DAYS
. When the local disk would overflow by respecting that config, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.
Raised the size limit on ingest requests from 8 MB to 1 GB.
Scheduled search "schedule" is explained using human readable text such as "At 9.30 on Tuesdays".
Improved search on the Users page.
Package installation error messages are now much more readable.
Limit pending ingest requests by rejecting excess invocations. Rejections are signalled as status 429 "Too many requests" with a Retry-After header suggesting a retry in 5 seconds. Limiting starts when queued requests exceed
INGEST_REQUEST_LIMIT_PCT
of the total heap size; the default is 5.
Warnings when running scheduled searches now show up as errors on the scheduled search overview page if
SCHEDULED_SEARCH_DESPITE_WARNINGS
is set to false (the default).
Added a Data subprocessors page under account.
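The 429 rejection behavior for ingest described above can be handled client-side as in this sketch (the send callable is a hypothetical stand-in for an HTTP client, not a Humio API):

```python
import time

def send_with_backoff(send, payload, max_attempts=3):
    """Retry an ingest request when the server sheds load with 429.

    `send` is any callable returning (status_code, headers); this sketch
    models only the retry logic, not a specific HTTP client.
    """
    for attempt in range(max_attempts):
        status, headers = send(payload)
        if status != 429:
            return status
        # Honor the server's Retry-After hint (the note suggests 5 seconds).
        delay = int(headers.get("Retry-After", 5))
        if attempt < max_attempts - 1:
            time.sleep(delay)
    return status
```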
Improved audit log for organization creation.
Added maximum width to tabs on the Group page, so they do not keep expanding forever.
Humio Docker images are now based on Alpine Linux.
New metric: "ingest-request-delay". Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest.
Added explicit distribution information for elastic bulk API for elasticsearch API compatibility.
Allow launching using JDK-16.
The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. It is now also possible to test the scheduled search specific message templates using it.
Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.
Added loading and error states to the page where user selects to create a new repository or view.
When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.
Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.
Added support for including dashboard and alert labels when exporting a package.
Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.
You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.
Added Dark Mode for
Query Monitor
page.
Fixed in this release
Security
Updated dependencies to address a critical security vulnerability for the
log4j
logging framework, "log4shell", (CVE-2021-44228).
Updated Netty dependencies to fix CVE-2021-43797.
Fixed a compatibility issue with Filebeat 7.16.0
Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-45046).
Updated dependencies to log4j 2.17.1 to fix CVE-2021-44832 and CVE-2021-45105
Summary
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed an issue that would result in a query not completing when one of the involved segments was deleted locally while the query was running. This could happen on clusters using bucket storage with more data than fits the local disks.
Updated Jackson dependencies to fix a weakness.
Security fix.
Removed a spurious warning log when requesting a non-existent hash file from S3.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
Fixed an issue where streaming (exporting) query results in JSON format could include extra "," characters within the output.
It is now possible to ingest logs into Humio using Logstash v7.13 and upwards.
Updated a dependency to a version fixing a critical bug.
Documentation
Updated the examples on how to use the
match()
query function in the online documentation.
Automation and Alerts
Fixed a bug that could potentially have caused alerts to not re-fire after the throttle period for field-based throttling had passed.
Functions
Fixed an issue where
top()
with max= could yield the same key multiple times (for example ...| top([queryId, query], max=totalSize)).
Fixed an issue with the
split()
function, which caused incorrect (usually too few) query results in some cases where the output fields were referred to later in the query.
Other
Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.
Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.
Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.
Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.
Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.
Fixed an issue that caused some errors to be hidden behind a message about "internal error".
Reworded a confusing error message when using the
top()
function with a limit parameter exceeding the limits configured withTOP_K_MAX_MAP_SIZE_HISTORICAL
orTOP_K_MAX_MAP_SIZE_LIVE
.
Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.
Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.
Truncate long user names on the Users page.
Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.
Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.
Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.
Introduced a compatibility check between packages and Humio versions.
Permission checks when viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.
Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.
Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.
Creating a new dashboard now opens it after creation.
Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.
The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.
Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.
Fixed an issue where, when creating a repository, the user was automatically assigned a role but was not able to see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.
Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.
Fixed a bug where queries that triggered an error while executing due to the input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll, where it should get a success status.
Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.
Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.
Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.
Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.
When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.
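The fetch-ordering logic described above can be sketched as follows; segments_to_fetch, fetch_hash_filter, and may_match are hypothetical names standing in for LogScale internals:

```python
def segments_to_fetch(segments, fetch_hash_filter, may_match):
    """Decide which remote segment files a query actually needs.

    `fetch_hash_filter` downloads only the small auxiliary hash file and
    `may_match` evaluates the query's literal/regex checks against it;
    both are hypothetical stand-ins, not LogScale APIs.
    """
    needed = []
    for seg in segments:
        hash_filter = fetch_hash_filter(seg)
        # Only pay for the full segment download when no hash filter exists
        # or the cheap hash-filter check cannot rule the segment out.
        if hash_filter is None or may_match(hash_filter):
            needed.append(seg)
    return needed
```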
Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.
Split package export page into dialog with multiple steps.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.
Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.
The DiskSpaceJob no longer initializes based off of the segment last-modified timestamp; that only happens if no access order snapshot is stored locally. If a snapshot is present, we trust it.
Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Cloning any asset now redirects you to the edit page for that asset.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.
Fixed thread safety for a variable involved in fetching from bucket storage for queries.
Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.
Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.
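For illustration, a bulk body using the 'create' action could be built like this (NDJSON, one action line per source line; the field names are examples):

```python
import json

# Build an Elasticsearch-style _bulk body using the "create" action, which
# the Elastic-compatible ingest endpoint now treats like "index".
events = [{"message": "user login", "host": "web-1"},
          {"message": "user logout", "host": "web-1"}]

lines = []
for event in events:
    lines.append(json.dumps({"create": {}}))  # action line
    lines.append(json.dumps(event))           # source line
body = "\n".join(lines) + "\n"  # bulk bodies are newline-terminated NDJSON
```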
Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.
Updated dependencies with security fixes.
The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.
Humio Server 1.32.6 LTS (2021-12-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.32.6 | LTS | 2021-12-15 | Cloud | 2022-10-31 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 7faf420472dbc06536e7478f9b62ab9b |
SHA1 | 77ab5617ec6a9ea24d5db93d82e44a6baa9b8bc2 |
SHA256 | f77c2758074e6ed3b0543531e8ad8ca8a62b934484ee2ac820eabcdd9c572b20 |
SHA512 | 777a8f6e1f2371aea2f9aaeed46633c5c0837e96b1a902433adb7ad8fd98efdaf0f1b21fbae5c8ddb9b5099c4f85e70a0d5a7e9ca00a47b2f2293c9b37c129fe |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.6/server-1.32.6.tar.gz
These notes include entries from the following previous releases: 1.32.0, 1.32.1, 1.32.2, 1.32.3, 1.32.4, 1.32.5
Security fix related to
log4j
logging, and a fix for compatibility with Filebeat.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.
Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.
New features and improvements
UI Changes
Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.
Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.
On mobile devices, the left navigation menu can be hidden and reopened on organization settings pages and repository settings pages.
Cluster management pages style updates.
Fixed some styling issues on the Query Quotas page.
The signup path was removed, together with the corresponding pages.
Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.
Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.
Identity provider pages style update.
GraphQL API
Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.
The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.
When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.
Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.
The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.
The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.
Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.
Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.
Extended 'Relative' field type for schema files to include support for the value 'now'.
Configuration
Added a compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.
The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.
On a node configured as
USING_EPHEMERAL_DISKS=true
allow local disk management to delete files even if a query may need them later, as the system is able to re-fetch the files from bucket storage when required. This improves the situation when active queries have, in total, requested access to more segments than the local disk can hold.
Functions
Improved performance in IP database lookups for the functions
ipLocation()
,asn()
andworldMap()
.
Other
Added focus states to text field, selection and text area components.
Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.
Improved handling of local disk space relative to
LOCAL_STORAGE_MIN_AGE_DAYS
. When the local disk would overflow by respecting that config, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.
Raised the size limit on ingest requests from 8 MB to 1 GB.
Scheduled search "schedule" is explained using human readable text such as "At 9.30 on Tuesdays".
Improved search on the Users page.
Package installation error messages are now much more readable.
Limit pending ingest requests by rejecting excess invocations. Rejections are signalled as status 429 "Too many requests" with a Retry-After header suggesting a retry in 5 seconds. Limiting starts when queued requests exceed
INGEST_REQUEST_LIMIT_PCT
of the total heap size; the default is 5.
Warnings when running scheduled searches now show up as errors on the scheduled search overview page if
SCHEDULED_SEARCH_DESPITE_WARNINGS
is set to false (the default).
Added a Data subprocessors page under account.
Improved audit log for organization creation.
Added maximum width to tabs on the Group page, so they do not keep expanding forever.
Humio Docker images are now based on Alpine Linux.
New metric: "ingest-request-delay". Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest.
Added explicit distribution information for elastic bulk API for elasticsearch API compatibility.
Allow launching using JDK-16.
The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. It is now also possible to test the scheduled search specific message templates using it.
Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.
Added loading and error states to the page where user selects to create a new repository or view.
When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.
Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.
Added support for including dashboard and alert labels when exporting a package.
Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.
You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.
Added Dark Mode for
Query Monitor
page.
Fixed in this release
Security
Updated dependencies to address a critical security vulnerability for the
log4j
logging framework, "log4shell", (CVE-2021-44228).
Fixed a compatibility issue with Filebeat 7.16.0.
Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-45046).
Summary
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed an issue that would result in a query not completing when one of the involved segments was deleted locally while the query was running. This could happen on clusters using bucket storage with more data than fits the local disks.
Security fix.
Removed a spurious warning log when requesting a non-existent hash file from S3.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
Fixed an issue where streaming (exporting) query results in JSON format could include extra "," characters within the output.
It is now possible to ingest logs into Humio using Logstash v7.13 and upwards.
Updated a dependency to a version fixing a critical bug.
Documentation
Updated the examples on how to use the
match()
query function in the online documentation.
Automation and Alerts
Fixed a bug that could potentially have caused alerts to not re-fire after the throttle period for field-based throttling had passed.
Functions
Fixed an issue where
top()
with max= could yield the same key multiple times (for example ...| top([queryId, query], max=totalSize)).
Fixed an issue with the
split()
function, which caused incorrect (usually too few) query results in some cases where the output fields were referred to later in the query.
Other
Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.
Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.
Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.
Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.
Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.
Fixed an issue that caused some errors to be hidden behind a message about "internal error".
Reworded a confusing error message when using the
top()
function with a limit parameter exceeding the limits configured withTOP_K_MAX_MAP_SIZE_HISTORICAL
orTOP_K_MAX_MAP_SIZE_LIVE
.
Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.
Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.
Truncate long user names on the Users page.
Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.
Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.
Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.
Introduced a compatibility check between packages and Humio versions.
Permission checks when viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.
Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.
Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.
Creating a new dashboard now opens it after creation.
Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.
The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.
Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.
Fixed an issue where, when creating a repository, the user was automatically assigned a role but was not able to see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.
Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.
Fixed a bug where queries that triggered an error while executing due to the input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll, where it should get a success status.
Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.
Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.
Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.
Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.
When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.
Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.
Split package export page into dialog with multiple steps.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.
Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.
The DiskSpaceJob no longer initializes based off of the segment last-modified timestamp; that only happens if no access order snapshot is stored locally. If a snapshot is present, we trust it.
Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Cloning any asset now redirects you to the edit page for that asset.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.
Fixed thread safety for a variable involved in fetching from bucket storage for queries.
Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.
Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.
Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.
Updated dependencies with security fixes.
The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.
Humio Server 1.32.5 LTS (2021-12-10)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.32.5 | LTS | 2021-12-10 | Cloud | 2022-10-31 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | e4475e8c2f6623ff22465ccd59bbfb8a |
SHA1 | 5affbdbd10674259ec178db75171d73eb2835f1d |
SHA256 | 92c0261e328c43f0ba46ffaa72fc5e8a1ef075438759286ab00ea5dbb5b5ca30 |
SHA512 | e698dd4a2aa8c85ca3d89cc596e05677106d509082f816e46b3dca6b4b3313690d4c41c3a5b5529d08d91e51ebbfb3f9217e7830d1f4ade54fd79b40f366f449 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.5/server-1.32.5.tar.gz
These notes include entries from the following previous releases: 1.32.0, 1.32.1, 1.32.2, 1.32.3, 1.32.4
Security fix related to
log4j
logging.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.
Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.
New features and improvements
UI Changes
Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.
Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.
On mobile devices, the left navigation menu can be hidden and reopened on organization settings pages and repository settings pages.
Cluster management pages style updates.
Fixed some styling issues on the Query Quotas page.
The signup path was removed, together with the corresponding pages.
Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.
Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.
Identity provider pages style update.
GraphQL API
Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.
The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.
When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.
Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.
The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.
The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.
Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.
Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.
Extended 'Relative' field type for schema files to include support for the value 'now'.
Configuration
Added a compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.
The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.
On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation where active queries have in total requested access to more segments than the local disk can hold.
Functions
Improved performance of IP database lookups for the ipLocation(), asn() and worldMap() functions.
Other
Added focus states to text field, selection and text area components.
Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.
Improved handling of local disk space relative to LOCAL_STORAGE_MIN_AGE_DAYS. When respecting that configuration would overflow the local disk, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.
Raised the size limit on ingest requests from 8 MB to 1 GB.
The scheduled search "schedule" is explained using human-readable text such as "At 9:30 on Tuesdays".
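As a rough, hypothetical sketch of this kind of rendering (the rules and cases handled are illustrative, not LogScale's implementation):

```python
# Render a crontab expression as human-readable text, e.g. "30 9 * * 2"
# becomes "At 9:30 on Tuesdays". Only two simple cases are handled here.
DAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"]

def describe_cron(expr: str) -> str:
    minute, hour, dom, month, dow = expr.split()
    if dom == "*" and month == "*" and dow != "*":
        return f"At {int(hour)}:{int(minute):02d} on {DAYS[int(dow)]}s"
    if dom == "*" and month == "*" and dow == "*":
        return f"At {int(hour)}:{int(minute):02d} every day"
    return expr  # fall back to the raw expression for complex schedules

print(describe_cron("30 9 * * 2"))  # At 9:30 on Tuesdays
```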
Improved search on the Users page.
Package installation error messages are now much more readable.
Limit pending ingest requests by rejecting excess invocations. Rejections are signalled with status 429 "Too many requests" and a Retry-After header suggesting a retry in 5 seconds. Limiting starts when queued requests exceed INGEST_REQUEST_LIMIT_PCT of the total heap size; the default is 5.
Warnings when running scheduled searches now show up as errors on the scheduled search overview page if SCHEDULED_SEARCH_DESPITE_WARNINGS is set to false (the default).
Added a Data subprocessors page under account.
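On the client side, the 429 throttling response and its Retry-After header can be honored with a small backoff loop. This is a hedged sketch; the request object and retry policy are placeholders, not part of LogScale:

```python
# Retry an ingest request when the server throttles with 429 "Too many
# requests", waiting the number of seconds suggested by Retry-After.
import time
import urllib.request
import urllib.error

def retry_delay(headers, default=5):
    """Seconds to wait before retrying, taken from the Retry-After header."""
    try:
        return int(headers.get("Retry-After", default))
    except (TypeError, ValueError):
        return default

def send_with_retry(req, max_attempts=5):
    for _ in range(max_attempts):
        try:
            return urllib.request.urlopen(req)
        except urllib.error.HTTPError as e:
            if e.code != 429:   # only back off on throttling
                raise
            time.sleep(retry_delay(e.headers))
    raise RuntimeError("ingest still throttled after retries")
```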
Improved audit log for organization creation.
Added maximum width to tabs on the Group page, so they do not keep expanding forever.
Humio Docker images are now based on Alpine Linux.
New metric: "ingest-request-delay". Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest.
Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.
Allow launching using JDK-16.
The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. It is now also possible to test the scheduled-search-specific message templates with it.
Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.
Added loading and error states to the page where the user selects whether to create a new repository or view.
When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.
Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.
Added support for including dashboard and alert labels when exporting a package.
Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.
You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.
Added Dark Mode for the Query Monitor page.
Fixed in this release
Security
Updated dependencies to address a critical security vulnerability in the log4j logging framework, "log4shell" (CVE-2021-44228).
Fixed a compatibility issue with Filebeat 7.16.0.
Summary
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed an issue that would result in a query not completing when one of the involved segments was deleted locally while the query was running. This could happen on clusters using bucket storage with more data than fits the local disks.
Security fix.
Removed a spurious warning log when requesting a non-existent hash file from S3.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
It is now possible to ingest logs into Humio using Logstash v7.13 and upwards.
Updated a dependency to a version fixing a critical bug.
Documentation
Updated the examples on how to use the match() query function in the online documentation.
Automation and Alerts
Fixed a bug which could potentially have caused alerts to not re-fire after the throttle period for field-based throttling had passed.
Functions
Fixed an issue where top() with max= could yield the same key multiple times (for example ...| top([queryId, query], max=totalSize)).
Fixed an issue with the split() function which caused incorrect (usually, too few) query results in some cases where the output fields were referred to later in the query.
Other
Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.
Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.
Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.
Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.
Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.
Fixed an issue that caused some errors to be hidden behind a message about "internal error".
Reworded a confusing error message when using the top() function with a limit parameter exceeding the limits configured with TOP_K_MAX_MAP_SIZE_HISTORICAL or TOP_K_MAX_MAP_SIZE_LIVE.
Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.
Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.
Truncate long user names on the Users page.
Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.
Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.
Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.
Introduced a compatibility check between packages and Humio versions.
Permission requirements for viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.
Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.
Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.
Creating a new dashboard now opens it after creation.
Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.
The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.
Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.
Fixed an issue where, when creating a repository, a user was automatically assigned a role but was not able to see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.
Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.
Fixed a bug where queries that triggered an error while executing due to their input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll instead of the correct status.
Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.
Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.
Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.
Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.
When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.
Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.
Split the package export page into a dialog with multiple steps.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.
Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.
The DiskSpaceJob no longer initializes based on the segment last-modified timestamp; it only does so if no access order snapshot is stored locally. If a snapshot is present, we trust that.
Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Cloning an asset now redirects you to the asset's edit page, for all asset types.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.
Fixed thread safety for a variable involved in fetching from bucket storage for queries.
Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.
Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.
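For reference, an Elasticsearch-style bulk NDJSON body using the 'create' operation can be assembled like this (a hypothetical helper; the event fields are illustrative):

```python
# Build an Elasticsearch bulk NDJSON body: each document is preceded by an
# action line, which may now be "create" as well as "index".
import json

def bulk_body(events, op="create"):
    lines = []
    for event in events:
        lines.append(json.dumps({op: {}}))  # action line
        lines.append(json.dumps(event))     # source document
    return "\n".join(lines) + "\n"

body = bulk_body([{"message": "hello"}])
print(body)
```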
Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.
Updated dependencies with security fixes.
The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.
Humio Server 1.32.4 LTS (2021-12-10)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.32.4 | LTS | 2021-12-10 | Cloud | 2022-10-31 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 237975851200cd2a21b749ea3994f307 |
SHA1 | 0e6acfcbc13150c83861af51798d8f4510de7bb4 |
SHA256 | 99a86034dc2e26f0779a64a11633563f59dbc94959b85f80f996e5c267053ae4 |
SHA512 | aba67fee43c26d832cc9fb0a5ce3af0faabb596bc33c597eaba83f8f7d3e7da1ec61f6554c257ca7063991a802fdd4149f90e25db856751ca538c1d2f5262a40 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.4/server-1.32.4.tar.gz
These notes include entries from the following previous releases: 1.32.0, 1.32.1, 1.32.2, 1.32.3
Security fix related to log4j logging, and a compatibility fix for Filebeat.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.
Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.
New features and improvements
UI Changes
Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.
Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.
On mobile devices, the left navigation menu is hidden on organization settings pages and repository settings pages, and can be opened again.
Cluster management pages style updates.
Fixed some styling issues on the Query Quotas page.
The signup path was removed, together with the corresponding pages.
Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.
Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.
Identity provider pages style update.
GraphQL API
Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.
The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.
When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.
Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.
The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.
The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.
Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.
Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.
Extended 'Relative' field type for schema files to include support for the value 'now'.
Configuration
Added compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.
The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.
On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation where active queries have in total requested access to more segments than the local disk can hold.
Functions
Improved performance of IP database lookups for the ipLocation(), asn() and worldMap() functions.
Other
Added focus states to text field, selection and text area components.
Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.
Improved handling of local disk space relative to LOCAL_STORAGE_MIN_AGE_DAYS. When respecting that configuration would overflow the local disk, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.
Raised the size limit on ingest requests from 8 MB to 1 GB.
The scheduled search "schedule" is explained using human-readable text such as "At 9:30 on Tuesdays".
Improved search on the Users page.
Package installation error messages are now much more readable.
Limit pending ingest requests by rejecting excess invocations. Rejections are signalled with status 429 "Too many requests" and a Retry-After header suggesting a retry in 5 seconds. Limiting starts when queued requests exceed INGEST_REQUEST_LIMIT_PCT of the total heap size; the default is 5.
Warnings when running scheduled searches now show up as errors on the scheduled search overview page if SCHEDULED_SEARCH_DESPITE_WARNINGS is set to false (the default).
Added a Data subprocessors page under account.
Improved audit log for organization creation.
Added maximum width to tabs on the Group page, so they do not keep expanding forever.
Humio Docker images are now based on Alpine Linux.
New metric: "ingest-request-delay". Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest.
Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.
Allow launching using JDK-16.
The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. It is now also possible to test the scheduled-search-specific message templates with it.
Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.
Added loading and error states to the page where the user selects whether to create a new repository or view.
When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.
Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.
Added support for including dashboard and alert labels when exporting a package.
Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.
You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.
Added Dark Mode for the Query Monitor page.
Fixed in this release
Security
Updated dependencies to address a critical security vulnerability in the log4j logging framework, "log4shell" (CVE-2021-44228).
Fixed a compatibility issue with Filebeat 7.16.0.
Summary
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed an issue that would result in a query not completing when one of the involved segments was deleted locally while the query was running. This could happen on clusters using bucket storage with more data than fits the local disks.
Security fix.
Removed a spurious warning log when requesting a non-existent hash file from S3.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
It is now possible to ingest logs into Humio using Logstash v7.13 and upwards.
Updated a dependency to a version fixing a critical bug.
Documentation
Updated the examples on how to use the match() query function in the online documentation.
Automation and Alerts
Fixed a bug which could potentially have caused alerts to not re-fire after the throttle period for field-based throttling had passed.
Functions
Fixed an issue where top() with max= could yield the same key multiple times (for example ...| top([queryId, query], max=totalSize)).
Fixed an issue with the split() function which caused incorrect (usually, too few) query results in some cases where the output fields were referred to later in the query.
Other
Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.
Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.
Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.
Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.
Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.
Fixed an issue that caused some errors to be hidden behind a message about "internal error".
Reworded a confusing error message when using the top() function with a limit parameter exceeding the limits configured with TOP_K_MAX_MAP_SIZE_HISTORICAL or TOP_K_MAX_MAP_SIZE_LIVE.
Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.
Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.
Truncate long user names on the Users page.
Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.
Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.
Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.
Introduced a compatibility check between packages and Humio versions.
Permission requirements for viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.
Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.
Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.
Creating a new dashboard now opens it after creation.
Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.
The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.
Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.
Fixed an issue where, when creating a repository, a user was automatically assigned a role but was not able to see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.
Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.
Fixed a bug where queries that triggered an error while executing due to their input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll instead of the correct status.
Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.
Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.
Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.
Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.
When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.
Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.
Split the package export page into a dialog with multiple steps.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.
Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.
The DiskSpaceJob no longer initializes based on the segment last-modified timestamp; it only does so if no access order snapshot is stored locally. If a snapshot is present, we trust that.
Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Cloning an asset now redirects you to the asset's edit page, for all asset types.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.
Fixed thread safety for a variable involved in fetching from bucket storage for queries.
Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.
Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.
Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.
Updated dependencies with security fixes.
The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.
Humio Server 1.32.3 LTS (2021-12-01)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.32.3 | LTS | 2021-12-01 | Cloud | 2022-10-31 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | a930565c1e42104fcf6b963089db33ec |
SHA1 | b4818b51348ae64bf79104813d89fb34ec183a71 |
SHA256 | a100e4eb4161b0dc01ac745dffd5bb22839c557fd120f2ddb8359981558af5d5 |
SHA512 | fb2439171324b098a870bb143e4442bacd007c0f18c2fce9ead458e6f05d8672f3f4309e4c59c7b74a6c74763b6936707f67f36eb645847f0daaee75badd4f9a |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.3/server-1.32.3.tar.gz
These notes include entries from the following previous releases: 1.32.0, 1.32.1, 1.32.2
Bug fix to resolve problem with clusters using bucket storage.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.
Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.
New features and improvements
UI Changes
Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.
Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.
On mobile devices, the left navigation menu is hidden on organization settings pages and repository settings pages, and can be opened again.
Cluster management pages style updates.
Fixed some styling issues on the Query Quotas page.
The signup path was removed, together with the corresponding pages.
Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.
Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.
Identity provider pages style update.
GraphQL API
Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.
The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.
When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.
Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.
The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.
The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.
Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.
Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.
Extended 'Relative' field type for schema files to include support for the value 'now'.
Configuration
Added compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.
The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.
On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation where active queries have in total requested access to more segments than the local disk can hold.
Functions
Improved performance of IP database lookups for the ipLocation(), asn() and worldMap() functions.
Other
Added focus states to text field, selection and text area components.
Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.
Improved handling of local disk space relative to LOCAL_STORAGE_MIN_AGE_DAYS. When respecting that configuration would overflow the local disk, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.
Raised the size limit on ingest requests from 8 MB to 1 GB.
The scheduled search "schedule" is explained using human-readable text such as "At 9:30 on Tuesdays".
Improved search on the Users page.
Package installation error messages are now much more readable.
Limit pending ingest requests by rejecting excess invocations. Rejections are signalled with status 429 "Too Many Requests" and a Retry-After header suggesting a retry in 5 seconds. Limiting starts when queued requests exceed `INGEST_REQUEST_LIMIT_PCT` of the total heap size; the default is 5.
Warnings when running scheduled searches now show up as errors on the scheduled search overview page if `SCHEDULED_SEARCH_DESPITE_WARNINGS` is set to `false` (the default).
Added a Data subprocessors page under account.
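A client sending ingest requests should honor this 429/Retry-After contract. The following is a hedged client-side sketch (the function name and fake sender are illustrative, not part of any LogScale client library):

```python
import time

def send_with_backoff(send, payload, max_attempts=3, sleep=time.sleep):
    """Retry an ingest request while the server sheds load with 429.

    `send` is any callable returning (status_code, headers). The Retry-After
    header (in seconds) is honored, defaulting to the 5 seconds the release
    note describes. Illustrative sketch only.
    """
    status = None
    for _ in range(max_attempts):
        status, headers = send(payload)
        if status != 429:
            return status
        sleep(int(headers.get("Retry-After", "5")))
    return status

# Simulated server: first call rejected with 429, second accepted.
responses = iter([(429, {"Retry-After": "5"}), (200, {})])
waits = []
result = send_with_backoff(lambda p: next(responses), b"{}", sleep=waits.append)
print(result, waits)  # 200 [5]
```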
Improved audit log for organization creation.
Added maximum width to tabs on the Group page, so they do not keep expanding forever.
Humio Docker images are now based on Alpine Linux.
New metric: "ingest-request-delay". A histogram of ingest request time spent being delayed due to exceeding the limit on concurrent processing of ingest.
Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.
Allow launching using JDK 16.
The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. It is now also possible to test the scheduled search specific message templates using it.
Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.
Added loading and error states to the page where user selects to create a new repository or view.
When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.
Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.
Added support for including dashboard and alert labels when exporting a package.
Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.
You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.
Added Dark Mode for the Query Monitor page.
Fixed in this release
Summary
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed an issue that would result in a query not completing when one of the involved segments was deleted locally while the query was running. This could happen on clusters using bucket storage with more data than fits the local disks.
Security fix.
Removed a spurious warning log when requesting a non-existent hash file from S3.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
It is now possible to ingest logs into Humio using Logstash v7.13 and upwards.
Updated a dependency to a version fixing a critical bug.
Documentation
Updated the examples on how to use the `match()` query function in the online documentation.
Automation and Alerts
Fixed a bug which could potentially have caused alerts to not re-fire after the throttle period for field-based throttling had passed.
Functions
Fixed an issue where `top()` with max= could yield the same key multiple times (for example `...| top([queryId, query], max=totalSize)`).
Fixed an issue with the `split()` function which caused incorrect (usually too few) query results in some cases where the output fields were referred to later in the query.
Other
Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.
Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.
Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.
Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.
Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix `*`.
Fixed an issue that caused some errors to be hidden behind a message about "internal error".
Reworded a confusing error message when using the `top()` function with a limit parameter exceeding the limits configured with `TOP_K_MAX_MAP_SIZE_HISTORICAL` or `TOP_K_MAX_MAP_SIZE_LIVE`.
Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.
Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.
Truncate long user names on the Users page.
Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.
Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.
Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.
Introduced a compatibility check between packages and Humio versions.
Security checks when viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.
Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.
Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.
Creating a new dashboard now opens it after creation.
Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.
The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.
Fixed an issue where the {time_zone} message template variable for actions would show a full description of the scheduled search instead of only the time zone.
Fixed an issue where, when creating a repository, the creating user was automatically assigned a role but was not able to see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.
Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.
Fixed a bug where queries that triggered an error while executing due to the input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll instead of the proper error response.
Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.
Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.
Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.
Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.
When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.
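The hash-filter flow above can be sketched as follows. This is only an illustration of the idea (real hash filters are Bloom-filter-like structures; a plain set of token hashes is used here for clarity, and all names are hypothetical):

```python
# Sketch: fetch only the small per-segment "hash filter" first, and fetch the
# large segment file only when the filter says the searched-for literal may be
# present in that segment.

def build_hash_filter(events):
    # One hash per whitespace-separated token across all events in the segment.
    return {hash(tok) for ev in events for tok in ev.split()}

def segments_to_fetch(segments, needle):
    """segments: {name: hash_filter}. Return names that may contain needle."""
    return [name for name, hf in segments.items() if hash(needle) in hf]

filters = {
    "seg-1": build_hash_filter(["error in db", "retrying"]),
    "seg-2": build_hash_filter(["all systems ok"]),
}
print(segments_to_fetch(filters, "error"))  # ['seg-1']
```

The payoff is that a miss in the (small) filter avoids downloading the (large) segment file from bucket storage at all.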
Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.
Split package export page into dialog with multiple steps.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.
Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.
The DiskSpaceJob no longer initializes based on the segment last-modified timestamp; this only happens if no access-order snapshot is stored locally. If a snapshot is present, it is trusted.
Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Cloning an asset now redirects you to the edit page for the cloned asset, for all asset types.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
Fixed an issue where metrics of type gauge with a double value were only reported to the humio repository, not to the humio-metrics repository.
Fixed thread safety for a variable involved in fetching from bucket storage for queries.
Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.
Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.
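A bulk payload using the 'create' operation looks like this. The helper below is a hedged sketch of the Elasticsearch-style NDJSON body only; the actual endpoint URL and ingest-token handling are deployment-specific and omitted:

```python
import json

def bulk_body(index, docs, op="create"):
    """Build an Elasticsearch-style _bulk request body.

    Per the note above, both 'create' and 'index' operations result in the
    same ingest behavior. Illustrative helper; not LogScale client code.
    """
    lines = []
    for doc in docs:
        lines.append(json.dumps({op: {"_index": index}}))  # action line
        lines.append(json.dumps(doc))                      # source line
    return "\n".join(lines) + "\n"  # bulk bodies are newline-delimited

print(bulk_body("logs", [{"message": "hello"}]))
```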
Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.
Updated dependencies with security fixes.
The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.
Humio Server 1.32.2 LTS (2021-11-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.32.2 | LTS | 2021-11-19 | Cloud | 2022-10-31 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 098c372a532df86516d89e7d257f86b6 |
SHA1 | ea67955c0cad253a0278fe408fcce7d300491455 |
SHA256 | f2289d4bf0321e0dde905a8d4148d709e5ac846fdd055c7133172f5d33ef9fe0 |
SHA512 | 1bb52a2333339201a7c56f2d0d93997e60ea5b62d5b2ec1dc9f84d812b50c3e5ebca8039de029ecdb14e00885f9e9965532acc13dd339589a45a836a5548b849 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.2/server-1.32.2.tar.gz
These notes include entries from the following previous releases: 1.32.0, 1.32.1
Critical bug fixes regarding a version dependency and race conditions.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.
Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.
New features and improvements
UI Changes
Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.
Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.
The left navigation menu hides, and can be opened again, for mobile devices, on organization settings pages and repository settings pages.
Cluster management pages style updates.
Fixed some styling issue on Query Quotas page.
The signup path was removed, together with the corresponding pages.
Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.
Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.
Identity provider pages style update.
GraphQL API
Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.
The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.
When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.
Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.
The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.
The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.
Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.
Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.
Extended 'Relative' field type for schema files to include support for the value 'now'.
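The non-UTC DateTime handling mentioned above (offset timestamps accepted, converted to UTC internally) can be mirrored with standard library tooling; this is only an illustration of the offset-to-UTC conversion, not LogScale's actual parser:

```python
from datetime import datetime, timezone

# Parse an ISO 8601 timestamp with a non-UTC offset and normalize it to UTC.
ts = datetime.fromisoformat("2021-07-18T14:13:09.517+02:00")
utc = ts.astimezone(timezone.utc)
print(utc.isoformat())  # 2021-07-18T12:13:09.517000+00:00
```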
Configuration
Added a compatibility mode for using IBM Cloud Object Storage as bucket storage via `S3_STORAGE_IBM_COMPAT`.
The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.
On a node configured with `USING_EPHEMERAL_DISKS=true`, allow the local disk management to delete files even if a query may need them later, as the system is able to re-fetch the files from bucket storage when required. This improves the situation where active queries have in total requested access to more segments than the local disk can hold.
Functions
Improved performance in IP database lookups for the functions `ipLocation()`, `asn()` and `worldMap()`.
Other
Added focus states to text field, selection and text area components.
Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.
Improved handling of local disk space relative to `LOCAL_STORAGE_MIN_AGE_DAYS`. When the local disk would overflow by respecting that configuration, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.
Raised the size limit on ingest requests from 8 MB to 1 GB.
The scheduled search "schedule" is explained using human-readable text such as "At 9:30 on Tuesdays".
Improved search on the Users page.
Package installation error messages are now much more readable.
Limit pending ingest requests by rejecting excess invocations. Rejections are signalled with status 429 "Too Many Requests" and a Retry-After header suggesting a retry in 5 seconds. Limiting starts when queued requests exceed `INGEST_REQUEST_LIMIT_PCT` of the total heap size; the default is 5.
Warnings when running scheduled searches now show up as errors on the scheduled search overview page if `SCHEDULED_SEARCH_DESPITE_WARNINGS` is set to `false` (the default).
Added a Data subprocessors page under account.
Improved audit log for organization creation.
Added maximum width to tabs on the Group page, so they do not keep expanding forever.
Humio Docker images are now based on Alpine Linux.
New metric: "ingest-request-delay". A histogram of ingest request time spent being delayed due to exceeding the limit on concurrent processing of ingest.
Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.
Allow launching using JDK 16.
The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. It is now also possible to test the scheduled search specific message templates using it.
Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.
Added loading and error states to the page where user selects to create a new repository or view.
When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.
Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.
Added support for including dashboard and alert labels when exporting a package.
Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.
You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.
Added Dark Mode for the Query Monitor page.
Fixed in this release
Summary
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Security fix.
Removed a spurious warning log when requesting a non-existent hash file from S3.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
It is now possible to ingest logs into Humio using Logstash v7.13 and upwards.
Updated a dependency to a version fixing a critical bug.
Documentation
Updated the examples on how to use the `match()` query function in the online documentation.
Automation and Alerts
Fixed a bug which could potentially have caused alerts to not re-fire after the throttle period for field-based throttling had passed.
Functions
Fixed an issue where `top()` with max= could yield the same key multiple times (for example `...| top([queryId, query], max=totalSize)`).
Fixed an issue with the `split()` function which caused incorrect (usually too few) query results in some cases where the output fields were referred to later in the query.
Other
Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.
Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.
Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.
Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.
Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix `*`.
Fixed an issue that caused some errors to be hidden behind a message about "internal error".
Reworded a confusing error message when using the `top()` function with a limit parameter exceeding the limits configured with `TOP_K_MAX_MAP_SIZE_HISTORICAL` or `TOP_K_MAX_MAP_SIZE_LIVE`.
Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.
Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.
Truncate long user names on the Users page.
Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.
Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.
Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.
Introduced a compatibility check between packages and Humio versions.
Security checks when viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.
Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.
Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.
Creating a new dashboard now opens it after creation.
Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.
The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.
Fixed an issue where the {time_zone} message template variable for actions would show a full description of the scheduled search instead of only the time zone.
Fixed an issue where, when creating a repository, the creating user was automatically assigned a role but was not able to see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.
Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.
Fixed a bug where queries that triggered an error while executing due to the input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll instead of the proper error response.
Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.
Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.
Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.
Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.
When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.
Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.
Split package export page into dialog with multiple steps.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.
Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.
The DiskSpaceJob no longer initializes based on the segment last-modified timestamp; this only happens if no access-order snapshot is stored locally. If a snapshot is present, it is trusted.
Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Cloning an asset now redirects you to the edit page for the cloned asset, for all asset types.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
Fixed an issue where metrics of type gauge with a double value were only reported to the humio repository, not to the humio-metrics repository.
Fixed thread safety for a variable involved in fetching from bucket storage for queries.
Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.
Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.
Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.
Updated dependencies with security fixes.
The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.
Humio Server 1.32.1 LTS (2021-11-16)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.32.1 | LTS | 2021-11-16 | Cloud | 2022-10-31 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 7717b075d0e60780caf64c8a70afdf57 |
SHA1 | 02e400753b475b1ba668399bc27126a12304e4a6 |
SHA256 | c5d36f6bbcd5c8bc90e5ba2a28ca15cf5f1c3657a56c40e9f46f7bfaf48149e1 |
SHA512 | 09d31840cd1442cfed31402bf52132a4c9960e3177b2c14e4b4da687333ae3995732a048c3b71ae7eb81ccfd40afee40e34085d6cc35da6b3cb826a57049f0a6 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.1/server-1.32.1.tar.gz
These notes include entries from the following previous releases: 1.32.0
Bug fixes related to Amazon S3 log entries, saving a User Interface theme, Logstash, and general security.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.
Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.
New features and improvements
UI Changes
Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.
Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.
The left navigation menu hides, and can be opened again, for mobile devices, on organization settings pages and repository settings pages.
Cluster management pages style updates.
Fixed some styling issue on Query Quotas page.
The signup path was removed, together with the corresponding pages.
Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.
Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.
Identity provider pages style update.
GraphQL API
Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.
The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.
When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.
Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.
The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.
The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.
Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.
Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.
Extended 'Relative' field type for schema files to include support for the value 'now'.
Configuration
Added a compatibility mode for using IBM Cloud Object Storage as bucket storage via `S3_STORAGE_IBM_COMPAT`.
The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.
On a node configured with `USING_EPHEMERAL_DISKS=true`, allow the local disk management to delete files even if a query may need them later, as the system is able to re-fetch the files from bucket storage when required. This improves the situation where active queries have in total requested access to more segments than the local disk can hold.
Functions
Improved performance in IP database lookups for the functions `ipLocation()`, `asn()` and `worldMap()`.
Other
Added focus states to text field, selection and text area components.
Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.
Improved handling of local disk space relative to `LOCAL_STORAGE_MIN_AGE_DAYS`. When the local disk would overflow by respecting that configuration, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.
Raised the size limit on ingest requests from 8 MB to 1 GB.
The scheduled search "schedule" is explained using human-readable text such as "At 9:30 on Tuesdays".
Improved search on the Users page.
Package installation error messages are now much more readable.
Limit pending ingest requests by rejecting excess invocations. Rejections are signalled with status 429 "Too Many Requests" and a Retry-After header suggesting a retry in 5 seconds. Limiting starts when queued requests exceed `INGEST_REQUEST_LIMIT_PCT` of the total heap size; the default is 5.
Warnings when running scheduled searches now show up as errors on the scheduled search overview page if `SCHEDULED_SEARCH_DESPITE_WARNINGS` is set to `false` (the default).
Added a Data subprocessors page under account.
Improved audit log for organization creation.
Added maximum width to tabs on the Group page, so they do not keep expanding forever.
Humio Docker images are now based on Alpine Linux.
New metric: "ingest-request-delay". A histogram of ingest request time spent being delayed due to exceeding the limit on concurrent processing of ingest.
Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.
Allow launching using JDK 16.
The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. It is now also possible to test the scheduled search specific message templates using it.
Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.
Added loading and error states to the page where user selects to create a new repository or view.
When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.
Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.
Added support for including dashboard and alert labels when exporting a package.
Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.
You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.
Added Dark Mode for the Query Monitor page.
Fixed in this release
Summary
Security fix.
Removed a spurious warning log when requesting a non-existent hash file from S3.
Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
It is now possible to ingest logs into Humio using Logstash v7.13 and upwards.
Documentation
Updated the examples of how to use the match() query function in the online documentation.
Automation and Alerts
Fixed a bug which could have caused alerts to not re-fire after the throttle period for field-based throttling had passed.
Functions
Fixed an issue where top() with max= could yield the same key multiple times (for example ...| top([queryId, query], max=totalSize)).
Fixed an issue with the split() function which caused incorrect (usually too few) query results in some cases where the output fields were referred to later in the query.
Other
Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.
Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.
Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.
Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.
Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.
Fixed an issue that caused some errors to be hidden behind a message about "internal error".
Reworded a confusing error message when using the top() function with a limit parameter exceeding the limits configured with TOP_K_MAX_MAP_SIZE_HISTORICAL or TOP_K_MAX_MAP_SIZE_LIVE.
Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.
Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.
Truncate long user names on the Users page.
Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.
Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.
Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.
Introduced a compatibility check between packages and Humio versions.
Security checks when viewing installed packages and Marketplace packages are now less strict. Permissions are still required for installing and uninstalling packages.
Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.
Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.
Creating a new dashboard now opens it after creation.
Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.
The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.
Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.
Fixed an issue where, when creating a repository, the user was automatically assigned a role but could not see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.
Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.
Fixed a bug where queries that triggered an error while executing due to the input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll instead of the correct status.
Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.
Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.
Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.
Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.
When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.
Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.
Split package export page into dialog with multiple steps.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.
Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.
The DiskSpaceJob no longer initializes based on the segment last-modified timestamp; that only happens if no access-order snapshot is stored locally. If a snapshot is present, it is trusted.
Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Cloning an asset now redirects you to the edit page for the cloned asset, for all asset types.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.
Fixed thread safety for a variable involved in fetching from bucket storage for queries.
Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.
Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.
Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.
Updated dependencies with security fixes.
The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own roles and groups, create groups with default roles, and use all other features previously available only in the advanced permission mode.
Humio Server 1.32.0 LTS (2021-10-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.32.0 | LTS | 2021-10-26 | Cloud | 2022-10-31 | No | 1.16.0 | Yes |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 27311d8fb5172cc95aa7cb8d02f5c58c |
SHA1 | 0378ceb4d0ba9255dba6c213e1a728f7095e8ced |
SHA256 | 3dd0434039b1d8eaeb1d1242c860d51593ec222b3d09ae1d732503214edb6d06 |
SHA512 | b29267cd2e89cbbf88386a9c2e617c634ff05567388680b605804a0e3978fca4a55c483395d819ff27fbc53da6e29c09ff573c5f7ff00ab57dda45f5e20c1dd2 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.0/server-1.32.0.tar.gz
We now distribute Humio as a tarball in addition to the fat jar format we've previously used. We will continue to distribute the fat jar for the time being. The tarball includes a launcher script, which will set a number of JVM arguments for users automatically. We believe this will help users configure Humio for good performance out of the box. For more information, see LogScale Launcher Script.
Search performance via hashfilter-first on segments in buckets
Some searches, including regex and literal string matches, now allow searching without fetching the actual segment files from the bucket, in case the segment is only present in the bucket and not on any local disk. Humio now fetches the hash filter file and uses that to decide if the segment file may have a match before downloading the segment file in this case.
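The hashfilter-first flow can be sketched with a toy Bloom-style filter; the HashFilter class and helper names below are illustrative stand-ins, not LogScale's actual hash filter format:

```python
import hashlib

class HashFilter:
    """Minimal Bloom-filter-style sketch (illustrative only)."""
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, term):
        # Derive several deterministic bit positions per term.
        for seed in range(self.num_hashes):
            digest = hashlib.sha256(f"{seed}:{term}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, term):
        for pos in self._positions(term):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def may_contain(self, term):
        # False means "definitely absent"; True means "possibly present".
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(term))

def search_segment(term, fetch_filter, fetch_segment):
    """Fetch only the small filter first; download the large segment
    file only when the filter reports a possible match."""
    if not fetch_filter().may_contain(term):
        return []            # segment skipped without downloading it
    return [event for event in fetch_segment() if term in event]
```

The saving comes from the asymmetry in size: the filter is tiny compared with the segment file, so a definite miss avoids the expensive bucket download entirely.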
Humio packages can now carry scheduled searches, all types of actions, and files with lookup data (either CSV or JSON formatted). Additionally, we have improved the UI for managing packages, to make it easier to find the package you are looking for. This also marks the point where packages are brought out of beta.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.
Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.
New features and improvements
UI Changes
Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.
Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.
On mobile devices, the left navigation menu can now be hidden and reopened on organization settings pages and repository settings pages.
Cluster management pages style updates.
Fixed some styling issues on the Query Quotas page.
The signup path was removed, together with the corresponding pages.
Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.
Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.
Identity provider pages style update.
GraphQL API
Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.
The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.
When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.
Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.
The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.
The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.
Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.
Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.
Extended 'Relative' field type for schema files to include support for the value 'now'.
Configuration
Added a compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.
The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.
On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation when active queries have, in total, requested access to more segments than the local disk can hold.
Functions
Improved performance of IP database lookups for the functions ipLocation(), asn() and worldMap().
Other
Added focus states to text field, selection and text area components.
Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.
Improved handling of local disk space relative to LOCAL_STORAGE_MIN_AGE_DAYS. When the local disk would overflow by respecting that config, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.
Raised the size limit on ingest requests from 8 MB to 1 GB.
The scheduled search "schedule" is now explained using human-readable text such as "At 9:30 on Tuesdays".
Improved search on the Users page.
Package installation error messages are now much more readable.
Limit pending ingest requests by rejecting excess invocations. Rejections are signalled with status 429 "Too many requests" and a Retry-After header suggesting a retry in 5 seconds. Limiting starts when queued requests exceed INGEST_REQUEST_LIMIT_PCT of the total heap size; the default is 5.
Warnings when running scheduled searches now show up as errors on the scheduled search overview page if SCHEDULED_SEARCH_DESPITE_WARNINGS is set to false (the default).
Added a Data subprocessors page under account.
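The 429 rejection with a Retry-After header described above can be honored client-side with a small backoff loop. This is a sketch: send stands in for the real HTTP call and is not an actual LogScale client API.

```python
import time

def send_with_retry(send, payload, max_attempts=5, sleep=time.sleep):
    """Retry an ingest request rejected with 429, honoring Retry-After.

    'send' is a stand-in for the real HTTP call; it returns (status, headers).
    """
    for _ in range(max_attempts):
        status, headers = send(payload)
        if status != 429:
            return status
        # The server suggests retrying in 5 seconds via Retry-After.
        sleep(int(headers.get("Retry-After", 5)))
    return 429
```

Shippers that already implement exponential backoff should still prefer the server-provided Retry-After value when present, since it reflects the server's current queue pressure.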
Improved audit log for organization creation.
Added maximum width to tabs on the Group page, so they do not keep expanding forever.
Humio Docker images are now based on Alpine Linux.
New metric: "ingest-request-delay". Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest.
Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.
Allow launching using JDK-16.
The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. It is now also possible to test the scheduled search specific message templates with it.
Improved error handling when running scheduled searches, so that a failed scheduled search is retried as long as it is within the Backfill Limit.
Added loading and error states to the page where the user selects to create a new repository or view.
When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.
Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.
Added support for including dashboard and alert labels when exporting a package.
Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.
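Per-field validation of a five-field crontab expression can be sketched as follows. This is a simplified illustration (numeric values, lists, ranges, and steps only; no month or weekday names), not the actual validator:

```python
# Field ranges for a standard five-field crontab:
# minute, hour, day of month, month, day of week.
FIELD_RANGES = [
    ("minute", 0, 59),
    ("hour", 0, 23),
    ("day of month", 1, 31),
    ("month", 1, 12),
    ("day of week", 0, 7),
]

def validate_schedule(expr):
    """Return a list of per-field errors for a crontab expression (empty if valid)."""
    parts = expr.split()
    if len(parts) != len(FIELD_RANGES):
        return [f"expected {len(FIELD_RANGES)} fields, got {len(parts)}"]
    errors = []
    for part, (name, low, high) in zip(parts, FIELD_RANGES):
        for piece in part.split(","):
            base = piece.split("/")[0]       # strip a step such as */5
            if base == "*":
                continue
            for value in base.split("-"):    # a range such as 1-5
                if not value.isdigit() or not low <= int(value) <= high:
                    errors.append(f"{name}: '{piece}' is outside {low}-{high}")
                    break
    return errors
```

Reporting errors per field, as above, is what lets the UI show accurate help for each part of the expression rather than a single "invalid schedule" message.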
You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.
Added Dark Mode for the Query Monitor page.
Fixed in this release
Documentation
Updated the examples of how to use the match() query function in the online documentation.
Automation and Alerts
Fixed a bug which could have caused alerts to not re-fire after the throttle period for field-based throttling had passed.
Functions
Fixed an issue where top() with max= could yield the same key multiple times (for example ...| top([queryId, query], max=totalSize)).
Fixed an issue with the split() function which caused incorrect (usually too few) query results in some cases where the output fields were referred to later in the query.
Other
Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.
Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.
Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.
Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.
Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.
Fixed an issue that caused some errors to be hidden behind a message about "internal error".
Reworded a confusing error message when using the top() function with a limit parameter exceeding the limits configured with TOP_K_MAX_MAP_SIZE_HISTORICAL or TOP_K_MAX_MAP_SIZE_LIVE.
Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.
Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.
Truncate long user names on the Users page.
Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.
Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.
Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.
Introduced a compatibility check between packages and Humio versions.
Security checks when viewing installed packages and Marketplace packages are now less strict. Permissions are still required for installing and uninstalling packages.
Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.
Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.
Creating a new dashboard now opens it after creation.
Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.
The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.
Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.
Fixed an issue where, when creating a repository, the user was automatically assigned a role but could not see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.
Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.
Fixed a bug where queries that triggered an error while executing due to the input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll instead of the correct status.
Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.
Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.
Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.
Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.
When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.
Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.
Split package export page into dialog with multiple steps.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.
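That check can be sketched as follows; the token table and function name here are hypothetical, for illustration only:

```python
# Hypothetical table mapping ingest tokens to the view they were created in.
TOKENS = {"tok-metrics": "metrics", "tok-app": "app-logs"}

def authorize_ingest(url_view, token):
    """Sketch of the documented check: the view bound to the ingest token
    must match the repository or view named in the URL, else 403 Forbidden."""
    bound_view = TOKENS.get(token)
    if bound_view is None:
        return 401                   # unknown token
    return 200 if bound_view == url_view else 403
```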
Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.
The DiskSpaceJob no longer initializes based on the segment last-modified timestamp; that only happens if no access-order snapshot is stored locally. If a snapshot is present, it is trusted.
Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Cloning an asset now redirects you to the edit page for the cloned asset, for all asset types.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.
Fixed thread safety for a variable involved in fetching from bucket storage for queries.
Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.
Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.
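Both sides of this behavior can be sketched with hypothetical helper names; the NDJSON shape follows the Elasticsearch bulk API, and the point is that 'create' and 'index' action lines lead to the same ingest outcome:

```python
import json

def build_bulk_body(events, index="humio"):
    """Build an Elasticsearch bulk-API body (NDJSON). The ingest endpoint
    accepts both 'create' and 'index' action lines and treats them alike."""
    lines = []
    for i, event in enumerate(events):
        action = "create" if i % 2 else "index"   # either action type works
        lines.append(json.dumps({action: {"_index": index}}))
        lines.append(json.dumps(event))
    return "\n".join(lines) + "\n"

def parse_bulk_body(body):
    """Server-side sketch: extract documents, accepting 'create' like 'index'."""
    lines = body.strip().split("\n")
    docs = []
    for action_line, doc_line in zip(lines[::2], lines[1::2]):
        action = next(iter(json.loads(action_line)))
        if action in ("create", "index"):
            docs.append(json.loads(doc_line))
    return docs
```

This matters for shippers like Fluent Bit v1.8.3+, which emit 'create' action lines by default.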
Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.
Updated dependencies with security fixes.
The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own roles and groups, create groups with default roles, and use all other features previously available only in the advanced permission mode.
Humio Server 1.31.0 GA (2021-09-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.31.0 | GA | 2021-09-27 | Cloud | 2022-10-31 | No | 1.16.0 | Yes |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 4b4b4411d930d00275bc3a412c0c5c50 |
SHA1 | ca22652ca303a162f81711f43977e14e528f6e85 |
SHA256 | 908a907b5c1ffbdce667b932daea0aab46a8ece4e164072f5bae0eb77d3bdd6f |
SHA512 | bbc65ea8641ac3d9910da4a9dcda0c1817d2a8acc2e81491a389d2671af7e57c12f019ad71af7309d4ad68b67997a6cbd3b0fba3a2048f07b7c7567abd05b8ae |
We now distribute Humio as a tarball in addition to the fat jar format we've previously used. We will continue to distribute the fat jar for the time being. The tarball includes a launcher script, which will set a number of JVM arguments for users automatically. We believe this will help users configure Humio for good performance out of the box. For more information, see LogScale Launcher Script.
Search performance via hashfilter-first on segments in buckets
Some searches, including regex and literal string matches, now allow searching without fetching the actual segment files from the bucket, in case the segment is only present in the bucket and not on any local disk. Humio now fetches the hash filter file and uses that to decide if the segment file may have a match before downloading the segment file in this case.
Humio packages can now carry scheduled searches, all types of actions, and files with lookup data (either CSV or JSON formatted). Additionally, we have improved the UI for managing packages, to make it easier to find the package you are looking for. This also marks the point where packages are brought out of beta.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.
New features and improvements
UI Changes
Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.
The signup path was removed, together with the corresponding pages.
Identity provider pages style update.
The left navigation menu hides, and can be opened again, for mobile devices, on organization settings pages and repository settings pages.
Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.
Cluster management pages style updates.
Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.
Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.
Fixed some styling issues on the Query Quotas page.
Automation and Alerts
When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.
GraphQL API
Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.
Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.
When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.
Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.
Extended 'Relative' field type for schema files to include support for the value 'now'.
Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.
Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.
The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.
The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.
The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.
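The conversion of an offset timestamp to UTC can be illustrated in Python; this mirrors the described behavior but is not the server's implementation:

```python
from datetime import datetime, timezone

def to_utc(timestamp: str) -> str:
    """Normalize an ISO-8601 timestamp with any UTC offset to UTC,
    as the DateTime scalar now does with non-UTC input (illustrative)."""
    parsed = datetime.fromisoformat(timestamp)
    return parsed.astimezone(timezone.utc).isoformat(timespec="milliseconds")
```

For example, to_utc("2021-07-18T14:13:09.517+02:00") yields the equivalent instant expressed at offset +00:00.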
Configuration
The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.
On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation when active queries have, in total, requested access to more segments than the local disk can hold.
Added a compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.
Functions
Improved performance of IP database lookups for the functions ipLocation(), asn() and worldMap().
Other
Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.
Added support for including dashboard and alert labels when exporting a package.
Warnings when running scheduled searches now show up as errors on the scheduled search overview page if SCHEDULED_SEARCH_DESPITE_WARNINGS is set to false (the default).
The scheduled search "schedule" is now explained using human-readable text such as "At 9:30 on Tuesdays".
Allow launching using JDK-16.
Improved error handling when running scheduled searches, so that a failed scheduled search is retried as long as it is within the Backfill Limit.
You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.
Package installation error messages are now much more readable.
Added focus states to text field, selection and text area components.
The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. It is now also possible to test the scheduled search specific message templates with it.
Added Dark Mode for the Query Monitor page.
Improved handling of local disk space relative to LOCAL_STORAGE_MIN_AGE_DAYS. When the local disk would overflow by respecting that config, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.
Improved search on the Users page.
Added loading and error states to the page where the user selects to create a new repository or view.
Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.
Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.
Humio Docker images are now based on Alpine Linux.
Added maximum width to tabs on the Group page, so they do not keep expanding forever.
Improved audit log for organization creation.
Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.
Added a Data subprocessors page under account.
Fixed in this release
Documentation
Updated the examples of how to use the match() query function in the online documentation.
Other
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.
Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.
Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.
When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.
Cloning an asset now redirects you to the edit page for the cloned asset, for all asset types.
Split package export page into dialog with multiple steps.
Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.
Updated dependencies with security fixes.
Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.
Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.
Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.
Global snapshots are now uploaded to bucket storage more often when there are many updates to global, leading to shorter replay times on startup.
Introduced a compatibility check between packages and Humio versions.
Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.
Fixed a bug which could potentially have caused alerts to not re-fire after the throttle period for field-based throttling had passed.
Fixed an issue where the dropdown from the navigation menu was partially hidden when viewing GraphiQL.
Truncate long user names on the Users page.
Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.
Fixed thread safety for a variable involved in fetching from bucket storage for queries.
Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.
The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.
Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.
Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.
The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
Security checks when viewing installed packages and packages on the marketplace are now less strict. Permissions are still required for installing and uninstalling packages.
Fixed an issue where the {time_zone} message template variable for actions would show a full description of the scheduled search instead of only the time zone.
Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.
Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.
Fixed an issue that caused some errors to be hidden behind a message about "internal error".
Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.
The DiskSpaceJob no longer initializes based on the segment last-modified timestamp; that only happens if no access-order snapshot is stored locally. If a snapshot is present, it is trusted.
Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.
Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.
Reworded a confusing error message when using the top() function with a limit parameter exceeding the limits configured with TOP_K_MAX_MAP_SIZE_HISTORICAL or TOP_K_MAX_MAP_SIZE_LIVE.
Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.
Creating a new dashboard now opens it after creation.
Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.
Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.
Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.
When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.
Fixed a bug where queries that triggered an error while executing due to the input (such as a regex that exceeds limits on execution time) could result in the client getting 404 as status on poll, where it should get 400.
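One of the entries above notes that the Elastic ingest endpoint now accepts 'create' operations as well as 'index' operations, since Fluent-Bit 1.8.3 switched to 'create'. A sketch of the difference, building an Elastic-style NDJSON bulk body (the helper and the sample event are illustrative, not part of LogScale):

```python
import json

# Sketch of an Elastic bulk payload. Newer Fluent-Bit versions emit a
# "create" action line before each document instead of "index"; LogScale
# treats both action types the same on ingest.
def bulk_payload(events, action="create"):
    """Build an NDJSON bulk body: one action line, then one source line, per event."""
    lines = []
    for event in events:
        lines.append(json.dumps({action: {}}))   # action line: "create" or "index"
        lines.append(json.dumps(event))          # the event document itself
    return "\n".join(lines) + "\n"

body = bulk_payload([{"message": "user logged in", "level": "info"}])
```

Either `action="create"` or `action="index"` now produces the same ingest behavior on the server side.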
Humio Server 1.30.7 LTS (2022-01-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.30.7 | LTS | 2022-01-06 | Cloud | 2022-09-30 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 9612de72bdd0275e39d6af23c51067c8 |
SHA1 | 4ecdf75a4deae2b2c75abcf1101d9b4711cacf31 |
SHA256 | 3ae7ba0f40043f10c7fad02ea471bb55373cf38afd31ddfa2f47bdc87528f475 |
SHA512 | 5d3cadcb47c1c792c2e643fbaffef5207a2d327a41142a337196863a1be39a5b39de1a527853bc08a285497b6f6791b592c9e0bf69f8aef3a2e16d9db6cee7de |
These notes include entries from the following previous releases: 1.30.0, 1.30.1, 1.30.2, 1.30.3, 1.30.4, 1.30.5, 1.30.6
Updated dependencies with security fixes.
Fixed in this release
Security
Updated dependencies to log4j 2.17.1 to fix CVE-2021-44832 and CVE-2021-45105
Updated dependencies to Netty to fix CVE-2021-43797
Kafka and xmlsec have been upgraded to address CVE-2021-38153 and CVE-2021-40690.
Updated dependencies to address a critical security vulnerability for the log4j logging framework, "log4shell", (CVE-2021-44228).
Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-45046).
Summary
Fixed a compatibility issue with Filebeat 7.16.0
Other
Fixed an issue where the UI page for new parser could have overflow in some browsers.
Fixed an issue where a URL without content other than the protocol would break installing a package.
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed an issue causing Humio to log MatchExceptions from the calculateStartPoint method.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, since the system can re-fetch the files from bucket storage when required. This improves the situation when there are active queries that in total have requested access to more segments than the local disk can hold.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Require organization level permission when changing role permissions that possibly affect all views and repositories.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
Updated a dependency to a version fixing a critical bug.
Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.
Humio Server 1.30.6 LTS (2021-12-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.30.6 | LTS | 2021-12-15 | Cloud | 2022-09-30 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 00b0fbed6da70bb28c130be8bab2f573 |
SHA1 | 033f9cb06618af70ebf605534af3d97692b1279e |
SHA256 | f389991e007489d9f5a0d6ca8b4cd3905b072ff40efcff3284bef47cf65d4e86 |
SHA512 | 12fec5a5954f8ec49cbc30f1b206302bdb8d21fc34330cc5120d5af68a08e80c4fa53faad2d9c707535ced94e6134beaaca141212da7daa1d748b29b9df286ee |
These notes include entries from the following previous releases: 1.30.0, 1.30.1, 1.30.2, 1.30.3, 1.30.4, 1.30.5
Fix log4j dependencies.
Fixed in this release
Security
Kafka and xmlsec have been upgraded to address CVE-2021-38153 and CVE-2021-40690.
Updated dependencies to address a critical security vulnerability for the log4j logging framework, "log4shell", (CVE-2021-44228).
Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-45046).
Summary
Fixed a compatibility issue with Filebeat 7.16.0
Other
Fixed an issue where the UI page for new parser could have overflow in some browsers.
Fixed an issue where a URL without content other than the protocol would break installing a package.
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed an issue causing Humio to log MatchExceptions from the calculateStartPoint method.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, since the system can re-fetch the files from bucket storage when required. This improves the situation when there are active queries that in total have requested access to more segments than the local disk can hold.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Require organization level permission when changing role permissions that possibly affect all views and repositories.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
Updated a dependency to a version fixing a critical bug.
Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.
Humio Server 1.30.5 LTS (2021-12-10)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.30.5 | LTS | 2021-12-10 | Cloud | 2022-09-30 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | e4475e8c2f6623ff22465ccd59bbfb8a |
SHA1 | 5affbdbd10674259ec178db75171d73eb2835f1d |
SHA256 | 92c0261e328c43f0ba46ffaa72fc5e8a1ef075438759286ab00ea5dbb5b5ca30 |
SHA512 | e698dd4a2aa8c85ca3d89cc596e05677106d509082f816e46b3dca6b4b3313690d4c41c3a5b5529d08d91e51ebbfb3f9217e7830d1f4ade54fd79b40f366f449 |
These notes include entries from the following previous releases: 1.30.0, 1.30.1, 1.30.2, 1.30.3, 1.30.4
Fix log4j dependencies.
Fixed in this release
Security
Kafka and xmlsec have been upgraded to address CVE-2021-38153 and CVE-2021-40690.
Updated dependencies to address a critical security vulnerability for the log4j logging framework, "log4shell", (CVE-2021-44228).
Summary
Fixed a compatibility issue with Filebeat 7.16.0
Other
Fixed an issue where the UI page for new parser could have overflow in some browsers.
Fixed an issue where a URL without content other than the protocol would break installing a package.
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed an issue causing Humio to log MatchExceptions from the calculateStartPoint method.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, since the system can re-fetch the files from bucket storage when required. This improves the situation when there are active queries that in total have requested access to more segments than the local disk can hold.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Require organization level permission when changing role permissions that possibly affect all views and repositories.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
Updated a dependency to a version fixing a critical bug.
Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.
Humio Server 1.30.4 LTS (2021-12-10)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.30.4 | LTS | 2021-12-10 | Cloud | 2022-09-30 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 815e193fd910b712d9be4420473101ec |
SHA1 | f909e62433a8ee4ff3ac2a4b78ab966c311eae54 |
SHA256 | bb2fcd20a21e9d8a7a8e2866ff579259f3f4b9975f31731bb4d57d879072d9ca |
SHA512 | 96f31a00f96e44bf64128b9c58b9007cef78ccd12a2abc7264b884415c62f44df7837f4540780aa5a75c04c64488a33a6977d256333e950a181b861d6966b686 |
These notes include entries from the following previous releases: 1.30.0, 1.30.1, 1.30.2, 1.30.3
Security fix related to log4j logging, and fix compatibility with Filebeat.
Fixed in this release
Security
Kafka and xmlsec have been upgraded to address CVE-2021-38153 and CVE-2021-40690.
Updated dependencies to address a critical security vulnerability for the log4j logging framework, "log4shell", (CVE-2021-44228).
Summary
Fixed a compatibility issue with Filebeat 7.16.0
Other
Fixed an issue where the UI page for new parser could have overflow in some browsers.
Fixed an issue where a URL without content other than the protocol would break installing a package.
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed an issue causing Humio to log MatchExceptions from the calculateStartPoint method.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, since the system can re-fetch the files from bucket storage when required. This improves the situation when there are active queries that in total have requested access to more segments than the local disk can hold.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Require organization level permission when changing role permissions that possibly affect all views and repositories.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
Updated a dependency to a version fixing a critical bug.
Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.
Humio Server 1.30.3 LTS (2021-11-25)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.30.3 | LTS | 2021-11-25 | Cloud | 2022-09-30 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 1d3b908f6da95f2a02a69f220715b9ac |
SHA1 | 110fb83be1979070a55a6d9fc6662abcf8ef551a |
SHA256 | c87ae3a41303afa5d11750d2ae8fa36bb9892723cefb1ffcb19bf1d93e87dde0 |
SHA512 | 1c634b3401485aab33ca5ee6042d88596785f054c37933d37e5830a9b7817ec9dc995dda14bd1620f511f0cf685327b9c78f0e2d118f0b24ef04462c4d132136 |
These notes include entries from the following previous releases: 1.30.0, 1.30.1, 1.30.2
Bug fix to resolve problem with race conditions.
Fixed in this release
Security
Kafka and xmlsec have been upgraded to address CVE-2021-38153 and CVE-2021-40690.
Other
Fixed an issue where the UI page for new parser could have overflow in some browsers.
Fixed an issue where a URL without content other than the protocol would break installing a package.
Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
Fixed an issue causing Humio to log MatchExceptions from the calculateStartPoint method.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, since the system can re-fetch the files from bucket storage when required. This improves the situation when there are active queries that in total have requested access to more segments than the local disk can hold.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Require organization level permission when changing role permissions that possibly affect all views and repositories.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
Updated a dependency to a version fixing a critical bug.
Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.
Humio Server 1.30.2 LTS (2021-11-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.30.2 | LTS | 2021-11-19 | Cloud | 2022-09-30 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 2395ce5b8017a632b02372da3dc0159b |
SHA1 | 049f9a0ed9c4e9acafcefe1a65997b65ba57d3f7 |
SHA256 | 815bcce962ac9f43022424e2abdfa587f8377ba1ecf3b4c5ef423a43175fe424 |
SHA512 | bc93c9bbf9fe89eb0a279d265b775bc4b4590b897f7f08a31d2516cd767b4952c59e7a3bad9986b26592487246c8a54f2e29e85d9a2a248dc790418ec68627d7 |
These notes include entries from the following previous releases: 1.30.0, 1.30.1
Bug fixes related to version dependency, problems with incomplete URLS, as well as requiring organization level permissions in certain situations.
Fixed in this release
Security
Kafka and xmlsec have been upgraded to address CVE-2021-38153 and CVE-2021-40690.
Other
Fixed an issue where the UI page for new parser could have overflow in some browsers.
Fixed an issue where a URL without content other than the protocol would break installing a package.
Fixed an issue causing Humio to log MatchExceptions from the calculateStartPoint method.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, since the system can re-fetch the files from bucket storage when required. This improves the situation when there are active queries that in total have requested access to more segments than the local disk can hold.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Require organization level permission when changing role permissions that possibly affect all views and repositories.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
Updated a dependency to a version fixing a critical bug.
Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.
Humio Server 1.30.1 LTS (2021-10-01)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.30.1 | LTS | 2021-10-01 | Cloud | 2022-09-30 | No | 1.16.0 | Yes |
JAR Checksum | Value |
---|---|
MD5 | 790fc08715648deadf23b204f6e77cc9 |
SHA1 | 85ea236e0abbaf29740e7288d7cefeb2b1069260 |
SHA256 | e4f8dcc73fbeaa5dcc7d68aa6a972e3ab5ccbb66848c189743b2f50b8bcea832 |
SHA512 | 963ec5f550f5b496b08c9025e3fa9ed08c563e4270973092a4a1944a05bd79192316f3324d9a58e78dd014c2119ab389c1d6c566ef395b73c4df96f6d216e2c2 |
These notes include entries from the following previous releases: 1.30.0
Fixes Humio logging MatchExceptions, the frequency of jobs which delete segment files, and problems with USING_EPHEMERAL_DISKS; upgrades Kafka and xmlsec to address CVEs.
Fixed in this release
Security
Kafka and xmlsec have been upgraded to address CVE-2021-38153 and CVE-2021-40690.
Other
Fixed an issue where the UI page for new parser could have overflow in some browsers.
Fixed an issue causing Humio to log MatchExceptions from the calculateStartPoint method.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, since the system can re-fetch the files from bucket storage when required. This improves the situation when there are active queries that in total have requested access to more segments than the local disk can hold.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.
Humio Server 1.30.0 LTS (2021-09-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.30.0 | LTS | 2021-09-17 | Cloud | 2022-09-30 | No | 1.16.0 | Yes |
JAR Checksum | Value |
---|---|
MD5 | c8989abb219663a61ae54be021fcd367 |
SHA1 | 44e1d48d09ca3ae8fd59a185e77b3c9a55a6de8f |
SHA256 | 25a880f34a0fef72ab3cb4bf93f92eae2fdac0022d116485bed1013b3f6683b4 |
SHA512 | 53b8055035bfcdd7176fa4369e607874ca9d5ee84de4a3beb964e3f0caba9578b3c46c04c10cb68657be34d8a165fa1089f1679716ecd4896ff57822d0b1e851 |
As a new feature, Humio now includes an IOC (indicator of compromise) database from CrowdStrike to enable lookup of IP addresses, URLs and domains for malicious activity. This database is updated hourly. This is described in more detail at ioc:lookup()
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
Fixed in this release
Other
Fixed an issue where the UI page for new parser could have overflow in some browsers.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.
Humio Server 1.30.0 Includes the following changes made in Humio Server 1.29.0
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.29.0 | GA | 2021-07-09 | Cloud | 2022-09-30 | No | 1.16.0 | Yes |
Available for download two days after release.
Warning
This release has been revoked as it contained a known bug fixed in 1.29.1.
As a new feature, Humio now includes an IOC (indicator of compromise) database from CrowdStrike to enable lookup of IP addresses, URLs and domains for malicious activity. This database is updated hourly. This is described in more detail at ioc:lookup()
Removed
Items that have been removed as of this release.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
Field addIngestToken was deprecated in the Mutation type; use addIngestTokenV2 instead.
Field assignIngestToken was deprecated in the Mutation type; use assignParserToIngestToken instead.
New features and improvements
Fixed in this release
Humio Server 1.30.0 Includes the following changes made in Humio Server 1.29.1
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.29.1 | GA | 2021-07-12 | Cloud | 2022-09-30 | No | 1.16.0 | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 6cea304226bb9eb096785375bb8f834f |
SHA1 | 77d7b92df1884b8ec457246d602cc276e46ee032 |
SHA256 | e48d6a5c80e6979621b817c1ac53f778eae170185180ab9e70c295692dd1a7bc |
SHA512 | c9e8019067a9ae1bd0b62215ee458ecabcee2b3a971688c6f66c58dc1009d498cdd560d4733f64f31e6d4204bebb6c8bc0934354ab04aaa008b3e21ef8bc7dee |
Bug fixes.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
Fixed in this release
Humio Server 1.30.0 Includes the following changes made in Humio Server 1.29.2
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.29.2 | GA | 2021-09-02 | Cloud | 2022-09-30 | No | 1.16.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 0b32f9520b7a5ec3f1af3321ce7f1dfe |
SHA1 | 4ae2567f9b3348d115819eefd5b9c078d6c2c6ad |
SHA256 | 0383277fd91b8933dfcdb94783e03d151975b01ee62dcc74515cbfb2d3299cdc |
SHA512 | bbfb1f343f0567394128976ecddc6ad3b74f2996b0dc51e2fe0f9c5e0b60cbd7a935fc709b2ffc3b6a811a0e6a1c2fcc09e4e1552e5c7f87dd3670f07cf33b31 |
Minor bug fixes
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
Fixed in this release
Humio Server 1.30.0 Includes the following changes made in Humio Server 1.29.3
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.29.3 | GA | 2021-09-07 | Cloud | 2022-09-30 | No | 1.16.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 96b4a0ff0c02a2dce2bb5ee467bdb9dd |
SHA1 | b2a1ef55259f6c49ee7bf7de6efecc0f743d1bfa |
SHA256 | 95d9b52d6213d0af43dfc43cb66c878f1f584f446e4ba890137c5fd9923db1a4 |
SHA512 | 392489e94b1e6f6799a6340df78e7bdb7d9547b97ba1844cfbf3c525e1e418fd2221543378f2b759acf46d5c02d99627f9b98650447d7c1c32ba599157718bf8 |
Minor bug fixes
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
Fixed in this release
Humio Server 1.30.0 Includes the following changes made in Humio Server 1.29.4
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.29.4 | GA | 2021-09-09 | Cloud | 2022-09-30 | No | 1.16.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 8145f9ddc7804a44efe1e46cd71f5c17 |
SHA1 | 8db4079f580357162891d0153a70668b4f29d642 |
SHA256 | 32adde840164668a96be651ede6052dbef3cf046ec9f83f39f62925d26e14104 |
SHA512 | cadcd2eb07e949f052bd51ecc272d79f7decfec0f3b7613dfdf712aa1ff287471060d25443c5be1b06744aefa43194fb2e78e223c362005cea62a5917a93b62e |
Minor bug fixes
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
Fixed in this release
Humio Server 1.29.4 GA (2021-09-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.29.4 | GA | 2021-09-09 | Cloud | 2022-09-30 | No | 1.16.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 8145f9ddc7804a44efe1e46cd71f5c17 |
SHA1 | 8db4079f580357162891d0153a70668b4f29d642 |
SHA256 | 32adde840164668a96be651ede6052dbef3cf046ec9f83f39f62925d26e14104 |
SHA512 | cadcd2eb07e949f052bd51ecc272d79f7decfec0f3b7613dfdf712aa1ff287471060d25443c5be1b06744aefa43194fb2e78e223c362005cea62a5917a93b62e |
Minor bug fixes
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
Fixed in this release
Other
Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.
Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only purely filtering queries are allowed.
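The cancelDeleteEvents mutation above might be invoked over the GraphQL endpoint roughly as follows. The argument names and URL path here are illustrative assumptions, not the documented schema, so consult the GraphQL API reference for the real shape:

```python
import json
from urllib import request

# Hypothetical sketch of calling the cancelDeleteEvents GraphQL mutation.
# The argument names (repositoryName, deleteEventsId) and the /graphql path
# are assumptions for illustration only.
MUTATION = """
mutation($repo: String!, $id: String!) {
  cancelDeleteEvents(repositoryName: $repo, deleteEventsId: $id)
}
"""

def build_request(base_url, token, repo, delete_id):
    """Build (but do not send) an authenticated GraphQL POST request."""
    payload = json.dumps({
        "query": MUTATION,
        "variables": {"repo": repo, "id": delete_id},
    }).encode()
    return request.Request(
        base_url + "/graphql",
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token},
    )

req = build_request("https://example.com", "API_TOKEN", "my-repo", "delete-123")
```

As the entry notes, cancellation is best-effort: events already deleted are not restored by this call.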
Humio Server 1.29.3 GA (2021-09-07)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.29.3 | GA | 2021-09-07 | Cloud | 2022-09-30 | No | 1.16.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 96b4a0ff0c02a2dce2bb5ee467bdb9dd |
SHA1 | b2a1ef55259f6c49ee7bf7de6efecc0f743d1bfa |
SHA256 | 95d9b52d6213d0af43dfc43cb66c878f1f584f446e4ba890137c5fd9923db1a4 |
SHA512 | 392489e94b1e6f6799a6340df78e7bdb7d9547b97ba1844cfbf3c525e1e418fd2221543378f2b759acf46d5c02d99627f9b98650447d7c1c32ba599157718bf8 |
Minor bug fixes
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
Fixed in this release
Other
Fixed an issue where the error TooManyTagValueCombination would prevent Humio from starting.
Removed the limit on the search interval on cloud sandboxes.
Humio Server 1.29.2 GA (2021-09-02)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.29.2 | GA | 2021-09-02 | Cloud | 2022-09-30 | No | 1.16.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 0b32f9520b7a5ec3f1af3321ce7f1dfe |
SHA1 | 4ae2567f9b3348d115819eefd5b9c078d6c2c6ad |
SHA256 | 0383277fd91b8933dfcdb94783e03d151975b01ee62dcc74515cbfb2d3299cdc |
SHA512 | bbfb1f343f0567394128976ecddc6ad3b74f2996b0dc51e2fe0f9c5e0b60cbd7a935fc709b2ffc3b6a811a0e6a1c2fcc09e4e1552e5c7f87dd3670f07cf33b31 |
Minor bug fixes
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
Fixed in this release
Other
Fixed an issue where if a package failed to be installed, and it contained an action, the failed installation might not be cleaned up properly.
Fixed an issue where, looking at GraphQL, the dropdown from the navigation menu was partially hidden.
Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.
Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix * (wildcard).
Humio Server 1.29.1 GA (2021-07-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.29.1 | GA | 2021-07-12 | Cloud | 2022-09-30 | No | 1.16.0 | Yes |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 6cea304226bb9eb096785375bb8f834f |
SHA1 | 77d7b92df1884b8ec457246d602cc276e46ee032 |
SHA256 | e48d6a5c80e6979621b817c1ac53f778eae170185180ab9e70c295692dd1a7bc |
SHA512 | c9e8019067a9ae1bd0b62215ee458ecabcee2b3a971688c6f66c58dc1009d498cdd560d4733f64f31e6d4204bebb6c8bc0934354ab04aaa008b3e21ef8bc7dee |
Bug fixes.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
Fixed in this release
Other
Fixed an issue that made it appear as though ingest tokens had no associated parser.
Humio Server 1.29.0 GA (2021-07-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.29.0 | GA | 2021-07-09 | Cloud | 2022-09-30 | No | 1.16.0 | Yes |
Available for download two days after release.
Warning
This release has been revoked as it contained a known bug fixed in 1.29.1.
As a new feature, Humio now includes an IOC (indicator of compromise) database from CrowdStrike to enable lookup of IP addresses, URLs and domains for malicious activity. This database is updated hourly. This is described in more detail at ioc:lookup().
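A minimal sketch of how a search might use the new function, expressed as a query payload built client-side. The `ioc:lookup()` name is from this entry; the parameter names (`field`, `type`) and the payload keys (`queryString`, `start`) are assumptions based on typical usage and should be verified against the function's documentation:

```python
import json

def build_ioc_query(ip_field: str) -> dict:
    """Build a hypothetical search payload that enriches events by
    looking up an IP field against the IOC database."""
    # Parameter names for ioc:lookup() are assumptions, not confirmed syntax.
    query_string = f'ioc:lookup(field=[{ip_field}], type="ip_address")'
    return {
        "queryString": query_string,
        "start": "24h",  # relative time window, a common API convention
    }

payload = build_ioc_query("dst_ip")
print(json.dumps(payload))
```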
Removed
Items that have been removed as of this release.
GraphQL API
Deprecated argument repositoryName was removed from the Mutation.updateParser field.
Deprecated argument name was removed from the Mutation.updateParser field.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
Field addIngestToken was deprecated in the Mutation type, use addIngestTokenV2 instead.
Field assignIngestToken was deprecated in the Mutation type, use assignParserToIngestToken instead.
New features and improvements
Automation and Alerts
Integrates the editing of alert searches and scheduled searches better with the search page.
Packages now support Webhook actions and references between these and alerts in the Alert schema.
GraphQL API
Field createIngestListener was deprecated in the Mutation type, use createIngestListenerV2 instead.
Removed the Usage feature flag, which is now always enabled. This breaks backwards compatibility for internal GraphQL feature flag mutations and queries.
Field updateIngestListener was deprecated in the Mutation type, use updateIngestListenerV2 instead.
Field copyParser was deprecated in the Mutation type, use cloneParser instead.
Removed the argument includeUsageView from the GraphQL mutation createOrganizationsViews, which breaks backwards compatibility for this internal utility method.
Configuration
Humio nodes will now pick a UUID for themselves using the ZOOKEEPER_PREFIX_FOR_NODE_UUID prefix, even if ZooKeeper is not used. This should make it easier to enable ZooKeeper id management in existing clusters going forward.
Allow the internal profiler to be configured via an environment variable. See Environment Variables.
Add a soft limit on the primary disk based on PRIMARY_STORAGE_PERCENTAGE and PRIMARY_STORAGE_MAX_FILL_PERCENTAGE (roughly the average of the two values). When the soft limit is hit and secondary storage is configured, the segment mover will prefer moving segments to secondary storage right away, instead of fetching them to primary and waiting for the secondary storage transfer job to move them.
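The soft limit described above is "roughly the average of the two values"; a back-of-the-envelope sketch of that relationship (the exact formula LogScale uses is not documented here, so treat this as an approximation):

```python
def soft_limit(primary_storage_percentage: float,
               primary_storage_max_fill_percentage: float) -> float:
    """Approximate the primary-disk soft limit as the average of the two
    configured fill percentages, per the release note's description."""
    return (primary_storage_percentage + primary_storage_max_fill_percentage) / 2

# e.g. PRIMARY_STORAGE_PERCENTAGE=80, PRIMARY_STORAGE_MAX_FILL_PERCENTAGE=90
print(soft_limit(80, 90))  # → 85.0
```

The example values 80 and 90 are illustrative, not defaults.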
Other
Internal change to parsers adding an id, where previously they only had a name as key.
Enabled dark mode for cluster administration pages.
The "Save Search as Dashboard" Widget dialog now gives user feedback about missing input in a manner consistent with other forms.
Make GlobalConsistencyCheckerJob shut down more cleanly; it could previously log some ugly exceptions during shutdown.
When editing a query, Enter no longer accepts a suggestion. Use Tab instead. The Enter key conflicted with the "Run" button for running the query.
Organization pages refactoring.
Previously, the server could report that a user was allowed to update parsers for a view, even though parsers cannot be used on views, only repositories. Now the server will always say the user cannot change parsers on views.
Improved global snapshot selection in cases where a Kafka reset has been performed.
In thread dumps include the job and query names in separate fields rather than as part of the thread name.
Return the responder's vhost in the metadata json.
Added dark mode support to Identity provider pages.
Created a new Dropdown component, and replaced some uses of the old component with the new.
Speed up the SecondaryStorageTransferJob. The job will now delete primary copies much earlier after moving them to the secondary volume.
Scheduled searches are now allowed to run once every minute instead of only once every hour.
Fixed in this release
Functions
Fixed a bug causing match() to let an empty key field match a table with no rows.
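The corrected semantics can be modeled as a simple lookup filter, sketched here in Python for illustration (this is not LogScale's implementation): an event whose key field is empty should not match a file with no rows.

```python
def match(events, table, key_field):
    """Keep only events whose key field value appears in the lookup
    table -- a simplified model of the match() filter semantics."""
    keys = {row[key_field] for row in table}
    return [e for e in events if e.get(key_field) in keys]

events = [{"user": ""}, {"user": "alice"}]
print(match(events, [], "user"))                   # empty table matches nothing: []
print(match(events, [{"user": "alice"}], "user"))  # only the matching event survives
```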
Other
Fixed an issue where the "show in context" feature of the event list did not quote the field names in the produced query string.
Fixed a bug in the Search View. After editing and saving a saved query in the Search View, the notification message would disappear in an instant, making it impossible to read and to click the link therein.
Fixed an issue where exporting a saved query did not include the options for the visualization, e.g. column layout on the event list.
Avoiding a costly corner case in some uses of glob-patterns.
Fixed a bug in the blocklist which caused "exact pattern" query patterns to be interpreted as glob patterns.
Fixed an issue related to validation of integer arguments. Large integer arguments would be silently truncated and lower limits weren't checked, which led to unspecified behavior. Range errors are now reported in the following functions:
concatArray()
formatDuration()
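The fix replaces silent truncation with explicit range errors; an illustrative sketch of that validation pattern (the bounds here are placeholders, not LogScale's actual limits):

```python
def validate_int_arg(name: str, value: int, lo: int, hi: int) -> int:
    """Reject out-of-range integer arguments instead of silently
    truncating them; lo/hi are placeholder bounds for illustration."""
    if not (lo <= value <= hi):
        raise ValueError(f"{name} must be between {lo} and {hi}, got {value}")
    return value

validate_int_arg("limit", 100, 1, 10_000)     # in range: accepted
try:
    validate_int_arg("limit", -5, 1, 10_000)  # lower bound is now checked
except ValueError as err:
    print(err)
```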
Fixed an issue where the axis titles on the timechart were not showing up in dark mode.
Fixed a race condition that could cause parsers to not update correctly in rare cases.
Fixed a bug where word wrapping in the event list was not always working for log messages with syntax highlighting (e.g. JSON or XML messages).
Fixed a race condition that could cause event forwarding rules to not update correctly in rare cases.
When a parser test returns more than one event, an info message is now displayed conveying that only the first event is shown.
Fixed bugs in the test parser UI, so that it should now always produce a result and be able to handle parsers that either drop events or produce multiple events per input event.
Addressed edge cases where QueryScheduler could throw exceptions with messages similar to "Requirement failed on activeMapperCount=-36".
Humio Server 1.28.2 LTS (2021-09-29)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.28.2 | LTS | 2021-09-29 | Cloud | 2022-06-30 | No | 1.16.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 87f31a6c318d9383a944521ee34b572a |
SHA1 | fc185c9a2f5f7e6d5425778995f679c49d430105 |
SHA256 | 62f25f3fe69239e1615fb6cb2bffe6f16ae0fe8adc2a17a0a9fa03ba226f7b5f |
SHA512 | 6fb65258a9c21a4d681690a0dcc050cd5be96440cc2f1c6273707658504fc0f21fa5fd1ba9cdb664ab0a5f9aabede181dca5e1ff882305db30a0b95c57f6fba9 |
These notes include entries from the following previous releases: 1.28.0, 1.28.1
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
Fixed in this release
Security
Kafka and xmlsec have been upgraded to address security vulnerabilities, including CVE-2021-38153.
Summary
When searching through files in a dashboard parameter, users with CSV files of more than 50,000 records could see incomplete results.
Fixed a bug that caused previous 1.27.x versions, but not earlier versions, to add "host xyz is slow" warnings to query results even when that was not the case.
While waiting for the upload of files to a bucket to complete during shutdown, thread dumping will continue running, and the node will report as alive as seen from the other nodes.
All users (including existing users) need to accept the privacy notice and terms and conditions before using Humio.
Humio trial installations now require a trial license. To request a trial license go to Getting Started.
Backfilled data gets lower priority on local disk when in over-commit mode using bucket storage.
Humio will now try to upload more segments concurrently during a shutdown than during normal operation.
Other
The signup path was removed, together with the corresponding pages. Previously, anyone could sign up for the Humio SaaS solution; with stricter policies, this became obsolete and had to be removed. The new process redirects a potential customer to Humio's official website, where they have to fill in a form in order to be vetted. Once the vetting process is complete, Humio support creates an organization for the customer.
Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.
Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.
Fix a bug where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.
Humio Server 1.28.1 LTS (2021-08-24)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.28.1 | LTS | 2021-08-24 | Cloud | 2022-06-30 | No | 1.16.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 1a2b8d3acc15ac6789534f7624265ecb |
SHA1 | 3cf96a324147967c1bc7753a7f9d94cbed8cd843 |
SHA256 | 8a355f8bbab9a74422ee3456f9bfe93302734a69197b7e12c254f7c0fd9905de |
SHA512 | e572352be7e71403f52fe0767567232a198861a60f100b7e9e32475e8e2720285902daf6493b62b79052ca71aa6550abe173d5ae5246e14e5ac30bda0883667c |
These notes include entries from the following previous releases: 1.28.0
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
Fixed in this release
Summary
When searching through files in a dashboard parameter, users with CSV files of more than 50,000 records could see incomplete results.
Fixed a bug that caused previous 1.27.x versions, but not earlier versions, to add "host xyz is slow" warnings to query results even when that was not the case.
While waiting for the upload of files to a bucket to complete during shutdown, thread dumping will continue running, and the node will report as alive as seen from the other nodes.
All users (including existing users) need to accept the privacy notice and terms and conditions before using Humio.
Humio trial installations now require a trial license. To request a trial license go to Getting Started.
Backfilled data gets lower priority on local disk when in over-commit mode using bucket storage.
Humio will now try to upload more segments concurrently during a shutdown than during normal operation.
Other
The signup path was removed, together with the corresponding pages. Previously, anyone could sign up for the Humio SaaS solution; with stricter policies, this became obsolete and had to be removed. The new process redirects a potential customer to Humio's official website, where they have to fill in a form in order to be vetted. Once the vetting process is complete, Humio support creates an organization for the customer.
Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.
Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.
Humio Server 1.28.0 LTS (2021-06-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.28.0 | LTS | 2021-06-15 | Cloud | 2022-06-30 | No | 1.16.0 | Yes |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 1a2b8d3acc15ac6789534f7624265ecb |
SHA1 | 3cf96a324147967c1bc7753a7f9d94cbed8cd843 |
SHA256 | 8a355f8bbab9a74422ee3456f9bfe93302734a69197b7e12c254f7c0fd9905de |
SHA512 | e572352be7e71403f52fe0767567232a198861a60f100b7e9e32475e8e2720285902daf6493b62b79052ca71aa6550abe173d5ae5246e14e5ac30bda0883667c |
Major changes, including a new requirement for at least a trial license and acceptance of the privacy notice and terms and conditions.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
Fixed in this release
Summary
When searching through files in a dashboard parameter, users with CSV files of more than 50,000 records could see incomplete results.
Fixed a bug that caused previous 1.27.x versions, but not earlier versions, to add "host xyz is slow" warnings to query results even when that was not the case.
While waiting for the upload of files to a bucket to complete during shutdown, thread dumping will continue running, and the node will report as alive as seen from the other nodes.
All users (including existing users) need to accept the privacy notice and terms and conditions before using Humio.
Humio trial installations now require a trial license. To request a trial license go to Getting Started.
Backfilled data gets lower priority on local disk when in over-commit mode using bucket storage.
Humio will now try to upload more segments concurrently during a shutdown than during normal operation.
Humio Server 1.28.0 Includes the following changes made in Humio Server 1.27.0
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.27.0 | GA | 2021-06-14 | Cloud | 2022-06-30 | No | 1.16.0 | Yes |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | cce0478c744d183db8491e338949bdfe |
SHA1 | 6dd0bf5c8e0d4ca1f74116e31f28f8cfa7b58323 |
SHA256 | a3425e6141358cbc1115ab3c2691768c64d62f84cea3e018eb7b6debcb05f803 |
SHA512 | ca63e89946e4a12422124adb052b59f8da30ca026b5c34b0bfe1ddcc35cf7546fedd4127d46def6c36b3e0f7278675cf4fe47362fe5070a6d7b29ffbcbd0bc49 |
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
New features and improvements
Fixed in this release
Humio Server 1.28.0 Includes the following changes made in Humio Server 1.27.1
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.27.1 | GA | 2021-06-15 | Cloud | 2022-06-30 | No | 1.16.0 | Yes |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | d45c8b1dd900bfaaae7ab6d7173f122a |
SHA1 | 9602b62cab67ca3f1d35fd888303bc642c024518 |
SHA256 | ec2db12be413b83e52fb2f1cefa04bc9634e7c17657349c4e6b9c71c26a804f9 |
SHA512 | 6179ff0307e3cd804cb4dfcf56a98d30b67cebf51eb2673accde437a308a2d16fd433db983e2948a06768dfac8fa71aa302a357524ed78b675ccbb8491844f6e |
Security fixes and some minor fixes.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
Fixed in this release
Humio Server 1.27.1 GA (2021-06-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.27.1 | GA | 2021-06-15 | Cloud | 2022-06-30 | No | 1.16.0 | Yes |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | d45c8b1dd900bfaaae7ab6d7173f122a |
SHA1 | 9602b62cab67ca3f1d35fd888303bc642c024518 |
SHA256 | ec2db12be413b83e52fb2f1cefa04bc9634e7c17657349c4e6b9c71c26a804f9 |
SHA512 | 6179ff0307e3cd804cb4dfcf56a98d30b67cebf51eb2673accde437a308a2d16fd433db983e2948a06768dfac8fa71aa302a357524ed78b675ccbb8491844f6e |
Security fixes and some minor fixes.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
Fixed in this release
Summary
Fixed an issue where Humio could prematurely clean up local copies of segments involved in queries, causing queries to fail with a "Did not query segment" warning.
Updated dependencies with security fixes.
Fixed an issue where certain queries would cause a NullPointerException in OneForOneStrategy.
Humio Server 1.27.0 GA (2021-06-14)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.27.0 | GA | 2021-06-14 | Cloud | 2022-06-30 | No | 1.16.0 | Yes |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | cce0478c744d183db8491e338949bdfe |
SHA1 | 6dd0bf5c8e0d4ca1f74116e31f28f8cfa7b58323 |
SHA256 | a3425e6141358cbc1115ab3c2691768c64d62f84cea3e018eb7b6debcb05f803 |
SHA512 | ca63e89946e4a12422124adb052b59f8da30ca026b5c34b0bfe1ddcc35cf7546fedd4127d46def6c36b3e0f7278675cf4fe47362fe5070a6d7b29ffbcbd0bc49 |
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
New features and improvements
Automation and Alerts
Fixed an issue where it was possible to create an alert with an empty time interval or a blank name or throttle field.
The Alert and Scheduled Search dialogs have gotten a makeover.
GraphQL API
Deprecated GraphQL field SearchDomain.recentQueries in favor of SearchDomain.recentQueriesV2.
Configuration
Removed the log4j2-stdout-json.xml configuration file. The replacement log4j2-json-stdout.xml has been available for a while, and we want everyone to move to the new configuration, as the old configuration produces logs incompatible with the Insights Package.
Limit how many times we'll repeat a repeating regexp. The default max number of repetitions is .0 but the value is configurable between 50 and .0 by setting the MAX_REGEX_REPETITIONS env variable.
Functions
With the worldMap() function, you can now see the magnitude value by hovering marks on the map.
Fixed an issue in timeChart() where the horizontal line was not showing up.
Reduced memory usage for the groupBy() function, etc.; worst-case in particular but also average-case to some degree.
Other
Inviting users on cloud now requires the invited user to accept the invitation before permissions are assigned to them. Moreover, it is possible to invite users who are in another organization on cloud.
Fixed an issue where worldmap widgets would revert to event list widgets when changing styling options.
Working on merging the advanced and simple permission models, so that roles can be added directly to users.
Fixed a problem where some user-defined styles weren't being applied to a chart after a page refresh or when exported to a dashboard widget.
Improve thread safety of updates to global Hosts entities during bootup.
Started internal work on memory quotas on queries' aggregation states. This should not have any user-visible impact yet.
Changed implementation of cluster host alive stats to attempt to reduce thread contention when threads are checking for host liveness.
Removed the requirement that the SAML Id needs to be a URL (now the only requirement is that the field is not empty).
Fixed an issue which caused queries to crash when "beta:repeating()" was used with a time interval ending before "now".
The New Action dialog validates user input in a more lenient fashion and provides all validation errors consistently.
Add a label to the empty option for default queries on the repository settings page.
Fixed an issue with AuthenticationMethod.SetByProxy where the search page would constantly reload.
Users with read repository permissions can now access and see files.
Added a button to delete the organization from the Organization Overview page.
Reimplemented several parts of Humio to use a safer mechanism for listening to changes from global. This should eliminate a class of race conditions that could cause nodes to ignore updates received during the boot process.
The UI now consistently marks required fields with a red asterisk across a number of dialogs.
Fixed various bugs for the worldmap widget. The bug fixes may cause your world map marks to look different than previously, but the widget should now work as intended, and correcting the look should be as simple as tweaking the style parameters.
When looking at the details of an event, long field values will now extend beyond the viewport by default. Word wrapping can be enabled to stop it from extending outside the viewport.
Improved error messages when exporting invalid dashboards as templates.
Changed implementation of cluster host alive stats to trigger updates in the in-memory state based on changes in global, rather than running periodic updates.
Updated the interactive tutorial with better descriptions.
Fixed an issue where the UI stalled on the "Data Sources" page.
When assigning a role, all the users who need a new role are chosen, and then the same role is assigned to them all.
Added frontend validation for welcome page and invitation page fields.
Improved styling of the header on the organization overview page.
The list of recent queries on the search page now has headers with the date the query was run.
Added ability to set organization usage limits manually for cases where automatic synchronization is not possible.
Automatically reduce the precision of world maps when they exceed a certain size limit.
Fixed an issue for Firefox 78.10.1 ESR where the event list and event distribution chart would not be scrollable and resize incorrectly.
The Humio frontend no longer sends the Humio-Query-Session header to the backend, since it is no longer used.
Fixed an issue where optimizeAndSaveQueryCoordinationPartitions could attempt to save a partitioning table to global with gaps in the partition list. This caused queries to fail, and repeated logging of a validation error.
The event distribution chart would sometimes show a bucket span reported in milliseconds instead of a more appropriate unit, when those milliseconds did not add up cleanly (e.g. "1h"). Now the bucket span can be reported with multiple units (e.g. "1h 30m")
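The multi-unit span formatting described above can be sketched as follows; this is an illustrative algorithm, not the widget's actual code:

```python
def format_span(ms: int) -> str:
    """Render a millisecond span with multiple units (e.g. 5400000 ms
    becomes "1h 30m") instead of a single raw millisecond count."""
    units = [("d", 86_400_000), ("h", 3_600_000), ("m", 60_000), ("s", 1_000), ("ms", 1)]
    parts = []
    for name, size in units:
        count, ms = divmod(ms, size)  # peel off the largest remaining unit
        if count:
            parts.append(f"{count}{name}")
    return " ".join(parts) or "0ms"

print(format_span(5_400_000))  # → 1h 30m
```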
Add a bit more debug logging to DataSnapshotLoader, for visibility around the choice of global snapshot during boot.
In the time selector, you can now write "24" in the time-of-day field to denote the end of the day.
Debug logs which relate to the invocation of an action now contain an actionInvocationId. This trace id is the same for all logs generated by the same action invocation.
Fixed an issue in the Query State Cache that could fail a query on time intervals with a fixed timestamp as start and now as end.
Fixed an OIDC group synchronization issue where users were denied access even though their group membership gave them access.
Included both ws and wss in the CSP header.
Fixed a problem where the global consistency check would report spurious inconsistencies because of trivial differences in the underlying JSON data.
Added a quickfix feature for reserved keywords.
Fixed a rare issue that could fail to trigger a JVM shutdown if the Kafka digest leader loop thread became nonfunctional.
Slightly improved performance of id lookups in global.
Fixed in this release
Other
Humio trial installations now require a trial license. To request a trial license go to Getting Started.
All users (including existing users) need to accept the privacy notice and terms and conditions (https://www.crowdstrike.com/terms-conditions/humio-self-hosted) before using Humio.
Humio Server 1.26.3 LTS (2021-06-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.26.3 | LTS | 2021-06-17 | Cloud | 2022-05-31 | No | 1.16.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 8c880514f42787b5e5f85607bca347de |
SHA1 | e97e3ea8ebc5de0076186e6daa7aa16d4ca48c8c |
SHA256 | d906d0938e8c2283ab91dc999cac4f6a966591e3ee0089e309378123d13cd637 |
SHA512 | 826aea0625a8daac1801f11f12429a6bcb3b8ee58f4b536172d4602fc5f09252fe0be0afd72a5300520980c962a9caa99aeb4ee6c9ada79544963b4c9687b972 |
These notes include entries from the following previous releases: 1.26.0, 1.26.1, 1.26.2
Security fixes and some minor fixes related to Firefox, Worldmap widgets, and problems with local file clean-up.
Fixed in this release
Summary
Fixed an issue where Worldmap widgets would revert to event list widgets when changing styling options.
Fix an issue where data was not visible on the World Map until the opacity setting had been changed.
Fix an issue where some user-defined styles weren't being applied to a chart after a page refresh or when exported to a dashboard widget.
Fixed an issue for Firefox 78.10.1 ESR where the event list and event distribution chart would not be scrollable and resize incorrectly.
Update the minimum Humio version for Hosts in global when downgrading a node.
Fixes an OIDC group synchronization issue where users were denied access even though their group membership gave them access.
Fixed an issue where Humio could prematurely clean up local copies of segments involved in queries, causing queries to fail with a "Did not query segment" warning.
Fixes an issue where the world map widget would misbehave in different ways.
Fixes an issue in Timechart with the horizontal line not showing up.
Fix an issue where optimizeAndSaveQueryCoordinationPartitions could attempt to save a partitioning table to global with gaps in the partition list. This caused queries to fail, and repeated logging of a validation error.
Updated dependencies with security fixes.
Fix a number of cases where Humio could attempt to write a message to global larger than permitted.
All users (including existing users) need to accept the privacy notice and terms and conditions before using Humio.
Humio trial installations require a trial license from this version; to request a trial license go to https://www.humio.com/getting-started/.
Other
Update the minimum Humio version for Hosts in global when downgrading a node.
Fixes an issue where the world map widget would misbehave in different ways.
Humio trial installations require a trial license from this version; to request a trial license go to: https://www.humio.com/getting-started/
All users (including existing users) need to accept the privacy notice: https://www.crowdstrike.com/privacy-notice and terms and conditions https://www.crowdstrike.com/terms-conditions/humio-self-hosted before using Humio.
Fix a number of cases where Humio could attempt to write a message to global larger than permitted.
Known Issues
Other
A regression can cause 1.26.0 to repeatedly error log and fail to start queries in cases where the list of hosts in the cluster is not fixed. This is particularly likely to affect clusters running with ephemeral disks. The regression is fixed in 1.26.1.
Humio Server 1.26.2 LTS (2021-06-07)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.26.2 | LTS | 2021-06-07 | Cloud | 2022-05-31 | No | 1.16.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 1ebc3d89c531e0ea11be7378e2f1d27c |
SHA1 | 4ac77b9dd3532d6e1f3bce39a72a2f337c300ccc |
SHA256 | 6d06e3734f6a5f30715754d210092b31f4225a42d97495d8e0c8a3d6eea53bfa |
SHA512 | d5c6e9b1b08ed10b5549359306e35734a28557088d3efa98d30ccc0080700ba053a8f83d694790686a6333c54734392e8199127cbaf1719a841979e5e9188e27 |
These notes include entries from the following previous releases: 1.26.0, 1.26.1
Several fixes related to the WorldMap and TimeChart widgets, OIDC group synchronization, and requirements for Humio trial installations, as well as privacy notices and terms and conditions, and other bugs.
Fixed in this release
Summary
Fix an issue where data was not visible on the World Map until the opacity setting had been changed.
Fix an issue where some user-defined styles weren't being applied to a chart after a page refresh or when exported to a dashboard widget.
Update the minimum Humio version for Hosts in global when downgrading a node
Fixes an OIDC group synchronization issue where users were denied access even though their group membership gave them access.
Fixes an issue where the world map widget would misbehave in different ways.
Fixes an issue in Timechart where a horizontal line would not show up.
Fix an issue where optimizeAndSaveQueryCoordinationPartitions could attempt to save a partitioning table to global with gaps in the partition list. This caused queries to fail, and repeated logging of a validation error.
Fix a number of cases where Humio could attempt to write a message to global larger than permitted.
All users (including existing users) need to accept the privacy notice and terms and conditions before using Humio.
Humio trial installations require a trial license from this version. To request a trial license, go to: https://www.humio.com/getting-started/
Other
Update the minimum Humio version for Hosts in global when downgrading a node
Fixes an issue where the world map widget would misbehave in different ways
Humio trial installations require a trial license from this version. To request a trial license, go to: https://www.humio.com/getting-started/
All users (including existing users) need to accept the privacy notice: https://www.crowdstrike.com/privacy-notice and terms and conditions https://www.crowdstrike.com/terms-conditions/humio-self-hosted before using Humio.
Fix a number of cases where Humio could attempt to write a message to global larger than permitted.
Known Issues
Other
A regression can cause 1.26.0 to repeatedly error log and fail to start queries in cases where the list of hosts in the cluster is not fixed. This is particularly likely to affect clusters running with ephemeral disks. The regression is fixed in 1.26.1.
Humio Server 1.26.1 LTS (2021-05-31)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.26.1 | LTS | 2021-05-31 | Cloud | 2022-05-31 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 8322d138b790cb9839e44c96e0cb4ba8 |
SHA1 | d4f47c97d3faa5ae5d014b5c47ec0221d24b4496 |
SHA256 | bb3fde18139e575a053709c042322604e8e01188f096a764bca9f4338286118e |
SHA512 | 44e699237aca10e0e3c5c219a06260e8715b16d8da6fa8a6e5e878f2c66ad9c639ba0d69ecc5aef8d2afb91c42b7a36fe0b7969d3b36d3b2ace21c6bb2c512c2 |
These notes include entries from the following previous releases: 1.26.0
Several fixes related to WorldMap widget, applying user-defined styles to a dashboard chart, and partitions.
Fixed in this release
Summary
Fix an issue where data was not visible on the World Map until the opacity setting had been changed.
Fix an issue when some user-defined styles weren't being applied to a chart after a page refresh or when exported to a dashboard widget.
Fix an issue where optimizeAndSaveQueryCoordinationPartitions could attempt to save a partitioning table to global with gaps in the partition list. This caused queries to fail, and repeated logging of a validation error.
Other
Update the minimum Humio version for Hosts in global when downgrading a node
Fixes an issue where the world map widget would misbehave in different ways
Humio trial installations require a trial license from this version. To request a trial license, go to: https://www.humio.com/getting-started/
All users (including existing users) need to accept the privacy notice: https://www.crowdstrike.com/privacy-notice and terms and conditions https://www.crowdstrike.com/terms-conditions/humio-self-hosted before using Humio.
Fix a number of cases where Humio could attempt to write a message to global larger than permitted.
Known Issues
Other
A regression can cause 1.26.0 to repeatedly error log and fail to start queries in cases where the list of hosts in the cluster is not fixed. This is particularly likely to affect clusters running with ephemeral disks. The regression is fixed in 1.26.1.
Humio Server 1.26.0 LTS (2021-05-20)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.26.0 | LTS | 2021-05-20 | Cloud | 2022-05-31 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 5f8abd037b3f73eaac8d5882fa58e4ec |
SHA1 | 85971d1f18228a81f46f6f6334196fd918647438 |
SHA256 | 4e0ad70a2c9275742aee8a0181eee2e20c6496ba16b8f6fb93490b84b5031b5e |
SHA512 | 425cacd3de3f54c2813dd3915ad35aed67de2f11a80d9c2fbf96055b030158b5d60c31686e121e613df855e6f0bdaad9d02f1c9e61a4df24b8f69cfd3193f56d |
The HEC ingest endpoint will no longer implicitly parse logs using the built-in kv parser. Previously, a log ingested using this endpoint would implicitly be parsed with the kv parser when the supplied event field was given as a string. For instance, this log:
{
"time": 1537537729.0,
"event": "Fri, 21 Sep 2018 13:48:49 GMT - system started name=webserver",
"source": "/var/log/application.log",
"sourcetype": "applog",
"fields": { "#env": "prod" }
}
would be parsed, so that the resulting Humio event would contain the field name=webserver.
If you want to keep the previous behavior, you will have to perform this parsing operation explicitly.
When ingesting into the HEC endpoint, you are using an ingest token to authenticate with Humio. If that token does not have an associated parser, all you need to do is assign the kv parser to the token.
If your ingest token already has an assigned parser, you will need to prepend the code of that parser with this code snippet:
kvParse(@rawstring) | findTimestamp(addErrors=false) |
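To illustrate what key-value parsing extracts from the example event above, here is a simplified Python sketch. This is only a rough approximation of the kv parser's behavior, not the actual implementation:

```python
import re

def kv_parse(raw: str) -> dict:
    """Simplified key=value extraction, mimicking what a kv parser pulls out."""
    return dict(re.findall(r"(\w+)=(\S+)", raw))

event = "Fri, 21 Sep 2018 13:48:49 GMT - system started name=webserver"
print(kv_parse(event))  # {'name': 'webserver'}
```

With the implicit parsing removed, this extraction only happens if kvParse() is part of the parser assigned to the ingest token.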
Dark Mode is a new visual theme throughout Humio (except some settings pages). It is tailored to offer great readability in dark environments, to avoid brightening the entire room when used on dashboards, and to offer a unique visual style that some users prefer simply for its aesthetics. In 1.25, users will see a modal dialog asking which mode they would like: dark mode, light mode, or following the OS theme. This setting can later be changed in the settings menu.
Fixed in this release
Other
Update the minimum Humio version for Hosts in global when downgrading a node
Fixes an issue where the world map widget would misbehave in different ways
Humio trial installations require a trial license from this version. To request a trial license, go to: https://www.humio.com/getting-started/
All users (including existing users) need to accept the privacy notice: https://www.crowdstrike.com/privacy-notice and terms and conditions https://www.crowdstrike.com/terms-conditions/humio-self-hosted before using Humio.
Fix a number of cases where Humio could attempt to write a message to global larger than permitted.
Known Issues
Other
A regression can cause 1.26.0 to repeatedly error log and fail to start queries in cases where the list of hosts in the cluster is not fixed. This is particularly likely to affect clusters running with ephemeral disks. The regression is fixed in 1.26.1.
Humio Server 1.25.3 GA (2021-05-10)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.25.3 | GA | 2021-05-10 | Cloud | 2022-05-31 | No | 1.16.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 08d8b9bfd9692b992a0f564426041717 |
SHA1 | 463df975e2f50c3ae3de888b454a2d1a7285f148 |
SHA256 | 06167df8dabc6211a26edc4aede8d2461ded183e7bb6f1ef527f1b8a441f52b0 |
SHA512 | dbbed239b977af08889433e9f020bbdcb02b5f9496ab2a7eb410d967eb72ef7a4dcfde93374d36a3fe8c5798b35527f2594c025b6c68a3517c20e6f32e5d4ad9 |
Minor bug fixes, including removing error logs from alert jobs running in a Sandbox.
Fixed in this release
Summary
Minor bug fixes and improvements.
Other
Removes error logs from the alert job when running alerts on a sandbox repository.
Humio Server 1.25.2 GA (2021-05-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.25.2 | GA | 2021-05-06 | Cloud | 2022-05-31 | No | 1.16.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 29b0554b42990118d22fb674e0f6b397 |
SHA1 | ce0ee5e134123fa5ccde8dc012fb1822722140ff |
SHA256 | 9374d762c00d5aa1884f163391fa13149e74f22e428ff849f15124ded001c033 |
SHA512 | 52b14b36db509515fb1d25e2e346b3a920493cdfd7f4862ebdee892014d14394d19e2324da60ec7d1f7137b1eef50c9716499e174e1f910f7a33f437aee1c053 |
Bug fix related to global consistency checks with nodes.
Fixed in this release
Summary
Fixes a problem where having many nodes and a large global could lead to deadlocks in the global consistency check.
Humio Server 1.25.1 GA (2021-05-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.25.1 | GA | 2021-05-04 | Cloud | 2022-05-31 | No | 1.16.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 7e4477532960f4662ee7f3c11681d640 |
SHA1 | 8f7d79294eb27af73333252b2177e3b1467fa01e |
SHA256 | 44b0be0f312c747f1b4f32ca34613bccdc47330100d3d74381731e556e2da999 |
SHA512 | b3624257bd69d0d0e123e7af5010da3d49680f0d286c1ad4daae50f75bfc8ea0cea35836c8e233c09c1f3ee4e5c02ecbc91850c0a0af30afb59e26206c0de125 |
There is a serious issue affecting larger clusters in this release. The global inconsistency checker job can cause the thread responsible for reading changes from global to hang. It is possible to work around this by disabling the job using RUN_GLOBAL_CONSISTENCY_CHECKER_JOB=false. This is fixed in 1.25.2.
Fixed in this release
Other
Makes disabled items in the main menu look disabled in dark mode
Humio Server 1.25.0 GA (2021-04-29)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.25.0 | GA | 2021-04-29 | Cloud | 2022-05-31 | No | 1.16.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | aa7d71e3617d71ec800f1e5e898cb328 |
SHA1 | 8e815fd8c1d6a0bc1c22e096e8f82d08d40ea172 |
SHA256 | 7f26e858df2f1e16a64f44c6ac72948267eb317d884ad1db72b3799adcc95696 |
SHA512 | 1f447d204ecb3e60c80835e02a4fe8e6c7984728df5b31827b23f2278e00a0862a78dabe8ef01a2295f925a351693bdbb1d649c83465bf0951b040eab466fbad |
There is a serious issue affecting larger clusters in this release. The global inconsistency checker job can cause the thread responsible for reading changes from global to hang. It is possible to work around this by disabling the job using RUN_GLOBAL_CONSISTENCY_CHECKER_JOB=false. This is fixed in 1.25.2 (and 1.26.0).
The HEC ingest endpoint will no longer implicitly parse logs using the built-in kv parser. Previously, a log ingested using this endpoint would implicitly be parsed with the kv parser when the supplied event field was given as a string. For instance, this log:
{
"time": 1537537729.0,
"event": "Fri, 21 Sep 2018 13:48:49 GMT - system started name=webserver",
"source": "/var/log/application.log",
"sourcetype": "applog",
"fields": { "#env": "prod" }
}
would be parsed, so that the resulting Humio event would contain the field name=webserver. If you want to keep the previous behavior, you will have to perform this parsing operation explicitly.
When ingesting into the HEC endpoint, you are using an ingest token to authenticate with Humio. If that token does not have an associated parser, all you need to do is assign the kv parser to the token.
If your ingest token already has an assigned parser, you will need to prepend the code of that parser with this code snippet:
kvParse(@rawstring) | findTimestamp(addErrors=false) |
Dark Mode is a new visual theme throughout Humio (except some settings pages). It is tailored to offer great readability in dark environments, to avoid brightening the entire room when used on dashboards, and to offer a unique visual style that some users prefer simply for its aesthetics. In 1.25, users will see a modal dialog asking which mode they would like: dark mode, light mode, or following the OS theme. This setting can later be changed in the settings menu.
New features and improvements
Other
The query scheduler now prioritizes new queries started by a user based on the cumulative cost of recent queries started by that user. Added new configuration QUERY_SPENT_FACTOR with the default value 0.5, which defines the weight of recent query costs when scheduling. Higher values mean that users with high recent query costs will see their queries penalized harder in the scheduling.
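The weighting described above can be pictured with a small sketch. The exact formula LogScale uses is not documented here, so the function below is an illustrative assumption; only the QUERY_SPENT_FACTOR default of 0.5 comes from the release note:

```python
def schedule_weight(query_cost: float, recent_cost: float,
                    spent_factor: float = 0.5) -> float:
    """Illustrative scheduling weight: a higher weight means the query is
    penalized harder. recent_cost is the user's cumulative recent query cost;
    spent_factor corresponds to the QUERY_SPENT_FACTOR configuration."""
    return query_cost + spent_factor * recent_cost

# A user with heavy recent usage is penalized harder than a fresh user:
light_user = schedule_weight(query_cost=10, recent_cost=0)
heavy_user = schedule_weight(query_cost=10, recent_cost=200)
```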
Fixed in this release
Automation and Alerts
Refreshing actions while creating alerts and scheduled searches now happens automatically, but can also be triggered manually using a button.
When running alerts and scheduled searches, all logging related to a specific alert or scheduled search will now be logged to the System Repositories repository, instead of the humio repository. Error logs will still be logged to the humio repository as well.
GraphQL API
The SearchDomain.viewerCanChangeConnections GraphQL field has been deprecated. Use SearchDomain.isActionAllowed instead.
Deprecates GraphQL fields UserSettings.isEventListOrderChangedMessageDismissed, UserSettings.isNewRepoHelpDismissed, and UserSettings.settings since they are not used for anything anymore, and will be removed in a future release.
Removes the deprecated Repository.isFreemium GraphQL field.
The updateSettings GraphQL mutation has been marked as unstable, as it can control unstable and ephemeral settings.
The SearchDomain.queries GraphQL field has been deprecated. Use SearchDomain.savedQueries instead.
Configuration
Removed the QUERY_QUOTA_EXCEEDED_PENALTY configuration.
SEGMENTMOVER_EXECUTOR_CORES allows tuning the number of concurrent fetches of segments from other nodes to this node. Defaults to vCPU/8, and must be at least 2.
S3_ARCHIVING_IBM_COMPAT for compatibility with S3 archiving to IBM Cloud Object Storage.
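The SEGMENTMOVER_EXECUTOR_CORES default can be sketched as follows. The exact rounding LogScale applies to vCPU/8 is an assumption; the floor of 2 comes from the entry above:

```python
def segmentmover_executor_cores(vcpu: int) -> int:
    """Illustrative default for SEGMENTMOVER_EXECUTOR_CORES:
    vCPU/8 (integer division assumed), but never fewer than 2."""
    return max(2, vcpu // 8)

# e.g. a 32-vCPU node defaults to 4 concurrent segment fetches,
# while small nodes are clamped to the minimum of 2.
```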
Ingestion
Added audit logging when assigning a parser to an ingest token or unassigning a parser from an ingest token. Added the parser name to all audit logs for ingest tokens.
Functions
Make the parseLEEF() function more robust and optimize its memory usage.
Fixed a bug which could cause head(), tail(), or sort() within either bucket() or a live query to return too few results in certain cases.
Optimized the splitString() function.
Added a new query function: base64Decode().
Fixed a bug where cidr() did not respect the include parameter.
Other
Added documentation link to autocomplete description in the Humio search field
Added new parameters handleNull and excludeEmpty to parseJson() to control how null and empty string values are handled.
When installing an application package, you sometimes had to refresh the page to get the assets in the package linked to their installed counterparts.
Added IP ASN Database license information to the Cluster Administration page
Added a warning to the Cluster Nodes page that warns you if not all Humio servers are running the same Humio version.
Some minor performance improvements in the ingest pipeline
Rework how Humio caches data from global. This fixes a number of data races, where Humio nodes could temporarily get an incorrect view of global.
Improved error logging for event forwarding
Fixed a bug that made it possible to rename a parser to an existing name and thereby overwrite the existing parser.
Changed the built-in audit-log parser so that null values are stored as an empty string value. Previously, they were stored as the string "null". The defaults are consistent with the old behavior, so that null values become a "null" string and empty string values are kept.
Bumped minimum supported versions of Chrome and Chromium from 60 to 69 due to updated dependencies
Allow user groups to be represented as a JSON string and not only as an array when logging in with OAuth.
Query poll response metadata now includes the Query Quota spent by the current user across queries. The cost so far of the current query was already included.
Made it possible to delete a parser overriding a built-in parser, even though it is used in an ingest token.
Reworked initialization of Humio's async listener infrastructure, to ensure that listeners do not miss any updates. This fixes a number of flakiness issues that could arise when a node was rebooted.
The HEC ingest endpoint is no longer implicitly running kvParse. This used to be the case when ingesting events of the form "event" : "Log line...". If the ingested data is to be key-value parsed, add kvParse() to the relevant parser for the input data.
When a query is cancelled, a reason for canceling the query is now always logged. Previously, this was only done if the query was cancelled due to an internal exception. Look for log lines starting with query is cancelled.
Fixed an issue where clicking the label of a parser rerouted erroneously
Fixed a bug that made it impossible to copy a parser to override a built-in parser.
Fixed a bug where a scheduled search would be executed repeatedly, as long as at least one out of multiple actions was failing. Now, execution is only repeated if all actions are failing.
Humio Server 1.24.4 LTS (2021-05-31)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.24.4 | LTS | 2021-05-31 | Cloud | 2022-04-30 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 83588b8570c97dc8a89b46524ea8d92a |
SHA1 | 15958a097eaba9689a43c557e1d6d14e2d566d6d |
SHA256 | cafca561035aec00f5971a15cf283b757794c8a2706829518a05ce089f92ea88 |
SHA512 | 15352a96bb6b4ed27ef1c7b238cf97142c36cfefe17be25bf671a706bd6afcf0c8ad8951a325838e259eeb52e643f3950b8f47f3186ecea049f34fc81e066eed |
These notes include entries from the following previous releases: 1.24.0, 1.24.1, 1.24.2, 1.24.3
Minor bug fixes, as well as a fix for a stack overflow bug in large clusters.
Fixed in this release
Summary
Minor bug fixes and improvements.
Fix a stack overflow that could occur during startup in larger clusters.
Other
Removed the QUERY_QUOTA_EXCEEDED_PENALTY config (introduced in 1.19.0).
Fixed an issue on the search page that prevented the event list from scrolling correctly.
Fixed a bug where ingestOnly nodes could not start on a more recent version than the stateful nodes in the cluster
Fixed an issue which prevented Safari users from seeing alert actions
Fixed an issue where, in terms of Query Quota, the cost spent in a long-running query was accounted as spent "now" when the query ended
Fixed an issue which caused problems with forward/backward compatibility of LanguageVersion in GraphQL
Major changes: 1.23.0 and 1.23.1.
Fixed an issue where a repository with a very high number of datasources could trigger an error writing an oversized message to Kafka from the ingest-reader-leader thread
The query scheduler now prioritizes new queries started by a user based on the cumulative cost of recent queries started by that user. Added new config QUERY_SPENT_FACTOR with the default value 0.5, which defines the weight of recent query costs when scheduling. Higher values mean that users with high recent query costs will see their queries penalized harder in the scheduling.
Ensure that if the Kafka leader loop thread dies, it kills the Humio process. In rare cases it was possible for this thread to die, leaving the node incapable of performing digest work
Fixes an issue where the user would get stuck in infinite loading after having been invited into an organization
Fixed a scrolling issue on the Kafka cluster admin page
Allow reverse proxies using a 10s timeout to work also for queries that take longer than that to initialize
Reduced off-heap memory usage
Humio Server 1.24.3 LTS (2021-05-10)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.24.3 | LTS | 2021-05-10 | Cloud | 2022-04-30 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | adb3c4827d86d476d104ab2ba8ec3171 |
SHA1 | 06a25ba15bf28659900807a8505860f5b32a65d6 |
SHA256 | bae28e700dc6806496fd351ffb865a80420f0757613546076b4a2e9aae719d8d |
SHA512 | aff4adf71e1140ffdbeebfef7eae451d0bbff41f5af1753dee13d1287635644228336da0de300208fe1e667ae151198c74b74fdf10d830d37ae9b072f136a7a4 |
These notes include entries from the following previous releases: 1.24.0, 1.24.1, 1.24.2
Minor bug fixes.
Fixed in this release
Summary
Minor bug fixes and improvements.
Other
Removed the QUERY_QUOTA_EXCEEDED_PENALTY config (introduced in 1.19.0).
Fixed an issue on the search page that prevented the event list from scrolling correctly.
Fixed a bug where ingestOnly nodes could not start on a more recent version than the stateful nodes in the cluster
Fixed an issue which prevented Safari users from seeing alert actions
Fixed an issue where, in terms of Query Quota, the cost spent in a long-running query was accounted as spent "now" when the query ended
Fixed an issue which caused problems with forward/backward compatibility of LanguageVersion in GraphQL
Major changes: 1.23.0 and 1.23.1.
Fixed an issue where a repository with a very high number of datasources could trigger an error writing an oversized message to Kafka from the ingest-reader-leader thread
The query scheduler now prioritizes new queries started by a user based on the cumulative cost of recent queries started by that user. Added new config QUERY_SPENT_FACTOR with the default value 0.5, which defines the weight of recent query costs when scheduling. Higher values mean that users with high recent query costs will see their queries penalized harder in the scheduling.
Ensure that if the Kafka leader loop thread dies, it kills the Humio process. In rare cases it was possible for this thread to die, leaving the node incapable of performing digest work
Fixes an issue where the user would get stuck in infinite loading after having been invited into an organization
Fixed a scrolling issue on the Kafka cluster admin page
Allow reverse proxies using a 10s timeout to work also for queries that take longer than that to initialize
Reduced off-heap memory usage
Humio Server 1.24.2 LTS (2021-04-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.24.2 | LTS | 2021-04-19 | Cloud | 2022-04-30 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 69a6ef22d7f295f807ceef46c8c0c6d1 |
SHA1 | 6284ee05bd7c583044f6620f385864a3be3bab84 |
SHA256 | cc648f1c4efe01d20e9f93a77b1892291077ae76339d1b5dcbc0e504f9975e25 |
SHA512 | 54c60624a5c0ef19c12fcda05b4fecbe8ddf626246c3160d90f29cc76143dfbe4b07f7d884616699486848f0e7b8c519c0e1d5f5dd3e2be3e8b0a6f49fe960c5 |
These notes include entries from the following previous releases: 1.24.0, 1.24.1
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.24.1 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.24.1. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Fixed in this release
Other
Removed the QUERY_QUOTA_EXCEEDED_PENALTY config (introduced in 1.19.0).
Fixed an issue on the search page that prevented the event list from scrolling correctly.
Fixed a bug where ingestOnly nodes could not start on a more recent version than the stateful nodes in the cluster
Fixed an issue which prevented Safari users from seeing alert actions
Fixed an issue where, in terms of Query Quota, the cost spent in a long-running query was accounted as spent "now" when the query ended
Fixed an issue which caused problems with forward/backward compatibility of LanguageVersion in GraphQL
Major changes: 1.23.0 and 1.23.1.
Fixed an issue where a repository with a very high number of datasources could trigger an error writing an oversized message to Kafka from the ingest-reader-leader thread
The query scheduler now prioritizes new queries started by a user based on the cumulative cost of recent queries started by that user. Added new config QUERY_SPENT_FACTOR with the default value 0.5, which defines the weight of recent query costs when scheduling. Higher values mean that users with high recent query costs will see their queries penalized harder in the scheduling.
Ensure that if the Kafka leader loop thread dies, it kills the Humio process. In rare cases it was possible for this thread to die, leaving the node incapable of performing digest work
Fixes an issue where the user would get stuck in infinite loading after having been invited into an organization
Fixed a scrolling issue on the Kafka cluster admin page
Allow reverse proxies using a 10s timeout to work also for queries that take longer than that to initialize
Reduced off-heap memory usage
Humio Server 1.24.1 LTS (2021-04-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.24.1 | LTS | 2021-04-12 | Cloud | 2022-04-30 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | b7fec8d3a5c0b521a371dbeb6e56d8b2 |
SHA1 | fbabef325eaaf1bd0a3052f51497f00779efad5f |
SHA256 | 223e48fe647fc681456b47cad81d0fca5936d6283791b2a41fa4c8dc199c1c76 |
SHA512 | 51663220d67bb25f67b7e44837e84c87c36c3f24da69918b1a6978fc751543a2f1f25f49f166681def7a0276881907b284a52bb302fa6d745d7b16859d3f7d96 |
These notes include entries from the following previous releases: 1.24.0
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.24.1 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.24.1. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Fixed in this release
Other
Removed the QUERY_QUOTA_EXCEEDED_PENALTY config (introduced in 1.19.0).
Fixed an issue on the search page that prevented the event list from scrolling correctly.
Fixed a bug where ingestOnly nodes could not start on a more recent version than the stateful nodes in the cluster
Fixed an issue which prevented Safari users from seeing alert actions
Fixed an issue which caused problems with forward/backward compatibility of LanguageVersion in GraphQL
Major changes: 1.23.0 and 1.23.1.
The query scheduler now prioritizes new queries started by a user based on the cumulative cost of recent queries started by that user. Added new config QUERY_SPENT_FACTOR with the default value 0.5, which defines the weight of recent query costs when scheduling. Higher values mean that users with high recent query costs will see their queries penalized harder in the scheduling.
Ensure that if the Kafka leader loop thread dies, it kills the Humio process. In rare cases it was possible for this thread to die, leaving the node incapable of performing digest work
Fixes an issue where the user would get stuck in infinite loading after having been invited into an organization
Fixed a scrolling issue on the Kafka cluster admin page
Allow reverse proxies using a 10s timeout to work also for queries that take longer than that to initialize
Humio Server 1.24.0 LTS (2021-04-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.24.0 | LTS | 2021-04-06 | Cloud | 2022-04-30 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | cbee15b1c1af8d13984fbd938e9f9488 |
SHA1 | 8562af8753aca033d6b4966311787c2a681116dc |
SHA256 | 7cda1dd2188ec2ed5899c5a4c26cf45c2f2d4199439342da64819194475b5cf2 |
SHA512 | ad9e05ed84cd08f8774b5bbec44b8075ebb9e2cd7898f7d6f88fc445a5e1476284b86b1d3d5536ecdf1609b7c8b4e05d3ad27302dd79e41f95094140c12ce002 |
Important Information about Upgrading
This release promotes the latest 1.23 release from preview to stable.
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.24.0 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.24.0. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Humio will make some internal logs available in a new repository called humio-activity. This is meant for logs that are relevant to users of Humio, as compared to logs that are only relevant for operators. The latter logs are still put into the humio repository. For this release, only new log events will be put into humio-activity, but in later releases, some existing log events that are relevant for users, will be put into the humio-activity repository instead of the humio repository.
For cloud users, the logs for your organization can be accessed through the humio-organization-activity view.
For on-prem users, the logs can be accessed directly through the humio-activity repository. They are also output into a new log file named humio-activity.log, which can be ingested into the humio repository if you want it available there as well. If you do, and you are using the Insights Package, you should upgrade that to version 0.0.4. For more information, see LogScale Internal Logging.
Humio has decided to adopt an evolutionary approach to its GraphQL API, meaning that we will strive to do only backwards compatible changes. Instead of making non-backwards compatible changes to existing fields, we will instead add new fields alongside the existing fields. The existing fields will be deprecated and might be removed in some later release. We reserve the right to still do non-backwards compatible changes, for instance to fix security issues.
For new experimental features, we will mark the corresponding GraphQL fields as PREVIEW. There will be no guarantees on backwards compatibility for fields marked as PREVIEW.
Deprecated and preview fields and enum values will be marked as such in the GraphQL schema and will be shown as deprecated or preview in the API Explorer. Apart from that, the result of running a GraphQL query using a deprecated or preview field will contain a new field extensions, which contains a field deprecated with a list of all deprecated fields used in the query and a field preview with a list of all preview fields used in the query.
Example:
{
"data": "...",
"extensions": {
"deprecated": [
{
"name": "alert",
"reason": "[DEPRECATED: Since 2020-11-26. Deprecated since 1.19.0. Will be removed March 2021. Use 'searchDomain.alert' instead]"
}
]
}
}
Deprecated fields and enum values will also be noted in the release notes when they are first deprecated. All use of deprecated fields and enum values will also be logged in the Humio repository humio-activity. These logs will have `#category=GraphQL`, `subCategory=Deprecation` and `#severity=Warning`. If you are using the API, consider creating an alert for such logs.
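A client can watch for this in its own tooling as well. The sketch below parses a GraphQL response body shaped like the example above and extracts the names of any deprecated fields used; the response payload and helper name are illustrative, not part of the Humio API:

```python
import json

# Hypothetical GraphQL response body, shaped like the example above.
response_body = json.dumps({
    "data": "...",
    "extensions": {
        "deprecated": [
            {
                "name": "alert",
                "reason": "[DEPRECATED: Since 2020-11-26. Use 'searchDomain.alert' instead]",
            }
        ]
    },
})

def deprecation_warnings(body: str) -> list[str]:
    """Return the names of deprecated fields used by a query, if any."""
    extensions = json.loads(body).get("extensions", {})
    return [entry["name"] for entry in extensions.get("deprecated", [])]

print(deprecation_warnings(response_body))  # ['alert']
```

A client could log or fail CI on a non-empty result, catching deprecated usage before the fields are removed.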
Removed Support for CIDR Shorthand
Previous versions of Humio supported a shorthand for IPv4 CIDR expressions. For example, `127.1/16` would be equivalent to `127.1.0.0/16`. This was contrary to other implementations, such as the Linux function `inet_aton`, where `127.1` expands to `127.0.0.1`. Support for this shorthand has been removed and the complete address must now be written instead.
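The stricter behavior matches what most standard parsers already do. As an illustration (using Python's `ipaddress` module, not LogScale itself), the full form parses while the old shorthand is rejected:

```python
import ipaddress

# The full CIDR form is accepted.
net = ipaddress.ip_network("127.1.0.0/16")
print(net.num_addresses)  # 65536

# The old shorthand is rejected by strict parsers such as Python's
# ipaddress module, mirroring LogScale's new behavior.
try:
    ipaddress.ip_network("127.1/16")
except ValueError as err:
    print("rejected:", err)
```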
Fixed in this release
Other
Removed the `QUERY_QUOTA_EXCEEDED_PENALTY` config (introduced in 1.19.0).
Fixed an issue on the search page that prevented the event list from scrolling correctly.
Fixed an issue which prevented Safari users from seeing alert actions.
Fixed an issue which caused problems with forward/backward compatibility of LanguageVersion in GraphQL.
Major changes: 1.23.0 and 1.23.1.
The query scheduler now prioritizes new queries started by a user based on the cumulative cost of recent queries started by that user. Added new config `QUERY_SPENT_FACTOR` with the default value 0.5, which defines the weight of recent query costs when scheduling. Higher values mean that users with high recent query costs will see their queries penalized harder in the scheduling.
Fixed a scrolling issue on the Kafka cluster admin page.
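The cost-weighted fairness in the query scheduler change above can be sketched with a toy model (the formula, names, and data below are invented for illustration and are not LogScale's actual scheduler):

```python
# Toy model: a user's pending query is penalized by the cumulative
# cost of that user's recent queries, weighted by a spent factor.
SPENT_FACTOR = 0.5  # stands in conceptually for QUERY_SPENT_FACTOR

recent_cost = {"alice": 1000.0, "bob": 10.0}  # hypothetical recent spend
pending = [("alice", "query-a"), ("bob", "query-b")]

def priority(user: str) -> float:
    # Lower is scheduled first; heavy recent spenders are penalized.
    return SPENT_FACTOR * recent_cost.get(user, 0.0)

next_user, next_query = min(pending, key=lambda p: priority(p[0]))
print(next_user, next_query)  # bob query-b
```

With a higher spent factor, alice's queries would be penalized even more relative to bob's, which is the knob the config exposes.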
Humio Server 1.23.1 LTS (2021-03-24)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.23.1 | LTS | 2021-03-24 | Cloud | 2022-03-31 | No | 1.16.0 | No |
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.23.1 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.23.1. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Fixed in this release
Configuration
Added `S3_ARCHIVING_IBM_COMPAT` for compatibility with S3 archiving to IBM Cloud Object Storage.
Other
Allow the users group to be represented as a JSON string, and not only an array, when logging in with OAuth.
Humio Server 1.23.0 GA (2021-03-18)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.23.0 | GA | 2021-03-18 | Cloud | 2022-03-31 | No | 1.16.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 882c77cb19e867084fbb26dc80c079d8 |
SHA1 | 053d49648f03fd49f0766aa9df64f66921c72638 |
SHA256 | 898f1670010d25866f9fb27e054509a2ade615dbae612cdc70ce34371e03ac59 |
SHA512 | eba333bfec11983f6140ca4e64ec725c91c0d724f245c0250f2264a9221036a7d9e89aace2bf096ce7b5ecca72b4c24659348feba7098d89a5a4035359d8b8d3 |
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.23.0 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.23.0. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Humio will make some internal logs available in a new repository called humio-activity. This is meant for logs that are relevant to users of Humio, as opposed to logs that are only relevant for operators. The latter logs are still put into the humio repository. For this release, only new log events will be put into humio-activity, but in later releases, some existing log events that are relevant for users will be put into the humio-activity repository instead of the humio repository.
For cloud users, the logs for your organization can be accessed through the `humio-organization-activity` view. For on-prem users, the logs can be accessed directly through the `humio-activity` repository. They are also output into a new log file named `humio-activity.log`, which can be ingested into the `humio` repository if you want it available there as well. If you do, and you are using the Insights Package, you should upgrade that to version 0.0.4. For more information, see LogScale Internal Logging.
Humio has decided to adopt an evolutionary approach to its GraphQL API, meaning that we will strive to make only backwards-compatible changes. Instead of making non-backwards-compatible changes to existing fields, we will add new fields alongside the existing ones. The existing fields will be deprecated and may be removed in a later release. We reserve the right to still make non-backwards-compatible changes, for instance to fix security issues.
For new experimental features, we will mark the corresponding GraphQL fields as PREVIEW. There will be no guarantees on backwards compatibility for fields marked as PREVIEW.
Deprecated and preview fields and enum values will be marked as such in the GraphQL schema and will be shown as deprecated or preview in the API Explorer. In addition, the result of running a GraphQL query using a deprecated or preview field will contain a new field, `extensions`, which contains a field `deprecated` with a list of all deprecated fields used in the query, and a field `preview` with a list of all preview fields used in the query.
Example:
{
"data": "...",
"extensions": {
"deprecated": [
{
"name": "alert",
"reason": "[DEPRECATED: Since 2020-11-26. Deprecated since 1.19.0. Will be removed March 2021. Use 'searchDomain.alert' instead]"
}
]
}
}
Deprecated fields and enum values will also be noted in the release notes when they are first deprecated. All use of deprecated fields and enum values will also be logged in the Humio repository humio-activity. These logs will have `#category=GraphQL`, `subCategory=Deprecation` and `#severity=Warning`. If you are using the API, consider creating an alert for such logs.
Removed Support for CIDR Shorthand
Previous versions of Humio supported a shorthand for IPv4 CIDR expressions. For example, `127.1/16` would be equivalent to `127.1.0.0/16`. This was contrary to other implementations, such as the Linux function `inet_aton`, where `127.1` expands to `127.0.0.1`. Support for this shorthand has been removed and the complete address must now be written instead.
Deprecation
Items that have been deprecated and may be removed in a future release.
Deprecated GraphQL mutations addAlertLabel, removeAlertLabel, addStarToAlert and removeStarFromAlert as they did not follow the standard for mutation input.
New features and improvements
Summary
Added GraphQL queries and mutations for alerts and actions, which correspond to the deprecated REST endpoints for those entities.
GraphQL API
Added GraphQL mutations addAlertLabelV2, removeAlertLabelV2, addStarToAlertV2 and removeStarFromAlertV2.
Fixed in this release
Automation and Alerts
Restyled the alert dialogue.
Deprecated the REST endpoints for alerts and actions.
Functions
Deprecated the `file` and `column` parameters on `cidr()`. Use `match()` with `mode=cidr` instead.
Fixed a bug which caused glob-patterns in `match()` to not match newline characters.
Negated, non-strict `match()` or `lookup()` is no longer allowed.
Added a `mode` parameter to `match()`, allowing different ways to match the key.
Fixed a bug which caused tag-filters in anonymous functions to not work in certain cases (causing too many events to be let through).
Deprecated the `glob` parameter on `match()`; use `mode=glob` instead.
Removed support for shorthand IPv4 CIDR notation in `cidr()`. See the section "Removed Support for CIDR Shorthand".
Fixed a bug in event forwarding that made `start()`, `end()` and `now()` return the time at which the event forwarding rule was cached. Instead, `now()` will return the time at which the event forwarding rule was run. `start()` and `end()` were never meant to be used in an event forwarding rule and will return 0, which means the Unix epoch.
Fixed a bug which caused `in()` with `values=[]` to give incorrect results.
Improved performance when using `match()` with `mode=cidr` compared to using `cidr()` with `file()`.
Other
Enforce permissions to enter Organization Settings page.
Added a new introduction message to empty repositories.
Fixed an issue which caused Ingesting Data to Multiple Repositories to break when the parser used copyEvent to duplicate the input events into multiple repositories.
Refactor how the width of the repository name in the main navigation bar is calculated.
Improved performance of free-text search using regular expressions.
The GraphQL API Explorer has been upgraded to a newer version. The new version includes a history of the queries that have been run.
Added an option to make it easier to diagnose problems by detecting inconsistencies between globals in different Humio instances. Each Humio instance has its own copy of the global state which must all be identical. It has happened that they were not, so now we check and if there is a difference we report an error and dump the global state into a file.
Allow turning encryption of files stored in bucket storage off by explicitly setting `S3_STORAGE_ENCRYPTION_KEY=off` (similar for `GCP_`).
The GraphQL API Explorer is now available from inside Humio. You can access it using the Help->API Explorer menu.
Fixed the requirement condition for the time retention on a repository.
Removed the deprecated `Repository.isFreemium` GraphQL field.
Fixed a bug where the same regex pattern occurring multiple times in a query could cause incorrect results.
Deprecated the `ReadEvents` enum variant from the `ViewAction` enum in GraphQL. Use the `ReadContents` variant instead, which has the same semantics but a more accurate name. `ReadEvents` will be removed in a future release.
UI enhancements for the new repository Access Permissions page.
Fixed an issue where changes to files would not propagate to parsers or event forwarders.
Fixed an issue causing undersized segment merging to repeatedly fetch the same segments, in cases where the merger job took too long to finish.
Fixed an issue where Prometheus metrics always reported 0.0 for `humio_primary_disk_usage`.
Enforce permissions to enter the create new repository page.
Refactor Organization Overview page.
Fixed a bug which caused `match()` to give incorrect results in certain cases due to incorrect caching.
Fixed a bug where events deleted with the delete-event API would appear deleted at first, but then resurface after 24 hours, if the user applying the delete did not have permission to search the events being deleted.
Made the S3 archiving save button work again.
Changed the URL of the Kafka cluster page in the settings.
Enforce accepting terms and conditions.
Improved memory use for certain numerical aggregating functions.
Fixed an issue where regular expressions too large to handle would sometimes cause the query to hang. Now we report an error.
The `SearchDomain.queries` GraphQL field has been deprecated and will be removed in a future release. Use `SearchDomain.savedQueries` instead.
Refactor All Organizations page.
Added an IP filter for readonly dashboard links, and started to audit log readonly dashboard access. In this initial version, the readonly IP filter can be configured with the GraphQL mutation:

mutation { updateReadonlyDashboardIPFilter(ipFilter: "FILTER") }

The FILTER is expected in this format: IP Filter. From Humio 1.25 this can be configured in the configuration UI.
Mark required fields on the Accept Terms and Conditions page.
Fixed an issue with the Missing Segments API that caused missing segments to not appear in the missing segments list if they had a replacement segment.
Refactor client side action cache of allowed permissions.
Implemented toggle button for dark mode.
It is again possible to sort the events on the test parser page.
The `SearchDomain.viewerCanChangeConnections` GraphQL field has been deprecated and will be removed in a future release. Use `SearchDomain.isActionAllowed` instead.
Humio Server 1.22.1 LTS (2021-03-02)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.22.1 | LTS | 2021-03-02 | Cloud | 2022-03-31 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 7d2fb4c0736d1c7f03197fba78a33c13 |
SHA1 | 36c3c2b7ffe3485fb9f87a75b515cb7b9efc5efc |
SHA256 | bc3122c71e8c1bdf57fc044912c660b86933c871758818fb93a8d609c9bc340d |
SHA512 | 54e1da57da87480a2b1b0cc577eb731b734b8fe7af0e2ac3afcd406d28f29ffa5b834d836b283dc5e730b10bba43d55484cd8579746c800bd26a26280a088200 |
These notes include entries from the following previous releases: 1.22.0
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.22.1 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.22.1. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Fixed in this release
Other
Restrict concurrency when mirroring uploaded files within the cluster.
Fixed an issue where tag filters in anonymous parts within an aggregate did not get applied.
Fixed an issue where updating account settings would present the user with an error even though the update was successful.
Changed log lines to create a 'kind' field. Kind is used as a tag for the different Humio logs.
Added the "ProxyOrganization" header to the list of general auth headers used on REST calls.
Fixed an issue where local segment files would not get deleted in time, potentially filling the disk.
Major changes: see version 1.21.0 and 1.21.1 release notes.
Fixed an issue where root users were not allowed to set unlimited time in retention settings.
Fixed an overflowing editor bug.
Increased the HTTP chunk size from 16MB to 128MB.
Fixed an issue where the parser list had no height on Safari.
Fixed an issue where alert pages had no height in Safari.
Humio Server 1.22.0 LTS (2021-03-02)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.22.0 | LTS | 2021-03-02 | Cloud | 2022-03-31 | No | 1.16.0 | Yes |
JAR Checksum | Value |
---|---|
MD5 | 115532e6c7c321a1fe737bdde5731f4b |
SHA1 | 8a42ec6f7ba168939c60d521d30eba202479173e |
SHA256 | 4cff315346b9edb1ba625f48fe36105666dc9b95e711190578ff62752fa763e4 |
SHA512 | 0173d8b48f8612198cb4a20146b15d23769cacee4fac12a7c710a93ef2f17a30140ccee8e09c8f7c5262815cbb2e03876fe30986f276a9d36155832c4e03a233 |
Important Information about Upgrading
This release promotes the latest 1.21 release from preview to stable.
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.22.0 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.22.0. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
UI revamp: in this version, the UI has been given a complete makeover.
Fixed in this release
Other
Restrict concurrency when mirroring uploaded files within the cluster.
Fixed an issue where tag filters in anonymous parts within an aggregate did not get applied.
Fixed an issue where updating account settings would present the user with an error even though the update was successful.
Changed log lines to create a 'kind' field. Kind is used as a tag for the different Humio logs.
Added the "ProxyOrganization" header to the list of general auth headers used on REST calls.
Fixed an issue where local segment files would not get deleted in time, potentially filling the disk.
Major changes: see version 1.21.0 and 1.21.1 release notes.
Fixed an issue where root users were not allowed to set unlimited time in retention settings.
Increased the HTTP chunk size from 16MB to 128MB.
Humio Server 1.21.1 GA (2021-02-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.21.1 | GA | 2021-02-23 | Cloud | 2022-03-31 | No | 1.16.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 3f2b7ae5265ab8e81d339ebbd260b802 |
SHA1 | e406e539b9d3474a25accc82d49efdbcc94c7f4d |
SHA256 | 0ec0db6bfaa232c30d1e0056442f58e5d2c20d9b7716ba136b8350d2c5c05bdd |
SHA512 | 01383adbe54ceb73679aece980b0a7752b07478a40ac3f811a3aadbf9fb55077ebf08842cb2442628af1200e677d0db289a6007b3706794c79377787ef56e898 |
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.21.1 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.21.1. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Fixed in this release
Other
New "prefetch from bucket" job. When a node starts with an empty disk it will download a relevant subset of segment files from the bucket in order to have them present locally for queries.
The `Server:` header in responses from the Humio HTTP server now includes (Vhost, NodeRole) after the version string.
Improved performance of the "decrypt step" in downloads from bucket storage.
Humio Server 1.21.0 GA (2021-02-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.21.0 | GA | 2021-02-22 | Cloud | 2022-03-31 | No | 1.16.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 3175d041a4c0a6948d5e23993b7a3bcd |
SHA1 | 1356a57098602623b5cab8511f530aab3b04a080 |
SHA256 | 8f576aca2a00533180ed3710971bd9c4c419e275d618c4c745d004b9a5ad9987 |
SHA512 | 475c72b5655744be0a900269478d930942cd7aae9ec8acf0e38c1eff2a4c7ec243c91293996ad8288ec2ed9c72b896436bb8e12b67f44b999fc03d1f43db4a2d |
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.21.0 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.21.0. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Removed
Items that have been removed as of this release.
Other
The deprecated built-in parser bro-json has been deleted. It has been replaced by the parser zeek-json.
The deprecated built-in parser json-for-notifier has been deleted. It has been replaced by the parser json-for-action.
Fixed in this release
Automation and Alerts
Create, update and delete of an alert, scheduled search or action is now recorded in the audit log.
Functions
Fixed a bug in `lowercase()` which caused the case `lowercase(field="*", include="values")` to not process all fields but only the field named `"*"`.
Fixed a bug which caused validation to miss rejecting `window()` inside `window()` and `session()`.
`subnet()` now reports an error if its argument `bits` is outside the range 0 to 32.
The `replace()` function now reports an error if the arguments `replacement` and `with` are provided at the same time.
The `split()` function no longer adds a @display field to the event it outputs.
The `replace()` function now reports an error if an unsupported flag is provided in the `flags` argument.
Changed handling of `groupBy()` in live queries, which should in many cases reduce memory cost.
The functions `worldMap()` and `geohash()` now generate errors if the requested precision is greater than 12.
Fixed a memory leak in `rdns()` in cases where many different name servers are used.
Fixed a bug which caused `eventInternals()` to crash if used late in the pipeline.
The `transpose()` function now reports an error if the arguments `header` or `column` are provided together with the argument `pivot`.
Fixed bugs in `format()` which caused output from `%e` and `%g` to be incorrect in certain cases.
Fixed a performance and a robustness problem with the function `unit:convert()`. The formatting of the numbers in its output may in some cases be different now.
The `findTimestamp()` function has been changed so that it no longer has a default value for the `timezone` parameter. Previously, the default was `UTC`. If no timezone argument is supplied to the function, it will not parse timestamps that do not contain a timezone. To get the old functionality, simply add `timezone=UTC` to the function. This can be done before upgrading to this release.
The experimental function `moment()` has been removed.
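The `findTimestamp()` change above reflects a general ambiguity: a timestamp without a zone does not denote a single instant until a zone is assumed. A small Python illustration of the same distinction (not LogScale's parser):

```python
from datetime import datetime, timezone

with_zone = "2021-02-22T12:00:00+00:00"
without_zone = "2021-02-22T12:00:00"

# A timestamp carrying its own offset is unambiguous.
aware = datetime.fromisoformat(with_zone)
print(aware.tzinfo)  # UTC

# A zone-less timestamp parses, but is "naive": the parser cannot know
# which instant it denotes. findTimestamp() now refuses such inputs
# unless you explicitly supply a timezone argument.
naive = datetime.fromisoformat(without_zone)
print(naive.tzinfo)  # None

# Supplying a default zone, as the old timezone=UTC default did,
# resolves the ambiguity.
resolved = naive.replace(tzinfo=timezone.utc)
print(resolved.isoformat())  # 2021-02-22T12:00:00+00:00
```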
Other
The Humio Insights package is installed if missing on the humio view when Humio is started.
Fixed an issue causing event redirection to break when using copyEvent to get the same events ingested into multiple repositories.
Raised the note widget text length limit to .00.
`kvParse()` now unescapes backslashes when they're inside quotes (`'` or `"`).
Fixed an issue where repeating queries would not validate in alerts.
Make the thread dump job run on a dedicated thread, rather than running on the thread pool shared with other jobs.
Fixed an issue with lack of escaping in filename when downloading.
Running test of a parser is no longer recorded in the audit log, and irrelevant fields are no longer recorded upon parser deletion.
Made loggings for running alerts more consistent and more structured. All loggings regarding a specific alert will contain the keys `alertId`, `alertName` and `viewId`. Loggings regarding the alert query will always contain the key `externalQueryId` and sometimes also the keys `queryId` with the internal id and `query` with the actual query string. If there are problems with the run-as-user, the id of that user is logged with the key `user`.
Fixed a bug where analysis of a regex could consume extreme amounts of memory.
Raised the parser test character length limit to .00.
Fixed an issue where the segment mover might schedule too many segments for transfer at a time.
Fixed a number of potential concurrency issues.
Fixed an issue causing Humio to crash when attempting to delete an idle empty datasource right as the datasource receives new data.
Made sure the default parser on the humio view is only installed when missing, instead of overwriting it every time Humio starts.
Improve number formatting in certain places by being better at removing trailing zeros.
Lowered the severity level for some loggings for running alerts.
Fixed a bug where referenced saved queries were not referenced correctly after exporting them as part of a package.
`kvParse()` now also unescapes single quotes (`'`).
Improved the hit rate of the query state cache by allowing similar but not identical queries to share cache when the entry in the cache can form the basis for both. The cache format is incompatible with previous versions; this is handled internally by treating incompatible cache entries as cache misses.
Fixed a bug which could cause saving of query state cache to take a rather long time.
The default parser `kv` has been changed from using the `parseTimestamp()` function to using the `findTimestamp()` function. This will make it able to parse more timestamp formats. It will still only parse timestamps with a timezone. It also no longer adds a `timezone` field with the extracted timestamp string; this was only done for parsing the timestamp and was not meant for storing on the event. To keep the old functionality, clone the `kv` parser in the relevant repositories and store the cloned parser with the name `kv`. This can be done before upgrading to this release. See kv.
Fixed a bug in `parseJson()` which resulted in failed JSON parsing if an object contained an empty key (`""`).
Fixed an issue with the validation of the query prefix set on a view for each repository within the view: invoking macros is not allowed and was correctly rejected when creating a view, but was not rejected when editing an existing connection.
Fixed a bug which could potentially cause a query state cache file to be read in an incomplete state.
Improved performance of `writeJson()` a bit.
When using filters on dashboards, you can now easily reset the filter, either removing it completely or using the default filter if one is present.
Prevent Humio from booting when ZooKeeper has been reset but Kafka has not.
Fixed an issue causing segment tombstones to potentially be deleted too early if bucket storage is enabled, causing an error log.
Made loggings for running scheduled searches more consistent and more structured. All loggings regarding a specific scheduled search will contain the keys `scheduledSearchId`, `scheduledSearchName` and `viewId`. Loggings regarding the query will always contain the key `externalQueryId` and sometimes also the keys `queryId` with the internal id and `query` with the actual query string. If there are problems with the run-as-user, the id of that user is logged with the key `user`.
Fixed an issue where cancelled queries could be cached.
Fixed a bug in `upper()` and `lower()` which could cause their output to be corrupted (in cases where no characters had been changed).
Fixed an issue where merges of segments were reported as failed due to input files being deleted while merging. This is not an error, and is no longer reported as such.
`kvParse()` now only unescapes quotes and backslashes that are inside a quoted string.
Added support for disaster recovery of a cluster where all nodes, including Kafka, have been lost, restoring the state present in bucket storage as a fresh cluster, using the old bucket as read-only and forming a fresh cluster from that. New configs: `S3_RECOVER_FROM_REPLACE_REGION` and `S3_RECOVER_FROM_REPLACE_BUCKET` allow modifying the names of the region/bucket while recovering, to allow running on a replica. The read-only source is specified using `S3_RECOVER_FROM*` for all the bucket storage target parameters otherwise named `S3_STORAGE*`.
When using ephemeral disks, nodes being replaced with new ones on empty disks no longer download most of the segments they had before being replaced, but instead schedule downloads based on what is being searched.
The Auth0 login page will no longer load a local version of the Auth0 Lock library, but will instead load a login script hosted on Auth0's CDN. This may require opening access to https://cdn.auth0.com/ if hosting Humio behind a firewall.
Packages
When exporting a package, you now get a preview of the icon you've added for the package.
Packages can now be updated with the same version but new content. This makes iterating over a package before finalizing it easier.
Humio Server 1.20.4 LTS (2021-02-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.20.4 | LTS | 2021-02-22 | Cloud | 2022-01-31 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | e84da693a3ca628775ca56a33f27db91 |
SHA1 | fd2a37750456d861a92ee41840701c2040b71d79 |
SHA256 | 0fc7d2b010abbc263ef97ae4cb019994804133ff858f23de03e0bce69760011a |
SHA512 | 9af984c0a5896046cf0f3febd3b64aef7af501404df75a612c04452140aa9cd9c97f4ed2a5a95fe241ac227db3f3bbbe072ddaa2bed058053b6cfeafcbc388e5 |
These notes include entries from the following previous releases: 1.20.0, 1.20.1, 1.20.2, 1.20.3
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.20.4 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.20.4. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Fixed in this release
Functions
Other
Fixed an issue causing event redirection to break when using the `copyEvent()` function to get the same events forwarded to multiple repositories.
Fixed a bug where exporting a package with dashboard parameters would not set the correct namespace for a saved query called in a parameter.
Fixed a bug where exporting a package using a saved query with spaces in the name would not export the correct name.
New "prefetch from bucket" job - When a node starts with an empty disk it will download a relevant subset of segment files from the bucket in order to have them present locally for queries.
Minor fix to Humio internal JSON logging when using the configuration `HUMIO_LOG4J_CONFIGURATION=log4j2-json-stdout.xml`.
Fixed an issue where cloning a dashboard or parser would clone the wrong entity.
Enable Package marketplace (in beta)
Query scheduling now tracks cost spent across queries for each user and tends to select next task so that users (rather than queries) each get a fair share of available CPU time.
Segment download timeout raised from 120s to 1.0s. Avoids situations where large segments could not be moved around a cluster.
Reduce triggering of auto completion in the query editor.
Fixed an issue where some parts of regexes were not shown in the parser editor.
Improve query cache hit rate by not starting queries locally when the preferred nodes are down, if the local node has just started — as there is a fair chance the preferred nodes will show up shortly too.
Fixed a bug in `parseJson()` which resulted in failed JSON parsing if an object contained an empty key (`""`).
Connecting to Packages now respects the Humio proxy configuration.
Improve auto completion suggestions.
Fixed an issue causing the secondary storage transfer job to plan more segments for transfer than necessary.
Handle inconsistencies in the global entities file gracefully rather than crashing.
Fixed a bug where merged segments could grow too large if the source events were large.
Major changes (see version 1.19.0 release notes)
Fixed a bug where fields panel was not scrollable in Safari.
Humio Server 1.20.3 LTS (2021-02-11)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.20.3 | LTS | 2021-02-11 | Cloud | 2022-01-31 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | eb0a1ddfb3bd3440206fc7ce95fedd74 |
SHA1 | 0eedbdc67554a9efbb0fcc5f667a99c899ae43bc |
SHA256 | ecdcb4a299eb3668ed6d5435c97469b4f7b1423f239f4e0a4b95a88d9b8fd566 |
SHA512 | 53dd06169c01eaccc6766f53f6a7c71910c131d7600bfdcdb3f9796d61e7ccdcfa801d8b2a2b9ed4b205fa7c3e567dc2a9a3b6879c94e10f408892cd0d57d77d |
These notes include entries from the following previous releases: 1.20.0, 1.20.1, 1.20.2
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.20.3 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.20.3. In case you need to do a rollback, this can also only happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Fixed in this release
Functions
Other
Fixed a bug where exporting a package with dashboard parameters would not set the correct namespace for a saved query called in a parameter.
Fixed a bug where exporting a package using a saved query with spaces in the name would not export the correct name.
Minor fix to Humio internal JSON logging when using the configuration `HUMIO_LOG4J_CONFIGURATION=log4j2-json-stdout.xml`.
Fixed an issue where cloning a dashboard or parser would clone the wrong entity.
Enable Package marketplace (in beta)
Query scheduling now tracks cost spent across queries for each user and tends to select the next task so that users (rather than queries) each get a fair share of available CPU time.
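The user-level fair-share selection described above can be sketched like this (a hypothetical illustration in Python, not LogScale's actual scheduler): each user accumulates the cost of work done on their queries, and the next task is taken from the user with the least accumulated cost.

```python
from collections import defaultdict, deque

class FairShareScheduler:
    """Pick the next query task from the user with the least accumulated cost."""

    def __init__(self):
        self.pending = defaultdict(deque)   # user -> queued tasks
        self.cost = defaultdict(float)      # user -> accumulated CPU cost

    def submit(self, user, task):
        self.pending[user].append(task)

    def next_task(self):
        # Among users with pending work, choose the one with the least cost so far.
        candidates = [u for u, q in self.pending.items() if q]
        if not candidates:
            return None
        user = min(candidates, key=lambda u: self.cost[u])
        return user, self.pending[user].popleft()

    def record_cost(self, user, cpu_time):
        self.cost[user] += cpu_time

sched = FairShareScheduler()
sched.submit("alice", "q1")
sched.submit("alice", "q2")
sched.submit("bob", "q3")
first = sched.next_task()          # alice and bob tie at cost 0
sched.record_cost(first[0], 10.0)  # that user now yields to the other
```

Because cost is tracked per user, a user with many queued queries cannot starve a user with one small query.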
Segment download timeout raised from 120s to 1.0s. Avoids situations where large segments could not be moved around a cluster.
Reduce triggering of auto completion in the query editor.
Fixed an issue where some parts of regexes were not shown in the parser editor.
Improve query cache hit rate by not starting queries locally when the preferred nodes are down, if the local node has just started — as there is a fair chance the preferred nodes will show up shortly too.
Fixed a bug in `parseJson()` which resulted in failed JSON parsing if an object contained an empty key (`""`).
Improve auto completion suggestions.
Fixed an issue causing the secondary storage transfer job to plan more segments for transfer than necessary.
Handle inconsistencies in the global entities file gracefully rather than crashing.
Fixed a bug where merged segments could grow too large if the source events were large.
Major changes (see version 1.19.0 release notes)
Fixed a bug where the fields panel was not scrollable in Safari.
Humio Server 1.20.2 LTS (2021-02-11)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.20.2 | LTS | 2021-02-11 | Cloud | 2022-01-31 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | c8369fd679e4ef4fb1ddad2a632156e1 |
SHA1 | 7dc7214d24c3314fa9e60128106aed4078b2c7c9 |
SHA256 | e1c8188fd7e0dc92c648a675514627c4a9ae0f52658aac748d35a0105347c4ba |
SHA512 | fea474b22bebcbe7c5ad1e142fbf157f495b58726951e8b52820475fdda5eb48503a76bef7e2178ac3aeddcc6087d6bdd66c3ab5c67dbcf721da1ebc4bd671fa |
These notes include entries from the following previous releases: 1.20.0, 1.20.1
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.20.2 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.20.2. In case you need to do a rollback, this can also only happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Fixed in this release
Functions
Other
Fixed a bug where exporting a package with dashboard parameters would not set the correct namespace for a saved query called in a parameter.
Fixed a bug where exporting a package using a saved query with spaces in the name would not export the correct name.
Minor fix to Humio internal JSON logging when using the configuration `HUMIO_LOG4J_CONFIGURATION=log4j2-json-stdout.xml`.
Fixed an issue where cloning a dashboard or parser would clone the wrong entity.
Enable Package marketplace (in beta)
Query scheduling now tracks cost spent across queries for each user and tends to select the next task so that users (rather than queries) each get a fair share of available CPU time.
Segment download timeout raised from 120s to 1.0s. Avoids situations where large segments could not be moved around a cluster.
Reduce triggering of auto completion in the query editor.
Fixed an issue where some parts of regexes were not shown in the parser editor.
Improve query cache hit rate by not starting queries locally when the preferred nodes are down, if the local node has just started — as there is a fair chance the preferred nodes will show up shortly too.
Fixed a bug in `parseJson()` which resulted in failed JSON parsing if an object contained an empty key (`""`).
Fixed an issue causing the secondary storage transfer job to plan more segments for transfer than necessary.
Handle inconsistencies in the global entities file gracefully rather than crashing.
Fixed a bug where merged segments could grow too large if the source events were large.
Major changes (see version 1.19.0 release notes)
Fixed a bug where the fields panel was not scrollable in Safari.
Humio Server 1.20.1 LTS (2021-02-01)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.20.1 | LTS | 2021-02-01 | Cloud | 2022-01-31 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 5f2ccaea9f97572ec047be936d7ff0de |
SHA1 | 5e434cc0df8b437a3974be645e80c3f2c56ec78a |
SHA256 | 74f2fcf3d5d2bc5b1e5830d98a03512395ac7beb6456bf754e69e48e89e1d7cf |
SHA512 | 16a58326069b3dd9e401688627e62e825249654501f63c939d245c6b501bd683c7682a634f9cb0265c17029ac8b7cdf038e57daa27a57478980115bce18c53ce |
These notes include entries from the following previous releases: 1.20.0
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.20.1 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.20.1. In case you need to do a rollback, this can also only happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Fixed in this release
Other
Minor fix to Humio internal JSON logging when using the configuration `HUMIO_LOG4J_CONFIGURATION=log4j2-json-stdout.xml`.
Enable Package marketplace (in beta)
Segment download timeout raised from 120s to 1.0s. Avoids situations where large segments could not be moved around a cluster.
Fixed an issue causing the secondary storage transfer job to plan more segments for transfer than necessary.
Fixed a bug where merged segments could grow too large if the source events were large.
Major changes (see version 1.19.0 release notes)
Humio Server 1.20.0 LTS (2021-01-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.20.0 | LTS | 2021-01-28 | Cloud | 2022-01-31 | No | 1.16.0 | Yes |
JAR Checksum | Value |
---|---|
MD5 | 095aa44f1cbaf9cdffc9753286206248 |
SHA1 | fd4eb8cc488d18e7fedff3af212a9919d399b256 |
SHA256 | 79868d79d893ebb3114208e5e7f64d20ded44b76f4342b3b2d932df6288c927e |
SHA512 | 9b91b51b21654ec1961ee9ba31193ad5511850cdb3623d82403356191535a74febeb51a49fb711438408898fe4222f2522f3c37b33c81c75091b6a415c2f0924 |
Important Information about Upgrading
This release promotes the latest 1.19 release from preview to stable.
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.20.0 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.20.0. In case you need to do a rollback, this can also only happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
This version introduces Humio packages - a way of bundling and sharing assets such as dashboards and parsers. You can create your own packages to keep your Humio assets in Git or create utility packages that can be installed in multiple repositories. All assets can be serialized to YAML files (like what has been possible for dashboards for a while). With tight integration with Humio's CLI humioctl, you can install packages from local disk, a URL, or directly from a GitHub repository. Packages are still in beta, but we encourage you to start creating packages yourself and sharing them with the community. At Humio we are also very interested in talking with package authors about getting your packages on our upcoming marketplace.
Read more about packages on our Packages page.
With the introduction of Humio packages we have created the application Insights Package. The application is a collection of dashboards and saved searches making it possible to monitor and observe a Humio cluster.
The new query editor has a much better integration with Humio's query language. It will give you suggestions as you type, and gives you inline errors if you make a mistake. We will continue to improve the capabilities of the query editor to be aware of fields, saved queries, and other contextual information.
A new function called `test()` has been added for convenience. What used to be done like: `tmp := expression | tmp=true` can now be done using: `test( expression )`. Inside `expression`, field names appearing on the right-hand side of an equality test, such as `field1==field2`, compare the values of the two fields. When comparing using `=` at top level, `field1=field2` compares the value of `field1` against the string `"field2"`. This distinction is a cause of confusion for some users, and using `test()` simplifies that.
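A toy illustration of the distinction (in Python, not LogScale query language): top-level `=` treats the right-hand side as a literal string, while inside `test()` a bare name on the right-hand side is looked up as a field.

```python
def filter_eq_literal(event, field, rhs):
    """Top-level field=rhs: the right-hand side is a literal string."""
    return event.get(field) == rhs

def filter_test_eq(event, lhs_field, rhs_field):
    """Inside test(): lhs_field == rhs_field compares the values of two fields."""
    return event.get(lhs_field) == event.get(rhs_field)

event = {"field1": "x", "field2": "x"}
filter_eq_literal(event, "field1", "field2")  # False: "x" != "field2"
filter_test_eq(event, "field1", "field2")     # True: both fields hold "x"
```

The event here is a plain dict standing in for a parsed log event; the function names are hypothetical.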
We have made small changes to how Humio logs internally. We did this to better support the new humio/insights. We have tried to keep the changes as small and compatible as possible, but we have made some changes that can break existing searches in the humio repository (or other repositories receiving Humio logs). We made these changes as we think they are important in order to improve things moving forward. One of the benefits is the new humio/insights. Read more about the details in LogScale Internal Logging.
To see more details, go through the individual 1.19.x release notes.
Fixed in this release
Other
Enable Package marketplace (in beta)
Segment download timeout raised from 120s to 1.0s. Avoids situations where large segments could not be moved around a cluster.
Fixed an issue causing the secondary storage transfer job to plan more segments for transfer than necessary.
Fixed a bug where merged segments could grow too large if the source events were large.
Major changes (see version 1.19.0 release notes)
Humio Server 1.19.2 GA (2021-01-25)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.19.2 | GA | 2021-01-25 | Cloud | 2022-01-31 | No | 1.16.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 01e72a42b56685f50111466e91475e9b |
SHA1 | 7d09c09579a2c00cb2cc0a38c421969861f9c6a0 |
SHA256 | 83e7efdf3bd38eb806b30c81603e4eefe5ee17f88fa9c6440fd56d11c86d2869 |
SHA512 | 3ccc62d52899ecad9fd7626bb15f90b05bcabaf761d997c4aa63849f93702c6caf32d1f6ed191c981054c1df853ec6fbcdbd07fba0e76e289d4f71d39b542d9b |
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.19.2 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded to at least 1.16.0 before trying to upgrade to 1.19.2. In case you need to do a rollback, this can also only happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Fixed in this release
Other
Fixed an issue for on-prem users not on a multitenant setup by reverting a metric change introduced in 1.18.0, where JMX and SLF4J included an OrgId in all metrics for repositories.
Packages
Fixed automatic installation of Humio insights package to the humio repository.
Humio Server 1.19.1 GA (2021-01-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.19.1 | GA | 2021-01-19 | Cloud | 2022-01-31 | No | 1.16.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 7ba0f9e286f6933cb19b49be72df8078 |
SHA1 | 489be0b931419cd128d5a03e2b2fbce819095257 |
SHA256 | 2f8a22da0e6f85c4cd4479367635ad9c0a0586f2fc69df55533e349e3ea29bab |
SHA512 | b306f424ae1425145a382be5847e46f6f56d91777202ef94117c755be5f713dab787ae6da2a4acd0091d04d07d75b3b4154185eab5327f8db558071d1b4d301f |
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.19.1 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded to at least 1.16.0 before trying to upgrade to 1.19.1. In case you need to do a rollback, this can also only happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Fixed in this release
Functions
Fixed bug where the `format()` function produced wrong output for some floating-point numbers.
Other
Fixed an issue: do not delete a datasource before its segments have also been deleted from bucket storage, if present there.
Update dependencies with known vulnerabilities
Do not retry a query when getting a HTTP .0 error
Do not cache cancelled queries.
Packages
Fixed bug in a saved query in the Humio insights package.
Humio Server 1.19.0 GA (2021-01-14)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.19.0 | GA | 2021-01-14 | Cloud | 2022-01-31 | No | 1.16.0 | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 63d03b5a7d362d1d9a5dfcb5a7d6fcea |
SHA1 | 7ed7776a690ff76afd4ff77ac585a28ef7ee1b2c |
SHA256 | 532dd54bc612b6f771a142899277430469a85c3a431a7824105c1ab69d21974e |
SHA512 | 9edc286d2409cdf36496cc9a7c69ab525ac3207006f1ce3aa194bd17e59e4601f676dbdec0c71cfecd197364b45d83398e5566cebbed82e0a10e1d19ae2e91eb |
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.19.0 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.19.0. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Deprecation
Items that have been deprecated and may be removed in a future release.
New config `MAXMIND_IP_LOCATION_EDITION_ID` for selecting the MaxMind edition of the IP location database. Deprecates `MAXMIND_EDITION_ID`, but the old config will continue to work.
New features and improvements
Other
Stateless ingest-only nodes: a node that the rest of the cluster does not know exists, but is capable of ingesting events into the ingest queue. Enable using `NODE_ROLES=ingestonly`.
Custom ingest tokens, making it possible for root users to create ingest tokens with a custom string.
Fixed in this release
Configuration
New config `AUTO_UPDATE_MAXMIND` for enabling/disabling updating of all MaxMind databases. Deprecates `AUTO_UPDATE_IP_LOCATION_DB`, but the old config will continue to work.
New config `QUERY_QUOTA_EXCEEDED_PENALTY` with a default value of 50. When set to a value >= 1, this throttles queries from users that are over their quota by this factor rather than stopping their queries. Set to 0 to disable and revert to stopping queries.
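One plausible reading of how `QUERY_QUOTA_EXCEEDED_PENALTY` works (a sketch under that assumption, not the actual implementation): when a user is over quota, each unit of real cost is accounted at `penalty` times its value, so their queries are scheduled correspondingly less often instead of being stopped.

```python
def charged_cost(real_cost, used, quota, penalty=50):
    """Account real_cost against a user's share.

    Over-quota users are charged penalty times the real cost, which throttles
    them in a cost-based scheduler. penalty == 0 means throttling is disabled
    and callers would stop the query instead (hence the error here).
    """
    if penalty == 0:
        raise RuntimeError("throttling disabled: query should be stopped")
    factor = penalty if used > quota else 1
    return real_cost * factor

charged_cost(2.0, used=80, quota=100)    # within quota -> 2.0
charged_cost(2.0, used=120, quota=100)   # over quota, default penalty 50 -> 100.0
```

The function name and signature are hypothetical; only the default of 50 and the ">= 1 throttles / 0 disables" behavior come from the entry above.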
Functions
New function `hash()` for computing hashes of fields. See the `hash()` reference page.
Fixed an issue with the `cidr()` function that would make some IPv4 subnets accept IPv6 addresses and some strings that were not valid IP addresses.
Make the query functions `window()` and `series()` be enabled by default. They can be disabled by setting the configuration options `WINDOW_ENABLED` and `SERIES_ENABLED` to `false`, respectively.
Added a new function for retrieving the ASN number for a given IP address, see the `asn()` reference page.
Fixed an issue causing queries using `kvParse()` to be executed incorrectly in certain circumstances when `kvParse()` assigned fields starting with a non-alphanumeric character.
Fixed an issue where unit-conversion (by timechart) did not take effect through `groupBy()` and `window()`.
Fixed an issue causing queries using `kvParse()` to filter out too much in specific circumstances when filtering on a field assigned before `kvParse()`.
Other
New filter function `test()`.
Removed config `IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES`. Queries on dashboards now have the same life cycle as other queries.
API Changes (Non-Documented API): `getFileContent` has been moved to a field on the SearchDomain type.
The built-in `json-for-notifier` parser used by the Humio Repository action (formerly notifier) is deprecated and will be removed in a later release. It has been replaced by an identical parser with the name `json-for-action`, see json-for-action.
Notifiers have been renamed to Actions throughout the UI and in log statements. The REST APIs have not been changed and all message templates can still be used.
New feature "Event forwarding" making it possible to forward events during ingest out of Humio to a Kafka server. See Event Forwarding documentation. Currently only available for on-prem customers.
When a host dies and Humio reassigns digest, it will warn if a fallback host is picked that is in the same zone as existing replicas. Eliminate warning if falling back to a host in the null zone.
Renamed the `LOG4J_CONFIGURATION` environment variable to `HUMIO_LOG4J_CONFIGURATION`. See Configuration Settings.
Custom-made saved queries, alerts and dashboards in the humio repository searching for events of the kinds metrics, requests or nonsensitive may need to be modified. This is described in more detail in LogScale Internal Logging.
Reduced the number of writes to global on restart, due to merge targets not being properly reused.
Raised the limit for note widget text length to .00
API Changes (Non-Documented API): Queries and Mutations for Parser now expect an `id` field in place of a `name` field when fetching and updating parsers.
Improve handling of broken local cache files.
The Humio Repository action (formerly notifier) now replaces a prefix `#` character in field names with `@tag.`, so that `#source` becomes `@tag.source`. This is done to make them searchable in Humio. You can change the name by creating a custom parser. See Action Type: Falcon LogScale Repository.
Fixed bug where repeating queries would not validate in alerts.
Updated the permission checks when polling queries. This will result in dashboard links created by users who are either deleted or have lost permissions to the view becoming unauthorized. To list all dashboard links, run this GraphQL query as root:
`query { searchDomains { dashboards { readOnlyTokens { createdBy name token } } } }`
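A sketch of issuing that query over HTTP — the `/graphql` path and Bearer-token header follow the usual Humio conventions, while the host and token here are placeholders; the request is built without being sent:

```python
import json
import urllib.request

QUERY = "query { searchDomains { dashboards { readOnlyTokens { createdBy name token } } } }"

def build_graphql_request(base_url, api_token, query):
    """Build (but do not send) a POST request for Humio's GraphQL endpoint."""
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/graphql",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_token,
        },
        method="POST",
    )

req = build_graphql_request("https://humio.example.com", "ROOT_API_TOKEN", QUERY)
# Sending it would be: urllib.request.urlopen(req)
```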
Fixed a rare issue where the digest coordinator would assign digest to fewer hosts than configured.
The function `parseCEF()` now deals with extension fields with labels, i.e. `cs1=Value cs1Label=Key` becomes `cef.label.Key=Value`.
In the GraphQL API, the value `ChangeAlertsAndNotifiers` on the `Permission` enum has been deprecated and will be removed in a later release. It has been replaced by the `ChangeTriggersAndActions` value. The same is true for the `ViewAction` enum. On the `ViewPermissionsType` type, the `administerAlerts` field has been deprecated and will be removed in a later release. It has been replaced by the `administerTriggersAndActions` field.
Fixed an issue where segment merge occasionally reported BrokenSegmentException when merging, while the segments were not broken.
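The `parseCEF()` label-folding rule can be sketched like this (a hypothetical re-implementation of the rule as stated, not LogScale's code; the `cef.` prefix for unlabeled fields is an assumption):

```python
def fold_cef_labels(extensions):
    """Fold CEF custom-field pairs: cs1=Value plus cs1Label=Key -> cef.label.Key=Value."""
    # Map each base key (e.g. "cs1") to its label value (e.g. "Key").
    labels = {k[:-len("Label")]: v for k, v in extensions.items() if k.endswith("Label")}
    folded = {}
    for key, value in extensions.items():
        if key.endswith("Label"):
            continue  # the label itself is consumed, not emitted
        if key in labels:
            folded["cef.label." + labels[key]] = value
        else:
            folded["cef." + key] = value
    return folded

fold_cef_labels({"cs1": "Value", "cs1Label": "Key"})
# -> {"cef.label.Key": "Value"}
```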
Introduction of the new log file `humio-requests.log`. Also, the log format for the files `humio-metrics.log` and `humio-nonsensitive.log` has changed as described above. See Log LogScale to LogScale.
Cluster management stats now shows segments as underreplicated if they are replicated to enough hosts, but are not present on all configured hosts.
`unit` on timechart (and bucket) now also works when the function within uses nesting and anonymous pipelines.
Fixed a bug where fullscreen mode could end up blank.
Made cluster nodes log their own version as well as the versions of all other nodes. This makes it easier to tell which versions are running in the cluster.
API Changes (Non-Documented API): Getting Alert by ID has been moved to a field on the SearchDomain type.
Improved app loading logic.
The transfer job will delete primary copies shortly after transferring the segments to secondary storage. The copies would previously only be deleted once a full bulk had been moved.
New ingest endpoint `/api/v1/ingest/raw` for ingesting singular webcalls as events. See the Ingest API - Raw Data documentation.
Fixed an issue where canceling queries could produce a spurious error log.
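A sketch of calling the raw ingest endpoint — the path comes from the entry above; the host and ingest token are placeholders, the `text/plain` content type is an assumption, and the request is built without being sent:

```python
import urllib.request

def build_raw_ingest_request(base_url, ingest_token, payload):
    """Build a POST to /api/v1/ingest/raw; the body is the raw event text."""
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/api/v1/ingest/raw",
        data=payload.encode("utf-8"),
        headers={
            "Content-Type": "text/plain",   # assumed content type
            "Authorization": "Bearer " + ingest_token,
        },
        method="POST",
    )

req = build_raw_ingest_request(
    "https://humio.example.com", "INGEST_TOKEN", "2021-01-28T12:00:00Z service started"
)
# Sending it would be: urllib.request.urlopen(req)
```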
Raised the parser test character length to .00.
Fixed crash in CleanupDatasourceFilesJob when examining a file size fails due to that file being deleted concurrently.
Fixed timeout issue in S3 Archiving
Fixed an issue causing Humio to retain deleted mini-segments in global for longer than expected.
The configuration option `HTTP_PROXY_ALLOW_NOTIFIERS_NOT_USE` has been renamed to `HTTP_PROXY_ALLOW_ACTIONS_NOT_USE`. The old name will continue to work.
In the GraphQL API, on the `Alert` type, the `notifiers` field has been deprecated and will be removed in a later release. It has been replaced by the `actions` field.
The names of the metadata fields added by the Humio Repository action (formerly notifier) have been changed to accommodate that it can now also be used from scheduled searches. See Action Type: Falcon LogScale Repository.
The configuration option `IP_FILTER_NOTIFIERS` has been renamed to `IP_FILTER_ACTIONS`. The old name will continue to work.
New feature "Scheduled Searches", making it possible to run queries on a schedule and trigger actions (formerly notifiers) upon query results. See Scheduled Searches.
No longer overwrite the humio parser in the humio repository on startup.
Fixed an issue with updating the user profile, where saving failed in some situations.
Fixed an issue that could cause node id assignment to fail when running on ephemeral disks and using ZooKeeper for node id assignment. Nodes in this configuration will now try to pick a new id if their old id has been acquired by another node.
New validation when creating an ingest token using the API that the parser, if specified, actually exists in the repository.
For ingest using a URL with a repository name in it, Humio now fails ingest if the repository in the URL does not match the repository of the ingest token. Previously, it would just use the repository of the ingest token.
The built-in `bro-json` parser is deprecated and will be removed in a later release. It has been replaced by an identical parser with the name `zeek-json`, see zeek-json.
Added config option for the Auth0-based sign-on method: `AUTH_ALLOW_SIGNUP`, defaults to true. The config is forwarded to the Auth0 configuration for the lock widget setting: `allowSignUp`.
Fixed an issue causing the secondary storage transfer job to select and queue too many segments for transfer at once. The job will now stop and recalculate the bulk to transfer periodically.
Kafka client inside Humio has been bumped from 2.4.1 to 2.6.0.
Fixed an issue where the filter and groupBy buttons on the search page would not restart the search automatically
Fixed a rare issue where a node that was previously assigned digest could write a segment to global, even though it was no longer assigned the associated partition.
Fixed an issue where the segment rewrite job handling event deletion might rewrite segments sooner than configured.
Add an error message to the event if the user is trying to redirect it to another repo using #repo, and the target repo is invalid.
Fixed logic for when the organization owner panel should be shown in the User's Danger zone.
Upgraded Log4j2 from 2.13.3 to 2.14.0.
Added timeout for TCP ingest listeners. By default, the connection is closed if no data is received for 5 minutes. This can be changed by setting `TCP_INGEST_MAX_TIMEOUT_SECONDS`. See Ingest Listeners.
Added mutation to update the runAsUser for a read-only dashboard token.
Humio no longer deletes an existing humio-search-all view if the `CREATE_HUMIO_SEARCH_ALL` environment variable is false. The view instead becomes deletable via the admin page.
Reduce contention on the query scheduler input queue. It was previously possible for large queries to prevent each other from starting, leading to timeouts.
Humio will only allow using ZooKeeper for node id assignment (`ZOOKEEPER_URL_FOR_NODE_UUID`) when configured for ephemeral disks (`USING_EPHEMERAL_DISKS`). When using persistent disks, there is no need for the extra complexity added by ZooKeeper.
Packages
Introduced the Humio insights package, which is installed by default on startup in the humio repository.
Improvement
UI Changes
The new query editor has a much better integration with Humio's query language. It will give you suggestions as you type, and gives you inline errors if you make a mistake. We will continue to improve the capabilities of the query editor to be aware of fields, saved queries, and other contextual information.
Functions
A new function called `test()` has been added for convenience. What used to be done like: `tmp := expression | tmp=true` can now be done using: `test( expression )`. Inside `expression`, field names appearing on the right-hand side of an equality test, such as `field1==field2`, compare the values of the two fields. When comparing using `=` at top level, `field1=field2` compares the value of `field1` against the string `"field2"`. This distinction is a cause of confusion for some users, and using `test()` simplifies that.
Other
With the introduction of Humio packages we have created the Insights Package. The application is a collection of dashboards and saved searches making it possible to monitor and observe a Humio cluster.
We have made small changes to how Humio logs internally. We did this to better support the new humio/insights. We have tried to keep the changes as small and compatible as possible, but we have made some changes that can break existing searches in the humio repository (or other repositories receiving Humio logs). We made these changes as we think they are important in order to improve things moving forward.
Read more about the details of LogScale Internal Logging.
Packages
This version introduces Humio packages - a way of bundling and sharing assets such as dashboards and parsers. You can create your own packages to keep your Humio assets in Git or create utility packages that can be installed in multiple repositories. All assets can be serialized to YAML files (like what has been possible for dashboards for a while). With tight integration with Humio's CLI humioctl, you can install packages from local disk, a URL, or directly from a GitHub repository. Packages are still in beta, but we encourage you to start creating packages yourself and sharing them with the community. At Humio we are also very interested in talking with package authors about getting your packages on our upcoming marketplace.
Read more about Packages.
Humio Server 1.18.4 LTS (2021-01-25)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.18.4 | LTS | 2021-01-25 | Cloud | 2021-11-30 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | d6d2999c1640e0b922b79163c9ec83b5 |
SHA1 | a0452151aaa9437a2d4f772f23d3f817080d015c |
SHA256 | b786cb7edf1d2b8729a44ff73ce8239688aefd2431940d283a15537b3be01b37 |
SHA512 | cfce44d51dce08213bfc26f5bd74cf3596185bc15e87518f63cf9b601f0e8118923d61afa17770827cd1fae6547a5d11decdacc9cffb57f95f0f3b351151632d |
These notes include entries from the following previous releases: 1.18.0, 1.18.1, 1.18.2, 1.18.3
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.18.4 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.18.4. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Fixed in this release
Automation and Alerts
Fixes a bug where some valid repeating queries would not validate in alerts.
Other
Changed behaviour when the config `ZONE` is set to the empty string. It is now considered the same as omitting `ZONE`.
Major changes (see 1.17.0 release notes)
Fixed a bug where TCP listener threads could take all resources from HTTP threads
Do not retry a query when getting a HTTP .0 error
Update dependencies with known vulnerabilities
Fixes a bug that would allow users with read access to be able to delete a file (#10133)
Improve handling of a node being missing from the cluster for a long time by letting other nodes handle the parts of the query that node would normally do.
Add non-sensitive logging that lists the versions of Humio running in the cluster. These logs can be found by searching the Humio debug log for "cluster_versions".
Improve performance of S3 archiving when many repositories have the feature enabled.
Resolves a problem where, when starting a query spanning very large data sets, a time-out could prevent the browser from getting responses initially.
Adds a new configuration option for Auth0: `AUTH_ALLOW_SIGNUP`. Default value is true.
Fixes a bug where `top([a,b], sum=f)` ignored events where f was not a positive integer. Now it ignores negative and non-numerical input but rounds decimal numbers to integer value.
Do not cache cancelled queries.
Removed config `IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES`. Queries on dashboards now have the same life cycle as other queries.
Improves handling when many transfers to secondary storage are pending.
Fixes a bug where the `to` parameter to unit:convert would cause internal server errors instead of validation errors.
Add GraphQL mutation to update the runAsUser for a read-only dashboard token.
Fixes a bug where queries with `@timestamp=x`, where x was a timestamp within the current search interval, could fail.
Fixes a bug where a query would not start automatically when requesting to filter or group by a value.
Fixes a bug where the merge of mini segments could fail during sampling of input for compression.
Fixes a bug where the permissions check on editing a connection from a view to a repository allowed altering the search prefix of connections other than the one the user currently was allowed to edit.
Fixed an issue for on-prem users not on a multitenant setup by reverting a metric change introduced in 1.18.0, where JMX and SLF4J included an OrgId in all metrics for repositories.
Fixed bug where the `format()` function produced wrong output for some floating-point numbers.
Increase number of vCPUs used when parsing TCP ingest to twice the number of the 1.18.0 build.
Fixed a bug to reduce contention on the query input queue.
Only install the default Humio parser into the Humio view if it is missing, no longer overwriting local changes.
Fixed bug where Humio could end up in a corrupted state, needing manual intervention before working again.
Humio Server 1.18.3 LTS (2021-01-20)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.18.3 | LTS | 2021-01-20 | Cloud | 2021-11-30 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 2db608cedd7603e6cd595c58aa9fad5d |
SHA1 | ea622d5f1a706daa3c6345bc1237918836498f9f |
SHA256 | 00f4f144647918d42cf85cb26ece7c0529a76af02d6a56dd483ada74b8028816 |
SHA512 | 2335597bf91028e1b476178039e15b47645cca2f8dfdf445514e279133a042ed44a5450c9456710a9843279d431b37fff85ba4c41abab56780d8923e81e6dcc9 |
These notes include entries from the following previous releases: 1.18.0, 1.18.1, 1.18.2
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.18.3 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.18.3. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Fixed in this release
Automation and Alerts
Fixes a bug where some valid repeating queries would not validate in alerts.
Other
Changed behaviour when the config `ZONE` is set to the empty string. It is now considered the same as omitting `ZONE`.
Major changes (see 1.17.0 release notes)
Fixed a bug where TCP listener threads could take all resources from HTTP threads
Do not retry a query when getting an HTTP .0 error
Update dependencies with known vulnerabilities
Fixes a bug that would allow users with read access to delete a file (#10133)
Improve handling of a node being missing from the cluster for a long time by letting other nodes handle the parts of the query that node would normally do.
Add non-sensitive logging that lists the versions of Humio running in the cluster. These logs can be found by searching the Humio debug log for "cluster_versions".
Improve performance of S3 archiving when many repositories have the feature enabled.
Resolves a problem where, when starting a query spanning very large data sets, a time-out could prevent the browser from getting responses initially.
Adds a new configuration option for auth0: `AUTH_ALLOW_SIGNUP`. Default value is `true`.
Fixes a bug where `top([a,b], sum=f)` ignored events where f was not a positive integer. Now it ignores negative and non-numerical input but rounds decimal numbers to integer value.
Do not cache cancelled queries.
Removed config `IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES`. Queries on dashboards now have the same life cycle as other queries.
Improves handling when many transfers to secondary storage are pending.
Fixes a bug where the `to` parameter to unit:convert would cause internal server errors instead of validation errors.
Add GraphQL mutation to update the runAsUser for a read only dashboard token.
Fixes a bug where queries with `@timestamp=x`, where x was a timestamp within the current search interval, could fail.
Fixes a bug where a query would not start automatically when requesting to filter or group by a value.
Fixes a bug where the merge of mini segments could fail during sampling of input for compression.
Fixes a bug where the permissions check on editing a connection from a view to a repository allowed altering the search prefix of connections other than the one the user currently was allowed to edit.
Fixed bug where the `format()` function produced wrong output for some floating-point numbers.
Increased the number of vCPUs used when parsing TCP ingest to twice the number of the 1.18.0 build.
Fixed a bug to reduce contention on the query input queue.
Only install the default Humio parser to the Humio view if it is missing, no longer overwriting local changes.
Fixed bug where Humio could end up in a corrupted state, needing manual intervention before working again.
Humio Server 1.18.2 LTS (2021-01-08)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.18.2 | LTS | 2021-01-08 | Cloud | 2021-11-30 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 325cfefc1a6eef3b4f9d1e6beeb5a53a |
SHA1 | a8af4a4a70c93cb6ff945c999bd0e6ea579d1a2a |
SHA256 | f2201aab5190ec92b540bc8be9b6ea59c2e840fece63b2d87891aff9d7fc284a |
SHA512 | 8a1e56622041d45db967e6aa003da14fb78143fc63d3dfb936e318d385761f8a10a57823040d68b8f200048fc28006df1d281e74baad1bcdef080334a8634af5 |
These notes include entries from the following previous releases: 1.18.0, 1.18.1
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.18.2 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.18.2. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Fixed in this release
Automation and Alerts
Fixes a bug where some valid repeating queries would not validate in alerts.
Other
Changed behaviour when the config `ZONE` is set to the empty string. It is now considered the same as omitting `ZONE`.
Major changes (see 1.17.0 release notes)
Fixed a bug where TCP listener threads could take all resources from HTTP threads
Fixes a bug that would allow users with read access to delete a file (#10133)
Improve handling of a node being missing from the cluster for a long time by letting other nodes handle the parts of the query that node would normally do.
Add non-sensitive logging that lists the versions of Humio running in the cluster. These logs can be found by searching the Humio debug log for "cluster_versions".
Improve performance of S3 archiving when many repositories have the feature enabled.
Resolves a problem where, when starting a query spanning very large data sets, a time-out could prevent the browser from getting responses initially.
Adds a new configuration option for auth0: `AUTH_ALLOW_SIGNUP`. Default value is `true`.
Fixes a bug where `top([a,b], sum=f)` ignored events where f was not a positive integer. Now it ignores negative and non-numerical input but rounds decimal numbers to integer value.
Removed config `IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES`. Queries on dashboards now have the same life cycle as other queries.
Improves handling when many transfers to secondary storage are pending.
Fixes a bug where the `to` parameter to unit:convert would cause internal server errors instead of validation errors.
Add GraphQL mutation to update the runAsUser for a read only dashboard token.
Fixes a bug where queries with `@timestamp=x`, where x was a timestamp within the current search interval, could fail.
Fixes a bug where a query would not start automatically when requesting to filter or group by a value.
Fixes a bug where the merge of mini segments could fail during sampling of input for compression.
Fixes a bug where the permissions check on editing a connection from a view to a repository allowed altering the search prefix of connections other than the one the user currently was allowed to edit.
Increased the number of vCPUs used when parsing TCP ingest to twice the number of the 1.18.0 build.
Fixed a bug to reduce contention on the query input queue.
Only install the default Humio parser to the Humio view if it is missing, no longer overwriting local changes.
Fixed bug where Humio could end up in a corrupted state, needing manual intervention before working again.
Humio Server 1.18.1 LTS (2020-12-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.18.1 | LTS | 2020-12-17 | Cloud | 2021-11-30 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 23a1b98323ea040221d9314e181f5048 |
SHA1 | 2364af27ee4d0e12ff3732252e31aceaa12534a9 |
SHA256 | 9f83caa9a0aa483087c28e083fb10e2d418c9f66c109abe0b7464dd312d3e873 |
SHA512 | 682242210795e8b72f82589a01a3933072c7b744a8abdc54adca3964d08c32f02504e86abd44639278cf104d79587f8b0d0b89b807fc3ba321e04c3644c6dfab |
These notes include entries from the following previous releases: 1.18.0
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.18.1 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.18.1. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Fixed in this release
Automation and Alerts
Fixes a bug where some valid repeating queries would not validate in alerts.
Other
Changed behaviour when the config `ZONE` is set to the empty string. It is now considered the same as omitting `ZONE`.
Major changes (see 1.17.0 release notes)
Fixed a bug where TCP listener threads could take all resources from HTTP threads
Fixes a bug that would allow users with read access to delete a file (#10133)
Improve handling of a node being missing from the cluster for a long time by letting other nodes handle the parts of the query that node would normally do.
Add non-sensitive logging that lists the versions of Humio running in the cluster. These logs can be found by searching the Humio debug log for "cluster_versions".
Improve performance of S3 archiving when many repositories have the feature enabled.
Fixes a bug where `top([a,b], sum=f)` ignored events where f was not a positive integer. Now it ignores negative and non-numerical input but rounds decimal numbers to integer value.
Removed config `IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES`. Queries on dashboards now have the same life cycle as other queries.
Fixes a bug where the `to` parameter to unit:convert would cause internal server errors instead of validation errors.
Add GraphQL mutation to update the runAsUser for a read only dashboard token.
Fixes a bug where queries with `@timestamp=x`, where x was a timestamp within the current search interval, could fail.
Fixes a bug where a query would not start automatically when requesting to filter or group by a value.
Fixes a bug where the merge of mini segments could fail during sampling of input for compression.
Fixes a bug where the permissions check on editing a connection from a view to a repository allowed altering the search prefix of connections other than the one the user currently was allowed to edit.
Increased the number of vCPUs used when parsing TCP ingest to twice the number of the 1.18.0 build.
Only install the default Humio parser to the Humio view if it is missing, no longer overwriting local changes.
Humio Server 1.18.0 LTS (2020-11-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.18.0 | LTS | 2020-11-26 | Cloud | 2021-11-30 | No | 1.16.0 | Yes |
JAR Checksum | Value |
---|---|
MD5 | 0525cf2284207efde5655fd9174c411f |
SHA1 | 9d28fc3e60a033c27746584ac10fd5abedb2af69 |
SHA256 | 5f2a5cfa60bc13c859caa2e07a1dbd3a907d15483cc7d5829d02646f8350d61c |
SHA512 | be22b5126137fa75d0d5ac5a870c716fadee85155bb58822d710397e633de9acbb060c7500f46031a3996e826c5d3b3ee5c0b0c1d572b3944f1be5ebc05cffca |
Important Information about Upgrading
This release promotes the latest 1.17 release from preview to stable.
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.18.0 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.18.0. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Humio can now run repeating queries using the `beta:repeating()` function. These are live queries that are implemented by repeatedly making a query. This allows using functions in alerts and dashboards that typically do not work in live queries, such as `selfJoin()` or `selfJoinFilter()`. See the `beta:repeating()` reference page for more information.
In order to prevent alert notifiers being used to probe services on the internal network (e.g. ZooKeeper or the AWS metadata service), Humio now has an IP filter on alert notifiers. The default is to block access to all link-local addresses and any addresses on the internal network; however, you can opt in to the old behavior by setting the configuration option `IP_FILTER_NOTIFIERS` to `allow all`. See the IP Filter documentation.
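The opt-in described above is a single configuration setting. As a minimal sketch, assuming configuration is supplied via an environment file (the variable name and value come from the note above; the file layout is deployment-specific):

```shell
# Hypothetical Humio environment file (location varies per deployment).
# Default behavior blocks alert notifiers from reaching link-local and
# internal-network addresses; this line restores the pre-1.17 behavior.
IP_FILTER_NOTIFIERS="allow all"
```

Leaving the option unset keeps the safer default filter in place.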
New experimental query function series()
A new experimental query function called `series()` has been added. It needs to be explicitly enabled on the cluster using the configuration option `SERIES_ENABLED=true`.
The function `series()` improves upon `session()` and `collect()` for grouping events into transactions. What used to be done with:
`groupby(id, function=session(function=collect([fields, ...])))`
can now be done using:
`groupby(id, function=series([fields, ...]))`
See the `series()` reference page for more details.
This new feature stores a copy of live search results to the local disk in the server nodes, and reuses the relevant parts of that cached result when an identical live search is later started. Caching is controlled with the config option `QUERY_CACHE_MIN_COST`, which has a default value of .0. To disable caching, set the config option to a very high number, such as 9223372036854775807 (max long value).
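The disable recipe above can be sketched as an environment-file entry (the variable name and value come from the paragraph above; treating it as a shell-style environment file is an assumption):

```shell
# Hypothetical Humio environment file entry: with the minimum cost set
# to the max long value, no live search is ever considered expensive
# enough to cache, which effectively turns the query cache off.
QUERY_CACHE_MIN_COST=9223372036854775807
```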
To see more details, go through the individual 1.17.x release notes (links in the changelog).
Fixed in this release
Other
Changed behaviour when the config `ZONE` is set to the empty string. It is now considered the same as omitting `ZONE`.
Major changes (see 1.17.0 release notes)
Fixed a bug where TCP listener threads could take all resources from HTTP threads
Removed config `IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES`. Queries on dashboards now have the same life cycle as other queries.
Humio Server 1.17.0 GA (2020-11-18)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.17.0 | GA | 2020-11-18 | Cloud | 2021-11-30 | No | 1.16.0 | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 9c47009810132eb67177966d10471544 |
SHA1 | 341a2607265420b1a2d461ad111c88aca06419ed |
SHA256 | 59001d9a738930f8774c11864ce3c36b3b4b4003f8cad0d4b36af13275ceefa9 |
SHA512 | ee8e2837a8fba2b28aab8d18b16269f452f19a5c80ad9826ecbeb73eaa062236acdf2191cc448e60d81390f1a2881374637adce7603ebf9f0861ca6246f6c82f |
Important Information about Upgrading
Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.17.0 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.17.0. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.
Humio can now run repeating queries using the `beta:repeating()` function. These are live queries that are implemented by repeatedly making a query. This allows using functions in alerts and dashboards that typically do not work in live queries, such as `selfJoin()` or `selfJoinFilter()`. See the `beta:repeating()` reference page for more information.
In order to prevent alert notifiers being used to probe services on the internal network (e.g. ZooKeeper or the AWS metadata service), Humio now has an IP filter on alert notifiers. The default is to block access to all link-local addresses and any addresses on the internal network; however, you can opt in to the old behavior by setting the configuration option `IP_FILTER_NOTIFIERS` to `allow all`. See the IP Filter documentation.
A new experimental query function called `series()` has been added. It needs to be explicitly enabled on the cluster using the config option `SERIES_ENABLED` set to `true`.
The function `series()` improves upon `session()` and `collect()` for grouping events into transactions. What used to be executed with:
`groupby(id, function=session(function=collect([fields, ...])))`
can now be executed using:
`groupby(id, function=series([fields, ...]))`
See the `series()` reference page for more details.
This new feature stores a copy of live search results to the local disk in the server nodes, and reuses the relevant parts of that cached result when an identical live search is later started. Caching is controlled with the config option `QUERY_CACHE_MIN_COST`, which has a default value of .0. To disable caching, set the config option to a very high number, such as 9223372036854775807 (max long value).
New features and improvements
Functions
New query function parameter `removePrefixes` for `parseJson()`, see the `parseJson()` reference page.
New query function `concatArray()`, see the `concatArray()` reference page.
Fixed in this release
UI Changes
Setting the default query for a view in the UI has been moved from "Save as Query" to the View's "Settings" tab.
Automation and Alerts
The notifier list is sorted when selecting notifiers for an alert.
Configuration
New configuration option `ALERT_DESPITE_WARNINGS` makes it possible to trigger alerts even when warnings occur.
New configuration option `IP_FILTER_NOTIFIERS` to set up IP filters for Alert Notifications, see the IP Filter reference page.
New configuration option `DEFAULT_MAX_NUMBER_OF_GLOBALDATA_DUMPS_TO_KEEP`.
New configuration option `ENABLE_ALERTS` makes it possible to disable alerts from running (enabled by default).
Functions
New experimental query function, see the `beta:repeating()` reference page.
Fixes a bug causing the sub-queries of `join()` etc. to not see events with an @ingesttimestamp occurring later than the search time interval.
New experimental query function `window()`, enabled by configuration option `WINDOW_ENABLED=true`, see the `window()` reference page.
Fixes a bug causing `join()` to not work after an aggregating function.
Fixes a bug where the `join()` function in some circumstances would fetch subquery results from other cluster nodes more than once.
Fixes a bug causing `sort()`, `head()`, `tail()` to work incorrectly after other aggregating functions.
New experimental query function `series()`, enabled by configuration option `SERIES_ENABLED=true`, see the `series()` reference page.
New query function used to parse events which are formatted according to the Common Event Format (CEF), see the `parseCEF()` documentation page.
Other
Reduce the max fetch size for Kafka requests, as the previous size would sometimes lead to request timeouts.
API Changes (Non-Documented API): Saved Query REST API has been replaced by GraphQL.
Fixes the issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if `KAFKA_MANAGED_BY_HUMIO` was true.
Fixes an issue causing Humio to fail to upload files to bucket storage in rare cases.
Crash the node if an exception occurs while reading from the global Kafka topic, rather than trying to recover.
API Changes (Non-Documented API): View Settings REST API has been replaced by GraphQL.
The Humio-search-all view will no longer be removed if `CREATE_HUMIO_SEARCH_ALL` is set to false. The view will instead become possible to delete manually via the admin UI.
Refuse to boot if the global topic in Kafka does not contain the expected starting offset.
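The opt-out for the auto-created view is a boolean configuration option; a sketch, assuming a shell-style environment file (the variable name is from the entry above, the file layout is an assumption):

```shell
# Hypothetical Humio environment file entry: stop auto-creating the
# Humio-search-all view; an existing view can then be deleted manually
# via the admin UI instead of being removed automatically.
CREATE_HUMIO_SEARCH_ALL=false
```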
Periodically release object pools used by mapper pipeline, to avoid a possible source of memory leaks.
Tweaked location of diagnostics regarding missing function arguments.
Fixes an issue where Humio might try to get admin access to Kafka when `KAFKA_MANAGED_BY_HUMIO` was false.
It is again possible to override a built-in parser in a repository by creating a parser with the same name.
Fix negating join expressions.
Changed default TLS ciphers and protocols accepted by Humio, see TLS.
Fix several cases where Humio might attempt to write a message to Kafka larger than what Kafka will allow.
Fixes the case where datasources receiving data might not be marked idle, causing Humio to retain too much ingest data in Kafka.
Fixes an issue which caused free-text-search to not work correctly for large (>64KB) events.
Switch from JDK to BouncyCastle provider for AES decrypt to reduce memory usage.
Allow running Humio on JDK-14 and JDK-15 to allow testing these new builds.
Rename a few scheduler threads so they reflect whether they're associated with streaming queries ("streaming-scheduler") or not ("normal-scheduler")
The `{events_html}` notifier template will now respect the field order from the query.
Improve logic attempting to ensure other live nodes can act as substitutes in case the preferred digest nodes are not available when writing new segments.
Reduce the number of merge target updates Humio will write to global on digest leader reassignment or reboot.
Free-text search has been fixed to behave more in line with the specification.
Improved wording of diagnostics regarding function arguments.
If `KAFKA_MANAGED_BY_HUMIO` is true, Humio will ensure unclean leader election is disabled on the global-events topic.
Fixes a bug where unit:convert couldn't handle numbers in scientific notation.
Fixes the case where Humio would consider local node state when deciding which ingest data was safe to delete from Kafka.
Refuse to boot if the booting node would cause violations of the "Minimum previous Humio version" as listed in the release notes.
Humio Server 1.16.4 LTS (2020-11-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.16.4 | LTS | 2020-11-26 | Cloud | 2021-10-31 | No | 1.12.0 | No |
JAR Checksum | Value |
---|---|
MD5 | d83c60916b9cb3c12501c541253b5aee |
SHA1 | fa164160b4874f1f1524ff61d49584a62c471868 |
SHA256 | 362d0640d4673b985c6ad9fcdcac7404dc127a65d33d0d423d15e20a5478c642 |
SHA512 | 1803bf4fa4cd533e7fd569584cd9d5ae86d649dfccbf92fdd6cf3eae6155fee9baf40ab7d252681e84ea4045264b5408cc9144fa0d6ef3b0a8a9b631cfdce45c |
These notes include entries from the following previous releases: 1.16.0, 1.16.1, 1.16.2, 1.16.3
Many bug fixes related to `join()`, TCP listener threads, etc.
Fixed in this release
Summary
Avoid logging the license key.
Fixed an issue when starting a query, where resources related to HTTP requests were not released in a timely manner, causing an error log when the resources were released by hitting a timeout.
Fixed an issue where errors were not properly shown in the Humio UI.
Fixed an issue where it was impossible to bootstrap a new cluster if ingest or storage replication factors had been configured greater than 1.
Returning bad request when hitting authentication endpoint without a provider id.
Improved the performance for `GroupBy()`.
Ensure metric label names can be sent to Prometheus.
Fixed an issue where RegEx field extraction did not work in a query.
HTML sanitization for user fields in invitation mails.
Switched from JDK to BouncyCastle provider for AES decrypt to reduce memory usage.
Fix negating join expressions.
Optimize how certain delete operations in the global database are performed to improve performance in large clusters.
Fixed an issue where sorting of work in the Humio input could end up being wrong.
Convert some non-fatal logs to warning level instead of error.
Add query parameter sanitization for login and signup pages.
Fixed an issue with truncating files on the XFS file system, leading to excess data usage.
Fixed an issue preventing the metric `datasource-count` from counting datasources correctly.
Fixed a bug where TCP listener threads could take all resources from HTTP threads.
Prevent automatic URL to link conversion in email clients.
Raise time to wait until deleting data to improve handling of node failures.
Added new metric `jvm-hiccup` for measuring stalls/pauses in the JVM.
Log information about sorting of snapshots.
Fixed an issue causing Humio to fail to upload files to bucket storage in rare cases.
Fixed an issue which caused free-text-search to not work correctly for large (>64KB) events.
Automation and Alerts
Fixed an issue where missing input validation in alerts could lead to HTML injection in email notifications.
Fixed a bug where the {events_html} message template was formatted as raw HTML in alert emails.
Add view to log lines for alerts
Other
Bulk Global operations for segments in S3 to avoid overloading Kafka with writes.
Log Humio cluster version in non-sensitive log.
Fixed a problem where some deleted segments could show up as missing.
Added metrics for:
JVM Garbage Collection
JVM Memory usage
Missing nodes count
Fixed a problem where errors would not be shown in the UI
Major changes: (see 1.15.0 release notes)
Other changes: (see 1.15.2 release notes)
Fixed an issue where cleanup of empty datasource directories could race with other parts of the system and cause issues.
Fixed a problem with auto sharding not working when two repositories had the same tags but differing shard counts.
Fixed an issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if `KAFKA_MANAGED_BY_HUMIO` was true.
Fixed a problem where the Zone configuration would not be propagated correctly.
Fixed a problem where the QueryScheduler could spend time idling even though there was work to do in situations where digest delays were high.
Reduce memory usage when using the `match()` or `regex()` query functions.
Fixed a bug causing the sub-queries of `join()` etc. not to see events with an @ingesttimestamp occurring later than the search time interval.
Support for license files in ES512 format.
Log query total cost when logging query information.
Improved merging of segments by evaluating less data.
Changed limits for what can be fetched via HTTP from inside Humio.
Other changes: (see 1.15.1 release notes)
Fixed several cases where Humio might attempt to write a larger message to Kafka than what Kafka allows.
Fixed a problem preventing saved queries from being edited.
Added background job to fix problems with inconsistent data in global.
Fixed a problem preventing file export/download from the search page.
Fixed a problem where it was not possible to rename a dashboard.
Fixed missing cache update when deleting a view.
Fixed a problem with the retention job calculating what segments to delete.
Humio Server 1.16.3 LTS (2020-11-10)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.16.3 | LTS | 2020-11-10 | Cloud | 2021-10-31 | No | 1.12.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 294a84c0d1a54ccb83045af2a59a711e |
SHA1 | 74088d0dcce3e2e853d236e00d18af18ad06c0e6 |
SHA256 | 8bd243f29dbafa1ece847aff22ce9754b3ec3fd6fe8726d1517ed5462c5653fe |
SHA512 | ad95a600b21990508e9e174273726b8c2514b2a0f688bf45422394971375c0ad8821626ca6ca7672faffd41e4b2f80a62c357e581cdc6b34f4c482ddb5389410 |
These notes include entries from the following previous releases: 1.16.0, 1.16.1, 1.16.2
Improved memory usage of some query functions, and fixes problems with datasource cleanup, resource usage of HTTP requests, and with large free-text searches.
Fixed in this release
Summary
Avoid logging the license key.
Fixed an issue when starting a query, where resources related to HTTP requests were not released in a timely manner, causing an error log when the resources were released by hitting a timeout.
Fixed an issue where errors were not properly shown in the Humio UI.
Fixed an issue where it was impossible to bootstrap a new cluster if ingest or storage replication factors had been configured greater than 1.
Returning bad request when hitting authentication endpoint without a provider id.
Improved the performance for `GroupBy()`.
Ensure metric label names can be sent to Prometheus.
Fixed an issue where RegEx field extraction did not work in a query.
HTML sanitization for user fields in invitation mails.
Optimize how certain delete operations in the global database are performed to improve performance in large clusters.
Fixed an issue where sorting of work in the Humio input could end up being wrong.
Convert some non-fatal logs to warning level instead of error.
Add query parameter sanitization for login and signup pages.
Fixed an issue with truncating files on the XFS file system, leading to excess data usage.
Prevent automatic URL to link conversion in email clients.
Raise time to wait until deleting data to improve handling of node failures.
Added new metric `jvm-hiccup` for measuring stalls/pauses in the JVM.
Log information about sorting of snapshots.
Fixed an issue which caused free-text-search to not work correctly for large (>64KB) events.
Automation and Alerts
Fixed an issue where missing input validation in alerts could lead to HTML injection in email notifications.
Add view to log lines for alerts
Other
Bulk Global operations for segments in S3 to avoid overloading Kafka with writes.
Log Humio cluster version in non-sensitive log.
Fixed a problem where some deleted segments could show up as missing.
Added metrics for:
JVM Garbage Collection
JVM Memory usage
Missing nodes count
Fixed a problem where errors would not be shown in the UI
Major changes: (see 1.15.0 release notes)
Other changes: (see 1.15.2 release notes)
Fixed an issue where cleanup of empty datasource directories could race with other parts of the system and cause issues.
Fixed a problem with auto sharding not working when two repositories had the same tags but differing shard counts.
Fixed an issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if `KAFKA_MANAGED_BY_HUMIO` was true.
Fixed a problem where the Zone configuration would not be propagated correctly.
Fixed a problem where the QueryScheduler could spend time idling even though there was work to do in situations where digest delays were high.
Reduce memory usage when using the `match()` or `regex()` query functions.
Support for license files in ES512 format.
Log query total cost when logging query information.
Improved merging of segments by evaluating less data.
Changed limits for what can be fetched via HTTP from inside Humio.
Other changes: (see 1.15.1 release notes)
Fixed several cases where Humio might attempt to write a larger message to Kafka than what Kafka allows.
Fixed a problem preventing saved queries from being edited.
Added background job to fix problems with inconsistent data in global.
Fixed a problem preventing file export/download from the search page.
Fixed a problem where it was not possible to rename a dashboard.
Fixed missing cache update when deleting a view.
Fixed a problem with the retention job calculating what segments to delete.
Humio Server 1.16.2 LTS (2020-10-30)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.16.2 | LTS | 2020-10-30 | Cloud | 2021-10-31 | No | 1.12.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 625e7c2ab7d9ae2ad61c6b5a6847e0ec |
SHA1 | 092049bbc1604af6fd2e0a6b8f6f53fe7f6132ae |
SHA256 | 3850217dcbda7666178a475fcf135b3c94715ef83a4b66236921b3bb2ee4c2f9 |
SHA512 | d7aec9e4c0bdd6102ce56df5dbac36cb896182c987b1cb8c84d1065610ed78e9e519bce7ef94b784412fa8946b63bfe51f6ef78071b60e59e955588c22ccf6d4 |
These notes include entries from the following previous releases: 1.16.0, 1.16.1
Improved delete operations in large clusters, and fixed problems with generating HTTP links.
Fixed in this release
Summary
Avoid logging the license key.
Fixed an issue where errors were not properly shown in the Humio UI.
Fixed an issue where it was impossible to bootstrap a new cluster if ingest or storage replication factors had been configured greater than 1.
Return a bad request when hitting the authentication endpoint without a provider ID.
Improved the performance for `GroupBy()`.
Ensure metric label names can be sent to Prometheus.
Fixed an issue where RegEx field extraction did not work in a query.
HTML sanitization for user fields in invitation mails.
Optimize how certain delete operations in the global database are performed to improve performance in large clusters.
Fixed an issue where sorting of work in the Humio input could end up being wrong.
Convert some non-fatal logs to warning level instead of error.
Add query parameter sanitization for login and signup pages.
Fixed an issue with truncating files on the XFS file system, leading to excess data usage.
Prevent automatic URL to link conversion in email clients.
Raise time to wait until deleting data to improve handling of node failures.
Added new metric `jvm-hiccup` for measuring stalls/pauses in the JVM.
Log information about sorting of snapshots.
Automation and Alerts
Fixed an issue where missing input validation in alerts could lead to HTML injection in email notifications.
Add view to log lines for alerts
Other
Bulk Global operations for segments in S3 to avoid overloading Kafka with writes.
Log Humio cluster version in non-sensitive log.
Fixed a problem where some deleted segments could show up as missing.
Added metrics for:
JVM Garbage Collection
JVM Memory usage
Missing nodes count
Fixed a problem where errors would not be shown in the UI.
Major changes: (see 1.15.0 release notes)
Other changes: (see 1.15.2 release notes)
Fixed a problem with auto sharding not working when two repositories had the same tags but differing shard counts.
Fixed an issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if `KAFKA_MANAGED_BY_HUMIO` was true.
Fixed a problem where the Zone configuration would not be propagated correctly.
Fixed a problem where the QueryScheduler could spend time idling even though there was work to do in situations where digest delays were high.
Support for license files in ES512 format.
Log query total cost when logging query information.
Improved merging of segments by evaluating less data.
Changed limits for what can be fetched via HTTP from inside Humio.
Other changes: (see 1.15.1 release notes)
Fixed several cases where Humio might attempt to write a larger message to Kafka than what Kafka allows.
Fixed a problem preventing saved queries from being edited.
Added background job to fix problems with inconsistent data in global.
Fixed a problem preventing file export/download from the search page.
Fixed a problem where it was not possible to rename a dashboard.
Fixed missing cache update when deleting a view.
Fixed a problem with the retention job calculating what segments to delete.
Humio Server 1.16.1 LTS (2020-10-21)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.16.1 | LTS | 2020-10-21 | Cloud | 2021-10-31 | No | 1.12.0 | No |
JAR Checksum | Value |
---|---|
MD5 | f5a2ac232db869d279159781ba5cdf57 |
SHA1 | e165eff2f323df2702e8eb697602739ad8e71af8 |
SHA256 | eda9a02789d70732451e0bdd01a329ca878a39dc160c95c3902acc5db693f22b |
SHA512 | aa13e65ce4b6896aaecfeb8dcb3996681d7014bacde126045c01d230c8f3b75a34a594653b9b7785ee8c4417a71df1908263c2931c8806893db30293780e1bf4 |
These notes include entries from the following previous releases: 1.16.0
Several bug fixes related to the Humio UI, Prometheus, clusters, and RegEx queries, as well as improved `GroupBy()` performance and the new `jvm-hiccup` metric.
Fixed in this release
Summary
Avoid logging the license key.
Fixed an issue where errors were not properly shown in the Humio UI.
Fixed an issue where it was impossible to bootstrap a new cluster if ingest or storage replication factors had been configured greater than 1.
Return a bad request when hitting the authentication endpoint without a provider ID.
Improved the performance for `GroupBy()`.
Ensure metric label names can be sent to Prometheus.
Fixed an issue where RegEx field extraction did not work in a query.
HTML sanitization for user fields in invitation mails.
Fixed an issue where sorting of work in the Humio input could end up being wrong.
Convert some non-fatal logs to warning level instead of error.
Add query parameter sanitization for login and signup pages.
Fixed an issue with truncating files on the XFS file system, leading to excess data usage.
Raise time to wait until deleting data to improve handling of node failures.
Added new metric `jvm-hiccup` for measuring stalls/pauses in the JVM.
Log information about sorting of snapshots.
Automation and Alerts
Fixed an issue where missing input validation in alerts could lead to HTML injection in email notifications.
Add view to log lines for alerts
Other
Bulk Global operations for segments in S3 to avoid overloading Kafka with writes.
Log Humio cluster version in non-sensitive log.
Fixed a problem where some deleted segments could show up as missing.
Added metrics for:
JVM Garbage Collection
JVM Memory usage
Missing nodes count
Fixed a problem where errors would not be shown in the UI.
Major changes: (see 1.15.0 release notes)
Other changes: (see 1.15.2 release notes)
Fixed a problem with auto sharding not working when two repositories had the same tags but differing shard counts.
Fixed an issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if `KAFKA_MANAGED_BY_HUMIO` was true.
Fixed a problem where the Zone configuration would not be propagated correctly.
Fixed a problem where the QueryScheduler could spend time idling even though there was work to do in situations where digest delays were high.
Support for license files in ES512 format.
Log query total cost when logging query information.
Improved merging of segments by evaluating less data.
Changed limits for what can be fetched via HTTP from inside Humio.
Other changes: (see 1.15.1 release notes)
Fixed several cases where Humio might attempt to write a larger message to Kafka than what Kafka allows.
Fixed a problem preventing saved queries from being edited.
Added background job to fix problems with inconsistent data in global.
Fixed a problem preventing file export/download from the search page.
Fixed a problem where it was not possible to rename a dashboard.
Fixed missing cache update when deleting a view.
Fixed a problem with the retention job calculating what segments to delete.
Humio Server 1.16.0 LTS (2020-10-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.16.0 | LTS | 2020-10-09 | Cloud | 2021-10-31 | No | 1.12.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 41a3eb30f6d92f92b414661545f0c8bc |
SHA1 | 31829cf2558430a4a6f170d4adf00573f7a7db08 |
SHA256 | f04127a4195e8e1e576bc9a32e9b582b48ffb6eb3fdbc368b4f2c0e2fc22e1c6 |
SHA512 | 45e067a2b79618e979e9839cb987ce6bd76a27514a0d3da22dd3e9b5259f48b8b75c5c9730078ccba81f10516a456b733956bc48bdec6c9bd5adb9dd0de13363 |
This release promotes the latest 1.15 release from preview to stable. To see more details, go through the individual 1.15.x release notes (links in the changelog).
Humio will set ingest timestamps on all events. This is set in the field named `@ingesttimestamp`. In later versions, Humio will also support specifying the search time interval using `@ingesttimestamp` when searching. This will support use cases where data is backfilled, etc.
Field based throttling: It is now possible to make an alert throttle based on a field, so that new values for the field trigger the alert, but already seen values do not until the throttle period has elapsed.
Notifier logging to a Humio repository: It is now possible to configure an alert notifier that will log all events to a Humio repository.
Slack notifier upgrade to notify multiple Slack channels: It is now possible to use the Slack notifier to notify multiple Slack channels at once.
Events as HTML table: In an email notifier, it is now possible to format the events as an HTML table using the new message template {events_html}. Currently, the order of the columns is not well-defined. This problem will be fixed in the 1.17.0 release.
Configure notifier to not use the internet proxy: It is now possible to configure an alert notifier to not use the HTTP proxy configured in Humio.
Redesigned signup and login pages. For cloud, we have split the behavior so users have to explicitly either log in or sign up.
Invite flow: When adding a user to Humio they will now by default get an email telling them that they have been invited to use Humio.
Configure Humio to not use the internet proxy for S3: It is now possible to configure Humio to not use the globally configured HTTP proxy for communication with S3.
Auto-Balanced Partition Table Suggestions
When changing digest and storage partitions it is now possible to get auto-balanced suggestions based on node zone and replication factor settings (via the `ZONE`, `DIGEST_REPLICATION_FACTOR` and `STORAGE_REPLICATION_FACTOR` configurations). See Configuration Settings.
The AWS SDK Humio uses has been upgraded to v2. When configuring Humio bucket storage with Java system properties, the secret key must now be in the `aws.secretAccessKey` property instead of the `aws.secretKey` property.
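As a sketch of the required change, a JVM launch line moves the secret from the old property to the new one. The `aws.accessKeyId` property name, the key values, and the jar path are illustrative assumptions, not taken from these notes:

```shell
# Before (AWS SDK v1 property name; illustrative values):
#   java -Daws.accessKeyId=AKIAEXAMPLE -Daws.secretKey=SECRETEXAMPLE -jar humio.jar
# After the upgrade to AWS SDK v2, the secret moves to aws.secretAccessKey:
java -Daws.accessKeyId=AKIAEXAMPLE -Daws.secretAccessKey=SECRETEXAMPLE -jar humio.jar
```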
Fixed in this release
Automation and Alerts
Add view to log lines for alerts
Other
Bulk Global operations for segments in S3 to avoid overloading Kafka with writes.
Log Humio cluster version in non-sensitive log.
Fixed a problem where some deleted segments could show up as missing.
Added metrics for:
JVM Garbage Collection
JVM Memory usage
Missing nodes count
Fixed a problem where errors would not be shown in the UI.
Major changes: (see 1.15.0 release notes)
Other changes: (see 1.15.2 release notes)
Fixed a problem with auto sharding not working when two repositories had the same tags but differing shard counts.
Fixed an issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if `KAFKA_MANAGED_BY_HUMIO` was true.
Fixed a problem where the Zone configuration would not be propagated correctly.
Fixed a problem where the QueryScheduler could spend time idling even though there was work to do in situations where digest delays were high.
Support for license files in ES512 format.
Log query total cost when logging query information.
Improved merging of segments by evaluating less data.
Changed limits for what can be fetched via HTTP from inside Humio.
Other changes: (see 1.15.1 release notes)
Fixed several cases where Humio might attempt to write a larger message to Kafka than what Kafka allows.
Fixed a problem preventing saved queries from being edited.
Added background job to fix problems with inconsistent data in global.
Fixed a problem preventing file export/download from the search page.
Fixed a problem where it was not possible to rename a dashboard.
Fixed missing cache update when deleting a view.
Fixed a problem with the retention job calculating what segments to delete.
Humio Server 1.15.2 GA (2020-09-29)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.15.2 | GA | 2020-09-29 | Cloud | 2021-10-31 | No | 1.12.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 682ce1370121f12351ade4d22aac9d59 |
SHA1 | 4b0a7d089cadc867582ab067e80ccbaa421d6869 |
SHA256 | 64179665b0993212736c526d4c9e5af57511d2225673646c6de9f54c6685e9f9 |
SHA512 | 3d63d22285b955637128bfaaf8223e4f59c32a92b66180997307b22cdda6d360c2436f0b63fb55ee57b52cdb34a0b0ddc2ad28f4db9d63ae62e10c8f0846dc80 |
Many bug fixes, including fixes related to login from Safari and Firefox and to the `join()` function.
Fixed in this release
Summary
Fixed a problem with scrolling on the login page on screens with low resolution.
Fixed a bug causing an authentication error when trying to download a file when authenticating by proxy.
Fixed an issue showing duplicate entries when searching for users.
Generate ingest tokens in UUID format, replacing the current format for any new tokens being created.
Changed priorities when fetching segments to a node which has been offline for a longer period. This avoids waiting too long before the cluster becomes fully synced.
Fixed an issue where a slow data stream could cause Humio to retain more data in Kafka than necessary, as well as cause a restarted Humio node to reprocess too much data.
Only consider fully replicated data when calculating which offsets can be pruned from Kafka.
Improved naming of threads to get more usable thread dumps.
Made the login and sign up pages responsive to the device.
Fixed a memory leak when authenticating in AWS setups.
Added logging to detect issues when truncating finished files.
Fixed a bug in the partition table optimizer that led to unbalanced layouts.
Avoid overloading Kafka with updates for the global database by collecting operations in bulk.
Improved handling of sub-queries polling state from the main query when using `join()`.
Fixed a problem where the login link did not work in Safari and Firefox.
Changed the query scheduling to account for the work of the overall query, rather than per job started. This allows fairer scheduling of queries hitting many dataspaces e.g. when using search-all.
In the dialog for saving a search as an alert, the save button is no longer always grey and boring, but can actually save alerts again.
Humio Server 1.15.1 GA (2020-09-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.15.1 | GA | 2020-09-22 | Cloud | 2021-10-31 | No | 1.12.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | ec66230819ad11297969c2f8d6641c6d |
SHA1 | f3b0201a632842dd67f104c325c241b201d246a6 |
SHA256 | 81de2fd8b1ccc692f30083f0461364155f83b96b893c0cdff4444fbbb97abf32 |
SHA512 | 2281a3fbf186bd96d56343a7ec46caefcd59fa6656ca8885f841af2ce16c32068739d510b7a0913e9c5db5670948fbf43e998595b8a535246299d7ecd418e21b |
Fixes bugs related to AWS STS tokens and timestamp display, and reverts the Humio UI login method to its previous behavior.
Fixed in this release
Summary
Reverted the Humio User Interface login to the same behavior as before version 1.15.0.
Fixed a problem in the UI where the wrong timestamp was displayed as `@ingesttimestamp`.
The job for updating the IP location database now uses the configured HTTP proxy, if present.
Fixed a problem with AWS, where STS tokens would fail to authenticate.
Humio Server 1.15.0 GA (2020-09-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.15.0 | GA | 2020-09-15 | Cloud | 2021-10-31 | No | 1.12.0 | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 516ed995115eeaa9218bdbc96aec01b2 |
SHA1 | 79bd7daeaa49644e7d7bcc429862f43c5d028057 |
SHA256 | 26c95dacd1164ba20db16917dcc436cc8a8f5a86811b697917b6febfae9fbd61 |
SHA512 | 64841a7ddfcb7c9cff0e7636e78c05c8e52cd63f5c537c858f70795e6dfb5ff52efa8705cf9d3e3fccc1f2ca3901fbae25fb3920fe09f440813f2e587b367173 |
Improves ingest timestamps, field based throttling, the ability to configure better alert notifiers, Slack notifiers, etc.
Humio will set ingest timestamps on all events. This is set in the field named `@ingesttimestamp`. In later versions, Humio will also support specifying the search time interval using `@ingesttimestamp` when searching. This will support use cases where data is backfilled, etc.
It is now possible to make an alert throttle based on a field, so that new values for the field trigger the alert, but already seen values do not until the throttle period has elapsed.
Notifier Logging to Humio Repository
It is now possible to configure an alert notifier that will log all events to a Humio repository.
It is now possible to use the Slack notifier to notify multiple Slack channels at once.
In an email notifier, it is now possible to format the events as an HTML table using the new message template {events_html}.
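A minimal email body template using the new placeholder might look as follows; only the `{events_html}` template itself comes from these notes, and the surrounding text is a hypothetical example:

```
Alert triggered. The matching events are shown below as an HTML table:

{events_html}
```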
Configure Notifier Not to use Internet Proxy
It is now possible to configure an alert notifier to not use the HTTP proxy configured in Humio.
We introduce new signup/login pages for social login and have split the behavior so users have to explicitly either login or signup.
When adding a user to Humio they will now by default get an email telling them that they have been invited to use Humio.
The AWS SDK Humio uses has been upgraded to v2. When configuring Humio bucket storage with Java system properties, the secret key must now be in the `aws.secretAccessKey` property instead of the `aws.secretKey` property.
Configure Humio Not to use Internet Proxy for S3
It is now possible to configure Humio to not use the globally configured HTTP proxy for communication with S3.
Auto-Balanced Partition Table Suggestions
When changing digest and storage partitions it is now possible to get auto-balanced suggestions based on node zone and replication factor settings (via the `ZONE`, `DIGEST_REPLICATION_FACTOR` and `STORAGE_REPLICATION_FACTOR` configurations). See Configuration Settings.
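The three settings named above are environment configuration options; a hedged sketch of how they might be set on a node (the zone name and factor values are illustrative, not taken from these notes):

```shell
# Per-node zone label plus desired replication factors, used as
# input for the auto-balanced partition table suggestions.
ZONE=dc-a
DIGEST_REPLICATION_FACTOR=2
STORAGE_REPLICATION_FACTOR=2
```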
Fixed in this release
Automation and Alerts
Alert notifiers can be configured to not use an HTTP proxy.
Field based throttling on alerts.
New alert notifier template {events_html} formatting events as an HTML table.
Other
S3 communication can be configured to not use an HTTP proxy.
Humio will set the field `@ingesttimestamp` on all events.
If automatically creating users upon login and syncing their groups from the authentication mechanism, the configuration `ONLY_CREATE_USER_IF_SYNCED_GROUPS_HAVE_ACCESS` now controls whether users should only be created if the synced groups have access to a repository or view. The default is false.
Upgraded to AWS SDK v2. When using Java system properties for configuring Humio bucket storage, use `aws.secretAccessKey` instead of `aws.secretKey`.
Newly added users will by default get an email.
New alert notifier type logging to a Humio repository.
Auto-balanced partition table suggestions. See `ZONE`, `DIGEST_REPLICATION_FACTOR` and `STORAGE_REPLICATION_FACTOR` in configuration. See Configuration Settings.
Improved error handling when a parser cannot be loaded. Before, this resulted in Humio returning an error to the log shipper. Now, data is ingested without being parsed, but marked with an error as described in Parser Errors.
CSV files can no longer contain unnamed columns, and trailing commas are disallowed. Queries based on such files will now fail with an error.
New explicit signup and login pages for social login.
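For the user-provisioning entry above, the setting is a boolean environment configuration; a minimal sketch (the value shown is illustrative, and the default remains false):

```shell
# Only auto-create a user on login if their synced groups
# grant access to at least one repository or view.
ONLY_CREATE_USER_IF_SYNCED_GROUPS_HAVE_ACCESS=true
```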
Humio Server 1.14.6 LTS (2020-10-30)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.14.6 | LTS | 2020-10-30 | Cloud | 2021-08-31 | No | 1.12.0 | No |
JAR Checksum | Value |
---|---|
MD5 | d49223f102edc33a255dd9d7178d95f0 |
SHA1 | 70b446b461381e1e02d3a994deb88c5ba3e682f2 |
SHA256 | 17d82d593a4f290867819f8095a59d0367083c022d45d9db1c33417337d7a5b8 |
SHA512 | 1d7bcdd2bd5d8385af487b8cd5a1b6da11d6a1893ade5ac5b06d44817f57e48e2679df13e9fe0f99e73723b2c306e282807fdfde9650176c19ef461c646c39fc |
These notes include entries from the following previous releases: 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5
Email Notification Improvements
Fixed in this release
Summary
Fixed a problem where too many segments could be generated when restarting nodes.
Fixed an issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if `KAFKA_MANAGED_BY_HUMIO` was true.
Fix missing cache update when deleting a view.
Changed limits for what can be fetched via HTTP from inside Humio.
Changed the query scheduling to account for the work of the overall query, rather than per job started. This allows fairer scheduling of queries hitting many dataspaces e.g. when using search-all.
Improve naming of threads to get more usable thread dumps.
Fixed a race condition when cleaning up datasources.
Log Humio cluster version in non-sensitive log.
The job for updating the IP location database now uses the configured HTTP proxy, if present.
Add logging to detect issues when truncating finished files.
New metrics for scheduling of queries:
local-query-jobs-wait: histogram of the time in milliseconds each query waited between getting any work done, including exports
local-query-jobs-queue: count of queries currently queued or active on the node, including exports
local-query-segments-queue-exports-part: count of segments currently queued for querying, for exports
local-query-jobs-queue-exports-part: count of queries currently queued or active on the node, for exports
Improve performance when processing streaming queries.
Added log rotation for humio-non-sensitive logs.
Change priorities when fetching segments to a node which has been offline for a longer period. This avoids waiting too long before the cluster becomes fully synced.
Include user email in metrics when queries end.
Fixed a problem where some deleted segments could show up as missing.
Fixed an issue where Humio might attempt to write a larger message to Kafka than what Kafka allows.
Remove restriction on expire time when creating an emergency user through the emergency user API. See Enabling Emergency Access.
Remove restriction on length of group names from LDAP.
Fixed an issue where a slow data stream could cause Humio to retain more data in Kafka than necessary, as well as cause a restarted Humio node to reprocess too much data.
Fixed a problem where duplicated uploaded files would not be deleted from /tmp.
Improved handling of data replication when nodes are offline.
Avoid overloading Kafka with updates for the global database by collecting operations in bulk.
Improve handling of sub-queries polling state from the main query when using `join()`.
Added new metric `jvm-hiccup` for measuring stalls/pauses in the JVM.
Fixed a problem where segments could be downloaded to stateless frontend nodes from Bucket storage.
Fixed an issue where missing input validation in alerts could lead to HTML injection in email notifications.
Prevent automatic URL to link conversion in email clients.
Fixed several cases where Humio might attempt to write a larger message to Kafka than Kafka allows.
HEC endpoint is now strictly validated as documented for top-level fields, which means non-valid input will be rejected. See Ingesting with HTTP Event Collector (HEC).
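As an illustrative sketch of a request that passes the stricter top-level validation (the hostname, endpoint path, token, and field values are assumptions for the example, not taken from these notes):

```shell
# Only documented top-level keys (e.g. time, event, fields) are accepted;
# unrecognized top-level fields now cause the request to be rejected.
curl "https://logscale.example.com/api/v1/ingest/hec" \
  -H "Authorization: Bearer $INGEST_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"time": 1601030400, "event": "user login", "fields": {"user": "jane"}}'
```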
Configuration
Improved handling of query restarts to avoid unnecessary restarts in some scenarios.
Handling of digest in the case where a node has been offline for a long time has been improved. As an example, running a Humio cluster with a replication factor of 2 and having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka). But this data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition did a failover, Humio would end up going far back on the Kafka ingest queue to reread data. This has been changed: another node from the set of digest nodes will now take over if a node goes offline, to keep the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.
Humio Server 1.14.5 LTS (2020-10-21)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.14.5 | LTS | 2020-10-21 | Cloud | 2021-08-31 | No | 1.12.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 097c33aba188611a3157a00cdb7366af |
SHA1 | 3ac3a1722ff46530ebdf6cf1a2e977437a8917e9 |
SHA256 | cb4aa5b17dba7ca6a79494401cfaaa3e1aefce77fb029b33e24a551e9fab9a22 |
SHA512 | e737fa3bddab0c236a37b0725f2210efd4d761977646581addb29833c31081f0cc6e84f5c514be7e860dc4e848afeae5a77ab0b3536347a1d6757ba4b1547be7 |
These notes include entries from the following previous releases: 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4
Bug Fixes and New Metric
Fixed in this release
Summary
Fixed a problem where too many segments could be generated when restarting nodes.
Fixed an issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if `KAFKA_MANAGED_BY_HUMIO` was true.
Fix missing cache update when deleting a view.
Changed limits for what can be fetched via HTTP from inside Humio.
Changed the query scheduling to account for the work of the overall query, rather than per job started. This allows fairer scheduling of queries hitting many dataspaces e.g. when using search-all.
Improve naming of threads to get more usable thread dumps.
Fixed a race condition when cleaning up datasources.
Log Humio cluster version in non-sensitive log.
The job for updating the IP location database now uses the configured HTTP proxy, if present.
Add logging to detect issues when truncating finished files.
New metrics for scheduling of queries:
local-query-jobs-wait: histogram of the time in milliseconds each query waited between getting any work done, including exports
local-query-jobs-queue: count of queries currently queued or active on the node, including exports
local-query-segments-queue-exports-part: count of segments currently queued for querying, for exports
local-query-jobs-queue-exports-part: count of queries currently queued or active on the node, for exports
Improve performance when processing streaming queries.
Added log rotation for humio-non-sensitive logs.
Change priorities when fetching segments to a node which has been offline for a longer period. This avoids waiting too long before the cluster becomes fully synced.
Include user email in metrics when queries end.
Fixed a problem where some deleted segments could show up as missing.
Fixed an issue where Humio might attempt to write a larger message to Kafka than what Kafka allows.
Remove restriction on expire time when creating an emergency user through the emergency user API. See Enabling Emergency Access.
Remove restriction on length of group names from LDAP.
Fixed an issue where a slow data stream could cause Humio to retain more data in Kafka than necessary, as well as cause a restarted Humio node to reprocess too much data.
Fixed a problem where duplicated uploaded files would not be deleted from /tmp.
Improved handling of data replication when nodes are offline.
Avoid overloading Kafka with updates for the global database by collecting operations in bulk.
Improve handling of sub-queries polling state from the main query when using `join()`.
Added new metric `jvm-hiccup` for measuring stalls/pauses in the JVM.
Fixed a problem where segments could be downloaded to stateless frontend nodes from Bucket storage.
Fixed several cases where Humio might attempt to write a larger message to Kafka than Kafka allows.
HEC endpoint is now strictly validated as documented for top-level fields, which means non-valid input will be rejected. See Ingesting with HTTP Event Collector (HEC).
Configuration
Improved handling of query restarts to avoid unnecessary restarts in some scenarios.
Handling of digest in the case where a node has been offline for a long time has been improved. As an example, running a Humio cluster with a replication factor of 2 and having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka). But this data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition did a failover, Humio would end up going far back on the Kafka ingest queue to reread data. This has been changed: another node from the set of digest nodes will now take over if a node goes offline, to keep the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.
Humio Server 1.14.4 LTS (2020-10-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.14.4 | LTS | 2020-10-09 | Cloud | 2021-08-31 | No | 1.12.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 1a728a2eee16db13d2c5114ada74382c |
SHA1 | ec61329a161dce7a387976343babf7d3ff44527b |
SHA256 | 2b49f98b2b112e54ec310ffb08013debcf9005af35bf97eca36cb69e438f1366 |
SHA512 | 61e0628975aee35f01b9b6d63770c307b0a2f47fe41623bef56a0bdc2f6758ec3fe3145ed1e2b4118a482c5cd0f65b1d8fc4dbf881cc63e53a724b271f4634a7 |
These notes include entries from the following previous releases: 1.14.0, 1.14.1, 1.14.2, 1.14.3
Bug Fixes and Stability Enhancements
Fixed in this release
Summary
Fixed a problem where too many segments could be generated when restarting nodes.
Fix missing cache update when deleting a view.
Changed limits for what can be fetched via HTTP from inside Humio.
Changed the query scheduling to account for the work of the overall query, rather than per job started. This allows fairer scheduling of queries hitting many dataspaces e.g. when using search-all.
Improve naming of threads to get more usable thread dumps.
Fixed a race condition when cleaning up datasources.
Log Humio cluster version in non-sensitive log.
The job for updating the IP location database now uses the configured HTTP proxy, if present.
Add logging to detect issues when truncating finished files.
New metrics for scheduling of queries:
local-query-jobs-wait: histogram of the time in milliseconds each query waited between getting any work done, including exports
local-query-jobs-queue: count of queries currently queued or active on the node, including exports
local-query-segments-queue-exports-part: count of segments currently queued for querying, for exports
local-query-jobs-queue-exports-part: count of queries currently queued or active on the node, for exports
Improve performance when processing streaming queries.
Added log rotation for humio-non-sensitive logs.
Change priorities when fetching segments to a node which has been offline for a longer period. This avoids waiting too long before the cluster becomes fully synced.
Include user email in metrics when queries end.
Fixed a problem where some deleted segments could show up as missing.
Remove restriction on expire time when creating an emergency user through the emergency user API. See Enabling Emergency Access.
Remove restriction on length of group names from LDAP.
Fixed an issue where a slow data stream could cause Humio to retain more data in Kafka than necessary, as well as cause a restarted Humio node to reprocess too much data.
Fixed a problem where duplicated uploaded files would not be deleted from /tmp.
Improved handling of data replication when nodes are offline.
Avoid overloading Kafka with updates for the global database by collecting operations in bulk.
Improve handling of sub-queries polling state from the main query when using `join()`.
Fixed a problem where segments could be downloaded to stateless frontend nodes from Bucket storage.
Fixed several cases where Humio might attempt to write a larger message to Kafka than Kafka allows.
HEC endpoint is now strictly validated as documented for top-level fields, which means invalid input will be rejected. See Ingesting with HTTP Event Collector (HEC).
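Splunk-compatible HEC events carry a small fixed set of top-level fields. The validator below is a hypothetical sketch of the kind of strict check described, not LogScale's actual validation code, and the allowed-field set is an assumption based on the common HEC event format:

```python
import json

# Top-level fields commonly allowed in a Splunk-compatible HEC event.
# This set and the check are illustrative assumptions only.
ALLOWED_HEC_FIELDS = {"time", "host", "source", "sourcetype", "index",
                      "event", "fields"}

def validate_hec_event(raw: str) -> dict:
    """Reject payloads containing unknown top-level fields."""
    payload = json.loads(raw)
    unknown = set(payload) - ALLOWED_HEC_FIELDS
    if unknown:
        raise ValueError(f"invalid top-level fields: {sorted(unknown)}")
    return payload

# A well-formed event passes; one with a stray top-level field is rejected.
ok = validate_hec_event('{"time": 1600000000, "event": "user login"}')
try:
    validate_hec_event('{"event": "x", "severity": "high"}')
    rejected = False
except ValueError:
    rejected = True
```

Custom metadata like a severity belongs under the nested "fields" object rather than at the top level, which is why the second payload fails the strict check.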
Configuration
Improved handling of query restarts to avoid unnecessary restarts in some scenarios.
Handling of digest when a node has been offline for a long time has been improved. As an example, running a Humio cluster with a replication factor of 2 and having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and this data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition did a failover, Humio would end up going far back on the Kafka ingest queue to reread data. This has been changed: now another node from the set of digest nodes takes over if a node goes offline, keeping the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.
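The takeover rule described above can be sketched as a small selection function. This is an illustrative model only, with hypothetical names; it is not LogScale's actual partition-assignment code:

```python
from typing import List, Set

def assign_digest_hosts(preferred: List[str], digest_nodes: List[str],
                        offline: Set[str], replication_factor: int) -> List[str]:
    """Pick hosts for one digest partition: keep the online preferred
    hosts, then top up from the remaining digest nodes until the
    replication factor is met (illustrative sketch only)."""
    hosts = [n for n in preferred if n not in offline]
    for candidate in digest_nodes:
        if len(hosts) >= replication_factor:
            break
        # Only digest nodes are ever selected as replacements.
        if candidate not in offline and candidate not in hosts:
            hosts.append(candidate)
    return hosts

# Node "b" goes offline; another digest node ("c") takes over its slot,
# so the partition keeps two live replicas.
hosts = assign_digest_hosts(preferred=["a", "b"],
                            digest_nodes=["a", "b", "c", "d"],
                            offline={"b"}, replication_factor=2)
```

This also shows why the hosts actually serving a partition can differ from those configured on the cluster management page: the replacement is chosen from the digest-node pool, not from the configured list.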
Humio Server 1.14.3 LTS (2020-09-24)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.14.3 | LTS | 2020-09-24 | Cloud | 2021-08-31 | No | 1.12.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 98d17b3fcca4f158e5cfe0c7d95d0ac7 |
SHA1 | 57dd7e54bc414213de946119b102d6612cd0f7a1 |
SHA256 | 661b21efd3128da29f20d987cd6d9e89541bdfacd8e0391a99bef5ba3255d7ba |
SHA512 | 7ccf6574c4da5a92975029236e762713be319c09032f066d28473ded25462d39e446ed402fa6463f8172077e05f9d91cd224c6d3a9ad3de5876fdd472bae8c0d |
These notes include entries from the following previous releases: 1.14.0, 1.14.1, 1.14.2
Bug Fixes and Improved Query Scheduling
Fixed in this release
Summary
Fixed a problem where too many segments could be generated when restarting nodes.
Fix missing cache update when deleting a view.
Changed the query scheduling to account for the work of the overall query, rather than per job started. This allows fairer scheduling of queries hitting many dataspaces, e.g. when using search-all.
Improve naming of threads to get more usable thread dumps.
Fixed a race condition when cleaning up datasources.
The job for updating the IP location database now uses the configured HTTP proxy, if present.
Add logging to detect issues when truncating finished files.
New metrics for scheduling of queries:
local-query-jobs-wait: Histogram of the time in milliseconds each query waited between getting any work done, including exports
local-query-jobs-queue: Count of queries currently queued or active on the node, including exports
local-query-segments-queue-exports-part: Number of segments currently queued for export queries
local-query-jobs-queue-exports-part: Count of export queries currently queued or active on the node
Improve performance when processing streaming queries.
Added log rotation for humio-non-sensitive logs.
Change priorities when fetching segments to a node that has been offline for a longer period. This avoids waiting too long before the cluster becomes fully synced.
Include user email in metrics when queries end.
Remove restriction on expire time when creating emergency user through the emergency user API. See Enabling Emergency Access.
Remove restriction on length of group names from LDAP.
Fixed an issue where a slow data stream could cause Humio to retain more data in Kafka than necessary, as well as cause a restarted Humio node to reprocess too much data.
Improved handling of data replication when nodes are offline.
Improve handling of sub-queries polling state from the main query when using join().
Fixed a problem where segments could be downloaded to stateless frontend nodes from Bucket storage.
HEC endpoint is now strictly validated as documented for top-level fields, which means invalid input will be rejected. See Ingesting with HTTP Event Collector (HEC).
Configuration
Improved handling of query restarts to avoid unnecessary restarts in some scenarios.
Handling of digest when a node has been offline for a long time has been improved. As an example, running a Humio cluster with a replication factor of 2 and having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and this data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition did a failover, Humio would end up going far back on the Kafka ingest queue to reread data. This has been changed: now another node from the set of digest nodes takes over if a node goes offline, keeping the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.
Humio Server 1.14.2 LTS (2020-09-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.14.2 | LTS | 2020-09-17 | Cloud | 2021-08-31 | No | 1.12.0 | No |
JAR Checksum | Value |
---|---|
MD5 | c3eb561c270dce0fe7468c34cbd4322f |
SHA1 | 85c1c0e16507fa152c7adbba5844aed2c83c0e03 |
SHA256 | 5f4e382586a6069c5ebde1d1a620ab3d1f8f1c532e9ecce972a348ab669b2c2d |
SHA512 | f3be7379941c9f3ae677b351dc65f90da71aaf43fdc78e93eca99d3b8dfc4fd28618c9fd4d22bf4657cdc0d9b9f80d59589c3984860958aefff61f0a769251a7 |
These notes include entries from the following previous releases: 1.14.0, 1.14.1
Bug Fixes, HEC Endpoint Validation and New Metrics
Fixed in this release
Summary
Fixed a problem where too many segments could be generated when restarting nodes.
Fixed a race condition when cleaning up datasources.
The job for updating the IP location database now uses the configured HTTP proxy, if present.
New metrics for scheduling of queries:
local-query-jobs-wait: Histogram of the time in milliseconds each query waited between getting any work done, including exports
local-query-jobs-queue: Count of queries currently queued or active on the node, including exports
local-query-segments-queue-exports-part: Number of segments currently queued for export queries
local-query-jobs-queue-exports-part: Count of export queries currently queued or active on the node
Improve performance when processing streaming queries.
Added log rotation for humio-non-sensitive logs.
Include user email in metrics when queries end.
Remove restriction on expire time when creating emergency user through the emergency user API. See Enabling Emergency Access.
Remove restriction on length of group names from LDAP.
Improved handling of data replication when nodes are offline.
Fixed a problem where segments could be downloaded to stateless frontend nodes from Bucket storage.
HEC endpoint is now strictly validated as documented for top-level fields, which means invalid input will be rejected. See Ingesting with HTTP Event Collector (HEC).
Configuration
Improved handling of query restarts to avoid unnecessary restarts in some scenarios.
Handling of digest when a node has been offline for a long time has been improved. As an example, running a Humio cluster with a replication factor of 2 and having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and this data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition did a failover, Humio would end up going far back on the Kafka ingest queue to reread data. This has been changed: now another node from the set of digest nodes takes over if a node goes offline, keeping the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.
Humio Server 1.14.1 LTS (2020-09-08)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.14.1 | LTS | 2020-09-08 | Cloud | 2021-08-31 | No | 1.12.0 | No |
JAR Checksum | Value |
---|---|
MD5 | b57e75be1b07018a158585f04cdcb9d8 |
SHA1 | 6e1754ba60abeb35233a728dcb78ae11f0986d8a |
SHA256 | 5939bb412601b4356ccc431d87e3e8290a48db967a0739f638b0ea587e1a9eb7 |
SHA512 | a90b03b4081cd8ee73d06c0f925705740c769ff405b09cb1c06a51b8566775d7e2540bd6275ae1f8f7e4e0a65d241d7f1f195d26c6c3474894639d6b19b7d3d3 |
These notes include entries from the following previous releases: 1.14.0
Bug fixes and updates.
Fixed in this release
Summary
Improve performance when processing streaming queries.
Remove restriction on expire time when creating emergency user through the emergency user API. See Enabling Emergency Access.
Remove restriction on length of group names from LDAP.
Configuration
Improved handling of query restarts to avoid unnecessary restarts in some scenarios.
Handling of digest when a node has been offline for a long time has been improved. As an example, running a Humio cluster with a replication factor of 2 and having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and this data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition did a failover, Humio would end up going far back on the Kafka ingest queue to reread data. This has been changed: now another node from the set of digest nodes takes over if a node goes offline, keeping the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.
Humio Server 1.14.0 LTS (2020-08-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.14.0 | LTS | 2020-08-26 | Cloud | 2021-08-31 | No | 1.12.0 | No |
JAR Checksum | Value |
---|---|
MD5 | d0ede2c5d1075119507701bff7a04b29 |
SHA1 | b4fc3f50fabe0abdea5db2a2b502c0b2b0b71aa7 |
SHA256 | e9ddafa574576eb890cf22d241e0307caf613cc5b1bd9fdc84e50e975a40d67b |
SHA512 | 16506530541f87579660b630265171c137b8de787b5c5d11b145fc1d18ff04038514b91469b040212dbd27bddc2cee4cb5cca0054f547917624137fedb23ba20 |
Bug fixes and updates.
Free Text Search, Load Balancing of Queries and TLS Support. This release promotes the latest 1.13 release from preview to stable. To see more details, go through the individual 1.13.x release notes (links in the changelog).
Free text search now searches all fields rather than only the @rawstring field.
Humio can now balance and reuse existing queries internally in the cluster. Load balancer configuration to achieve this is no longer needed. See Configuration Settings and Installing Using Containers.
Communication to/from ZooKeeper, Kafka, and other Humio nodes can now be encrypted using TLS.
ipLocation() Database Management Changed
The database used as data source for the ipLocation() query function must be updated within 30 days when a new version of the database is made public by MaxMind. To comply with this, the database is no longer shipped as part of the Humio artifacts but will either:
Be fetched automatically by Humio, provided that Humio is allowed to connect to the database update service hosted by Humio. This is the default behaviour.
Have to be updated manually (see the ipLocation() reference page).
If the database cannot be automatically updated and no database is provided manually, the ipLocation() query function will no longer work.
Controlling which nodes to use as query coordinators: due to the load balancing in Humio, customers that previously relied on load balancing to control which nodes are query coordinators now need to set QUERY_COORDINATOR to false on nodes they do not want to become query coordinators. See Installing Using Containers and Configuration Settings.
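As an illustrative fragment (not an authoritative configuration reference), a node that should not coordinate queries could carry:

```
# Keep this node out of query coordination (see Configuration Settings)
QUERY_COORDINATOR=false
```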
Fixed in this release
Configuration
Improved handling of query restarts to avoid unnecessary restarts in some scenarios.
Handling of digest when a node has been offline for a long time has been improved. As an example, running a Humio cluster with a replication factor of 2 and having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and this data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition did a failover, Humio would end up going far back on the Kafka ingest queue to reread data. This has been changed: now another node from the set of digest nodes takes over if a node goes offline, keeping the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.
Humio Server 1.13.5 GA (2020-08-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.13.5 | GA | 2020-08-12 | Cloud | 2021-08-31 | No | 1.12.0 | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 0e9345ea59cb8f76a14cf84c7889de19 |
SHA1 | 6e27b7e2ec68090ad2c6019e0b3dea56d03d4059 |
SHA256 | 3f9e0865df1c28dc69d45f764c65c39aeccfacfadeb0ec04165655ef423c7fff |
SHA512 | 256ba3cbd99245327176569cdfde8d8cee3f8c15c2f60c0c2e02115e8fd6eefec0be3cd3dac5592d4fd2b6d5e2a609e414a963af2c6ead317171228cc001666b |
Security and Bug Fixes
Fixed in this release
Summary
Export to file now allows field names with special characters.
Fixed missing migration of non-default groups, which would result in alerts failing until the user backing the alert logged in again.
This release fixes a security issue. More information will follow when Humio customers have had time to upgrade. See: Security Disclosures
Export to file can now include query parameters.
Humio Server 1.13.4 GA (2020-08-05)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.13.4 | GA | 2020-08-05 | Cloud | 2021-08-31 | No | 1.12.0 | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 56dd1acc53af27871de4260ff14c03f2 |
SHA1 | 10e4f5ad2079fb9c85f5fbf7f970a40a3951910c |
SHA256 | 256b45cb00097da6ff219713d8ed4d5bf8c4dc5b1094f856de3586c2a9f7618f |
SHA512 | 411a0f5aa3b6c96eb72dd7680d9fa94ea51e3da927c318bac0df37a2c7dee0cac9714e9807473a10417f11a3106f65107982669f9c3e29f2ac7a7a1a901a93df |
Security and Bug Fixes
Fixed in this release
Summary
Fix issue where a query could fail to search all segments if digest reassignment was occurring at the same time as the query.
Fix issue where a node with no digest assignment could fail to delete local segment copies in some cases.
This release fixes a security issue. For more information see: Security Disclosures
Humio Server 1.13.3 GA (2020-08-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.13.3 | GA | 2020-08-04 | Cloud | 2021-08-31 | No | 1.12.0 | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 93a48eb23b149780fac8e065d28e3bc1 |
SHA1 | 6f78a25d48e79482c6e98cf2dc2125208d3c906f |
SHA256 | d526f5d0d1b9f1d1061b93e0a5c1c66d5ecd2a88cf732318d6f8fe8fcee9df30 |
SHA512 | a7fed48de308a679b5907a485ac7dc54008d9dfee32a08d41c23df2f3bc2ff7f3184a0283cf211562eaf4a9b7cc926e70ef22243ba8e3d3ed463dc872b3662bd |
Security and Bug Fix
Fixed in this release
Summary
This release fixes a security issue. For more information see: Security Disclosures
Avoid forbidden-access errors on shared dashboard links by ensuring correct use of timestamps.
Humio Server 1.13.2 GA (2020-08-03)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.13.2 | GA | 2020-08-03 | Cloud | 2021-08-31 | No | 1.12.0 | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 32b80ba726b370fc740ba7960d537ccc |
SHA1 | fc8a67b2fad623b0806405f9409b6e2d94d713a6 |
SHA256 | fcdac4812ef43784c3dc575e8970ecdc2d45f954773867a2b43bde97c9176142 |
SHA512 | 4a0e86276d19da61b795bd937ecb0145bcfdb59a9678a5d1ed93045964db7b61d937ec69d1bd13bc79447ef74555f9de925bc416388b6cb742765191f79cb08e |
Bug Fixes
Fixed in this release
Summary
Joins now propagate limit warnings from sub-queries to the main query.
Avoid saving invalid bucket storage configurations.
All ingest methods now support the ALLOW_CHANGE_REPO_ON_EVENTS configuration parameter.
Make sure join sub-queries get canceled when the main query is canceled.
Default groups added.
Export to file no longer fails or times out on heavy sub-queries.
Humio Server 1.13.1 GA (2020-07-03)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.13.1 | GA | 2020-07-03 | Cloud | 2021-08-31 | No | 1.12.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | f097c9c95c06f12ec55c008b9963f9ee |
SHA1 | 8acc122717d30f5a3333175a172e4c26a3c20569 |
SHA256 | 9dd7736569cc823182140ffd7ed91609cdafb97f99586d0d91def412a37bd548 |
SHA512 | eaceff5d2ffc55c18f6175f519ecd0c97a684cb2a1d46c3e242513181bf29217331deb934eba45ba7d55f795a1cccd0ae25cf76b685dd484ed681a2f66791f01 |
Bug Fixes and Improved Search Speeds for Many-Core Systems
Fixed in this release
Summary
Support for a new storage format for segment files that will be introduced in a later release (to support rollback).
S3 archiving could write events twice in a special case (when a merge happens where all inputs have been archived, write in global that the merge result was archived too).
Improved query scheduling on machines with many cores. This can improve search speeds significantly.
Bucket storage in GCP did not clean up all tmp files.
Humio Server 1.13.0 GA (2020-06-24)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.13.0 | GA | 2020-06-24 | Cloud | 2021-08-31 | No | 1.12.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | c9dca247bec107fa41c6ec1fe5bb5d3b |
SHA1 | 51e898057d10d80e6a50ab86ab5bedf71d348f5b |
SHA256 | b5c21c0028f1d61104821e5d4749b15cea7e208aa9bfb9307a9b10e077f6f7b6 |
SHA512 | 5b148d8d8fd8680fd61c642bda45f7dba2cb0aa21efeb2752b02a6467a7063036f1b31ea347816ecdcba676cf1d9d199e409620bd6b1acd2cc2fdeff78767375 |
Many improvements, including some related to free-text searching, load balancing of queries, TLS support, the ipLocation() query function, and some configuration changes.
Fixed in this release
Configuration
Humio can now balance and reuse existing queries internally in the cluster. See Configuration Settings.
The data source for the ipLocation() query function is no longer shipped with Humio but installed/updated separately.
Free text search now searches all fields rather than only @rawstring.
Added support for WebIdentityTokenCredentialsProvider on AWS.
Introduced a new ChangeViewOrRepositoryDescription permission for editing the description of a view or repository. This was previously tied to ConnectView, and any user with that permission will now have the new permission as well.
Internal communication in a Humio installation can now be encrypted using TLS. See TLS.
Improvement
Configuration
Controlling which nodes to use as query coordinators: due to the load balancing in Humio, customers that previously relied on load balancing to control which nodes are query coordinators now need to set QUERY_COORDINATOR to false on nodes they do not want to become query coordinators. See Configuration Settings and Installing Using Containers.
Other
Humio can now balance and reuse existing queries internally in the cluster. Load balancer configuration to achieve this is no longer needed. See Configuration Settings and Installing Using Containers.
Free text search now searches all fields rather than only @rawstring.
Communication to/from ZooKeeper, Kafka, and other Humio nodes can now be encrypted using TLS.
The database used as data source for the ipLocation() query function must be updated within 30 days when a new version of the database is made public by MaxMind. To comply with this, the database is no longer shipped as part of the Humio artifacts but will either:
Be fetched automatically by Humio, provided that Humio is allowed to connect to the database update service hosted by Humio. This is the default behaviour.
Have to be updated manually (see the ipLocation() reference page).
If the database cannot be automatically updated and no database is provided manually, the ipLocation() query function will no longer work.
Humio Server 1.12.7 LTS (2020-09-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.12.7 | LTS | 2020-09-17 | Cloud | 2021-06-30 | No | 1.10.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 438fc3f94d9e252184cbeaee2dacc740 |
SHA1 | 08254ec21b51d6522a0336b92548943d9fe0b140 |
SHA256 | 814d522ef2aef4f3a81ab8380c5ad746ef9d13f302124bc0c13c64378ef4feec |
SHA512 | e2e936e1d20a6e60e312e995d1557085976531cfa2c50daa19cd0ce474ff398acdda3cf2431ab4a0aa33625ee45dc798685e6e485d282281f668c2c587bf69be |
These notes include entries from the following previous releases: 1.12.0, 1.12.1, 1.12.2, 1.12.3, 1.12.4, 1.12.5, 1.12.6
Bug Fix and Additional Metrics
Fixed in this release
Summary
Fixed a race condition when cleaning up datasources
This release fixes a security issue. More information will follow when Humio customers have had time to upgrade. See Security Disclosures
Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers
Fixed an issue where queries using lookahead in regex would fail to parse with the error "invalid or unsupported Perl syntax".
S3 archiving could write events twice in a special case (when a merge happens where all inputs have been archived, write in global that the merge result was archived too).
Fixed an issue where events could be skipped even though they should not be, for queries containing field aliasing (e.g., a:=b) with subsequent checks on the aliased field.
Remove restriction on length of group names from LDAP.
Fix issue where a node with no digest assignment could fail to delete local segment copies in some cases.
Fixed missing migration of non-default groups, which would result in alerts failing until the user backing the alert logged in again.
This release fixes a security issue. For more information see: Security Disclosures
Improved handling of query restarts to avoid unnecessary restarts in some scenarios.
Fixed an issue with SAML IdPs requiring query parameters to be passed via the SAML_IDP_SIGN_ON_URL configuration.
Improved query scheduling on machines with many cores. This can improve search speeds significantly.
Support for a new storage format for segment files that will be introduced in a later release (to support rollback)
Avoid forbidden-access errors on shared dashboard links by ensuring correct use of timestamps.
New metrics for scheduling of queries:
local-query-jobs-wait: Histogram of the time in milliseconds each query waited between getting any work done, including exports
local-query-jobs-queue: Count of queries currently queued or active on the node, including exports
local-query-segments-queue-exports-part: Number of segments currently queued for export queries
local-query-jobs-queue-exports-part: Count of export queries currently queued or active on the node
Bucket storage in GCP did not clean up all tmp files.
Fixed an issue that prevented deletion of unused objects in bucket storage if the bucket contained millions of objects or more.
This release fixes a security issue. For more information see: Security Disclosures
Other
Other changes: (see 1.11.1 release notes)
Major changes: (see 1.11.0 release notes)
Humio Server 1.12.6 LTS (2020-09-03)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.12.6 | LTS | 2020-09-03 | Cloud | 2021-06-30 | No | 1.10.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 43c5c6e2a7de76e73d63002c04f8693c |
SHA1 | dfff8f766cedca6a7b1888e2d060b07fba035b61 |
SHA256 | 3928f42354195c4217d60a86d4df70307477af81fa4ca2d87d29c921dbc1ae5a |
SHA512 | 32e8e1b008b945545159b87d183ab1ef76d649a8a96ac92f54799446c345e205a1e13953f43a8feac264596ee69de7e10e57e2a532bb860bef062559c60442a6 |
These notes include entries from the following previous releases: 1.12.0, 1.12.1, 1.12.2, 1.12.3, 1.12.4, 1.12.5
Bug Fixes
Fixed in this release
Summary
This release fixes a security issue. More information will follow when Humio customers have had time to upgrade. See Security Disclosures
Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers
Fixed an issue where queries using lookahead in regex would fail to parse with the error "invalid or unsupported Perl syntax".
S3 archiving could write events twice in a special case (when a merge happens where all inputs have been archived, write in global that the merge result was archived too).
Fixed an issue where events could be skipped even though they should not be, for queries containing field aliasing (e.g., a:=b) with subsequent checks on the aliased field.
Remove restriction on length of group names from LDAP.
Fix issue where a node with no digest assignment could fail to delete local segment copies in some cases.
Fixed missing migration of non-default groups, which would result in alerts failing until the user backing the alert logged in again.
This release fixes a security issue. For more information see: Security Disclosures
Improved handling of query restarts to avoid unnecessary restarts in some scenarios.
Fixed an issue with SAML IdPs requiring query parameters to be passed via the SAML_IDP_SIGN_ON_URL configuration.
Improved query scheduling on machines with many cores. This can improve search speeds significantly.
Support for a new storage format for segment files that will be introduced in a later release (to support rollback)
Avoid forbidden-access errors on shared dashboard links by ensuring correct use of timestamps.
Bucket storage in GCP did not clean up all tmp files.
Fixed an issue that prevented deletion of unused objects in bucket storage if the bucket contained millions of objects or more.
This release fixes a security issue. For more information see: Security Disclosures
Other
Other changes: (see 1.11.1 release notes)
Major changes: (see 1.11.0 release notes)
Humio Server 1.12.5 LTS (2020-08-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.12.5 | LTS | 2020-08-12 | Cloud | 2021-06-30 | No | 1.10.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 99fc31c507d0a6193c0257b3a1a1e708 |
SHA1 | d0c7fa652713473d04923d1c5921374144a38aed |
SHA256 | 94c3f8cbb84dfe6870bdcfa4771f46a124ba8bca5d85332f0a0150a3e6f54a49 |
SHA512 | 1b25c775757af29f65597dbb3d5504637485a75f2fc5b29412ea646527aebee31b4f0b96f732c70e7ed9e0d4e1c48776c4c893398e3da3222c9ccce8d762676c |
These notes include entries from the following previous releases: 1.12.0, 1.12.1, 1.12.2, 1.12.3, 1.12.4
Security and Bug Fixes
Fixed in this release
Summary
This release fixes a security issue. More information will follow when Humio customers have had time to upgrade. See Security Disclosures
Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers
Fixed an issue where queries using lookahead in regex would fail to parse with the error "invalid or unsupported Perl syntax".
S3 archiving could write events twice in a special case (when a merge happens where all inputs have been archived, write in global that the merge result was archived too).
Fixed an issue where events could be skipped even though they should not be, for queries containing field aliasing (e.g., a:=b) with subsequent checks on the aliased field.
Fix issue where a node with no digest assignment could fail to delete local segment copies in some cases.
Fixed missing migration of non-default groups, which would result in alerts failing until the user backing the alert logged in again.
This release fixes a security issue. For more information see: Security Disclosures
Fixed an issue with SAML IdPs requiring query parameters to be passed via the SAML_IDP_SIGN_ON_URL configuration.
Improved query scheduling on machines with many cores. This can improve search speeds significantly.
Support for a new storage format for segment files that will be introduced in a later release (to support rollback)
Avoid forbidden-access errors on shared dashboard links by ensuring correct use of timestamps.
Bucket storage in GCP did not clean up all tmp files.
Fixed an issue that prevented deletion of unused objects in bucket storage if the bucket contained millions of objects or more.
This release fixes a security issue. For more information see: Security Disclosures
Other
Other changes: (see 1.11.1 release notes)
Major changes: (see 1.11.0 release notes)
Humio Server 1.12.4 LTS (2020-08-05)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.12.4 | LTS | 2020-08-05 | Cloud | 2021-06-30 | No | 1.10.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 33fe0d36c64e58ff875496facf68e65a |
SHA1 | 7122a470bf146e40ddce711349007dd1a4f7961c |
SHA256 | 4cd4fa7b21fbdd14b095cb20e7d5d6af0c78bccc0bbc152c7417149db5d0d194 |
SHA512 | 879822ef357dbb1dcdd969ba053bd09cabaf42b4748495543b72b63f43f8c9c6b0feb25911446f07ae37468ec663e7b278c752dc441a52136762d429c18a7f5b |
These notes include entries from the following previous releases: 1.12.0, 1.12.1, 1.12.2, 1.12.3
Security Fix
Fixed in this release
Summary
Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers
Fixed an issue where queries using lookahead in regex would fail to parse with the error "invalid or unsupported Perl syntax".
S3 archiving could write events twice in a special case (when a merge happens where all inputs have been archived, write in global that the merge result was archived too).
Fixed an issue where events could be skipped even though they should not be, for queries containing field aliasing (e.g., a:=b) with subsequent checks on the aliased field.
This release fixes a security issue. For more information see: Security Disclosures
Fixed an issue with SAML IdPs requiring query parameters to be passed via the SAML_IDP_SIGN_ON_URL configuration.
Improved query scheduling on machines with many cores. This can improve search speeds significantly.
Support for a new storage format for segment files that will be introduced in a later release (to support rollback)
Bucket storage in GCP did not clean up all tmp files
Fixed an issue that prevented deletion of unused objects in bucket storage, if the bucket contained millions of objects or more
This release fixes a security issue. For more information see: Security Disclosures
Other
Other changes: (see 1.11.1 release notes)
Major changes: (see 1.11.0 release notes)
Humio Server 1.12.3 LTS (2020-08-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.12.3 | LTS | 2020-08-04 | Cloud | 2021-06-30 | No | 1.10.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | b6de082e67d662de761d5b79915152b8 |
SHA1 | 78cf6419a7a092e7d2a8200000ada4535f2f5b5c |
SHA256 | cec300a428f0f5998bbd13d56283fb54dbe47f1205d97115b23dfabf4f3dcbc7 |
SHA512 | 0704b189c70073e304ce115808441e6b44b6aa267df66fe9cac3998a80adc340fe5692829469c99dfceae3cb60e6bcc152ba7f77d509331d50d9621cfc7a1bfc |
These notes include entries from the following previous releases: 1.12.0, 1.12.1, 1.12.2
Security Fix
Fixed in this release
Summary
Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers
Fixed an issue where queries using lookahead in regex would fail to parse with "invalid or unsupported Perl syntax"
S3Archiving could write events twice in a special case (when a merge happens where all inputs have been archived, it is now recorded in global that the merge result was archived too).
Fixed an issue where events could be skipped even though they should not be, for queries containing field aliasing (e.g., a:=b) with subsequent checks on the aliased field.
This release fixes a security issue. For more information see: Security Disclosures
Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL
Improved query scheduling on machines with many cores. This can improve search speeds significantly.
Support for a new storage format for segment files that will be introduced in a later release (to support rollback)
Bucket storage in GCP did not clean up all tmp files
Fixed an issue that prevented deletion of unused objects in bucket storage, if the bucket contained millions of objects or more
Other
Other changes: (see 1.11.1 release notes)
Major changes: (see 1.11.0 release notes)
Humio Server 1.12.2 LTS (2020-07-03)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.12.2 | LTS | 2020-07-03 | Cloud | 2021-06-30 | No | 1.10.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 33890ac97e42a107f8f8618312029ca0 |
SHA1 | 0a097631134aa0078e0aea54f32cf58c0be8066b |
SHA256 | 446acdcd64f3c33cdffd886e530201576e75c781184708da30af2dfc7b4bcd14 |
SHA512 | 1931cf27039d17b5aeb4c6fe4022b88a263d41464cb304ba4d30dd936a9cb6202af36dd6c97a5c3e183dd60042def96538e62de3ee86a3204e1108b60a9d9852 |
These notes include entries from the following previous releases: 1.12.0, 1.12.1
Bug Fixes and Improved Search Speeds for Many-Core Systems
Fixed in this release
Summary
Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers
Fixed an issue where queries using lookahead in regex would fail to parse with "invalid or unsupported Perl syntax"
S3Archiving could write events twice in a special case (when a merge happens where all inputs have been archived, it is now recorded in global that the merge result was archived too).
Fixed an issue where events could be skipped even though they should not be, for queries containing field aliasing (e.g., a:=b) with subsequent checks on the aliased field.
Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL
Improved query scheduling on machines with many cores. This can improve search speeds significantly.
Support for a new storage format for segment files that will be introduced in a later release (to support rollback)
Bucket storage in GCP did not clean up all tmp files
Fixed an issue that prevented deletion of unused objects in bucket storage, if the bucket contained millions of objects or more
Other
Other changes: (see 1.11.1 release notes)
Major changes: (see 1.11.0 release notes)
Humio Server 1.12.1 LTS (2020-06-24)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.12.1 | LTS | 2020-06-24 | Cloud | 2021-06-30 | No | 1.10.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | d6a8a2fa268c92a46024af8e8ba7b27a |
SHA1 | 20660ae440fdcaf818c3919f4b3d467f71da37ce |
SHA256 | 16c96f36554425939b78c5df889cc689368f598a3b14ce336e3a28ad1dc1abbd |
SHA512 | e518b4778b7c6c50861fbaa25a2d95fc3f639e1dfcbb526c0a7727aefe650177d6975ee50caa51d8ccfd636eacdb6fba2ee3cd3a7d1f150eae09147a06c3e2f4 |
These notes include entries from the following previous releases: 1.12.0
Bug Fixes: Safari Freeze, SAML, Bucket Storage Clean-Up, Regex and Field-Aliasing
Fixed in this release
Summary
Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers
Fixed an issue where queries using lookahead in regex would fail to parse with "invalid or unsupported Perl syntax"
Fixed an issue where events could be skipped even though they should not be, for queries containing field aliasing (e.g., a:=b) with subsequent checks on the aliased field.
Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL
Fixed an issue that prevented deletion of unused objects in bucket storage, if the bucket contained millions of objects or more
Other
Other changes: (see 1.11.1 release notes)
Major changes: (see 1.11.0 release notes)
Humio Server 1.12.0 LTS (2020-06-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.12.0 | LTS | 2020-06-09 | Cloud | 2021-06-30 | No | 1.10.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 73a3fc77b4b603df3d492a3c8f740cb2 |
SHA1 | 0e70382492ca4a59b3139fad1ee22344f0b257e8 |
SHA256 | dbf34321f44a4e60726a10d67d1a73db7915c94d7e786ad332d5832f34a81c81 |
SHA512 | 4f2840a8b552395735006481107c55567dbbd865025fb0fbfdf5d590938eafb541a780b4065a4ed4419b12e7e894c9a768902ea8f5c21ed7debc3368e6a7e528 |
Export to Bucket, findTimestamp(), selfJoin(), Emergency User Sub-System
This release promotes the 1.11 releases from preview to stable. For more details, see the individual 1.11.x release notes (links in the changelog).
The selfJoin() query function allows selecting log lines that share an identifier and for which there exist (separate) log lines matching certain filtering criteria, such as "all log lines with a given userid for which there exists both a successful and an unsuccessful login".
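To illustrate, a query matching the "successful and unsuccessful login" example might be written roughly as follows (a sketch only; the field names action, status and their values are hypothetical, not from an actual schema):

```humio
selfJoin(field=userid, where=[{action=login status=success}, {action=login status=failure}])
```

The where parameter lists the filter conditions, each of which must be matched by some event sharing the same userid.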
The findTimestamp() query function tries to find and parse timestamps in incoming data. The function is intended for use in parsers and supports automatic detection of timestamps. It can be used instead of writing regular expressions that specify where to find the timestamp and parsing it with parseTimestamp(). See the findTimestamp() reference page for details.
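In a parser, the two approaches just described might look like this (a sketch; the timezone parameter is illustrative):

```humio
// Automatic detection and parsing of the timestamp:
findTimestamp(timezone="UTC")
```

versus the explicit variant (the regex and format string below are illustrative examples):

```humio
// Extract the timestamp with a regex, then parse it explicitly:
/^(?<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})/
| parseTimestamp(field=ts, format="yyyy-MM-dd'T'HH:mm:ss")
```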
As an alternative to downloading streaming queries directly, Humio can now upload them to an S3 or GCS bucket from which the user can download the data. See Data Storage, Buckets and Archiving.
If there are issues with the identity provider that Humio is configured to use, it might not be possible to log in to Humio. To mitigate this, Humio now provides emergency users that can be created locally within the Humio cluster. See Enabling Emergency Access.
Fluent Bit users might need to change the Fluent Bit configuration. To ensure compatibility with the newest Beats clients, the Elastic Bulk API has been changed to always return the full set of status information for all operations, as it is done in the official Elastic API. This can however cause problems when using Fluent Bit to ingest data into Humio.
Fluent Bit in default configuration uses a small buffer (4KB) for responses from the Elastic Bulk API, which causes problems when enough operations are bulked together. The response will then be larger than the response buffer as it contains the status for each individual operation. Make sure the response buffer is large enough, otherwise Fluent Bit will stop shipping data. See: https://github.com/fluent/fluent-bit/issues/2156 and https://docs.fluentbit.io/manual/pipeline/outputs/elasticsearch
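For example, the response buffer can be raised in the Fluent Bit Elasticsearch output section (a sketch; the host, port, and size value are illustrative — Buffer_Size also accepts False to remove the limit entirely):

```ini
[OUTPUT]
    Name        es
    Match       *
    Host        humio.example.com
    Port        9200
    # Raise the response buffer above the 4KB default so that large
    # bulk responses from the Elastic Bulk API are not truncated.
    Buffer_Size 512KB
```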
Fixed in this release
Other
Other changes: (see 1.11.1 release notes)
Major changes: (see 1.11.0 release notes)
Humio Server 1.11.1 GA (2020-05-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.11.1 | GA | 2020-05-28 | Cloud | 2021-06-30 | No | 1.10.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 5cb65fb38f5882c86e170fc19932bc2c |
SHA1 | d8639e7541b0de1b2ef5c3909f88d864b18faf63 |
SHA256 | 63e5264b32fc02774e1f4dd3a7dbd1ddbb124158e915d5df219b355faaf278d0 |
SHA512 | 7c7832559a5a1f43f87d7cf12bbb63b5a7ea6c0922a4c917b652caba08fe77375b49517f2675f819f9fdf0519e90310e572bee1cee8e72bade531605ddd14942 |
Bug Fixes and Memory Optimizations
Fixed in this release
Other
Dashboard widgets now display an error if data is not compatible with the widget
Several improvements to memory handling
Several improvements to query error handling
Elastic Bulk API change
Known Issues
Other
Fluent Bit users might need to change the Fluent Bit configuration. To ensure compatibility with the newest Beats clients, the Elastic Bulk API has been changed to always return the full set of status information for all operations, as it is done in the official Elastic API.
This can however cause problems when using Fluent Bit to ingest data into Humio.
Fluent Bit in default configuration uses a small buffer (4KB) for responses from the Elastic Bulk API, which causes problems when enough operations are bulked together. The response will then be larger than the response buffer as it contains the status for each individual operation. Make sure the response buffer is large enough, otherwise Fluent Bit will stop shipping data. See: https://github.com/fluent/fluent-bit/issues/2156 and https://docs.fluentbit.io/manual/pipeline/outputs/elasticsearch
Humio Server 1.11.0 GA (2020-05-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.11.0 | GA | 2020-05-19 | Cloud | 2021-06-30 | No | 1.10.0 | Yes |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 70406b0710c9c999ec5293357e03b05a |
SHA1 | f50294f5d0267d582d7207f7df33e16eee2744af |
SHA256 | c1a79a8f1ca41e14158992ba5dfff0a05ce39292b5ed5bfa8d5507973d054fdb |
SHA512 | f0f210d24aac849457839588202d662294911bf095e0f5a791093c61c435969b300f0a303d457a0e8cdbb13d742a81b3d09c6e11fad28f927ee9782153d05fc8 |
Export to Bucket, findTimestamp(), selfJoin(), Emergency User Sub-System
The selfJoin() query function allows selecting log lines that share an identifier and for which there exist (separate) log lines matching certain filtering criteria, such as "all log lines with a given userid for which there exists both a successful and an unsuccessful login".
The findTimestamp() query function tries to find and parse timestamps in incoming data. The function is intended for use in parsers and supports automatic detection of timestamps. It can be used instead of writing regular expressions that specify where to find the timestamp and parsing it with parseTimestamp(). See the findTimestamp() reference page for details.
As an alternative to downloading streaming queries directly, Humio can now upload them to an S3 or GCS bucket from which the user can download the data. See Bucket Storage.
If there are issues with the identity provider that Humio is configured to use, it might not be possible to log in to Humio. To mitigate this, Humio now provides emergency users that can be created locally within the Humio cluster. See Enabling Emergency Access.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Other
Allow for emergency logins if the primary login provider is having problems. See Enabling Emergency Access.
Fixed in this release
Configuration
New configuration
MAX_CHARS_TO_FIND_TIMESTAMP
. Default value should work for most deployments. See Configuration Settings.
Dashboards and Widgets
Gauge widget now works for arbitrary numbers, not only for aggregated numbers.
Functions
Query function
unit:convert()
Query function
findTimestamp()
Query function
selfJoin()
Query function
formatDuration()
Query function
selfJoinFilter()
Other
New built-in parser zeek-json.
Humio Server 1.10.9 LTS (2020-08-05)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.10.9 | LTS | 2020-08-05 | Cloud | 2021-04-30 | No | 1.10.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | a9fc2ea12707d4b1c581151e0026f1b9 |
SHA1 | d4f374af166c4138a1c5ef715ae7e6299fc4a51f |
SHA256 | b2f676c80a61a444b799295f6ca5e3284209aa1d4611182503e78c4b4ca64623 |
SHA512 | 146baf45e4181694b327849665771cc859a4757e32e0b982990da80171f144b12a01a47fcba2bdb728a4c10fede2a1ff0ff25c7b5bbf8be3a80caf6a982af9e2 |
These notes include entries from the following previous releases: 1.10.0, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.10.5, 1.10.6, 1.10.7, 1.10.8
Security Fix
Fixed in this release
Summary
A couple of memory leaks have been found and fixed.
Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers
New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.
S3Archiving could write events twice in a special case (when a merge happens where all inputs have been archived, it is now recorded in global that the merge result was archived too).
Fixed an issue where a query could get a "Result is partial" warning when the query took more than 15 minutes to complete while a merge of segments addressed by the query happened in the background
Better sorting when computing query prefixes in order to reuse queries.
This release fixes a security issue. For more information see: Security Disclosures
Improvements made to speed of frontpage loading. Noticeable for customers with many repositories and groups.
New internal jobs logging system stats: Search for
#type=humio | NonSensitive | groupby(kind)
to see them.
Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL
Autocreate users on login when synchronizing groups with external provider.
Fixed an issue that prevented deletion of unused objects in bucket storage, if the bucket contained millions of objects or more
Bucket storage in GCP did not clean up all tmp files
An issue could result in malformed messages being put into the ingest queue. This version is able to read and skip such messages. The issue causing such malformed messages has been fixed.
This release fixes a security issue. For more information see: Security Disclosures
Fixed an issue where long-running queries started as part of an export, or by calls to the /query API, would time out
Thread pools have been reorganized to require fewer threads and threads have been given new names.
Added paging to the UI under Administration / Users & Permissions.
Support for a new storage format for segment files that will be introduced in a later release (to support rollback)
Memory requirements set using -XX:MaxDirectMemorySize are much lower now. Suggested value is ((#vCpu+3)/4) GB.
Improved query scheduling on machines with many cores. This can improve search speeds significantly.
Fixed a number of issues with export and alerts in the humio-search-all repository.
Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.
This release fixes a security issue. For more information see: Security Disclosures
Fixed humio-search-all repository interaction with alerts and notifiers.
Fixed a number of bugs that would cause long-running queries using join, selfJoin or selfJoinFilter to time out or fail
Other
Dealing with missing data points in timecharts
Add Role Based Access Control (RBAC) to the Humio UI
New line interpolation options
Support for controlling color and title in widgets
Several improvements to Query Functions
NetFlow support extended to also support IPFIX.
Added Humio Health Check APIs
Time Chart series roll-up
Linear interpolation now default. New interpolation type: Basis
Replaced the chart library with Vega; this can be disabled using the ENABLE_VEGA_CHARTS=false flag.
Control widget styling directly from dashboards
Chart styling support (Pie, Bar)
Humio Server 1.10.8 LTS (2020-08-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.10.8 | LTS | 2020-08-04 | Cloud | 2021-04-30 | No | 1.10.0 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | ebc006bf64fc1f310f6c29a8854da2f7 |
SHA1 | 1461cd0e70f1ddb30b316cf9d0010a0840a11241 |
SHA256 | 7432902332b1a87f6899870e3bd47a18077a04980a63427896a0eac204300e2d |
SHA512 | c63714a10b7d06a869ab47bf4a84c31e88656f7e78c14281fdc9b78dc6139fc94b08f02dfd810352844bc0662838790227c18877a518b11cf67ad39c12e7ca0b |
These notes include entries from the following previous releases: 1.10.0, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.10.5, 1.10.6, 1.10.7
Security Fix
Fixed in this release
Summary
A couple of memory leaks have been found and fixed.
Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers
New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.
S3Archiving could write events twice in a special case (when a merge happens where all inputs have been archived, it is now recorded in global that the merge result was archived too).
Fixed an issue where a query could get a "Result is partial" warning when the query took more than 15 minutes to complete while a merge of segments addressed by the query happened in the background
Better sorting when computing query prefixes in order to reuse queries.
This release fixes a security issue. For more information see: Security Disclosures
Improvements made to speed of frontpage loading. Noticeable for customers with many repositories and groups.
New internal jobs logging system stats: Search for
#type=humio | NonSensitive | groupby(kind)
to see them.
Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL
Autocreate users on login when synchronizing groups with external provider.
Fixed an issue that prevented deletion of unused objects in bucket storage, if the bucket contained millions of objects or more
Bucket storage in GCP did not clean up all tmp files
An issue could result in malformed messages being put into the ingest queue. This version is able to read and skip such messages. The issue causing such malformed messages has been fixed.
Fixed an issue where long-running queries started as part of an export, or by calls to the /query API, would time out
Thread pools have been reorganized to require fewer threads and threads have been given new names.
Added paging to the UI under Administration / Users & Permissions.
Support for a new storage format for segment files that will be introduced in a later release (to support rollback)
Memory requirements set using -XX:MaxDirectMemorySize are much lower now. Suggested value is ((#vCpu+3)/4) GB.
Improved query scheduling on machines with many cores. This can improve search speeds significantly.
Fixed a number of issues with export and alerts in the humio-search-all repository.
Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.
This release fixes a security issue. For more information see: Security Disclosures
Fixed humio-search-all repository interaction with alerts and notifiers.
Fixed a number of bugs that would cause long-running queries using join, selfJoin or selfJoinFilter to time out or fail
Other
Dealing with missing data points in timecharts
Add Role Based Access Control (RBAC) to the Humio UI
New line interpolation options
Support for controlling color and title in widgets
Several improvements to Query Functions
NetFlow support extended to also support IPFIX.
Added Humio Health Check APIs
Time Chart series roll-up
Linear interpolation now default. New interpolation type: Basis
Replaced the chart library with Vega; this can be disabled using the ENABLE_VEGA_CHARTS=false flag.
Control widget styling directly from dashboards
Chart styling support (Pie, Bar)
Humio Server 1.10.7 LTS (2020-07-03)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.10.7 | LTS | 2020-07-03 | Cloud | 2021-04-30 | No | 1.8.5 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 36d8bcdffeb41720006c2d07ae38669a |
SHA1 | 83fda60fe22239101cf1e81fd41bec04071f5ae5 |
SHA256 | 4aa66055218f4f4d7ad65a7476e7f8b6ddd57f8187472e80e2bef1506aa3484f |
SHA512 | c34ae669043a70d4ebc760365bcb82477fbbbaadb0f3229083732b1549e646444ffbb42d2e1285b9f0ca8e1213e9fef42a14d587b66ba3adf883fedc8ba09c72 |
These notes include entries from the following previous releases: 1.10.0, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.10.5, 1.10.6
Bug Fixes and Improved Search Speeds for Many-Core Systems
Fixed in this release
Summary
A couple of memory leaks have been found and fixed.
Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers
New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.
S3Archiving could write events twice in a special case (when a merge happens where all inputs have been archived, it is now recorded in global that the merge result was archived too).
Fixed an issue where a query could get a "Result is partial" warning when the query took more than 15 minutes to complete while a merge of segments addressed by the query happened in the background
Better sorting when computing query prefixes in order to reuse queries.
This release fixes a security issue. For more information see: Security Disclosures
Improvements made to speed of frontpage loading. Noticeable for customers with many repositories and groups.
New internal jobs logging system stats: Search for
#type=humio | NonSensitive | groupby(kind)
to see them.
Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL
Autocreate users on login when synchronizing groups with external provider.
Fixed an issue that prevented deletion of unused objects in bucket storage, if the bucket contained millions of objects or more
Bucket storage in GCP did not clean up all tmp files
An issue could result in malformed messages being put into the ingest queue. This version is able to read and skip such messages. The issue causing such malformed messages has been fixed.
Fixed an issue where long-running queries started as part of an export, or by calls to the /query API, would time out
Thread pools have been reorganized to require fewer threads and threads have been given new names.
Added paging to the UI under Administration / Users & Permissions.
Support for a new storage format for segment files that will be introduced in a later release (to support rollback)
Memory requirements set using -XX:MaxDirectMemorySize are much lower now. Suggested value is ((#vCpu+3)/4) GB.
Improved query scheduling on machines with many cores. This can improve search speeds significantly.
Fixed a number of issues with export and alerts in the humio-search-all repository.
Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.
Fixed humio-search-all repository interaction with alerts and notifiers.
Fixed a number of bugs that would cause long-running queries using join, selfJoin or selfJoinFilter to time out or fail
Other
Dealing with missing data points in timecharts
Add Role Based Access Control (RBAC) to the Humio UI
New line interpolation options
Support for controlling color and title in widgets
Several improvements to Query Functions
NetFlow support extended to also support IPFIX.
Added Humio Health Check APIs
Time Chart series roll-up
Linear interpolation now default. New interpolation type: Basis
Replaced the chart library with Vega; this can be disabled using the ENABLE_VEGA_CHARTS=false flag.
Control widget styling directly from dashboards
Chart styling support (Pie, Bar)
Humio Server 1.10.6 LTS (2020-06-24)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.10.6 | LTS | 2020-06-24 | Cloud | 2021-04-30 | No | 1.8.5 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | ff8b12b453b9e600ac6a5f917a799f8b |
SHA1 | c6bf4610bc284c7838d92e0c110461349511afeb |
SHA256 | 04503cb6691db596b5237e65e9f639013caab3010c753796392d1d59a9e06413 |
SHA512 | d6a8c3c898c4d93ac8dd564ec0394a5d5db4d06711bd498e38ce58a2b026bd8d6e1f2b664e30de92eb60afb7b452b34b4c17e0e6370331afdcc2030407f74674 |
These notes include entries from the following previous releases: 1.10.0, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.10.5
Bug Fixes: Safari Freeze, SAML and Bucket Storage Clean-Up
Fixed in this release
Summary
A couple of memory leaks have been found and fixed.
Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers
New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.
Fixed an issue where a query could get a "Result is partial" warning when the query took more than 15 minutes to complete while a merge of segments addressed by the query happened in the background
Better sorting when computing query prefixes in order to reuse queries.
This release fixes a security issue. For more information see: Security Disclosures
Improvements made to speed of frontpage loading. Noticeable for customers with many repositories and groups.
New internal jobs logging system stats: Search for
#type=humio | NonSensitive | groupby(kind)
to see them.
Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL
Autocreate users on login when synchronizing groups with external provider.
Fixed an issue that prevented deletion of unused objects in bucket storage, if the bucket contained millions of objects or more
An issue could result in malformed messages being put into the ingest queue. This version is able to read and skip such messages. The issue causing such malformed messages has been fixed.
Fixed an issue where long-running queries started as part of an export, or by calls to the /query API, would time out
Thread pools have been reorganized to require fewer threads and threads have been given new names.
Added paging to the UI under Administration / Users & Permissions.
Memory requirements set using -XX:MaxDirectMemorySize are much lower now. Suggested value is ((#vCpu+3)/4) GB.
Fixed a number of issues with export and alerts in the humio-search-all repository.
Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.
Fixed humio-search-all repository interaction with alerts and notifiers.
Fixed a number of bugs that would cause long-running queries using join, selfJoin or selfJoinFilter to time out or fail
Other
Dealing with missing data points in timecharts
Add Role Based Access Control (RBAC) to the Humio UI
New line interpolation options
Support for controlling color and title in widgets
Several improvements to Query Functions
NetFlow support extended to also support IPFIX.
Added Humio Health Check APIs
Time Chart series roll-up
Linear interpolation now default. New interpolation type: Basis
Replaced the chart library with Vega; this can be disabled using the ENABLE_VEGA_CHARTS=false flag.
Control widget styling directly from dashboards
Chart styling support (Pie, Bar)
Humio Server 1.10.5 LTS (2020-06-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.10.5 | LTS | 2020-06-09 | Cloud | 2021-04-30 | No | 1.8.5 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | e26c8dcab38cf8ae70146af0c20de708 |
SHA1 | 38b4cd3f334845ebdc91d02b9f24cab628290805 |
SHA256 | 3234edd2ec5a01b67c07f8d87d57a967fb37a2e6b29a34317545a37796650a83 |
SHA512 | 281bfc2da26ef9f14c123c695943149499f43f7c67e20ff2150420bed7e745a7f6e2483b6073a366e453589f657a3717031071e6374b124657cb5b8c77542255 |
These notes include entries from the following previous releases: 1.10.0, 1.10.1, 1.10.2, 1.10.3, 1.10.4
Bug Fixes: humio-search-all and Query Timeouts
Fixed in this release
Summary
A couple of memory leaks have been found and fixed.
New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.
Fixed an issue where a query could get a "Result is partial" warning when the query took more than 15 minutes to complete while a merge of segments addressed by the query happened in the background
Better sorting when computing query prefixes in order to reuse queries.
This release fixes a security issue. For more information see: Security Disclosures
Improvements made to speed of frontpage loading. Noticeable for customers with many repositories and groups.
New internal jobs logging system stats: Search for
#type=humio | NonSensitive | groupby(kind)
to see them.
Autocreate users on login when synchronizing groups with external provider.
An issue could result in malformed messages being put into the ingest queue. This version is able to read and skip such messages. The issue causing such malformed messages has been fixed.
Fixed an issue where long-running queries started as part of an export, or by calls to the /query API, would time out
Thread pools have been reorganized to require fewer threads and threads have been given new names.
Added paging to the UI under Administration / Users & Permissions.
Memory requirements set using -XX:MaxDirectMemorySize are much lower now. Suggested value is ((#vCpu+3)/4) GB.
Fixed a number of issues with export and alerts in the humio-search-all repository.
Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.
Fixed humio-search-all repository interaction with alerts and notifiers.
Fixed a number of bugs that would cause long-running queries using join, selfJoin or selfJoinFilter to time out or fail
Other
Dealing with missing data points in timecharts
Add Role Based Access Control (RBAC) to the Humio UI
New line interpolation options
Support for controlling color and title in widgets
Several improvements to Query Functions
NetFlow support extended to also support IPFIX.
Added Humio Health Check APIs
Time Chart series roll-up
Linear interpolation now default. New interpolation type: Basis
Replaced the chart library with Vega; this can be disabled using the ENABLE_VEGA_CHARTS=false flag.
Control widget styling directly from dashboards
Chart styling support (Pie, Bar)
Humio Server 1.10.4 LTS (2020-05-29)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.10.4 | LTS | 2020-05-29 | Cloud | 2021-04-30 | No | 1.8.5 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 8fd134e50a88aa016037aff59bca7f70 |
SHA1 | 184f4c0d1b477eef81357e7efcbe2cafb2920fef |
SHA256 | 3398a9e1cc4f3e2f70e952f0c6481fe3c31f447e68d1589904ee26e614cb9a30 |
SHA512 | fab832e462afeaed828a55bc7a6b3d637317ace396ff0751fbf1813d4f0ce01f8cf11e4d3ed6cb6eda898e5521739de766b8ade609fcc98ca774a5d079f76509 |
These notes include entries from the following previous releases: 1.10.0, 1.10.1, 1.10.2, 1.10.3
Bug Fixes for Long-Running Queries
Fixed in this release
Summary
A couple of memory leaks have been found and fixed.
New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.
Better sorting when computing query prefixes in order to reuse queries.
This release fixes a security issue. For more information see: Security Disclosures
Improvements made to speed of frontpage loading. Noticeable for customers with many repositories and groups.
New internal jobs logging system stats: search for #type=humio | NonSensitive | groupby(kind) to see them.
Autocreate users on login when synchronizing groups with an external provider.
An issue could result in malformed messages being put into the ingest queue. This version is able to read and skip such messages. The issue causing such malformed messages has been fixed.
Thread pools have been reorganized to require fewer threads and threads have been given new names.
Added paging in the UI under Administration / Users & Permissions.
Memory requirements set using -XX:MaxDirectMemorySize are now much lower. Suggested value is ((#vCpu+3)/4) GB.
Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.
Fixed Humio search-all repo interaction with alerts and notifiers.
Fixed a number of bugs that would cause long-running queries using join, selfJoin, or selfJoinFilter to time out or fail.
Other
Dealing with missing data points in timecharts
Add Role Based Access Control (RBAC) to the Humio UI
New line interpolation options
Support for controlling color and title in widgets
Several improvements to Query Functions
NetFlow support extended to also support IPFIX.
Added Humio Health Check APIs
Time Chart series roll-up
Linear interpolation is now the default. New interpolation type: Basis
Replaced the chart library with Vega; the new charts can be disabled using the ENABLE_VEGA_CHARTS=false flag.
Control widget styling directly from dashboards
Chart styling support (Pie, Bar)
Humio Server 1.10.3 LTS (2020-05-20)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.10.3 | LTS | 2020-05-20 | Cloud | 2021-04-30 | No | 1.8.5 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | b534f6427bbc73720246c88df097d962 |
SHA1 | 4fd7b762eefcd241c14a0dcc74642ab5f96066a3 |
SHA256 | 03fbcdfffbd4e730ab6ac97ce1e0c88dfa2b2c544de01767647ac9fc1a606c97 |
SHA512 | cc1cd46e9e38cf9215e82da0246ba7cfd27b50becbe9b9a17c0c3c88427649b93339b6cffc6780bc241c2bd152d8f781b7f6d421dcd641def3cef66a8290841f |
These notes include entries from the following previous releases: 1.10.0, 1.10.1, 1.10.2
Bug Fixes
Fixed in this release
Summary
A couple of memory leaks have been found and fixed.
New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.
Better sorting when computing query prefixes in order to reuse queries.
This release fixes a security issue. For more information see: Security Disclosures
Improvements made to speed of frontpage loading. Noticeable for customers with many repositories and groups.
New internal jobs logging system stats: search for #type=humio | NonSensitive | groupby(kind) to see them.
Autocreate users on login when synchronizing groups with an external provider.
An issue could result in malformed messages being put into the ingest queue. This version is able to read and skip such messages. The issue causing such malformed messages has been fixed.
Thread pools have been reorganized to require fewer threads and threads have been given new names.
Added paging in the UI under Administration / Users & Permissions.
Memory requirements set using -XX:MaxDirectMemorySize are now much lower. Suggested value is ((#vCpu+3)/4) GB.
Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.
Fixed Humio search-all repo interaction with alerts and notifiers.
Other
Dealing with missing data points in timecharts
Add Role Based Access Control (RBAC) to the Humio UI
New line interpolation options
Support for controlling color and title in widgets
Several improvements to Query Functions
NetFlow support extended to also support IPFIX.
Added Humio Health Check APIs
Time Chart series roll-up
Linear interpolation is now the default. New interpolation type: Basis
Replaced the chart library with Vega; the new charts can be disabled using the ENABLE_VEGA_CHARTS=false flag.
Control widget styling directly from dashboards
Chart styling support (Pie, Bar)
Humio Server 1.10.2 LTS (2020-05-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.10.2 | LTS | 2020-05-19 | Cloud | 2021-04-30 | No | 1.8.5 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 34618e12957e3394f06dd86d7b283bb7 |
SHA1 | b9e75509bbec4093019a475cc3660c28295bd1c4 |
SHA256 | c4868bd4067833744cceaa44dac29a5af2b450d112c34f3dfd8d46906cf115f8 |
SHA512 | 5789415e8c6d748c7a724f702606e96ba48f7595517787d76b7e7581766e23c1de7af1eec5b66cc22d73241d61fb4866f3b12a88d73ee6ca453c8bab255fb331 |
These notes include entries from the following previous releases: 1.10.0, 1.10.1
Optimizations, Improved Humio Health Insights and Bug Fixes
Fixed in this release
Summary
A couple of memory leaks have been found and fixed.
New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.
Better sorting when computing query prefixes in order to reuse queries.
This release fixes a security issue. For more information see: Security Disclosures
Improvements made to speed of frontpage loading. Noticeable for customers with many repositories and groups.
New internal jobs logging system stats: search for #type=humio | NonSensitive | groupby(kind) to see them.
Autocreate users on login when synchronizing groups with an external provider.
Thread pools have been reorganized to require fewer threads and threads have been given new names.
Added paging in the UI under Administration / Users & Permissions.
Memory requirements set using -XX:MaxDirectMemorySize are now much lower. Suggested value is ((#vCpu+3)/4) GB.
Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.
Fixed Humio search-all repo interaction with alerts and notifiers.
Other
Dealing with missing data points in timecharts
Add Role Based Access Control (RBAC) to the Humio UI
New line interpolation options
Support for controlling color and title in widgets
Several improvements to Query Functions
NetFlow support extended to also support IPFIX.
Added Humio Health Check APIs
Time Chart series roll-up
Linear interpolation is now the default. New interpolation type: Basis
Replaced the chart library with Vega; the new charts can be disabled using the ENABLE_VEGA_CHARTS=false flag.
Control widget styling directly from dashboards
Chart styling support (Pie, Bar)
Humio Server 1.10.1 LTS (2020-05-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.10.1 | LTS | 2020-05-04 | Cloud | 2021-04-30 | No | 1.8.5 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | abca8999c6542ff0a4d60b897eb5acf3 |
SHA1 | 176e416e63d13200dee21a509fa9ae2401b8a0ca |
SHA256 | 8a1a419c3bd664a006918ee0b957ed2f1fbdea20dda2ea8114ab4db7b039425a |
SHA512 | 93a8d7a1c86e646f9e7fb26f8feeb05f5bf22774cd69e2b0ac113c48f7736242554ddeb0065ff19ab9723546aa75a064af4e7f7779a8d2efa91651aa7f61f630 |
These notes include entries from the following previous releases: 1.10.0
Optimizations, Improved Humio Health Insights and Bug Fixes
Fixed in this release
Summary
New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.
This release fixes a security issue. For more information see: Security Disclosures
New internal jobs logging system stats: search for #type=humio | NonSensitive | groupby(kind) to see them.
Thread pools have been reorganized to require fewer threads and threads have been given new names.
Memory requirements set using -XX:MaxDirectMemorySize are now much lower. Suggested value is ((#vCpu+3)/4) GB.
Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.
Other
Dealing with missing data points in timecharts
Add Role Based Access Control (RBAC) to the Humio UI
New line interpolation options
Support for controlling color and title in widgets
Several improvements to Query Functions
NetFlow support extended to also support IPFIX.
Added Humio Health Check APIs
Time Chart series roll-up
Linear interpolation is now the default. New interpolation type: Basis
Replaced the chart library with Vega; the new charts can be disabled using the ENABLE_VEGA_CHARTS=false flag.
Control widget styling directly from dashboards
Chart styling support (Pie, Bar)
Humio Server 1.10.0 LTS (2020-04-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.10.0 | LTS | 2020-04-27 | Cloud | 2021-04-30 | No | 1.8.5 | Yes |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 4f1ed6fa41cb1f96c7a68b4242ff7280 |
SHA1 | 5e334e72d7a2e59ece26062007fd3baa6361cfb6 |
SHA256 | 36245d9e2d93650455dd2e69f7ad4b48f625c482334d9742674a9e4edfed17d4 |
SHA512 | fce509692d77f483a919e5e5abbe82b5c58edbeec05afd25add03cb5500603e741ef66fdb886b85bfe3db535d3ce41d43e57bbb31e72c5cd4517bfb7321c941e |
UI for Role Based Access Control (RBAC), Health Check API, Kafka Version Update, Vega Charts. This release promotes the 1.9 releases from preview to stable. This release is identical to 1.9.3 apart from the version string. To see more details go through the individual 1.9.x release notes (links in the changelog).
This release fixes a number of security issues. For more information see: Security Disclosures.
Updated Humio to use Kafka 2.4. Humio can still use versions of Kafka down through 1.1. Be aware that updating Kafka also requires you to update ZooKeeper to 3.5.6. There is a migration involved in updating ZooKeeper. See the ZooKeeper migration FAQ here. Use the migration approach using an empty snapshot. The other proposed solution can lose data.
Updated Kafka and ZooKeeper Docker images to use Kafka 2.4. Updating to Kafka 2.4 should be straightforward using Humio's Kafka/ZooKeeper Docker images. ZooKeeper image will handle migration. Stop all Kafka nodes. Stop all ZooKeeper nodes. Start all ZooKeeper nodes on the new version. Start all Kafka nodes on the new version. Before updating Kafka/ZooKeeper, we recommend backing up the ZooKeeper data directory. Then, add the ZooKeeper properties described below. If you are deploying Kafka/ZooKeeper using other tools, for example Ansible scripts, be aware there is a migration involved in updating ZooKeeper.
When updating Kafka/ZooKeeper we recommend setting these ZooKeeper properties
# Do not start the new admin server. Default port 8080 conflicts with Humio port.
admin.enableServer=false
# purge old snapshot files
autopurge.purgeInterval=1
# Allow 4 letter commands. Used by Humio to get info about the ZooKeeper cluster
4lw.commands.whitelist=*
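As a sketch, the three recommended properties can be appended to the ZooKeeper configuration like this (assuming a zoo.cfg in the current directory; adjust the path for your installation):

```shell
# Append the recommended Humio-related ZooKeeper properties to zoo.cfg.
cat >> zoo.cfg <<'EOF'
# Do not start the new admin server; default port 8080 conflicts with Humio.
admin.enableServer=false
# Purge old snapshot files.
autopurge.purgeInterval=1
# Allow 4-letter commands, used by Humio to inspect the ZooKeeper cluster.
4lw.commands.whitelist=*
EOF
```

Remember to restart the ZooKeeper nodes for the properties to take effect.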
Fixed in this release
Other
Dealing with missing data points in timecharts
Add Role Based Access Control (RBAC) to the Humio UI
New line interpolation options
Support for controlling color and title in widgets
Several improvements to Query Functions
NetFlow support extended to also support IPFIX.
Added Humio Health Check APIs
Time Chart series roll-up
Linear interpolation is now the default. New interpolation type: Basis
Replaced the chart library with Vega; the new charts can be disabled using the ENABLE_VEGA_CHARTS=false flag.
Control widget styling directly from dashboards
Chart styling support (Pie, Bar)
Humio Server 1.9.3 GA (2020-04-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.9.3 | GA | 2020-04-22 | Cloud | 2021-04-30 | No | 1.8.5 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | e2a84e14721946391ec9dddf66412723 |
SHA1 | 4b439142bf46d49c9e1b9c9685a2beacb921ef18 |
SHA256 | 646c3069015b1516dae2ef577d827a30d0deb62a393360fead05074a37edc670 |
SHA512 | bf673036d37ec539d1abeb04b9388e87f8c6ddfc781602d67bb7e28a5beda6195b827bba61bc9f769836f4a764330d93a33ae4788700758ca23cd7f8ecb88ae4 |
Security Fixes, Bug Fixes, and timeChart()
improvements
A few security vulnerabilities have been discovered as part of a proactive penetration test. None are known to have been exploited. More information will be forthcoming.
Fixed in this release
Functions
More efficient collect() function implementation.
New query function fieldstats().
Other
New Time Chart interpolation options.
New options for dealing with missing data in Time Charts.
Improve disk space monitoring when using bucket storage.
Fixed api-explorer not working due to a CSP inline script.
The query metric only measured time for streaming queries; now it includes non-streaming queries as well.
The segment queue length metric was not correct when segments got fetched from bucket storage by a query.
If at startup the global-snapshot.json file is missing, then try loading the ".1" backup copy.
Improves responsiveness of the recent queries dropdown, and limits the number of stored recent queries per user per repository.
Allow dots in tagged field names.
Styling improvements in the "Style" panel for widgets.
Security: [critical] Fixed more security vulnerabilities discovered through proactive penetration testing (more information will be forthcoming).
Allow more concurrent processing to take place in "export" query processing.
Improvement
Dashboards and Widgets
Deal with Missing Data Points in Timecharts
This release improves the handling of missing data points in time charts. Previously you could either interpolate missing data points based on the surrounding data, or leave gaps in the charts. With the introduction of the new charts in 1.9.0 the gaps became more apparent than before, so we have added new ways to deal with missing data points. These replace the previous option "Allow Gaps" with four new options:
Do Nothing - This will leave gaps in your data
Linear Interpolation - Impute values using linear interpolation based on the nearest known data points.
Replace by Mean Value - Replace missing values with the mean value of the series.
Replace by Zero - Replace missing values with zeros.
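The Linear Interpolation option imputes a missing value from its two nearest known neighbors. As a standalone sketch of the math (hypothetical values, not Humio code):

```shell
# Impute y at x=2 from known neighbors (x0,y0)=(1,10) and (x1,y1)=(3,30):
#   y = y0 + (x - x0) * (y1 - y0) / (x1 - x0)
awk 'BEGIN {
  x0 = 1; y0 = 10
  x1 = 3; y1 = 30
  x  = 2
  print y0 + (x - x0) * (y1 - y0) / (x1 - x0)   # prints 20
}'
```

With evenly spaced buckets this reduces to the midpoint of the two neighboring values.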
The release also introduces new options for line interpolation.
Monotone
Natural
Cardinal
Catmull-Rom
Bundle
The latter three are impacted by the 'tension' setting in the timechart Style editor.
Humio Server 1.9.2 GA (2020-03-25)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.9.2 | GA | 2020-03-25 | Cloud | 2021-04-30 | No | 1.8.5 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | f457859662db92bd00c282aa3daa1393 |
SHA1 | a7d40e09f6a2e09a90fd2e3a2fc53618ac96a57f |
SHA256 | b33d955d51ddbc2b067b4539f1a447f4e6268e3944ead92dd7e03e56e754d95f |
SHA512 | e24b7f6b31c42b6e2b138eaa23bc6a1760dfff18b53bed6c16d1dcca79c31aabf01bbe913eee7cbe3629f410491754af77415bee14a81220099a7c4c8d1bb316 |
Security Fix and Bug Fixes
Fixed in this release
Summary
Added API to list and delete missing segments from global.
Security: [critical] Fixed a security vulnerability discovered through proactive penetration testing (more information will be forthcoming).
Humio Server 1.9.1 GA (2020-03-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.9.1 | GA | 2020-03-23 | Cloud | 2021-04-30 | No | 1.8.5 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | a3fb1c5f2be309f2fca630f151522bc9 |
SHA1 | c9bee3ec060d2b0cc75fba8fd085769c66224dcb |
SHA256 | 4ae4ed141342c8daa193d61351919832bf63f07b02b103c2d97eac18888b61e7 |
SHA512 | 6e43adfff7e5bf9a6078060ec2660fb23e799b97b45b3f895eacf95bf344fc4b5acef9562018db133460ff97f8bd7dd883d1c9284b6e34cd1462a257597e37b9 |
Security Fix and Bug Fixes
Fixed in this release
Summary
This is a critical update. Self-hosted systems with access for non-trusted users should upgrade immediately. We will follow up with more details when this update has been rolled out.
The failed-http-status-check health check could get stuck in a warn state; this has now been fixed.
Humio Server 1.9.0 GA (2020-03-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.9.0 | GA | 2020-03-12 | Cloud | 2021-04-30 | No | 1.8.5 | Yes |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 81a34e3166d583b2a3479b7a4d82f64c |
SHA1 | 3f6646fc133c955d2c51977081da118f5b9b3fac |
SHA256 | f0877cc533ed58c3ca9630989b6848c79ba69f5217ee2dc6278909d4e586d076 |
SHA512 | e932eacb43d25cd10a95ad59ce07a0e7b68e7859e6b5ccf1d463cb9a23ddbf8cd0d186776a538a59ab3dcd57f1048f8dc9e6fe020ca907334f195ea22f3acdf8 |
UI for Role Based Access Control (RBAC), Health Check API, Kafka Version Update, Vega Charts
Fixed in this release
Summary
Now, you can click Edit Styling in the widget menu and modify styling directly from the dashboard view.
Improved (reduced) memory consumption for live groupby, and for groupby involving many distinct keys.
Since charts are such a central feature, we allow disabling the new implementation of widgets if you are experiencing issues with them. You can disable Vega charts globally using the ENABLE_VEGA_CHARTS=false flag.
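As an example, the flag is set as an environment variable for the Humio process (a sketch; how you pass environment variables depends on your deployment):

```shell
# Disable the new Vega-based charts cluster-wide and fall back to the
# previous chart implementation.
export ENABLE_VEGA_CHARTS=false
```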
This version replaces our chart library with Vega. The goal is to create a better, customizable, and more interactive charting experience in Humio. This first iteration is largely a feature replacement for the existing functionality, with a few exceptions:
You can now style your pie charts, and they will default to having a center radius (actually making them donuts!).
You can now style your bar charts to control things like label position and colors.
Queries involving join can now be used with export to file and the /query HTTP endpoint.
Role Based Access Control (RBAC) through the UI is now the only permission model in Humio. Please see the Manage users & permissions documentation for more information.
To prevent the charts from getting cluttered, you can adjust the maximum number of series that should be shown in the chart. Any series that are not part of the top-most series will be summed together and added to a new series called Other.
Be aware that updating Kafka also requires you to update ZooKeeper to 3.5.6. There is a migration involved in updating ZooKeeper. See the ZooKeeper migration FAQ here. Use the migration approach using an empty snapshot. The other proposed solution can lose data.
Humio's NetFlow support has been extended to also support IPFIX. See Humio's documentation for NetFlow Log Format.
Each chart type now supports assigning colors to specific series. This will allow you to, for instance, assign red to errors and green to non-errors.
Updated Kafka and ZooKeeper Docker images to use Kafka 2.4. Updating to Kafka 2.4 should be straightforward using Humio's Kafka/ZooKeeper Docker images. ZooKeeper image will handle migration. Stop all Kafka nodes. Stop all ZooKeeper nodes. Start all ZooKeeper nodes on the new version. Start all Kafka nodes on the new version. Before updating Kafka/ZooKeeper, we recommend backing up the ZooKeeper data directory. Then, add the ZooKeeper properties described below. If you are deploying Kafka/ZooKeeper using other tools, for example Ansible scripts, be aware there is a migration involved in updating ZooKeeper.
Linear interpolation is now the default, and we have added a new type of interpolation: Basis.
When updating Kafka/ZooKeeper we recommend setting these ZooKeeper properties
# Do not start the new admin server. Default port 8080 conflicts with Humio port.
admin.enableServer=false
# purge old snapshot files
autopurge.purgeInterval=1
# Allow 4 letter commands. Used by Humio to get info about the ZooKeeper cluster
4lw.commands.whitelist=*
You can find the series configuration controls in the Style tab of the Search page.
The overall health of a Humio system is determined by a set of individual health checks. For more information about individual checks see the Health Checks page and the Health Check API page.
Updated Humio to use Kafka 2.4. Humio can still use versions of Kafka down through 1.1.
Functions
New caseSensitive option added to the parseTimestamp() query function.
New selectLast() function, which is like select() but is an aggregate function.
Humio Server 1.8.9 LTS (2020-03-25)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.8.9 | LTS | 2020-03-25 | Cloud | 2021-01-31 | No | 1.6.10 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | f366932ccffe66883cbf84fc1ab7c9e0 |
SHA1 | 8a37d7255ec24d2c3fe85b7fcdcf5fe9020746d9 |
SHA256 | fc42241e1f7dd2f928ddcf145eee85906764c004b765bf3509a65a33817a0f3a |
SHA512 | 9c5d08fd844f1058b036c2e588120ac70c3d2abe5e98eddedbdb85f6674d57acf4f0425fc66d8b175953f98c312ed44a5943333083a4a6c45ecb4e7014ce2027 |
These notes include entries from the following previous releases: 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.5, 1.8.6, 1.8.7, 1.8.8
Security Fixes
Fixed in this release
Summary
TCP socket ingest listener would spend a lot of CPU when connected but not receiving any data.
"Export" queries could hit an internal limit and fail for large datasets.
Lower ingest queue timeout threshold from 90 to 30 seconds.
Major changes: (see 1.7.0 release notes)
Fix more scrolling issues in Chrome 80 and above.
When a node was missing for an extended period of time the remaining nodes would create smaller segment files than they should.
Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.
Fix scrolling issue in Chrome 80 on the Search Page.
Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)
New feature for ephemeral servers: let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option ZOOKEEPER_URL_FOR_NODE_UUID to the set of ZooKeepers to use for this. The option ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid_) sets the prefix to allow rack awareness. Note: do not turn this on for an existing cluster, and do not turn it on if running older 1.7.x or 1.8.x builds.
Avoid calling fallocate on platforms that do not support it (for example, ZFS).
The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.
Note: Do not install the Kafka, ZooKeeper or "single" Docker images of this build. Install 1.8.7 or later.
Alerts and exports now work on the special view "humio-search-all".
Fixed a race in upload of segment files for systems set up using ephemeral disks.
The Kafka and ZooKeeper images tagged with "1.8.6" were partially upgraded to Kafka 2.4.0.
Bucket storage download could report "download completed" also in case of problems fetching the file.
Fixed a security problem. This is a critical update. On-prem systems with access for non-trusted users should upgrade. We will follow up with more details when this update has been rolled out.
When a merge of segment files fails, delete the tmp-file that was created.
Assigning ingest tokens to parsers in sandbox repos.
The new feature for ephemeral servers using ZooKeeper to assign the UUID did not properly reconnect when the network failed.
Security: [critical] Fixed a security vulnerability discovered through proactive penetration testing (more information will be forthcoming).
Query Quotas limit the amount of resources a given user can spend. Besides those, there are a number of UI improvements, back-end improvements, and bug fixes.
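The ephemeral-server UUID feature described above is driven by two configuration options. A hypothetical sketch of setting them as environment variables (host names are placeholders; only the prefix value shown is the documented default):

```shell
# Let ZooKeeper assign the node UUID; point at your own ensemble.
export ZOOKEEPER_URL_FOR_NODE_UUID="zk1:2181,zk2:2181,zk3:2181"
# Prefix for the generated entries, allowing rack awareness (documented default).
export ZOOKEEPER_PREFIX_FOR_NODE_UUID="/humio_autouuid_"
```

As noted above, do not enable this on an existing cluster.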
Functions
join()
search function.
Humio Server 1.8.8 LTS (2020-03-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.8.8 | LTS | 2020-03-23 | Cloud | 2021-01-31 | No | 1.6.10 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 0c0359bdfe15cb2f8f14487038c8043e |
SHA1 | 994919268e463ed643ff4977d9f131873833717b |
SHA256 | 3918516569ff4a03f9ac1131658b8ac4046186f5f4d17333e0ec555227136f63 |
SHA512 | c12b2e7630d97b618a7552290b942b443017005e607fa819459b0979b0e05e553b0bdd61fc5d508ce766247269e711ad370df7acb755b4928b54f7e25790ef5d |
These notes include entries from the following previous releases: 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.5, 1.8.6, 1.8.7
Security Fixes
Fixed in this release
Summary
TCP socket ingest listener would spend a lot of CPU when connected but not receiving any data.
"Export" queries could hit an internal limit and fail for large datasets.
Lower ingest queue timeout threshold from 90 to 30 seconds.
Major changes: (see 1.7.0 release notes)
Fix more scrolling issues in Chrome 80 and above.
When a node was missing for an extended period of time the remaining nodes would create smaller segment files than they should.
Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.
Fix scrolling issue in Chrome 80 on the Search Page.
Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)
New feature for ephemeral servers: let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option ZOOKEEPER_URL_FOR_NODE_UUID to the set of ZooKeepers to use for this. The option ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid_) sets the prefix to allow rack awareness. Note: do not turn this on for an existing cluster, and do not turn it on if running older 1.7.x or 1.8.x builds.
Avoid calling fallocate on platforms that do not support it (for example, ZFS).
The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.
Note: Do not install the Kafka, ZooKeeper or "single" Docker images of this build. Install 1.8.7 or later.
Alerts and exports now work on the special view "humio-search-all".
Fixed a race in upload of segment files for systems set up using ephemeral disks.
The Kafka and ZooKeeper images tagged with "1.8.6" were partially upgraded to Kafka 2.4.0.
Bucket storage download could report "download completed" also in case of problems fetching the file.
Fixed a security problem. This is a critical update. On-prem systems with access for non-trusted users should upgrade. We will follow up with more details when this update has been rolled out.
When a merge of segment files fails, delete the tmp-file that was created.
Assigning ingest tokens to parsers in sandbox repos.
The new feature for ephemeral servers using ZooKeeper to assign the UUID did not properly reconnect when the network failed.
Query Quotas limit the amount of resources a given user can spend. Besides those, there are a number of UI improvements, back-end improvements, and bug fixes.
Functions
join()
search function.
Humio Server 1.8.7 LTS (2020-03-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.8.7 | LTS | 2020-03-12 | Cloud | 2021-01-31 | No | 1.6.10 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 7c6181f950bb99ca50c5b444dc072cd8 |
SHA1 | 0cf020232fb2b5621c288dded8d3d9d3626db521 |
SHA256 | 8c003212b861957529332ad8d62facf2796bdbe58a66107559eb691c44762dea |
SHA512 | c8e7acdd3812408fa87ad6afa29573c09a8b9ec7518b03a6f41e7caa8205f4e172f78fe7f0d146020cc68eb80b0519d1e11571675b340639e0a73aa666e0cf8f |
These notes include entries from the following previous releases: 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.5, 1.8.6
Bug Fixes
Fixed in this release
Summary
TCP socket ingest listener would spend a lot of CPU when connected but not receiving any data.
"Export" queries could hit an internal limit and fail for large datasets.
Lower ingest queue timeout threshold from 90 to 30 seconds.
Major changes: (see 1.7.0 release notes)
Fix more scrolling issues in Chrome 80 and above.
When a node was missing for an extended period of time the remaining nodes would create smaller segment files than they should.
Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.
Fix scrolling issue in Chrome 80 on the Search Page.
Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)
New feature for ephemeral servers: let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option ZOOKEEPER_URL_FOR_NODE_UUID to the set of ZooKeepers to use for this. The option ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid_) sets the prefix to allow rack awareness. Note: do not turn this on for an existing cluster, and do not turn it on if running older 1.7.x or 1.8.x builds.
Avoid calling fallocate on platforms that do not support it (for example, ZFS).
The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.
Note: Do not install the Kafka, ZooKeeper or "single" Docker images of this build. Install 1.8.7 or later.
Alerts and exports now work on the special view "humio-search-all".
Fixed a race in upload of segment files for systems set up using ephemeral disks.
The Kafka and ZooKeeper images tagged with "1.8.6" were partially upgraded to Kafka 2.4.0.
Bucket storage download could report "download completed" also in case of problems fetching the file.
When a merge of segment files fails, delete the tmp-file that was created.
Assigning ingest tokens to parsers in sandbox repos.
The new feature for ephemeral servers using ZooKeeper to assign the UUID did not properly reconnect when the network failed.
Query Quotas limit the amount of resources a given user can spend. Besides those, there are a number of UI improvements, back-end improvements, and bug fixes.
Functions
join()
search function.
Humio Server 1.8.6 LTS (2020-03-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.8.6 | LTS | 2020-03-09 | Cloud | 2021-01-31 | No | 1.6.10 | No |
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 59a11140391a8f7a47de89bb2c3cf932 |
SHA1 | 05cf1d4355af997b5591f1b77e517d0f703cc5db |
SHA256 | cfc9736d19dac1a9b2c1f3ce082bd28e900178b4aeb781d7aef350557778a215 |
SHA512 | 47dbe78aba22795cbdbea741099bb5abe2c03d491479a8a0795793c6605cbbcad2b62eca3953e87c7635cafe57f496341887fff1704725e9a2d2db4071865d6a |
These notes include entries from the following previous releases: 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.5
Fixes a bug related to assigning ingest tokens in a sandbox repository.
Note: Do not install the Kafka, ZooKeeper or "single" Docker images of this build. Install 1.8.7 or later.
Fixed in this release
Summary
TCP socket ingest listener would spend a lot of CPU when connected but not receiving any data.
"Export" queries could hit an internal limit and fail for large datasets.
Lower ingest queue timeout threshold from 90 to 30 seconds.
Major changes: (see 1.7.0 release notes)
Fix more scrolling issues in Chrome 80 and above.
When a node was missing for an extended period of time the remaining nodes would create smaller segment files than they should.
Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.
Fix scrolling issue in Chrome 80 on the Search Page.
Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)
New feature for ephemeral servers: Let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option `ZOOKEEPER_URL_FOR_NODE_UUID` to the set of ZooKeepers to use for this. The option `ZOOKEEPER_PREFIX_FOR_NODE_UUID` (default `/humio_autouuid_`) sets the prefix to allow rack awareness. Note: Do not turn this on for an existing cluster. Do not turn this on if running older 1.7.x or 1.8.x builds.
Avoid calling fallocate on platforms that do not support it (for example, ZFS).
The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.
Note: Do not install the Kafka, ZooKeeper or "single" Docker images of this build. Install 1.8.7 or later.
Fixed a race in upload of segment files for systems set up using ephemeral disks.
Bucket storage download could report "download completed" even when there were problems fetching the file.
Assigning ingest tokens to parsers in sandbox repos.
The new feature for ephemeral servers using ZooKeeper to assign the UUID did not properly reconnect when the network failed.
Query Quotas limit the amount of resources a given user can spend. Besides those, there are a number of UI improvements, back-end improvements, and bug fixes.
Functions
`join()` search function.
Humio Server 1.8.5 LTS (2020-02-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.8.5 | LTS | 2020-02-28 | Cloud | 2021-01-31 | No | 1.6.10 | No |
JAR Checksum | Value |
---|---|
MD5 | d9e4b78b03e7c5f53eba4bc0c75f865c |
SHA1 | 56725db4c01ea082769143361c978d17cffb931e |
SHA256 | ec3fc0e337a01b5e772176b708620bbeca16a871d7a3fced6451704546b960be |
SHA512 | 6ec7cfca8f64bd72e372540a0b375581e6b115e5e84108bdbd9f2606619887f940d8d0f213eba0011381a1a835b6ad3793ffe74741bd1fcd5571d11026149898 |
These notes include entries from the following previous releases: 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4
Bug Fixes
Fixed in this release
Summary
TCP socket ingest listener would spend a lot of CPU when connected but not receiving any data.
"Export" queries could hit an internal limit and fail for large datasets.
Lower ingest queue timeout threshold from 90 to 30 seconds.
Major changes: (see 1.7.0 release notes)
Fix more scrolling issues in Chrome 80 and above.
When a node was missing for an extended period of time the remaining nodes would create smaller segment files than they should.
Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.
Fix scrolling issue in Chrome 80 on the Search Page.
Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)
New feature for ephemeral servers: Let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option `ZOOKEEPER_URL_FOR_NODE_UUID` to the set of ZooKeepers to use for this. The option `ZOOKEEPER_PREFIX_FOR_NODE_UUID` (default `/humio_autouuid_`) sets the prefix to allow rack awareness. Note: Do not turn this on for an existing cluster. Do not turn this on if running older 1.7.x or 1.8.x builds.
Avoid calling fallocate on platforms that do not support it (for example, ZFS).
The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.
Fixed a race in upload of segment files for systems set up using ephemeral disks.
Bucket storage download could report "download completed" even when there were problems fetching the file.
The new feature for ephemeral servers using ZooKeeper to assign the UUID did not properly reconnect when the network failed.
Query Quotas limit the amount of resources a given user can spend. Besides those, there are a number of UI improvements, back-end improvements, and bug fixes.
Functions
`join()` search function.
Humio Server 1.8.4 LTS (2020-02-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.8.4 | LTS | 2020-02-19 | Cloud | 2021-01-31 | No | 1.6.10 | No |
JAR Checksum | Value |
---|---|
MD5 | 628055e8a489f8da436795da3b74c83b |
SHA1 | 693c732a14ab780fd2160194bb909599ef016a0f |
SHA256 | cb784f6287b75195d624013005b2ac6db3dfc1d3d56ed777efaf6502cb00a629 |
SHA512 | 9a30326b010e8e09288f9112d34017fcb2d3a7d26c9f6bf85804abb2977dc99784d32146b2236989230360e0b5525c92b46282bc768f3f42787cbcd5b1af4246 |
These notes include entries from the following previous releases: 1.8.0, 1.8.1, 1.8.2, 1.8.3
UI Scroll Bug Fix for Chrome 80 (again). This release is purely a fix for the Humio UI. After upgrading to Chrome 80, people have been experiencing issues with scrolling in some of Humio's widgets. We did not find all the problems in the previous release.
Fixed in this release
Summary
Major changes: (see 1.7.0 release notes)
Fix more scrolling issues in Chrome 80 and above.
When a node was missing for an extended period of time the remaining nodes would create smaller segment files than they should.
Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.
Fix scrolling issue in Chrome 80 on the Search Page.
Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)
New feature for ephemeral servers: Let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option `ZOOKEEPER_URL_FOR_NODE_UUID` to the set of ZooKeepers to use for this. The option `ZOOKEEPER_PREFIX_FOR_NODE_UUID` (default `/humio_autouuid_`) sets the prefix to allow rack awareness. Note: Do not turn this on for an existing cluster. Do not turn this on if running older 1.7.x or 1.8.x builds.
Avoid calling fallocate on platforms that do not support it (for example, ZFS).
The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.
Bucket storage download could report "download completed" even when there were problems fetching the file.
The new feature for ephemeral servers using ZooKeeper to assign the UUID did not properly reconnect when the network failed.
Query Quotas limit the amount of resources a given user can spend. Besides those, there are a number of UI improvements, back-end improvements, and bug fixes.
Functions
`join()` search function.
Humio Server 1.8.3 LTS (2020-02-13)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.8.3 | LTS | 2020-02-13 | Cloud | 2021-01-31 | No | 1.6.10 | No |
JAR Checksum | Value |
---|---|
MD5 | f975edf282bf83febc974c154906611c |
SHA1 | a1e49cd77ffeae0e95daf3380d3d4138ee6b4b04 |
SHA256 | 28dc5287957bdfcb414f0f1ecd71ea8721ff8b87e713d6f02b26922c5e44f97b |
SHA512 | a6f251ab388038d54e40efec2e51e967c6c276f320a615f95bcbc9f139134ba25182430208d0917496bb2228c1608f296f979a2d24e10b32ede3fe301c29d364 |
These notes include entries from the following previous releases: 1.8.0, 1.8.1, 1.8.2
UI Scroll Bug Fix for Chrome 80. This release is purely a fix for the Humio UI. After upgrading to Chrome 80, people have been experiencing issues with scrolling on the Search page - specifically when the "Field" panel is visible.
Fixed in this release
Summary
Major changes: (see 1.7.0 release notes)
When a node was missing for an extended period of time the remaining nodes would create smaller segment files than they should.
Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.
Fix scrolling issue in Chrome 80 on the Search Page.
Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)
New feature for ephemeral servers: Let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option `ZOOKEEPER_URL_FOR_NODE_UUID` to the set of ZooKeepers to use for this. The option `ZOOKEEPER_PREFIX_FOR_NODE_UUID` (default `/humio_autouuid_`) sets the prefix to allow rack awareness. Note: Do not turn this on for an existing cluster. Do not turn this on if running older 1.7.x or 1.8.x builds.
Avoid calling fallocate on platforms that do not support it (for example, ZFS).
The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.
Bucket storage download could report "download completed" even when there were problems fetching the file.
The new feature for ephemeral servers using ZooKeeper to assign the UUID did not properly reconnect when the network failed.
Query Quotas limit the amount of resources a given user can spend. Besides those, there are a number of UI improvements, back-end improvements, and bug fixes.
Functions
`join()` search function.
Humio Server 1.8.2 LTS (2020-02-10)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.8.2 | LTS | 2020-02-10 | Cloud | 2021-01-31 | No | 1.6.10 | Yes |
JAR Checksum | Value |
---|---|
MD5 | b3b553543c7db54d05b721550a4c9b3d |
SHA1 | 7a24873c6485caaf99abfed3bac79d982056793e |
SHA256 | 995787f60bcf744d6447b53282f48d547a89962202a81f3f4ba65dbf0dbe6398 |
SHA512 | 1814b6a443381fd0191f93bfdf49fa6c57b4bbf163500b56f742673780384a44d8a603121d5b0417b593ad24a4752edd82f8a3c965b56bfdb5d58de8f1014351 |
These notes include entries from the following previous releases: 1.8.0, 1.8.1
This is a bug fix release.
Fixed in this release
Summary
Major changes: (see 1.7.0 release notes)
When a node was missing for an extended period of time the remaining nodes would create smaller segment files than they should.
Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.
Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)
New feature for ephemeral servers: Let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option `ZOOKEEPER_URL_FOR_NODE_UUID` to the set of ZooKeepers to use for this. The option `ZOOKEEPER_PREFIX_FOR_NODE_UUID` (default `/humio_autouuid_`) sets the prefix to allow rack awareness. Note: Do not turn this on for an existing cluster. Do not turn this on if running older 1.7.x or 1.8.x builds.
Avoid calling fallocate on platforms that do not support it (for example, ZFS).
The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.
Bucket storage download could report "download completed" even when there were problems fetching the file.
The new feature for ephemeral servers using ZooKeeper to assign the UUID did not properly reconnect when the network failed.
Query Quotas limit the amount of resources a given user can spend. Besides those, there are a number of UI improvements, back-end improvements, and bug fixes.
Functions
`join()` search function.
Humio Server 1.8.1 LTS (2020-02-03)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.8.1 | LTS | 2020-02-03 | Cloud | 2021-01-31 | No | 1.6.10 | Yes |
JAR Checksum | Value |
---|---|
MD5 | e99a4e6eb8ae5174416f3ba2ac6f975a |
SHA1 | c0ed5c06b6fc6d9f2bb6bc2c7d4afeb4aeabcd01 |
SHA256 | 156eadb8a285e688aecde7040505e6266407ab5ce6c58826ef9d6d12299fb5db |
SHA512 | 86acc3eef2c2817ba7e5f83b030319bb70a63ca90568fd1a949210177653951cfa76687c22d23518626fdcc08719b9ff67c66c918842b0010860c56696b171a5 |
These notes include entries from the following previous releases: 1.8.0
Bug Fixes
This is a bug fix release.
Fixed in this release
Summary
Major changes: (see 1.7.0 release notes)
Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.
Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)
New feature for ephemeral servers: Let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option `ZOOKEEPER_URL_FOR_NODE_UUID` to the set of ZooKeepers to use for this. The option `ZOOKEEPER_PREFIX_FOR_NODE_UUID` (default `/humio_autouuid_`) sets the prefix to allow rack awareness. Note: Do not turn this on for an existing cluster. Do not turn this on if running older 1.7.x or 1.8.x builds.
Avoid calling fallocate on platforms that do not support it (for example, ZFS).
The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.
Query Quotas limit the amount of resources a given user can spend. Besides those, there are a number of UI improvements, back-end improvements, and bug fixes.
Functions
`join()` search function.
Humio Server 1.8.0 LTS (2020-01-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.8.0 | LTS | 2020-01-27 | Cloud | 2021-01-31 | No | 1.6.10 | Yes |
JAR Checksum | Value |
---|---|
MD5 | 323f7c8a332086ce7f5f0d9796d346f7 |
SHA1 | 4423ccb2afcfcc7f83fe9802ee494abe4085f314 |
SHA256 | 318d70019a0678d65b721dccff6643818bd151497d5b873a59d9e9cc95ad4f77 |
SHA512 | a196b0f8cde842b395bf6e0401410ebc61d1052507d07aa37b78214acc0ba460cd0f926cb54c5be5530d8eb6d6e80998845bb2f0b1f148d683a3685c67c868e8 |
Joins, Bucket Storage Backend, Query Quotas, UI Improvements. This release promotes the 1.7 releases from preview to stable.
Fixed in this release
Summary
Major changes: (see 1.7.0 release notes)
Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)
The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.
Query Quotas limit the amount of resources a given user can spend. Besides those, there are a number of UI improvements, back-end improvements, and bug fixes.
Functions
`join()` search function.
Humio Server 1.7.4 GA (2020-01-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.7.4 | GA | 2020-01-27 | Cloud | 2021-01-31 | No | 1.6.10 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | cdfb4241fdc8daa44f9f2e54cca1355e |
SHA1 | b5e01c98c21e4ffcbc115b8585ab28ec90bc79f7 |
SHA256 | 63a8945e903755bcfc430d1b404f3a64558e8b7e26728f4a2c10ca22f6c67347 |
SHA512 | dedbf948de640e141ab246cdb431dbbcbfd190b1947dde2fe7896c0a3ad904e60ee7d2aaaffcf6efb63e8a65919a8554fab1fa5ea111b5ab2111b195d5d48fd3 |
Bug Fixes
Fixed in this release
Summary
Allow webhook notifiers to optionally not validate certificates.
Allows "Force remove" of a node from a cluster.
Stabilized sync of uploaded files within a cluster in combination with bucket storage.
Add Chromium to the list of compatible browsers.
`join()` now accepts absolute timestamps in milliseconds in the start and end parameters.
Humio Server 1.7.3 GA (2020-01-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.7.3 | GA | 2020-01-17 | Cloud | 2021-01-31 | No | 1.6.10 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 4265deddaccfa78f648a6426faa28ce6 |
SHA1 | abc76135b365f4bb09793af9749a03b157b56176 |
SHA256 | 0becd94c687e07aa6a0ce116a96b2b6e7bace87e77e6fd7a4c01a5e837050c8f |
SHA512 | 2b5606a0ae94e95d9c8271628b2c3fc4c56c9a74f27c9735e334957fb7baf8d5de5cc7cd50f8f2fb5ee3cea6e3323bfb645efb4bc083e3f65f4a3803c6811a8f |
Bug Fixes
Fixed in this release
Summary
ERROR logs get output to stderr instead of stdout to avoid breaking the potential stdout format.
New log output option: the `LOG4J_CONFIGURATION` configuration now allows the built-in `log4j2-stdout-json.xml` to get the log in NDJSON format, one line for each event on stdout.
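NDJSON output is straightforward to consume programmatically: one JSON object per line. A minimal sketch of filtering such output; the field names here are illustrative, not Humio's actual log schema:

```python
import json

# Pretend this is captured stdout from a process logging in NDJSON format.
raw = "\n".join([
    '{"level": "INFO", "message": "started"}',
    '{"level": "ERROR", "message": "boom"}',
])

# One json.loads() per non-empty line.
events = [json.loads(line) for line in raw.splitlines() if line.strip()]
errors = [e for e in events if e["level"] == "ERROR"]
```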
Functions
The `top()` function now allows a limit up to 20.0 by default; it used to be 1.0.
Humio Server 1.7.2 GA (2020-01-16)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.7.2 | GA | 2020-01-16 | Cloud | 2021-01-31 | No | 1.6.10 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | b523d1cdac2d03b7f215379080eb9053 |
SHA1 | 3846dfc31446f7788c1ac6b86821073397b0d462 |
SHA256 | 7bed62c85e26ff73feb19554b4296d0d9590c6d65f45e9685b9b5ffb356963a3 |
SHA512 | 54192cfc366ba7b3626ae3b6e0e5fb47bdc8dffb828a68198e9fa1f1800b4c2763abe765c4da34533b5beb1906b79dc3e40aa1234be684e34c1b266ae348070b |
Bug Fixes
Fixed in this release
Summary
Bucket storage: Also keep copies of the "metadata files" that you use for the `lookup()` and `match()` functions in the bucket, and restore from there when needed.
`USING_EPHEMERAL_DISKS` allows running a cluster on disks that may be lost when the system restarts, by assuming that only copies in Bucket Storage and the events in Kafka are preserved across restarts. If the filesystem remains during restart, this is also okay in this mode and more efficient than fetching the files from the bucket.
`#repo=*` never matched but should always match.
`LIVEQUERY_CANCEL_TRIGGER_DELAY_MS` and `LIVEQUERY_CANCEL_COST_PERCENTAGE` control canceling of live queries that have been consuming the most cost over the previous 30s when the system experiences digest latency of more than the delay. New metrics: `livequeries-canceled-due-to-digest-delay`, `livequeries-rate-canceled-due-to-digest-delay`, `livequeries-rate`.
`top(x, sum=y)` now also supports non-integer values of y (even though the internal state is still an integer value).
Bucket storage: Continue cleaning the old buckets after switching provider from S3 to GCP or vice versa.
The "query monitor" and "query quota" now share the definition of "cost points". The definition has changed in such a way that quotas saved by versions up to 1.7.1 are disregarded by this and later versions.
Retention could fail to delete obsolete files in certain cases.
The ZooKeeper status page now shows a warning when the commands it needs for the status page to work are not whitelisted on the ZK server.
New utility inside the jar. Usage: `java -cp humio.jar com.humio.main.DecryptAESBucketStorageFile <secret string> <encrypted file> <decrypted file>`. Allows decrypting, outside the system, a file that was uploaded using bucket storage.
Change: When the system starts with no users at all, the first user to log in gets root privileges inside the system.
`LOG4J_CONFIGURATION` allows a custom log4j file. Or set it to one of the built-ins: `log4j2-stdout.xml` to get the log in plain text dumped on stdout, or `log4j2-stdout-json.xml` to get the log in NDJSON format, one line for each event on stdout.
Bucket storage, GCP variant: Remove temporary files after download from GCP. Previous versions left a copy in the tmp dir.
Bucket storage: Support download after switching provider from S3 to GCP or vice versa.
Query of segments only present in a bucket now works even if disabling further uploads to bucket storage.
Functions
Humio Server 1.7.1 GA (2020-01-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.7.1 | GA | 2020-01-06 | Cloud | 2021-01-31 | No | 1.6.10 | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | df8268ca40d896676a0d771e40e84766 |
SHA1 | f2826fed2741cc74128a07f5f67c2bed498d58bc |
SHA256 | 6f29f7d3abb1df8547cc0be5470ec714469febfd2c8553ed58555e5936b27fbb |
SHA512 | 14529ec3496dcafee383a2d6ef85d0862a8071f144e9b0ac59bd59f3b9742a8d1e46092f16e0db32e0ab071bdd2235754f319c4fc95007d470c34bb304c52ce9 |
Bug Fixes and Removal of Limitations
Fixed in this release
Summary
Reuse of live dashboard queries on the humio-search-all repository did not work correctly. As an effect the number of live queries could keep increasing.
The Postmark integration was always assuming a humio.com from-address. This has been fixed by introducing a new `POSTMARK_FROM` configuration parameter.
Removed the 64 K restriction on individual fields to be parsed by parsers.
Saved queries/macros were not expanded when checking if a live dashboard query could reuse an existing query.
Allow explicit `auto` as an argument to the `span` parameter in `bucket()` and `timechart()`. This makes it easier to set the span from a macro argument.
Handle large global snapshot files (larger than 2 G).
Humio Server 1.7.0 GA (2019-12-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.7.0 | GA | 2019-12-17 | Cloud | 2021-01-31 | No | 1.6.10 | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 949b46169cff1182c3da25c6f7f641f3 |
SHA1 | 9b9187c045ee4a4a8051ed233b7a8fdf056f514d |
SHA256 | aa53e3998b390863edb435a4658b5c51551d54c68f1b1983886063e297cd8cbc |
SHA512 | f3766baf64dbf62450970207cafabd65588a660cebceb8f447dd8d8d6baba8f5e26ac69e4873d8170a84c011264a5f16247d8beed28604e3fb91073af8435ba2 |
Join, Bucket Storage Backend, Query Quotas, UI Improvements
Humio now supports joins in the query language; the functionality is largely similar to what could previously be done by running a query, exporting it as a `.csv`, uploading said `.csv` file, and then using the `match()` function to filter/amend a query result. See the `join()` search function.
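The described semantics — using one query's results to filter another — can be illustrated with a toy sketch. This models the idea only; it is not Humio's query language, and the field names are invented:

```python
events = [
    {"user": "alice", "action": "login"},
    {"user": "bob", "action": "login"},
    {"user": "alice", "action": "delete"},
]

# "Subquery": the set of join keys, like the exported .csv in the old workflow.
suspicious = {e["user"] for e in events if e["action"] == "delete"}

# "Join": keep only the main query's events whose key matches the subquery,
# analogous to filtering with match() against the uploaded file.
joined = [e for e in events if e["user"] in suspicious]
```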
Humio now supports storing segment files on Amazon S3 (and Google cloud storage) and compatible services to allow keeping more segment files than the local disks have room for and managing the local disk as a cache of these files. See Bucket Storage.
New LTS/GA Release Versioning
Stable releases will have an even `Minor` version. If `Minor` is an odd number (like in this release), it is a preview release. Critical fixes will be back-ported to the most recent stable release.
To make it easier to integrate with external systems, Humio dashboards can now be passed URL parameters to set the dashboard's global time interval. By passing the query parameters `?time=<unix ms timestamp>&window=5m`, the dashboard will be opened with a 10m time window (5m before and after the origin specified by `time`). The feature is not available for shared dashboards, since they do not support changing time intervals. You can now also disable shared dashboards completely using the `SHARED_DASHBOARDS_ENABLED=false` configuration setting.
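Building such a link from an external system can be sketched as below. The parameter names come from the notes above; the base URL and dashboard path are hypothetical:

```python
from urllib.parse import urlencode

origin_ms = 1577836800000  # unix timestamp in milliseconds (2020-01-01 00:00 UTC)
params = urlencode({"time": origin_ms, "window": "5m"})

# Hypothetical dashboard URL; only the time/window query parameters are from the notes.
url = f"https://humio.example.com/dashboards/my-dashboard?{params}"
```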
Fixed in this release
Configuration
Autosharding can now be set "sticky", which means fixed as set by the user on a specific (input) datasource. The API also allows listing all autosharding rules, both system-managed and sticky.
`COMPRESSION_TYPE=high` is now the default compression type. Clusters running with the default configuration will change to high compression unless `COMPRESSION_TYPE=fast` is set.
Added the `SHARED_DASHBOARDS_ENABLED` configuration setting, which allows disabling access to the shared dashboards feature, e.g. if your organization has strict security policies.
Dashboards and Widgets
UI: Allow disabling automatically searching when entering a repository search page, on a per-repo basis.
Top Feature: Joins allowing subqueries and joining data from multiple repositories, see Join.
UI: Word-wrap and event list orientation is now sticky in a session, meaning revisiting the search page will keep the previous selected options.
UI: The time selector on dashboards now allows panning and zooming, like the one on the search page.
UI: Improved Query Monitor in the administration section, making it much easier to find expensive queries. See Query Monitor.
Queries page removed, and delete and edit saved query functionality moved into "Queries" dropdown on search page.
UI: Improve word-wrap and allow columns in the event list to be marked as 'autosize'. Autosizing columns will adapt to the screen size when word-wrap is enabled.
UI: Don't show "unexpected error" screen when Auth Token expires.
Top Feature: Query quotas allowing limiting how many resources users can use when searching, see Query Quotas.
Top Feature: The "Queries" page has been replaced with a dropdown on the Search page that allows searching saved and recent queries.
Top Feature: Bucket Storage with support for S3 and Google Cloud Storage; see Bucket Storage.
Top Feature: Query errors will now be highlighted as-you-type on the search page.
UI: Ensure counts of fields and value occurrences on the event list are reliable.
Upgrading: After installing this version, it is not possible to roll back to a version lower than 1.6.10. Be on version 1.6.10 before upgrading to this version.
Functions
The implementation of the `percentile()` function has been updated to be more precise (and faster).
New function `callFunction()` allows you to call a Humio function by name. This is useful if, for instance, you want a dashboard where you can control what statistics your widgets show based on a parameter, e.g. `timechart(function=callFunction(?statistic, field=response_time))`.
New function `xml:prettyPrint()`.
The function `top()` has a new `max=field` argument that can be used to make it work as a more efficient alias for a groupby/sort combination: `top(field, max=value, limit=5)` is equivalent to (and much faster than) `groupby(field, function=max(value)) | sort(limit=5)`.
New function `json:prettyPrint()`.
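The `top(field, max=value, limit=5)` equivalence can be modeled in a few lines of plain Python: group by a field, keep the maximum of a value per group, then sort descending and truncate. The sample events are invented for illustration:

```python
from collections import defaultdict

events = [
    {"url": "/a", "response_time": 120},
    {"url": "/b", "response_time": 300},
    {"url": "/a", "response_time": 250},
    {"url": "/c", "response_time": 90},
]

# groupby(url, function=max(response_time))
max_by_field = defaultdict(int)
for e in events:
    max_by_field[e["url"]] = max(max_by_field[e["url"]], e["response_time"])

# sort(limit=5): highest max first, truncated to 5 groups.
top5 = sorted(max_by_field.items(), key=lambda kv: kv[1], reverse=True)[:5]
```

`top()` with `max=` avoids materializing every group before sorting, which is why the notes call it the faster form.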
Other
Java 13 is the recommended Java version. Docker images are now running Java 13.
New stable/preview release versioning scheme. See description.
Use case-insensitive comparison of usernames (historically an email address) when logging into Humio.
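The case-insensitive username comparison can be sketched as below; Humio's actual comparison rules are not specified in the notes beyond "case-insensitive", so this is an illustration, not the product's implementation:

```python
def same_user(a: str, b: str) -> bool:
    # casefold() handles case-insensitive matching more robustly than lower().
    return a.strip().casefold() == b.strip().casefold()
```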
Humio Server 1.6.11 LTS (2020-01-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.6.11 | LTS | 2020-01-06 | Cloud | 2020-11-30 | No | 1.5.19 | No |
JAR Checksum | Value |
---|---|
MD5 | 95b8814ef3035ec68887dd3605e10b1f |
SHA1 | 42c2fbff389d6e75a05450abd326b3dd7da71bce |
SHA256 | fddbe1edb8b7501d277550e3a044f97d3448de6ccd0c3e02390d239841d3be2e |
SHA512 | 8238028be9c1fb9ce1c027b7fcb21d4cea6b1db4e984d2aaf951c6e3a1ecf961285ce6d4ff41a373abe7c8a94f3d3cd7eca80284a126efa5e4bf90903a38e822 |
These notes include entries from the following previous releases: 1.6.8, 1.6.9, 1.6.10
Handle Large Global Snapshot File
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Summary
LDAP: It is now possible to specify an attribute within the LDAP record to use for the username rather than the default (an email address). This only applies when using the `ldap-search` method, by specifying `LDAP_USERNAME_ATTRIBUTE` in the environment. Group names when using LDAP have historically been the distinguished name (DN) for that group; it is now possible to specify an attribute in the group record for the name by setting `LDAP_GROUPNAME_ATTRIBUTE`. These changes necessitated a breaking change in the `ldap-search` code path in cases where users of Humio authenticate with a username (e.g. `user`) rather than an email address (e.g. user@example.com). To elicit the same behavior as previous versions of Humio, simply specify `LDAP_SEARCH_DOMAIN_NAME`, which in the past would default to the value of `LDAP_DOMAIN_NAME` but no longer does.
Fixed in this release
Summary
New background job: Find segments that are too small compared to the desired sizes (from the current config) and merge them into larger files. For `COMPRESSION_TYPE=high` this will recompress the inputs while combining them. This job runs by default.
Improved memory usage from having a large global.
Require setting `LDAP_SEARCH_DOMAIN_NAME` explicitly when using the `ldap-search` authentication method.
Segment merge could leave out some parts when merging, leading to segments not, on average, becoming as large as desired.
Add `LDAP_USERNAME_ATTRIBUTE` and `LDAP_GROUPNAME_ATTRIBUTE` configuration settings to enable more control over names carried from LDAP into Humio.
Query sessions were not properly cleaned up after becoming unused. This led to a leak causing a high amount of chatter between nodes.
Handle large global snapshot files (larger than 2 G).
Detect when events ingested are more than `MAX_HOURS_SEGMENT_OPEN` (`24h` by default) old and add the tag `humioBackfill` to them in that case, to keep "old" events from getting mixed with current "live" events.
Support for "sticky autosharding" and listing of current autosharding settings for all datasources in a repository.
Username/email is treated as case-insensitive in Humio. This is the more expected behavior, since email addresses are often used as usernames. On some rare occasions, duplicate accounts might have been created with a difference in casing, and this change can cause the otherwise dormant account to be chosen when logging in the next time. If this happens, use the administration page to delete the unwanted user account and let the user log in again.
Humio Server 1.6.10 LTS (2019-12-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.6.10 | LTS | 2019-12-12 | Cloud | 2020-11-30 | No | 1.5.19 | Yes |
JAR Checksum | Value |
---|---|
MD5 | c8cfd22a5be0456f5905247b6f2bab3b |
SHA1 | 4c4b734217c5ac90bd9be5f6e2f34c4e182f8fa6 |
SHA256 | 2a2529e2b2bdbd54162075adb55b514d80bbe60ec1854b8044fb7cbea985c006 |
SHA512 | 883ee5a113ceeab99063d07d3b19b935e382dfdab598903fb896392d225d8dddc91716db56e29f4a92053d7b37e818806fcdad86126b9a5e2b68c17ad4f32c76 |
These notes include entries from the following previous releases: 1.6.8, 1.6.9
Bug Fixes and LDAP improvements. There are some changes to the configuration that will be required. See the change log below.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Summary
LDAP: It is now possible to specify an attribute within the LDAP record to use for the username rather than the default (an email address). This only applies when using the `ldap-search` method, by specifying `LDAP_USERNAME_ATTRIBUTE` in the environment. Group names when using LDAP have historically been the distinguished name (DN) for that group; it is now possible to specify an attribute in the group record for the name by setting `LDAP_GROUPNAME_ATTRIBUTE`. These changes necessitated a breaking change in the `ldap-search` code path in cases where users of Humio authenticate with a username (e.g. `user`) rather than an email address (e.g. user@example.com). To elicit the same behavior as previous versions of Humio, simply specify `LDAP_SEARCH_DOMAIN_NAME`, which in the past would default to the value of `LDAP_DOMAIN_NAME` but no longer does.
Fixed in this release
Summary
New background job: Find segments that are too small compared to the desired sizes (from the current config) and merge them into larger files. For `COMPRESSION_TYPE=high` this will recompress the inputs while combining them. This job runs by default.
Improved memory usage from having a large global.
Require setting `LDAP_SEARCH_DOMAIN_NAME` explicitly when using the `ldap-search` authentication method.
Segment merge could leave out some parts when merging, leading to segments not, on average, becoming as large as desired.
Add `LDAP_USERNAME_ATTRIBUTE` and `LDAP_GROUPNAME_ATTRIBUTE` configuration settings to enable more control over names carried from LDAP into Humio.
Query sessions were not properly cleaned up after becoming unused. This led to a leak causing a high amount of chatter between nodes.
Detect when events ingested are more than `MAX_HOURS_SEGMENT_OPEN` (`24h` by default) old and add the tag `humioBackfill` to them in that case, to keep "old" events from getting mixed with current "live" events.
Support for "sticky autosharding" and listing of current autosharding settings for all datasources in a repository.
Username/email is treated as case-insensitive in Humio. This is the more expected behavior, since email addresses are often used as usernames. On some rare occasions, duplicate accounts might have been created with a difference in casing, and this change can cause the otherwise dormant account to be chosen when logging in the next time. If this happens, use the administration page to delete the unwanted user account and let the user log in again.
Humio Server 1.6.9 LTS (2019-11-25)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.6.9 | LTS | 2019-11-25 | Cloud | 2020-11-30 | No | 1.5.19 | No |
JAR Checksum | Value |
---|---|
MD5 | 078266ac0716bd634fdc03fc7ef24ee7 |
SHA1 | 2457f4a35cef44b75351ed1ce794e32a253416b4 |
SHA256 | 26934b20ede90d7b4d4a855fe9fb2af749ec37a27bfbf44e79cb5d6c38c133d9 |
SHA512 | dce9bdffddf89c9de819feb46bfbf59cd7bbc22da4d9131de588236a82306b131dacffbed3d6969f50e40a34e74b31d073cb31c814fa04a43e68e1063f89c435 |
These notes include entries from the following previous releases: 1.6.8
Bug Fixes and a new background job that reduces number of small files on disk. No configuration changes required, but see changes to backup in 1.6.6.
Fixed in this release
Summary
New background job: Find segments that are too small compared to the desired sizes (from current config) and merge them into larger files. For `COMPRESSION_TYPE=high` this will recompress the inputs while combining them. This job runs by default.
Improved memory usage for clusters with a large global.
Segment merge could leave out some parts when merging, leading to segments not, on average, becoming as large as desired.
Detect when ingested events are more than `MAX_HOURS_SEGMENT_OPEN` (`24h` by default) old and, in that case, add the tag `humioBackfill` to them to keep "old" events from getting mixed with current "live" events.
Support for "sticky autosharding" and listing of current autosharding settings for all datasources in a repository.
Humio Server 1.6.8 LTS (2019-11-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.6.8 | LTS | 2019-11-19 | Cloud | 2020-11-30 | No | 1.5.19 | No |
JAR Checksum | Value |
---|---|
MD5 | 002f3f8e6d2139542912f503d9a97b6c |
SHA1 | acf4b53e6363b85bb94ee7afff1961c712a3d21a |
SHA256 | e25243edf2fa906c18b61ce838b7114bf7cf03e787a638effc0881d645161227 |
SHA512 | 258b86a43f403caed0852903cdbcfd10664bbced66ea773f5a517bbca7c45143b07829a73bed4c92c12b29166cce3a22d5314b633ef8e091f8480d122432dae6 |
Bug Fixes
No configuration changes required, but see changes to backup in 1.6.6.
Fixed in this release
Summary
Segment merge could leave out some parts when merging, leading to segments not, on average, becoming as large as desired.
Support for "sticky autosharding" and listing of current autosharding settings for all datasources in a repository.
Humio Server 1.6.7 Archive (2019-11-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.6.7 | Archive | 2019-11-04 | Cloud | 2020-11-30 | No | 1.5.19 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | cc498d9bfbbae9fc815d046887e93ace |
SHA1 | bb9c4e01199a5c8727e4b825651139c808964b2f |
SHA256 | ede04107d73323f9b24374e9d6f45a8d62f56b484a2d042a4e3e0cd7781ddfc8 |
SHA512 | a13ae84ae29339410d5de53cc77a2975f529c6baed9e61941ed9eb2f35130eea6ae0cb14595f7c7f8fb320e2c0cecd7a3e6e690a0f156ba87ceb404d5035f21e |
Bug Fixes and Performance Improvements
No configuration changes required, but see changes to backup in 1.6.6.
Fixed in this release
Summary
Fixed a security bug. See Security Disclosures.
Humio Server 1.6.6 Archive (2019-10-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.6.6 | Archive | 2019-10-23 | Cloud | 2020-11-30 | No | 1.5.19 | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 6f5f5a9ca0cc982b8368feab633178bf |
SHA1 | 82feebc7020bda86e42acdb18db6bc162020539a |
SHA256 | b969805efafe28497a2517ca3a3268dd1bb849bc39eb56ab31d62070ae07a5e0 |
SHA512 | 02df102a6589da27b9027b36447bc11222c616ca2c83bd00978d756001843464332b6cfcf886bc4dd749a31abe89ef62ba426bfce286ff37330a95c2ba809566 |
Bug Fixes and Performance Improvements
See changes to backup in 1.6.6.
Fixed in this release
Summary
Looking at the events for e.g. a timechart was previously untenable, due to a scrolling bug.
Improved error recovery in query language. This should make query error messages easier to read.
It is now possible to change the description for a repository or view.
Humio's built-in backup has been changed to delay deleting segment data from backup. By default, Humio will wait 7 days from when a segment file is deleted in Humio until it is deleted from backup. This is controlled using the config `DELETE_BACKUP_AFTER_MILLIS`. Only relevant if you are using Humio's built-in backup.
Performance improvements in the digest pipeline.
In Chrome, saving a query and marking it as the default query of the repo would previously not save the default status.
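For instance, keeping deleted segments in the built-in backup for 14 days instead of the default 7 is a single setting (the value is in milliseconds):

```
# 14 days = 14 * 24 * 60 * 60 * 1000 milliseconds
DELETE_BACKUP_AFTER_MILLIS=1209600000
```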
Humio Server 1.6.5 Archive (2019-10-01)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.6.5 | Archive | 2019-10-01 | Cloud | 2020-11-30 | No | 1.5.19 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | d5426b20769fcea54bdb6a296a1b853d |
SHA1 | 7881a0845b5548afd23938731abef349259a5002 |
SHA256 | d4d8abbdc1e4730141a36718101444377ed106c8a3db8557e572ca3eaec35bfa |
SHA512 | b71876f6d325b99cadbabb65acd304ca713ffb62e319a29941e16c5645540bf824a07c0fcc3d8820c735ae1e027bff1fad05ec150b36bf5bb973bc71d34ac274 |
Bug Fixes and Performance Improvements
No data migration required, but see version 1.6.3.
Fixed in this release
Summary
Redefined the `event-latency` metric to start measuring after parsing the events, just before inserting them into the ingest queue in Kafka. This metric is the basis of autosharding decisions and other internal scheduling priority choices, and thus needs to reflect the time spent on the parts influenced by those decisions.
Support reading events from the ingest queue both in the format written by 1.6.3 and older, and in the format written by 1.6.4.
The new metric `event-latency-repo/<repo>` includes time spent parsing too, and is heavily influenced by the size of the event bulks being posted to Humio.
Apply the extra Kafka properties from config also on deleteRecordsBefore requests.
Improved performance of internal jobs calculating the data for the cluster management pages.
The new metric `ingest-queue-latency` measures the latency of events through the ingest queue in Kafka, from when they are sent to Kafka until they are received by the digest node.
Humio Server 1.6.4 Archive (2019-09-30)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.6.4 | Archive | 2019-09-30 | Cloud | 2020-11-30 | No | 1.5.19 | No |
Available for download two days after release.
Bug Fixes and Performance Improvements. Retracted: this build did not properly support existing events in the ingest queue.
No data migration required, but see version 1.6.3.
Fixed in this release
Summary
New metrics tracking number of active datasources, internal target latency of digest, number of threads available for queries, latency of live query updating and segment building, and latency of the overall ingest/digest pipeline, tracked for each repository.
The `/query` and `queryjobs` endpoints now coordinate thread usage, lowering the maximum total number of runnable query threads at any point in time.
Improved performance of timecharts when there are many series and timechart needs to select the "top n" ones to display.
Creating new labels while adding labels to a dashboard did not actually show the labels as available.
Do not install this build. Do not roll back from this build to 1.6.3 - update to 1.6.5 instead.
Improved word wrap of events list.
Humio Server 1.6.3 Archive (2019-09-25)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.6.3 | Archive | 2019-09-25 | Cloud | 2020-11-30 | No | 1.5.19 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | b4a29de04d375c5f71eda06a73ddb324 |
SHA1 | e3d60d5ec6fb651b35b12dbf4e004872e4558ee2 |
SHA256 | 1ea140db06de2ebe78769e1bc1d827f6d5b007fb9419ecd746250d2a1abfa984 |
SHA512 | 4c472f8250e41729b73295296816094eb0b8bd53265a4dede89091c413bf9288b5b03c3864f16aba40bcd1cdede28438df4b0aad7fa79dae5acc0c7312f1061c |
Dashboard parameters improvements and Bug Fixes. Data migration is required: Hash filters need rebuilding.
Dashboard parameters can depend on each other. Fixed various small UI bugs in data table. Improvements to event list column headers.
New features and improvements
Other
File based parameters on dashboards can now filter parts of a file out, by specifying a subset of entries in the file that should be used. This filtering can also be based on other parameters, so entries pulled from the file can depend on e.g. a query based parameter.
Fixed in this release
Functions
Using
dropEvent()
in a parser did not work when using the "Run Tests" button.
Other
EventList column header menu opens on click now, instead of on mouse hover.
Exporting a search (or using the `/query` endpoint in other contexts) would fail if any node was down, even when the files needed to satisfy the search were available on other nodes. Note that a search in progress will still fail if a node goes missing while the search runs. (Searches in the UI restart in this case, but that is not possible for an export.)
`MAX_EVENT_SIZE` defaults to 1 MB. Increasing this may have adverse effects on overall system performance.
When setting up Humio, the server will refuse to start if Kafka is not ready, in the sense that the number of live Kafka brokers is less than the number of Kafka bootstrap hosts in the configuration for Humio.
Regex matching gets rejected at runtime if it spends too many resources.
Improved names and states in thread dumps and added a `group` field to the traces. Run `#type=humio class=threaddump state=RUNNABLE | timechart(group, limit=50, span=10s)` in the Humio repo to get an idea of variations in what the CPU time is being spent on.
The Show in context window on the event list would "jump" when used with a live query on dashboards.
Fix issue that made the timestamp column wrap on some platforms.
In Chrome, it was sometimes not possible to rename a dashboard, clone a dashboard, duplicate a widget, and other actions. This has been fixed.
LDAP login code rewritten.
Make JSON word-wrapping work when a column is syntax highlighted.
Fix issue with layout of pagination of table widgets in dashboards overflowing when it has a horizontal scroll bar
Latin-1 characters (those with code point 128 - 255) were not added correctly to hash filters. To fix this, Humio needs to rebuild the existing hash filters: The old hash files get deleted, and a new file prefix "hash5h3" is applied to the new files. This will be done in the background after updating to this version. For estimation of time to complete use a rate of .0GB/core/hour of original size. While rebuilding hash filter files the system will have a higher load from this and from searches that would benefit from the filters but need to run without them.
`HASHFILTER_MAX_FILE_PERCENTAGE` defaults to 50. Hash filter files that are larger than this percentage of their segment file do not get created; this trades the scan work the filter would save at search time for not spending disk space on filter files that are not much smaller than the data itself.
Replication of segment files among nodes now runs in multiple threads to allow faster restore from peers for a failed node.
Previously, exporting data from queries with parameters would always fail. This now works as expected.
`MAX_JITREX_BACKTRACK` defaults to 1.0.0: limits CPU resources spent in a regex match, failing the search if exceeded.
Fixed an issue where streaming queries failed when a node in the cluster was unavailable.
The Event List widget no longer shows column menus on dashboards. Editing was not possible, but the menus would open anyway.
Humio Server 1.6.2 Archive (2019-09-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.6.2 | Archive | 2019-09-04 | Cloud | 2020-11-30 | No | 1.5.19 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | dbd67c77d4a876f223d5cf30f5765d7b |
SHA1 | 5d4f8b045ed5a98b85c906bbf190d5ea95de4c88 |
SHA256 | 407b5f8e24843bb18072ed4ce43e381a1022efb275f18c0f717e8e3784a92228 |
SHA512 | 872c39a7893b7196d65fde5d9ddcac31d30ba70ba2b7f4613e59396d06565a6410193036713eb3a4792e0fcfdf75b77bb83673ed8afce7f4f0a47aef568eb1f9 |
Event List Columns and Bug Fixes. The release replaces the event list on the search page with a table view where you can control which columns you would like to see.
Fixed in this release
Summary
The UI now only checks the version of the Humio installation when determining if it should reload dashboards.
Improve scheduling of uploads in S3 archiving to achieve better throughput.
The special handling of @display has been removed. The field is now like any other. If you use it today, you can add it as a column in your default columns.
If a field that you would like a column for is not present in the list of fields, you can manually add it from the toolbar of the fields panel.
Users are now notified about the dashboard reload 5s before reloading.
New Event List with customizable columns.
Saving a default query for your repository also saves the selected columns and will show them by default.
Fixed an issue where, if some cluster nodes were configured differently than others, a dashboard reload would be triggered every 10s.
The default order of the events on the search page has been reversed. It is more natural to have newer events (lines) below older ones - just like logs appear in a log file. This can be changed in "Your Account".
Use the keyboard arrows and Enter key to quickly add and remove columns while in the "Filter Fields" textbox.
timechart with limit selecting top series was nondeterministic when live.
You can now add favorite fields to your views. These fields will always be sorted to the top of the fields panel, and be visible even if they are not part of the currently visible events.
Browser minimum versions get checked in the UI to warn if using a version known to miss required features.
The fields panel is open by default. You can change this in "Your Account" preferences.
Functions
New query function `hashRewrite()` to hide the values of selected fields.
New query function `hashMatch()` to be able to search (for an exact value) on top of the hashed values.
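A minimal sketch of how the two functions pair up (the field name and salt are illustrative):

```
// Replace the sensitive value with a salted hash:
hashRewrite(ssn, salt="salt1")

// Later, search for an exact original value against the hashed field:
hashMatch("1234-5678", field=ssn, salt="salt1")
```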
Humio Server 1.6.1 Archive (2019-08-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.6.1 | Archive | 2019-08-26 | Cloud | 2020-11-30 | No | 1.5.19 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 333a5243ca407814dc5437c71d7affc7 |
SHA1 | fa3be4c0332421af5b5531fcda456f5a404199f0 |
SHA256 | dd22a4a6c291d78537a4711aea71c0243646b3bdd6b645b9626c6266629c3039 |
SHA512 | 5c600ee0f7e88cbcb47500e235a9d2e9581b5630e4cfe596b000d2801219c6dc363ec9d0572a40a3bdd93c77868fbe7603d4502f4bdcbb008ab9e30af09d8f92 |
Bug Fixes
Fixed in this release
Summary
Live queries could lock the HTTP pool, leading to a combination of high CPU and problems accessing the HTTP interface.
Fixed issue preventing you from clicking links in Note Widgets.
Humio Server 1.6.0 Archive (2019-08-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.6.0 | Archive | 2019-08-22 | Cloud | 2020-11-30 | No | 1.5.19 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 19c48ed4689582534bdb4d2351224497 |
SHA1 | 8cb74a483006d732fedb52d835120663800d53a2 |
SHA256 | 4e519242ffa15eb15344e912479e350e60376c493d9c51b056c0c1f6b5349122 |
SHA512 | a51db1066c5c7fc21c52a6a1a47d03ddb0b2a7bc977f37c1a65f27fb553ca9099587a5af74a942110f4cbd345b1a468f98407d320ea37e87b4a3cfe70892e044 |
Improved compression. Note Widgets and YAML Template Files
Dashboard Note Widgets can include descriptions and can contain template expressions and links to external systems using the current parameter and time values. Read more about note widgets at Note Widget.
We are also introducing a new YAML file format for dashboard templates. The new format is much more human-readable. It is the first step towards being able to persist all entities (parsers, queries, alerts) as files.
Support for the now deprecated dashboard file import API and JSON format will continue, but expect it to be removed in a later release.
Fixed in this release
Configuration
`COMPRESSION_TYPE=high` turns on a stronger compression when segments get merged. This results in better compression, at the expense of slightly lower compression for the very recent events. The improvement is typically 2-3 times better compression for the merged segments.
`COMPRESSION_TYPE=extreme` uses the stronger compression in the digest part as well, even though it is less effective there than after merging, where the larger files benefit more.
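Enabling the stronger merge-time compression is a one-line configuration change:

```
# BETA in this release; rolling back to 1.5.x is only supported with the default "fast"
COMPRESSION_TYPE=high
```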
Functions
New functions `start()` and `end()` provide the time range being queried as fields.
New functions `urlEncode()` and `urlDecode()` allow encoding or decoding the value of a field for use in URLs.
The function `parseJson()` now accepts exclude and include parameters. Use these to specify which fields should or should not be included.
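A small sketch combining the new functions (field names are illustrative, and the parameter shapes are assumptions based on the descriptions above):

```
// Keep only selected JSON fields, compute the event's offset into the queried
// time range, and decode a URL-encoded field:
parseJson(field=payload, exclude=["debug"])
| offsetMs := @timestamp - start()
| urlDecode(request_uri, as=decoded_uri)
```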
Other
New function `copyEvent()` allows duplicating an event into another datasource while ingesting. Use `case` to make the two events differ.
`COMPRESSION_TYPE=fast` (the default!) corresponds to versions before 1.6.x.
Styling of the dashboard "Labels" dropdown has been fixed.
Introducing a new YAML dashboard file format.
Added a pending parameter edits toggle, so that parameter changes are not applied immediately if the user prefers not to.
Added GraphQL fields for shared dashboards.
Renaming a repository is now possible in settings.
Added cluster information pages for the ZooKeeper & Kafka Cluster used by Humio. Both are available under Administration.
The `sort()` query function now ignores case when sorting strings.
The sizes of the compressed files and the associated hash-filter files are tracked separately for the merged part, allowing you to see in the UI how well the long-term compression works as part of the total set.
Removed internal REST API for shared dashboards.
Added Note Widget support for dashboards.
Changing Dashboard labels will no longer trigger a "Dashboard was modified remotely" notification.
Note! Rolling back to v1.5.x is supported only for `COMPRESSION_TYPE=fast`, which is the default in this release. The default is expected to change to "high" later on. The new compression types "high" and "extreme" are considered BETA.
This update does not support rolling updates. Stop all Humio nodes, then start them on the new version.
Changed GraphQL fields for dashboard widgets.
Drawer heights were not being persisted between browser sessions.
Humio Server 1.5.23 Archive (2019-07-31)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.23 | Archive | 2019-07-31 | Cloud | 2020-11-30 | No | 1.5.8 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 11cc896c1d0f7157e63acc1b34428106 |
SHA1 | 54b32056d10f567c56dcbad58f658ad88d5f0617 |
SHA256 | 0495a5384f2759d6e1adb9ff9030f9ded72b463ac667f74481878b8f2b0df5ed |
SHA512 | 7013f12452a3f191a69630a105a171f6f496d6eca88ca8bae794d7f304669e99a296d17d1b9a9123ed97da8514f657abfa01403048412016be44f250671eb09c |
Maintenance Build
Fixed in this release
Summary
Include size spent on hash filter files on disk in the cluster overview as Humio data rather than system data.
A `case` that assigned fields inside was not handled properly when pre-filtering using the hash filters.
Configuration
`CACHE_STORAGE_SOURCE` defaults to `both`, and also allows `secondary` to only cache files from the secondary storage.
Functions
Function
collect()
now requires the set of fields to collect.
Humio Server 1.5.22 Archive (2019-07-11)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.22 | Archive | 2019-07-11 | Cloud | 2020-11-30 | No | 1.5.8 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 03a4585d4f91d76dcf32eb2605b2f2a5 |
SHA1 | 8b449045065f1fcac5fa4720154dd5c07b671dec |
SHA256 | dcf4c3c2646071ef7ec4e96daba79fc62b10b7d23e7afab168fdd0f1047ffee7 |
SHA512 | e455968440ae5b86d001f700616789dbfa3298cd02e769b473478bf5ff04f888b5c162d1afe5322cc35b040fdbe5d5817feaa850f26770e4f91b0928e595b78e |
Maintenance Build
Fixed in this release
Summary
Improved performance of the `/query` endpoint.
There is now a `humio-query-id` response header on responses to `/query` search requests.
Always close everything when the Akka actor system is terminated.
Humio Server 1.5.21 Archive (2019-07-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.21 | Archive | 2019-07-04 | Cloud | 2020-11-30 | No | 1.5.8 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 3e69e9568f9def1fe56c9772266e529e |
SHA1 | 17a1f945b825bb6220aa6cf4cf1d4675420578ac |
SHA256 | 84550bd4f356c57e37055b34f5a0fc00c25a20e08faa2d6a0f8843e029550932 |
SHA512 | dcbf18ea92afa7f5c0cbf1f41835a948e3aa0a6b2e303d1c40d8c2c8bd9c9084186e73c1a2aa76221381bf6f66872b7708e994243969273344e28fcb73b7db1c |
Maintenance Build
Fixed in this release
Summary
If an event gets `@error=true` in the ingest pipeline (including in the parser), it will also get `#error=true` as a tag. This makes events with an error become a separate datasource in Humio, allowing you to delete them independently of the others, and keeps problems from parsing timestamps from disrupting the pipeline when backfilling old events.
Functions
New function `dropEvent()` lets you discard an event in the parser pipeline. If a parser filters out events using e.g. a regex match that does not match, the parser will just keep the incoming events. Use this new function (typically in a `case`) to explicitly drop an event while parsing when it does not match the required format.
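A parser-pipeline sketch (field names are illustrative): events matching the expected shape continue through the parser, everything else is dropped explicitly:

```
case {
  loglevel = * | kvParse();  // expected events: continue parsing
  * | dropEvent()            // anything else: discard during parsing
}
```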
Humio Server 1.5.20 Archive (2019-07-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.20 | Archive | 2019-07-04 | Cloud | 2020-11-30 | No | 1.5.8 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 6e7fc0c55dcb7e983c3e9ed01fe2a998 |
SHA1 | bee0844d369a237d7ce7da118806be97bb1fab83 |
SHA256 | d647ba881452c4905e44894168461f389bce43e9aa93c58d719f1ec9de62dd13 |
SHA512 | 4631ea0f892c5a6044a7c0b008b1350ce2d3408883e73fb3d357bb4f9920c762efbcf7efbb26b1fa3bb5577c327f260c243371c6a367d566ed0dc48794600499 |
Maintenance Build
Fixed in this release
Summary
"services" is no longer a reserved repo name.
Humio Server 1.5.19 Archive (2019-07-03)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.19 | Archive | 2019-07-03 | Cloud | 2020-11-30 | No | 1.5.8 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | fa9d61d17a037583bbe067a4aab64ab3 |
SHA1 | 1793534d22cfc53c5b15dde0dc60d008ad9c9204 |
SHA256 | 6e21cd3874a948a3ddf3554911c3104e60a2ca33e8e4a039755eca42c03e1508 |
SHA512 | ec09a60164c981970b20fd401140a5dac2e3814f6e1254c3472721ace2db957b92f9035b8833436008054220385b55e1e83b93f6b1547842c289a0194522913c |
File based parameters on dashboards. This release also makes it easier to configure load balancers by adding sticky session headers to most UI HTTP requests.
The existing header Humio-Query-Session is used. For non-search related HTTP requests it will contain a random sticky session ID. For search related HTTP requests it contains a hash of the query being executed - just like it has done previously.
A new file based parameter type has been added to dashboards.
Fixed in this release
Queries
The HTTP request header Humio-Query-Session is now added to most requests from the UI.
Other
Make failover to the next digest node faster when a node is shut down gracefully, by delaying the shutdown a few seconds while letting the follower catch up.
Fixed Interval Queries on dashboards used the time the dashboard was loaded as the definition of "now". They will now use the time of the last change to the dashboard's global time.
Improved performance on servers with many cores for functions (such as top) that may require large states internally.
New shared files. Shared files can be used like the existing files that are uploaded to repositories. A shared file is visible in all repositories and can be used by everyone. Only root users can create and manage them. For now, shared files can only be added using Humio's Lookup API. Shared files are visible in the files tab in all repositories for all users. Root users can also edit and delete shared files there. Shared files can be used from query functions like `lookup()` and `match()`; they are referenced using the path /shared/filename.
New type of parameter: Dashboards can now have file based parameters, which are populated with data from files uploaded to Humio.
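For instance (the file and column names are hypothetical), a query can enrich events from a shared file:

```
// /shared/hosts.csv is a hypothetical shared file with "host" and "site" columns
match(file="/shared/hosts.csv", field=host, column=host)
| groupBy(site)
```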
Humio Server 1.5.18 Archive (2019-06-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.18 | Archive | 2019-06-26 | Cloud | 2020-11-30 | No | 1.5.8 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | c4d0166d229209681cd23f4d34ce0e75 |
SHA1 | 23aeac090b6069258d168f235d025d497baf6147 |
SHA256 | db78a9fdc47bc3f3687e758c9e1dc5c13c14525626199568889226121eca4629 |
SHA512 | a8b66956d7d58a6bba71884ea01c91fea88d96b1c58ff9ba63a0f111da84f270e8bee16e87e1148415635425704472a5d4156b4300b1d7bb58ddd96091cdee49 |
New function parseXml() and support for ephemeral drives for caching.
Fixed in this release
Summary
Humio can now keep a cache of the most recently used files when told the path of a cache directory using `CACHE_STORAGE_DIRECTORY`. Humio will then write copies of some of the files from primary and secondary storage here, assuming it is faster to read from the cache. The cache does not need to remain after a restart of Humio.
`CACHE_STORAGE_PERCENTAGE` (default 90) controls how much of the available space on the drive Humio will try to use. This is useful on systems such as AWS where the primary data storage is durable but slow due to being across a network (e.g. EBS), while the server also has fast NVMe drives that are ephemeral to the instance.
Certain regular expressions involving `^` and `$` could fail to match.
`MAX_EVENT_FIELD_COUNT` (default .0) controls the enforced maximum number of fields in an event in the ingest phase.
New built-in parser `corelight-es` to parse Corelight data sent using the Elastic protocol.
Reduce size of global snapshot file.
Remove configuration flags: `REPLICATE_REMOTE_GLOBAL_HOST` and `REPLICATE_REMOTE_GLOBAL_USER_TOKEN`.
Parameter input fields for query based parameters initially always showed `*` even when a default value was set. It now correctly shows the default value for the parameter.
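On a host with a fast ephemeral drive, the cache settings described above might be combined like this (the mount point is an example):

```
CACHE_STORAGE_DIRECTORY=/ephemeral/humio-cache  # example path on a local NVMe drive
CACHE_STORAGE_PERCENTAGE=90                     # default; share of available space the cache may use
```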
Functions
New function
parseXml()
for use in parsers
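A minimal parser sketch (the field name is illustrative):

```
// Parse an XML payload carried in a field into fields on the event
parseXml(field=payload)
```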
Humio Server 1.5.17 Archive (2019-06-20)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.17 | Archive | 2019-06-20 | Cloud | 2020-11-30 | No | 1.5.8 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | d40aa67958efee188fbbd801389ca6a0 |
SHA1 | 912ff791cc76312bdc3c3e72877e9872b0cebca7 |
SHA256 | 45a48f7cdb99d4e08f95f1ef6802ecf730e8ae997b23ad1ba6f2f1e5fec14e91 |
SHA512 | ce51829c5152bed8c2d67581239365c4055ddc038545f20877c6a1d499aca6f69b920bc006d2499659194d0b4bf1ebdef7be7d55262318620dd73ee76c8d822b |
Maintenance Build
Fixed in this release
Summary
Update BitBucket OAuth integration to version 2
Updates to repos with reserved names on legacy repos did not work.
Humio Server 1.5.16 Archive (2019-06-11)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.16 | Archive | 2019-06-11 | Cloud | 2020-11-30 | No | 1.5.8 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 1ae08e9fa6216dd50b413b0b733054d3 |
SHA1 | 9da479f33fa955645eb6f8c048a335dee59ff5a8 |
SHA256 | a5738bb660340e67570f673652c49efc2b6e9e4281ddab1c2bc6fdd242418644 |
SHA512 | b96b45050e0cb5d0a1b90968bf22b2bc8ee746076a266f147802b33f920cbac73dc2d5418ff86ed672411758c304fe4d387bb1bdc8c068594acda682fd66ec5b |
Maintenance Build
Fixed in this release
Summary
On Windows, the Ctrl+O shortcut no longer opens the "jump" menu on the home page, but Ctrl+Y does instead, to avoid conflicts with browser shortcuts.
Ability to read global-snapshot.json when the file is larger than 1GB.
Invalid parser no longer prevents ingest token page from loading.
Humio Server 1.5.15 Archive (2019-06-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.15 | Archive | 2019-06-06 | Cloud | 2020-11-30 | No | 1.5.8 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | ad5393b90a76798037d974eb15274170 |
SHA1 | a39f7d6a55a9e20d35a76cc77d1593e733f82fa0 |
SHA256 | ef07ed1498a14912ae33c438dbce446bd3c515598b56a53590bb28e0da764eac |
SHA512 | 35d8cf32cfffc01456fea778186098acc17fadd6003ca910eca6165c6113cbd5ad9a68a5fcd52a9ab9934a51bc3e9175f1f139a5b400e4249cfcd7f2b7faddda |
Dashboard Improvements and Bug Fixes
Fixed in this release
Summary
Regex with `[^\W]` did not execute as `[\w]`, as it should.
Dashboard parameters with a fixed list of values now keep the order they were configured with.
Dashboard parameters with a fixed list of values can now have labels for each of the values.
Humio metrics have now been documented.
Configuration
`VALUE_DEDUP_LEVEL` defaults to the compression level. Range is [0; 63]. Higher values may trade extra digest time for lower storage of events with many fields.
Functions
New function `eventFieldCount()` that returns the number of fields that this event uses internally for the values. Use it along with `eventSize()` to get statistics on how your events are stored.
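A sketch of such a statistics query (this assumes the functions emit their results into `_eventFieldCount` and `_eventSize`; verify the output field names in your version):

```
eventFieldCount()
| eventSize()
| avg(_eventFieldCount), avg(_eventSize)
```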
Humio Server 1.5.14 Archive (2019-05-29)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.14 | Archive | 2019-05-29 | Cloud | 2020-11-30 | No | 1.5.8 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | a04ea786b04438c4028a880948ab4ac0 |
SHA1 | 8d82c939347c1a28e46efa5605b5592e5b25824d |
SHA256 | edc162b7dcd604948cca55d924838754ca0162e3d5d97ba26a5e8cfd0df512b2 |
SHA512 | 30935d7230af57de26beb316a5d7c79a35a01d23e41dc4a5d06268910a7aa95535e1f7fbe88448a9f504e7e9a2c97fd43bfa0f90e4ce2a12c1976e5250a0999a |
Improved Pre-Filters
Fixed in this release
Summary
The disk space occupied by the pre-filter files now gets included when enforcing retention by compressed size.
The format of the new `hash5h2` files is different from the previous `bloom5h1`; the system will generate new files from scratch for all existing segment files and delete any existing `bloom5h1` file.
Improved pre-filters to support more searches while adding less overhead in disk space.
When the `file` and `column` parameters to the `cidr()` function are used together, the subnet list is loaded from the given CSV file.
Automation and Alerts
When an alert fails to send the notification, don't restart the query, just retry the notification later.
Humio Server 1.5.13 Archive (2019-05-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.13 | Archive | 2019-05-27 | Cloud | 2020-11-30 | No | 1.5.8 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 894e1bbdc1a57f2f0b46c162f42c3f46 |
SHA1 | ba223177715e7090610100557f0929eb9c628ac0 |
SHA256 | 5eb2a227ca5964915ebdd96bce5ae7bb0adb611e983714459440685ab45c86b6 |
SHA512 | 07a7237614c98045b72e5dffc3cb5d480adbf7b62747e9932e1599579128b205c9a80cf73983ea2c724dab0a9ba0d2226c84b9cffdbb45ab9e7bdb4eb938d623 |
Metrics are now sent to a separate file, `humio-metrics.log`.
Fixed in this release
Summary
New log file `humio-metrics.log`. Metrics data has been removed from `humio-debug.log` and moved to `humio-metrics.log`. Metrics will still also be in the default Humio repository. If you are collecting the Humio log files with, for example, Filebeat, you need to add `humio-metrics.log` to the collector.
Fix some cases where parameters would not be picked up by the UI because of regex or string literals in the query.
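For the `humio-metrics.log` collection note above, a Filebeat input covering the new file might look like this sketch; the paths are illustrative and depend on your installation:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/humio/humio-debug.log
      - /var/log/humio/humio-metrics.log   # new file to collect
```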
Humio Server 1.5.12 Archive (2019-05-20)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.12 | Archive | 2019-05-20 | Cloud | 2020-11-30 | No | 1.5.8 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | a560cb0ce83c1809b9a5058b69591d7e |
SHA1 | b631299178941b9b104a35c5d446942b183889ec |
SHA256 | de58549b4e0b8cc5e1f57c75592259cea90eb022f280f422f1bcb8bdd7068c6b |
SHA512 | 58c40dae601d284b1a4c4fdf19629955c19c74abcb186aec61a8ce1f3c192909c21503f4eec8fc17ede7238f473e497dfe32b24a4e1a6a4c36902cf1ae4c4c0f |
Parameters can be used to make dashboards and queries dynamic.
Fixed in this release
Summary
You can now use the syntax ?param in queries. This will add input boxes to the search and dashboard pages. Read more in the Manage Dashboard Parameters documentation.
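A minimal sketch of the parameter syntax; the field and parameter names are invented for illustration:

```humio
// an input box labelled "level" appears on the search or dashboard page
loglevel = ?level
| timechart()
```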
Parallel upload of segment files to S3. The degree of parallelism can be controlled with e.g. `S3_ARCHIVING_WORKERCOUNT=4`. Default is 1 if nothing is specified.
URLs now contain parameter values, making it easy to share specific dashboard configurations.
Humio Server 1.5.11 Archive (2019-05-16)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.11 | Archive | 2019-05-16 | Cloud | 2020-11-30 | No | 1.5.8 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 4f01eabfd30739ac228e423a5f51582f |
SHA1 | e7c174433f19fbe81a13f6e7b2b8d849c480d22b |
SHA256 | e2eb4d827bd1a1a67923c30a535d8b968f4574c9bbf59bfc84c8a2fcfbccaec2 |
SHA512 | 952f9de0089c9528accfee959951a54ef400a3ce49d37d1666234a7ec3f1d9de1e7e164d8ff101e461e62f31faf4331575e63eb2fc4b54484bb9f6d9267e8bb2 |
Bug Fix Release
Fixed in this release
Summary
Bloom filters are now always on.
Named groups in regular expressions support having `.`, `[`, and `]` in their names.
Moving segments to secondary storage can no longer be blocked by merging of segment files / S3 archiving.
Humio Server 1.5.10 Archive (2019-05-13)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.10 | Archive | 2019-05-13 | Cloud | 2020-11-30 | No | 1.5.8 | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 1b828fd0d80aa3c80aa9f1d4bba0dbd6 |
SHA1 | 998c039c894acdbe30d83efa414428ab77de2d7e |
SHA256 | b18d7b1844f4482f0e1aa8afe7e67ad1bf06e8bb5f40faebe9179c49dc72a412 |
SHA512 | cb3efdf806231d2f3d71d378054e7513eed3bd73ccb445159f8232da4df14bae1f1165bde27d5536554f178dbcde3973d2d64e2f33271247e734cc3d6b88fd3f |
New bloom filters, but please upgrade to 1.5.11 to avoid known problems in this build.
Fixed in this release
Summary
When enabled, this will write files with the prefix `bloom5h1.` along with the segment files, which adds approximately 5% storage overhead.
MUST be enabled with `BLOOMFILTER_ENABLED=true` (Note! Defaults to `false` in this release, which makes searches skip events they should not).
New experimental bloom filters that speed up searching for constant strings such as UUIDs and IP addresses; the longer the search string, the bigger the speedup. The bloom filters also help regular expression searching, including case-insensitive searches.
It is safe to just delete any `bloom5h1.` files while the system is running, or in case the feature needs to be disabled.
The bloom filter files will be generated as part of digest work, and also generated for "old" segment files when Humio is otherwise idle. Thus, when the feature is initially enabled, the CPU load will visibly be higher for a period of time.
Humio Server 1.5.9 Archive (2019-05-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.9 | Archive | 2019-05-06 | Cloud | 2020-11-30 | No | 1.5.8 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | b6dc38838aac82596ea8eebf294b9a97 |
SHA1 | ca35bf05425265706467adc415c8ff558bad12f9 |
SHA256 | 4d0af39687972dc6bb476928edf9c0b10d956336a564d0a828b8e036174e1da5 |
SHA512 | a2b8e9512fc287c8ff3f2d40a734d5bab872ccedf995bdeeb8a3fc900169a7d1bfaa85a6a30e43e0272ea97bf3e3f44e6f397535818e6e7b1fac15ca0ced98d2 |
Bug Fix Release
Fixed in this release
Summary
Add information on query prefixes to the Query Monitor. When inspecting a running query in the Query Monitor, the query prefix can now be found in the details pane.
Enable the `sourcetype` field in the HEC endpoint to choose a parser (unless another parser is attached to the ingest token).
Default filters in dashboards could cause search to not find anything.
Automation and Alerts
Alerts with multiple notifiers could result in notifications not adhering to the configured notification frequency resulting in notification spam.
Humio Server 1.5.8 Archive (2019-04-25)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.8 | Archive | 2019-04-25 | Cloud | 2020-11-30 | No | 1.4.x | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 4898c50c07b21c557e16f9ecb0525b64 |
SHA1 | 6f7de5e75418ab04752927081ac4af0156b78df9 |
SHA256 | 449c36c1b9cf793db02250e1d089594491fde458f46b33b2b4b2967ef7e0bef7 |
SHA512 | d382620aa86df5fc7d24977d6097f1d40829e5b1c5cce5431ce6110ca256be99a636bdb1d5b0d322fee1cc784d55f7b4cc12ae78da059b4089cbb9739494e7e0 |
New dashboard editing code and many other improvements
Fixed in this release
Summary
In table view, if column data is of the form `[Label](URL)`, it is displayed as `Label` with a link to the URL.
Dashboard queries that are not live and use a time interval relative to now are migrated to be live queries. Going forward, queries with time intervals relative to now will be live queries when added to dashboards.
S3 archiving now supports forward proxies.
`parseTimestamp()` now handles dates, e.g. 31-08-2019.
`@source` and `@host` are now supported for Filebeat v7.
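A hedged sketch of parsing such a date with `parseTimestamp()`; the field name and format pattern are assumptions, not from the release note:

```humio
// parse a day-month-year date like 31-08-2019 from the field "ts"
parseTimestamp(format="dd-MM-yyyy", field=ts)
```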
The Auth0 integration now supports importing Auth0-defined roles. The new config `AUTH0_ROLES_KEY` identifies the name of the role attribute in the JWT token coming from Auth0. See the new Auth0 config options in Map Auth0 Roles.
Validation of bucket and region when configuring S3 archiving.
Alert notifiers with the standard template did not produce valid JSON.
Built-in audit-log parser now handles a variable number of fractions of seconds.
Humio's own Jitrex regular expression engine is again the default one.
Configuration
The config property `KAFKA_DELETES_ALLOWED` has been removed and `DELETE_ON_INGEST_QUEUE` is introduced instead. `DELETE_ON_INGEST_QUEUE` is set to `true` by default. When this flag is set, Humio will delete data on the Kafka ingest queue once the data has been written in Humio. If the flag is not set, Humio will not delete from the ingest queue. No matter how this flag is set, it is important to configure retention for the queue in Kafka. If Kafka is managed by Humio, Humio will set a 48-hour retention when creating the queue. This defines how long data can be kept on the ingest queue and thus how much time Humio has to read the data and store it internally.
Humio Server 1.5.7 Archive (2019-04-10)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.7 | Archive | 2019-04-10 | Cloud | 2020-11-30 | No | 1.4.x | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 5ed44bb802f54fc03ea606da0e5d2afe |
SHA1 | d3c3d7705366851dfa019214c3cf2b653914c7f2 |
SHA256 | 41d0da3dd48732136ae245209cb549946e46f31ff0bca9a2fd37d2d9c73b05f4 |
SHA512 | f07ac6a204637973c64f830082245475b7ec3b6dfe01eb98d63a68adaa5f352a0c4647ec189cbc12f543c50e093d63475377031b869538be076c097394a853dc |
Bug Fix Release
Fixed in this release
Summary
Revert default regex engine from jitrex to RE2J. jitrex has a case where it may loop infinitely, and this will break the digest pipeline if it happens in a live query.
Temporarily disable deletes of events from the ingest queue to allow recovering events skipped in the queue due to the above infinite loop problem.
Humio Server 1.5.6 Archive (2019-04-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.6 | Archive | 2019-04-04 | Cloud | 2020-11-30 | No | 1.4.x | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 614a06c67d4e93e1a3f2083ce5cfe4c0 |
SHA1 | 57d3259a0469703caa38975a0be19062f0954e57 |
SHA256 | e410583413293132f0e6f09884c3387282892b31164c77472cb0f4343f4d6ec6 |
SHA512 | 0a00735576c700ee063824ead2845452cbf1f00530014be127d13cb7da89a55143d64192755b00e3039f175cc64985cad736cd9dbe08f54005163f43008c1a1c |
Bug Fix Release
Fixed in this release
Summary
LDAP integration.
Humio Server 1.5.5 Archive (2019-04-03)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.5 | Archive | 2019-04-03 | Cloud | 2020-11-30 | No | 1.4.x | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 39a4777bd2dae7d69177426d063dc6df |
SHA1 | 6d27f9ae18dbcb49bfc37f7c014db93f331652ea |
SHA256 | e3ec14f8cbc6d78c4beede209fe4be958e5c727fe3a7fcb2d7919afc81c3c85d |
SHA512 | 7d3c645974227a9e9dfd8ea261923f516b626d02eea73ab5bd7fdccd3bfdfbfab7f05e37abe49019d7a0ff97bd3824646e4de1ff8f587fdbf3623bed77c9da42 |
Event Context
Fixed in this release
Summary
The size of the Drawer, showing event details on the search page, is remembered (by being saved to local storage)
New event context searches let users select and search around one specific event.
Segment merging could reuse a tmp file when the system was restarted, which would block the merging process on that host from making progress.
Fix bug in regex not recognizing `[0-9]` as part of `\w`.
Restart all relevant queries when an uploaded file gets changed. This allows live queries and alerts to refresh using the latest version of the file.
Live timecharts could accumulate data for 2 buckets instead of 1 into the bucket that was right-most when the chart starts.
A `GET` on `/api/v1/users` that lists all known users on the system no longer includes information on the repositories for the user, as that made it too slow.
Display information on disk space usage of primary / secondary storage location in the cluster management UI.
Uploaded files cannot be bigger than specified in the config `MAX_FILEUPLOAD_SIZE`. Default value is .0 megabytes. The default value is used in our cloud.
Configuration
New parameters `MAX_SERIES_LIMIT` and `MAX_BUCKET_POINTS`.
Humio Server 1.5.4 Archive (2019-03-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.4 | Archive | 2019-03-26 | Cloud | 2020-11-30 | No | 1.4.x | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | d3c301554f98e7540728d412b6452e49 |
SHA1 | c172bf5c456df3ded1e08f37f232b0ea7fbf96eb |
SHA256 | d4b23822b1a63846913dbd9421cfc3a7d937af4c3b3d17262b3674e5a4a99a16 |
SHA512 | 82a2f80540e623f05eab2219cf79283f406255b18ae10083de1eecf30038ab8b413d497138c1cfa6c758978c490cbfbc21b3e4abc63e19c1d3bf47c8f1625fb9 |
Bug Fix Release
Fixed in this release
Summary
Query scheduler could get into a state of doing no work when overloaded during startup. Workaround while working on proper solution: Raise the queue size internally.
Humio Server 1.5.3 Archive (2019-03-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.3 | Archive | 2019-03-26 | Cloud | 2020-11-30 | No | 1.4.x | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 64439aad4fb74f03ab9cf140de039d2e |
SHA1 | 16f55ee1018400f92aec8fd0330e54b81077d4dd |
SHA256 | 6acb0265a88ee250390a1021db382724148e8b09a7738fa7c948fd6f95d06870 |
SHA512 | 9705984865b1852aff9b4237d391ea94b40e0cee972e1ed82fc4e5b9f84a08a69e71bfd5614ad2b141479ff1b1b50523ab1ef9e7b05c5ce31eb6a35d51f40c96 |
Bug Fix Release
Fixed in this release
Summary
Webhook notifications could end up as malformed requests.
New config flag `WARN_ON_INGEST_DELAY_MILLIS`: how far behind the ingest delay must fall before a warning is shown in the search UI. Default is 30 seconds.
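For example, to raise the warning threshold to 60 seconds (a sketch; the value is illustrative):

```
WARN_ON_INGEST_DELAY_MILLIS=60000
```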
Humio Server 1.5.2 Archive (2019-03-25)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.2 | Archive | 2019-03-25 | Cloud | 2020-11-30 | No | 1.4.x | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 5df6f461a9703cdf6ae3095f73502e53 |
SHA1 | 5400b067c8e7c56b1febc4ed4903e62499f83470 |
SHA256 | 8814989d98be9aa7b53ddcdeaffa394056ca70c08edf6c5e2cdcf05a4db21c86 |
SHA512 | de757392b3ae568ed348af4bd0153ebc76b15e4f2b16a540f9af756235479f833b0a8f1d92cd3a87ea011dded75841f1dea91f1d15675ec7c465cc6db32f637d |
New functions for Math and Time operations.
Fixed in this release
Summary
Query prefixes for users were not properly applied to the "export" API.
Repositories with names starting with "api" were inaccessible.
Date picker marked the wrong day as current.
New functions for "Math" and "Time" operations.
Java version check: Allow JDK-11 and 12.
Humio Server 1.5.1 Archive (2019-03-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.1 | Archive | 2019-03-22 | Cloud | 2020-11-30 | No | 1.4.x | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 669f2e0e026ac5ce1ada07221cbd2397 |
SHA1 | 9157d3100b8c35d1c36ef03323aa1431de11b0d4 |
SHA256 | 846f0de0235cf4835e5da75853d4fbc069ad6d0fb2c74bafda3a0d471e1041c7 |
SHA512 | 5e27430929861a3ef0c3aea3accc88d7b57b8ac133d87dd1757cbc8ed88cb3914a93592c368886eed2b938c576d81099ee97fa60a79118fb4f6d120060dbb705 |
The default regex engine has been replaced.
Fixed in this release
Summary
The new regex engine (Humio jitrex) is now the default; configure using `DEFAULT_USER_INPUT_REGEX_ENGINE=HUMIO|RE2J`. If you experience issues with regular expressions, try setting the configuration back to the previous default, RE2J. You can also pass the special flags `/.../G` (for Google RE2J) or `/.../H` (for Humio jitrex) to compare.
Timechart is more efficient in the backend, better supporting more than one series.
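A hedged sketch of forcing a specific engine on a single expression using the flags mentioned above; the pattern itself is invented for illustration:

```humio
// /.../H forces Humio jitrex, /.../G forces Google RE2J
@rawstring = /error-[0-9a-f]+/H
```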
New implementation backing `match(...)` for exact matching (`glob=false`) allows using `.csv` files of up to 1 million lines. The limit for exact-match state size can be set using `EXACT_MATCH_LIMIT=.0.0`.
No owls were hurt in the production of this release.
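A sketch of the exact-match lookup described in the `match(...)` entry above; the file and field names are illustrative assumptions:

```humio
// exact (non-glob) lookup against a CSV of up to 1 million lines
match(file="users.csv", field=userid, glob=false)
```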
Killing or blacklisting a query did not always stop it on all nodes.
Humio Server 1.5.0 Archive (2019-03-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.5.0 | Archive | 2019-03-15 | Cloud | 2020-11-30 | No | 1.4.x | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | c3615501d40a6cb19301c26174548841 |
SHA1 | 0cb80023e27a2e43e6b0281711ed4483e1415caf |
SHA256 | 5ec7ad5370a1768d886885ecf118ea6bcd09af362175011521e1209308ab35b6 |
SHA512 | f5e5d71332d774f87e13143b458707c1ddbf114901292c4b1cbdef27844f0c6e7460afde36c6fa0f5e638c6251b908fcae3b22bcb908506726824399d568b78c |
All parsers are now written in Humio's query language.
Fixed in this release
Summary
New BETA feature: Delete Events allows deleting a set of events from the internal store using a filter query and a time range. At this point there is only API (GraphQL and REST) for this but no UI.
The option 'PARSE NESTED JSON' on the old JSON parser creation page is no longer available/supported. Instead use `parseJson()` on specific fields, e.g. `parseJson() | parseJson(field=foo)`. This has to be done manually for migrated JSON parsers.
Permission for editing retention when running with `ENFORCE_AUDITABLE=true`.
Migrated regex parsers with the option 'PARSE KEY VALUES' enabled have different parse semantics: if the regex fails, key-values will no longer be extracted.
All parsers created before the introduction of parsers written in Humio's query language are migrated.
Non root users could not see sandbox data when using RBAC.
Humio Server 1.4.9 Archive (2019-03-13)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.4.9 | Archive | 2019-03-13 | Cloud | 2020-11-30 | No | 1.3.2 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 658abeee302dd14ad05a2b7d50ca5a6a |
SHA1 | c5bf3e514152899299880a77faf08fb7bb0595d8 |
SHA256 | dc48a0753a3113ced36f67fd91814735ba31f01cf51cdd6cd4ab31614460acb0 |
SHA512 | fa0f2732aaca9ad7bfde148270547efeec8da6a1d237c88d5d315e25269666054fb0ef76e08d2cfbebef5a29b460a947d75d3994f18c9e0bf0676d226b16fffd |
Bug Fix of retention not working.
Fixed in this release
Summary
Retention was not applied to all segment files in a clustered setup. The bug was introduced in 1.4.4.
Increased the `AUTOSHARDING_MAX` default from 8 to 16, and autosharding now starts at 4 instead of 2.
Prevent labels in gauge widgets from being clipped.
Automation and Alerts
Fixed handling of white space in field templates in alerts.
Humio Server 1.4.8 Archive (2019-03-11)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.4.8 | Archive | 2019-03-11 | Cloud | 2020-11-30 | No | 1.3.2 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | b4757d785261fc262a3d618296d690d1 |
SHA1 | 1dc9f2712b87567e40889262da8095086e060d6a |
SHA256 | 0eed830ffdd0ae4bfbacf5f2627a29104a0928e1e3b6635553cf841cef5f0aa2 |
SHA512 | 0950bc904e0b80b501c4fca4ee8634db2ca9fcca4fd9154e77dd8c223e867e754d1147b9013434cf15566d5fe21ccfe6056416d1ecf6d242a974141f88673acd |
Bug Fix Release
Fixed in this release
Summary
Bug fix: create default directories in the ZooKeeper Docker image.
Bug fix for the error-message handling introduced in the last release.
Humio Server 1.4.7 Archive (2019-03-07)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.4.7 | Archive | 2019-03-07 | Cloud | 2020-11-30 | No | 1.3.2 | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | d09b1e4945cff155ce459cdd293bb4e9 |
SHA1 | d44e10514f3c9b1e46c9b3ccaef5f9418226b4e1 |
SHA256 | bd950358b9eb8f20269ee2823fd579a856c535237d49002891f6f9455c117fd7 |
SHA512 | f3b838a1c2b284467b99e395963685e47ca57a2c774c8a77ef3b51878fece9a8d6dd5371818f9897c6f591996a50f158a233b9cf511ce3167bd887047f4aec92 |
Saved queries allowed in `case` and `match`.
Fixed in this release
Summary
Humio now requires Java version 11. The docker images for Humio now include Java 11. If you run the "plain jar" you must upgrade your Java to 11.
Improved handling of the "Kafka reset" aka "Start from fresh Kafka" aka "Set a new topic prefix". Humio detects and properly handles starting after the user has wiped the Kafka installation, or pointed to a fresh install of Kafka.
Upgraded Kafka to 2.1.1 in our Docker images and the Java client in Humio. Humio is still compatible with older versions of Kafka. The lowest supported Kafka version is 1.1.0.
Saved queries are now supported in `case` and `match`.
Humio Server 1.4.6 Archive (2019-03-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.4.6 | Archive | 2019-03-04 | Cloud | 2020-11-30 | No | 1.3.2 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 8cf206c7d6972c1268f9b01d78b231a7 |
SHA1 | b2b550a80c08715ee6a5f073612e05da7e16468d |
SHA256 | 38aaad962f12cd6d3bc44d0d2467cd9c8e6d12568f9fbe2cd527fbfa23895968 |
SHA512 | f078c07a386aa88ac910f198d81fd6d8d230c5e5e3b205afad6a29741ab1a4e6cb918828d8e2e1fe16e8c3e705705b6eeab5d0cfce17740367ba97b8d0804fe4 |
Bug Fix Release
Fixed in this release
Summary
Prevent 'http response splitting' attack in the "export as" function.
The personal sandbox was missing in the list of visible repos for non-root users when `READ_GROUP_PERMISSIONS_FROM_FILE` was enabled.
`ENABLE_PERSONAL_API_TOKENS` defaults to true. When set to false, the API tokens are no longer valid as auth tokens.
When `@timestamp` is in the filter part of the search, it limits the time interval as if selected in the Time Selector.
Humio Server 1.4.5 Archive (2019-02-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.4.5 | Archive | 2019-02-27 | Cloud | 2020-11-30 | No | 1.3.2 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | a40c7e599b3e4fafc887ef8712552f70 |
SHA1 | 61f99d83c67f6a8d993f5bb13ca5f59887f85358 |
SHA256 | 5af2499bd32943ccdc3377b609b244ebf87c32e5c4d4d2c632dc2e7ee1921c3f |
SHA512 | 21f1a835d97f96347b0480b3a9b122cf7ff2a6e2cab7ad0a47b00d8b36ba3a24dafd78d34ee2404659388bbbf319ca066ce47cee7aa2048037fd03a475324c3f |
Bug Fix Release
Fixed in this release
Summary
Many background tasks now get executed only on hosts with segment storage partitions assigned, and the hosts use the storage partition assignments as the key to decide which hosts must execute the tasks, thus freeing up resources on the other hosts.
Shutdown of digest had an internal timeout of 10 seconds, which could lead to it being dropped too soon while shutting down or restarting. This could result in ingest lag rising to over .0 seconds, where the expected lag is the time from when the shutdown is initiated until a few seconds after the new instance is started. There is a new config `SHUTDOWN_ABORT_FLUSH_TIMEOUT_MILLIS`, which defaults to 300000 millis (5 minutes), to allow proper shutdown also on systems with many datasources or slow filesystems / disks.
Timechart in "steps mode" now displays the step to the right of the label instead of to the left, which matches the fact that the labels are the start time.
`NODE_ROLES` is now applied in more background tasks.
Humio Server 1.4.4 Archive (2019-02-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.4.4 | Archive | 2019-02-26 | Cloud | 2020-11-30 | No | 1.3.2 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 03050687bddbd6ffe56830ed84f827e4 |
SHA1 | f1f9cabbcbddb014e4aef0f8e1869fbbd8c60c8d |
SHA256 | 8ab6c51081d0b918de61340f7a6d241f740b3ec77ae9d3a7517805c925412af3 |
SHA512 | 718e7abf3b89076d07bc36c89606b9a72cc5b7c15432550b903931172ce6fca323aff85a0feecc4e1de5de5a02cc7ccb099c79166d59376ca8dc5dee78c8f2ee |
Bug Fix Release
Fixed in this release
Summary
Making a repository or view a favorite failed on recently created items.
Detect if a host in the cluster is being set to have the same vHost index as this host, and exit in this case.
On a cluster with many segments and a node with no segments, the cluster administration page could time out.
Repository statistics displayed on frontpage were out of date on servers without any digest partitions. This also made the search page display the warning "You don't have any data yet, consider the following options..." when searching until a result of the search was returned.
Having timezone offset larger than the span in a timechart could result in errors.
Humio Server 1.4.3 Archive (2019-02-21)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.4.3 | Archive | 2019-02-21 | Cloud | 2020-11-30 | No | 1.3.2 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | fb4d7742cb6418b51024f84bb61f9346 |
SHA1 | 6ba6f2f9636a28db9c4b882e07df356e2c255f6b |
SHA256 | 440bc3d0321042007c8b0c3749d93175e15d554dae66c1863495bb7464a9dcc4 |
SHA512 | f7a5c820e7482889371217d32faabda602f31389242876b5f7e49a2a8081804707b4013be6dd3323da03b3518aa852cb88c2820ebc44732fbb26330b03102436 |
Improved restart of live queries.
Fixed in this release
Summary
Restart live queries if their query prefixes change when using Role based authentication and access control (RBAC).
Remove migration from internal data formats older than what v1.3.x writes. Do not start this version without having upgraded to 1.3.2 or 1.4.x first.
Improved restart performance to better support restarting (or upgrading) the servers in a large cluster with large amounts of data.
Humio's UI is programmed in Elm and we upgraded to use Elm 0.19.
Configuration
`NODE_ROLES`, with current options being "all" or "httponly". The latter allows a node to avoid spending CPU time on tasks that are irrelevant to a node that has never had any local segment files and will never be assigned any segments either.
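A minimal sketch of the new setting for a node that only serves HTTP traffic:

```
NODE_ROLES=httponly
```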
Humio Server 1.4.2 Archive (2019-02-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.4.2 | Archive | 2019-02-19 | Cloud | 2020-11-30 | No | 1.3.2 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | cf5177de7c89a5c02121d29b4d359dab |
SHA1 | 08fc9b230423eb1abde2f8f3c43c6c06e230b457 |
SHA256 | 4cfef752ca7286a2cbb91e33ca7619488c06e59079640f7f8f2fd11779066b11 |
SHA512 | 2997e1f2a5ea57ce2e7b8ce2c360e113f9dfa2a983abb2e02bcab8e555591783cf7b7b29f4f567bb0531aa83136daa98461b3b4b8cf217d22eb8f4fa8fe513fa |
Minor release. Improve restarting of queries and Ingest listener performance.
Fixed in this release
Summary
Bug fix in LDAP authentication code.
Humio Server 1.4.1 Archive (2019-02-18)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.4.1 | Archive | 2019-02-18 | Cloud | 2020-11-30 | No | 1.3.2 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 78e0cf1a91b21be2f13f92f4551c212a |
SHA1 | 4f0dc81ffd99d0a7802f2a5f4a1cc492646a5ffc |
SHA256 | 68b99a2998f50c77c91f098e5d033347ab51912e8c639751a6012b29559abd5e |
SHA512 | 1d877446dba301b5e39d723b2e58a88edba98bc46d7c969821adf2db32ae115ef46b90f1ce3d2b0ddfc7df6aa75e5d5f3042e59840007d03c2433197f9c6de87 |
Minor release. Improve restarting of queries and Ingest listener performance.
Fixed in this release
Summary
Improved restarting of searches when a node goes away.
Improved ingest listener performance. One socket can now handle more throughput than before.
Humio Server 1.4.0 Archive (2019-02-14)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.4.0 | Archive | 2019-02-14 | Cloud | 2020-11-30 | No | 1.3.2 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | fb0290d5203f178cfbbef8df7b89106a |
SHA1 | 7e2f17d867734264c91c697849884bb530fbc450 |
SHA256 | cce65b639ab277dd50cf29f2d53ff119d705c64d33a4a118b8e49b899d8dd27c |
SHA512 | 7cfad54614ef63f35fe6d3cc50d2239650845b390e0aea63f1e2e199745a68854524fdb24cad5f59867339cbe0eb2432bd904cef7e7a90e9b115d5abe6f1a52a |
High availability for ingest and digest.
Fixed in this release
Summary
Emphasis is on efficiency during normal operation over being efficient in the failure cases: after a failure the cluster will need some time to recover, during which ingested events will get delayed. The cluster needs to have ample CPU to catch up after such a fail-over. There are both new and reinterpreted configuration options in the config environment for controlling how the segments get built for this.
Digest partitions can now be assigned to more than one host. Doing so enables the cluster to continue digesting incoming events if a single host is lost from the cluster.
Segments are flushed after 30 minutes. This makes S3 archiving likely to lag less than 40 minutes behind the incoming stream.
Clone existing dashboard when creating from the frontpage was broken.
If rolling back, make sure to roll back to version 1.3.2 or later.
Functions
Humio Server 1.3.2 Archive (2019-02-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.3.2 | Archive | 2019-02-12 | Cloud | 2020-11-30 | No | 1.3.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 78600bc158479cc9f830aa5091a2e5b2 |
SHA1 | ae23e67395282d4d639a074e38e83194d7830a6d |
SHA256 | 22756db5d14705888e67f6dc0d70bd42e55af29e72dc33511e405139a637c0d7 |
SHA512 | bd64769af08191c2fb79bd332b891c38adf6484d56b211768efadb952b94b666bad30abd484514c2d49420d2c8fd944cc571491f4f04fdac7036cac0aba27434 |
Allow an alert to have multiple notifiers, e.g. both Slack and PagerDuty
New features and improvements
Automation and Alerts
Allow an alert to have multiple notifiers, e.g. both Slack and PagerDuty.
Fixed in this release
Summary
Bar charts had incorrect height.
Fixed sandbox permissions for the owner of the sandbox.
Fixed HEC ingest of arrays of numbers.
Humio Server 1.3.1 Archive (2019-02-08)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.3.1 | Archive | 2019-02-08 | Cloud | 2020-11-30 | No | 1.3.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | d4a36b05d6937062d63eb86bc4b8c605 |
SHA1 | c1cb90603d98ea824f0b86a8fa1e6d8763023533 |
SHA256 | 513759afc48f71d35a23c3ab6bf9bace08737175ab721aef4bb4c658f23ffd72 |
SHA512 | e911929e8e2cbf0bbf0727735d8ce36b265b0f5a664dc1f1e6a70fc36740998dbc217614e5411846cffa123a27d2039d1785228583075947fca4cbe0691b5ebe |
Bug Fixes in addition to a new permission model.
Fixed in this release
Summary
LDAP changes have been rolled back to allow users to log in using just their username again.
Humio Server 1.3.0 Archive (2019-02-07)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.3.0 | Archive | 2019-02-07 | Cloud | 2020-11-30 | No | 1.2.0 | Yes |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | c2c8064b9528569f55bae4809823d688 |
SHA1 | 0ca51526a8fb204a6d7f5d98ef3df9b99b9d9bb8 |
SHA256 | 0f4b789dfe2a36cab77850f9a2c1aa2c5d6db9f13527760fb58d428ff3bf6edc |
SHA512 | d6ec65028371383b770fedb57488530a144edc3ec5a684ed829d1997ff4a2bc6d4607a22d100e6729fcf8c3d30b3e480d45cd9975507754e2a891d0bafee43aa |
New permission model
Fixed in this release
Summary
Metrics of type=HISTOGRAM in the internal "humio-metrics" repo had all values a factor of 10^6 too low.
New permission model used for Role Based Access Control is now in use all the time. The default setup includes the roles member, admin, and eliminator as usual.

LDAP fix; may require users to log in with the full user@domain user name, not just user.

The config for RBAC has changed (the config file has a new name, and environment variable names have changed).
Functions
worldMap() function forgot about the normalize option.
Humio Server 1.2.12 Archive (2019-02-05)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.2.12 | Archive | 2019-02-05 | Cloud | 2020-11-30 | No | 1.2.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 8e5166154004d10cc532edf628ff9a75 |
SHA1 | 52d1c821efb9faa1e2929e37c18d4b31301ba06c |
SHA256 | 27c07d34ab9f3f1b71f4c69bdd6fa416fbb4e64a6c150dd25fef19261b0541da |
SHA512 | 87f05b4ab55a92ed45c3f0b0594a281b2ebf47b13656b69da0e7c698e0ec2676ad7000fd6bb3cca656057931319af27536ca68e290447b610afeffa7efc18645 |
Optimizations and Bug Fixes
Fixed in this release
Summary
The running-queries view now shows only the top queries, to avoid overloading the browser when there are many queries.
Added a timechart of bulk size to built-in dashboard "Humio stats".
Optimizing for many datasources in a repo by removing a bottleneck related to "tag grouping" auto-detection.
Improvements to Query Monitor.
Functions
lowercase()
function now preserves unmodified fields in the "include=both" case, and no longer modifies "@timezone".
Humio Server 1.2.11 Archive (2019-01-31)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.2.11 | Archive | 2019-01-31 | Cloud | 2020-11-30 | No | 1.2.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | abb43dfa8ee8f12402a76a179410176b |
SHA1 | 1a03197336c007c67d50dbee4a10fb9255f6babc |
SHA256 | 31917a8d9b99a7374e84d2992bf4c7abf17e9a584b1a87ad2c0c4b2ef7413563 |
SHA512 | e808107e58221d854235b2eb4b882ce6f7ce607fe5ad7daf6e3d9cebaef34cf2be39530f9f3f90c7e1af3ee3f007c755e7c9f4570712d776b82ac66e8b857d8b |
Support for non-loadbalanced queries, optimizations and Bug Fixes
Fixed in this release
Summary
When your query matches more events than are initially shown, you can now scroll further back in time, "paging" through the older events. This works for any non-aggregate query.
New "Zoom and pan" buttons to quickly change the search interval: Double the time-span or move the search interval 1/8th of the span to either side.
When your load-balancer is not "sticky" as described in Installing Using Containers, Humio now internally proxies search requests to the proper internal node.
Write Humio metrics into the new repo
humio-metrics
. Any user can query metrics but only for the repos they can search. Looking at metrics that are not repo-specific requires being a member of the humio-metrics repo.Allow any user to query the humio-audit log, but only for the actions of the user. Looking at the actions of others requires being a member of the humio-audit repo.
Changes to the desired digest-partition-to-node assignments were not reflected on other nodes until those nodes were restarted.
Functions
New function: parseFixedWidth().
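The new parseFixedWidth() function can be illustrated conceptually. This Python sketch is hypothetical (the real function lives in Humio's query language, and the field names and widths here are made up):

```python
# Hypothetical Python sketch of what a fixed-width parser does conceptually;
# not Humio's implementation. The spec (field names and widths) is made up.
def parse_fixed_width(line, spec):
    """spec: list of (field_name, width) pairs, consumed left to right."""
    fields, pos = {}, 0
    for name, width in spec:
        fields[name] = line[pos:pos + width].strip()
        pos += width
    return fields

record = parse_fixed_width(
    "alice     200 /index.html",
    [("user", 10), ("status", 4), ("path", 11)],
)
```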
Humio Server 1.2.10 Archive (2019-01-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.2.10 | Archive | 2019-01-28 | Cloud | 2020-11-30 | No | 1.2.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 9dc2a50e634009d50bca66d3bcfc5f84 |
SHA1 | 72e5306849274de3274367e6a681e331103a2b0a |
SHA256 | 06dd50f0aba251dbbc1d26c653e180786319c945c25a35650f98bab376a55340 |
SHA512 | 5dfa5517f83781d1a107dc87ae5948ab7d50ce3fb0131d4d17cf0c244badfd4f3e70b7bf7ef0b4d1639c2a579766d0e607b6d9a85de3cf586e3c3af997613928 |
Load Balanced Queries, Optimizations and Bug Fixes
Fixed in this release
Summary
Added an HTTP header to support load-balancing queries. The Humio-Query-Session header is described in Installing Using Containers.

parseCsv did not handle broken input gracefully.
New built-in parser for the popular .NET Serilog logging library.
Improved HEC performance.
Humio Server 1.2.9 Archive (2019-01-18)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.2.9 | Archive | 2019-01-18 | Cloud | 2020-11-30 | No | 1.2.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 9263f70e884f582a3bf29cf33b30dd23 |
SHA1 | 7c2ece55d70d8bf0f6a4809259033d42b54a92f7 |
SHA256 | 235a22e311557b26a2493c885cc0875df7de0b36e098149271f7230dee1d6e7f |
SHA512 | ac8f52d1f342b01978744521d6489c1efcaa98f01a678057642ea194ba097c86832d97c12d79e78d4e1aa2ba1db74c48c5b458bea66890f770916b29962e12fd |
Maintenance Build
Fixed in this release
Summary
Delete of queries on the http endpoint now lets the query live for 5 seconds internally, to allow reusing the same query if resubmitted.
RetentionJob would not delete remaining segments marked for deletion if one delete failed.
Humio Server 1.2.8 Archive (2019-01-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.2.8 | Archive | 2019-01-17 | Cloud | 2020-11-30 | No | 1.2.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 988ba57c9b42c57a996e3cca5874e4f8 |
SHA1 | eff65659bcef942f04e32f13882dd89dd592c00b |
SHA256 | 8ea328361f1a1b0d08177696aecc0239e1802caffd971e93ffc2302bc4bb912b |
SHA512 | d7f7ea99cc6de3b72adb419ffc52095c1ec7c02b9bc436bd73de7998b04429747c660faa2f672d8a8995c574d9321211000c8586eff753a37c7c1505826da8a3 |
Maintenance Build
Fixed in this release
Summary
Live queries in a cluster where not all servers had digest partitions could lead to events remaining stuck in the result after they should have fallen outside the query range at that point in time.
Better names for the metrics exposed on JMX. They are all in the com.humio.metrics package.
Cloning built-in parsers made them read-only which was not intentional.
Config KAFKA_DELETES_ALLOWED can be set to "true" to turn on deletes on the ingest queue even when KAFKA_MANAGED_BY_HUMIO=false.

Support for applying a custom parser to input events from any "beat" ingester by assigning the parser to the ingest token.
Handle HTTP 413 errors when uploading too-large files on the Files page.
Functions
New function, mostly for use in parser scope: parseCsv() parses comma-separated fields into columns by name.
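What parseCsv() does can be sketched conceptually in Python (not Humio's implementation; the column names here are made up for the example):

```python
import csv
import io

# Illustrative sketch of splitting a comma-separated value into named
# columns, including quoted fields. Column names are hypothetical.
def parse_csv_line(line, column_names):
    row = next(csv.reader(io.StringIO(line)))
    return dict(zip(column_names, row))

fields = parse_csv_line('alice,"login failed",42', ["user", "message", "count"])
```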
Humio Server 1.2.7 Archive (2019-01-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.2.7 | Archive | 2019-01-15 | Cloud | 2020-11-30 | No | 1.2.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 204a89d9d15e30118198c14b0776426a |
SHA1 | f99f0bfc8df24c6bf6df30143c74b70b99ffce55 |
SHA256 | c56ad4cda1ec9a447c4170540e2d8338767333e775152e27a521cf0948aae08e |
SHA512 | 3225863a2801a30b0a9da59bf14bf1c7d5eac14469fa23d7a312cd765e88e8001a981d537e848067caeae656ad04455db5cde8ee484cd7370e1a58e469330961 |
Maintenance Build
Fixed in this release
Summary
New function eventSize that provides an estimate of the number of bytes used to represent the event, uncompressed.

Enable Escape to clear sticky events in all scenarios.
A race condition could lead to memory being leaked.
Humio metrics on the Prometheus endpoint now have help texts and use labels where appropriate.
Short time zone names such as "EST" did not work properly in functions that accept a time zone name.
S3 archiving has completed testing with customers and is no longer considered BETA; it is ready for use.
Export as CSV allows selecting the fields in the download dialog when the query does not set the fields through table or select.
A new built-in parser for "syslog" in both the old and new RFC formats, using case to auto-detect the format.
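The eventSize estimate mentioned above can be approximated for intuition. This is a rough, hypothetical illustration, not Humio's exact metric:

```python
# Rough illustration of an eventSize-style estimate: the sum of the UTF-8
# byte lengths of all field names and values. An approximation for
# intuition only, not Humio's actual accounting.
def event_size(event):
    return sum(len(str(k).encode()) + len(str(v).encode())
               for k, v in event.items())

size = event_size({"@timestamp": 1546300800000, "message": "hello"})
```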
Humio Server 1.2.6 Archive (2019-01-11)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.2.6 | Archive | 2019-01-11 | Cloud | 2020-11-30 | No | 1.2.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 442b0eea2bda0e1e882d793bce5baf4d |
SHA1 | 622fe45e4c3c6ba2c519ce277a4a490cd385b80d |
SHA256 | 517ee1cca7921aa396584589353690dd8e4d236ceae0e285fcf969e0c559c89d |
SHA512 | 6426c8aa972831d898538e5978446ff379dadd76cd36f557b109898fbdc214fc6c4676dcba7847abf8784f1206b5751cc8d732c1334f3bd2ff608411505ac4cd |
Maintenance Build
Fixed in this release
Summary
S3 archiving: Include all tag keys in generated file names, including those listed in the configuration.
Allow GET/HEAD on the elastic _bulk emulation API without auth. Some clients poll that API before posting events.
Extracting a field from within a tag-field could make the query optimizer fail.
When using select() and not including @timestamp, that field got included in exported files anyway. Now it is included only when specified as a selected field.

Expose Humio metrics via JMX.
Allow both Basic-auth and OAuth on all ingest endpoints. We recommend putting tokens in the password field of the authentication.
Expose Humio metrics to Prometheus. The port needs to be configured using the configuration parameter PROMETHEUS_METRICS_PORT.

The HEC endpoint now accepts input from the Docker Splunk logging driver. You can thus get your Docker container logs into Humio using this logging driver. All you need to do is add --log-driver=splunk --log-opt splunk-token=$TOKEN --log-opt splunk-url=https://humioserver to your docker run command.

Calendar in the query interval selector had time zone problems.
Automation and Alerts
Improved detection of canceled alerts so that they get restarted.
Functions
stats() function (the [] operator for functions) did not pass on the data used to select the default widget.

worldMap() function now accepts the precision parameter for the geohash function embedded inside worldMap().
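The recommendation above to put tokens in the password field of Basic auth can be sketched as follows. The header name and encoding follow standard HTTP Basic auth; the token value is a placeholder:

```python
import base64

# Sketch of the recommended scheme: put the ingest token in the password
# field of HTTP Basic auth, with an empty username. Token is a placeholder.
def basic_auth_header(token):
    credentials = base64.b64encode(f":{token}".encode()).decode()
    return {"Authorization": f"Basic {credentials}"}

headers = basic_auth_header("my-ingest-token")
```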
Humio Server 1.2.5 Archive (2019-01-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.2.5 | Archive | 2019-01-09 | Cloud | 2020-11-30 | No | 1.2.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 7d419a81a0a59bcfb2dc0b6cc02bbdb3 |
SHA1 | 07d793802c31377f8a60ea0bec52a351176ed948 |
SHA256 | 419cadb92f32a3c4f8275c5d2b303c979e4f7fbe5c676d6396f6f40710771f9c |
SHA512 | cbe4a472c820225a4dff2c8145bf485b819172af6b0295a18593c4e90e7b16233bc069af9a2c030bef586bd32f38bbd60fa33b5be2fac798bfe17c4716bc1e7c |
Maintenance Build
Fixed in this release
Summary
Timeouts on the http endpoint have been changed from 60s to infinite. This allows exporting from queries that hit very little data, e.g. a live query that receives one event every hour.
When running with PREFIX_AUTHORIZATION_ENABLED=true, alerts and shared dashboards now run as the user who saved them, restricted to the prefixes that the user has at the time the query starts.

Added new query functions lower and upper.

Query performance improved by fixing a bottleneck that was noticeable on CPUs with more than 16 cores.
The HEC protocol now accepts data at the "/services/collector" URL too, and accepts authorization in the form of an "Authorization" header with any realm name, as long as the token is a valid Humio token. This allows using e.g. Fluentd and other software to ship to Humio using HEC.
Segments with blocks where all timestamps are zero were reported as broken when trying to read them.
Allow * as the field for the lowercase function, lower-casing all field names and values. The recommended use case is in the ingest pipeline, as this is an expensive operation.
Basic auth (used mostly on ingest endpoints) now allows putting the token into the password field instead of the username field. Use of the password field is recommended as some software treats the password as secret and the username as public.
Audit logging did not happen for queries using the "/query" endpoint, i.e. when using the export button in the UI.
If the parser ends up setting a timestamp before 1971, or does not set a timestamp, the current time is used as the timestamp for the ingested event. The same applies to timestamps more than 10 seconds in the future.
Configuration
Humio will by default write threaddumps to the file humio-threaddumps.log every 10 seconds. This is configurable using the configuration parameter DUMP_THREADS_SECONDS. Previously this was disabled by default.
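The timestamp sanity rule described earlier in this release (fall back to "now" for missing, pre-1971, or far-future timestamps) can be illustrated with a hypothetical sketch, not Humio's actual code:

```python
import time

# Hypothetical illustration of the timestamp sanity rule: timestamps that
# are missing, before 1971, or more than 10 seconds in the future are
# replaced with "now". Not Humio's implementation.
EPOCH_1971 = 365 * 24 * 3600  # seconds from 1970-01-01 to 1971-01-01

def sanitize_timestamp(ts, now=None):
    now = time.time() if now is None else now
    if ts is None or ts < EPOCH_1971 or ts > now + 10:
        return now
    return ts
```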
Humio Server 1.2.4 Archive (2019-01-02)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.2.4 | Archive | 2019-01-02 | Cloud | 2020-11-30 | No | 1.2.0 | Yes |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 8f00b74da1609381950edf68d769f7e2 |
SHA1 | ed33fa097e5f203b16efcefdd349cdbac9023cf4 |
SHA256 | 6a88855fa503b5275b2dec27c8f4d8e0259abd5deba4134416ee3acb4dad4806 |
SHA512 | b888ec3903f91be343df410a56647903fb0763e81b08c0cef0740d64c364b35f8c3cd72ddd1657bb3a794fd3c39aaf5a1453651f9fb44f6439606e1e1aa5f05b |
Secondary storage of segment files
Fixed in this release
Summary
Performance fix: Running live queries for weeks with a small time span for the bucket size was expensive.
Extended the internal latency measurement to include the time spent in the custom parsers as well.
When a segment file was deleted while being scheduled in a query, the query would end up being "99%" done and never complete.
Secondary Storage of segment files. This allows using a "fast" disk primarily, and a "slow" one for older files.
Ingesting with HTTP Event Collector (HEC) is out of beta. The endpoint is located at /api/v1/ingest/hec.

Deleting an ingest listener did not stop the listener.
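A minimal HEC-style event payload for the endpoint above can be sketched as follows. This is a hedged illustration; the exact set of accepted fields may differ:

```python
import json
import time

# Hedged sketch of a minimal HEC-style event body; the accepted field set
# may vary. POST the result to /api/v1/ingest/hec with an ingest token.
def hec_event(message, **fields):
    return json.dumps({
        "time": time.time(),  # event time, epoch seconds
        "event": message,
        "fields": fields,
    })

payload = hec_event("user logged in", user="alice")
```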
Humio Server 1.2.3 Archive (2018-12-18)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.2.3 | Archive | 2018-12-18 | Cloud | 2020-11-30 | No | 1.2.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 4f6da960a7c21c65a8625b168504e151 |
SHA1 | 62589a92f1a2b9759d970d2a2d828c667c1d1471 |
SHA256 | ed0242a2a159733babc22e912141ced535e2a4721dfd1ec03c80043e9d2c6976 |
SHA512 | 9d1c59e90b0220fb000fcf42bc0dfb71c727703e4d948c1d3dd2185354f2699e073f009b5d569e3c324a430cc7f99b8829433946b5303032c4cb122b1a5f7f92 |
Maintenance Build
Fixed in this release
Summary
Performance improvement: queries with NOT NotMatching were much slower than the plain filterNotMatching.

New ingest endpoints without the repo in the path, as the token specifies authentication, repo, and parser selection.
Widget auto-selection improved.
Default to running queries on only vcores/2 threads.
Display of query speed in the clustered version was multiplied by (n+1)/n in an n-node cluster.
Humio Server 1.2.2 Archive (2018-12-14)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.2.2 | Archive | 2018-12-14 | Cloud | 2020-11-30 | No | 1.2.0 | Yes |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 24df9210b046af907c826c8151885ab4 |
SHA1 | 54649576e7e8960552f094a079e19fcaa81dd42b |
SHA256 | a1c0d3a35759c707eabad6dc459b059e550916328bd6676a76d5acd1d8a2fcbe |
SHA512 | d68a3f5d6ed53ee52921b1f4d175dcb74b8270618b291a8a04685992f41222a912b31803c7a5da23710dffe5c7dfb729977d5d9e56515e47f6ef9543c2244074 |
Maintenance Build
Fixed in this release
Summary
Configuration change: ALLOW_UNLIMITED_STATE_SIZE has been replaced by MAX_STATE_LIMIT. MAX_STATE_LIMIT limits state size in Humio searches and now allows specifying a number. For example, the number of groups in the groupBy() function is limited by MAX_STATE_LIMIT.

The sandbox did not work properly with PREFIX_AUTHORIZATION_ENABLED=true.
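The MAX_STATE_LIMIT idea above can be sketched conceptually: cap how many groups a groupBy-style aggregation may create. The real enforcement is internal to Humio; this only illustrates the concept:

```python
# Hedged conceptual sketch of MAX_STATE_LIMIT: refuse to create more
# groups than the limit allows. Not Humio's implementation.
def group_by_count(events, field, max_state_limit):
    groups = {}
    for event in events:
        key = event.get(field)
        if key not in groups:
            if len(groups) >= max_state_limit:
                raise RuntimeError("state size exceeds MAX_STATE_LIMIT")
            groups[key] = 0
        groups[key] += 1
    return groups
```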
Humio Server 1.2.1 Archive (2018-12-13)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.2.1 | Archive | 2018-12-13 | Cloud | 2020-11-30 | No | 1.2.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | edc9569eac0a6b53871bbc2d45128841 |
SHA1 | 554b0a1c5d98f5eae3a705f6a30677b1a4ba2059 |
SHA256 | 2c73145e4c8fdf986ec57f523c19a4184ccded3e3a20b8529169d320d97dfab1 |
SHA512 | 21cac90fa422cb78458c07958db8d9ee284831b7ffefbacd75193eacc6ec405c42dc8c18e26b8170d8803d1aee06388209f6fb1753edc613aaf5f390df0672e0 |
Maintenance Build
Fixed in this release
Summary
Improved the maximum throughput of TCP-listener ingest to up to 4 times the previous level for a single socket, as measured on localhost. Use more sockets in parallel to achieve higher throughput.
Editing a parser with syntax errors did not work.
When pushing the query sub-system to the limit with many simultaneous long-running live queries for more than 10 seconds, a query could end up triggering a restart of itself.
Humio Server 1.2.0 Archive (2018-12-11)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.2.0 | Archive | 2018-12-11 | Cloud | 2020-11-30 | No | 1.2.0 | Yes |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 8c08fdaf29d88da3508f64dbaaa3b640 |
SHA1 | e940c781a712f4b3c4561557170a2762dfc296f4 |
SHA256 | 812a322cba7f784422262f61cc256cfcad91c81f6e3ea28082150307b69b0a5d |
SHA512 | 5e5dfdc312c51751ec9ffd24173add4acfdbbc2fb679924b05bc4b592c7848389b1d5d3b81326e154b97dd688702a23d4a220f897641546abf5248655f89bd7d |
Create parsers using Humio's search language. Changes to "Backup" layout.
Fixed in this release
Summary
In a cluster where any node did not have any digest roles, queries could get polled much too frequently.
kvParse() function no longer overrides existing fields by default. To override existing fields based on input, use kvParse(override=true). See the kvParse() docs.

New parsers. It is now possible to create parsers using Humio's search syntax. Check out the Creating a Parser documentation. Existing parsers have not been migrated and it is still possible to use the old parsers. We encourage using the new parsers and will automatically migrate old parsers in a future release.
Blacklist queries. In the administration section of Humio it is now possible to blacklist queries. This can also be done from the Query Monitor page, by clicking a query and then blocking it in the details section, or using the Query Blacklist page directly.

The parser overview page now shows parser errors. This is a quick way to detect whether parsers are working as expected.
The backup feature now stores the copies of the segment files in separate folders for each Humio node. This allows the Humio nodes to delete files that are no longer owned by that node also in the case where all Humio nodes share a shared network drive. This change has the effect that existing backups are no longer valid and cannot be read by this version. Delete any existing backups when upgrading, or reconfigure Humio to use a fresh location for the backups.
parseTimestamp() function has a changed signature. The parameter nowIfNone has been removed and a new addErrors parameter introduced. This can break existing searches/alerts/dashboards (but the parameter has not been widely used). See the parseTimestamp() docs.
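The kvParse() override semantics described above can be sketched conceptually in Python (not Humio's implementation): by default, existing fields win.

```python
# Conceptual sketch of kvParse() override semantics: existing fields are
# kept unless override is set. Not Humio's implementation.
def kv_parse(event, text, override=False):
    for pair in text.split():
        if "=" in pair:
            key, value = pair.split("=", 1)
            if override or key not in event:
                event[key] = value
    return event

kept = kv_parse({"user": "alice"}, "user=bob status=ok")
replaced = kv_parse({"user": "alice"}, "user=bob status=ok", override=True)
```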
Humio Server 1.1.37 Archive (2018-12-03)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.37 | Archive | 2018-12-03 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 4bceede76af7c3bce25ebd5aea43f247 |
SHA1 | 45fb2bb7f927ac376c7b79f851c6e15624b0daa8 |
SHA256 | 9e021b952271054906c7d0c926ab21ef3a70cb36114819fa860d03827eed77d1 |
SHA512 | dc68309d17ed2815b3bf1219c36ae343eb778356cc07a819a23db046b48f013f65c464e24e1dab90713ce35a1dee6fd21d8f513539a93ddfd5081a5d8995d673 |
Maintenance Build
Fixed in this release
Summary
Field extraction using a regex did not work in live queries within an implicit AND.
Fix bug in UI when uploading file.
Add debug logs for LDAP login.
Humio Server 1.1.36 Archive (2018-11-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.36 | Archive | 2018-11-28 | Cloud | 2020-11-30 | No | 1.1.0 | Yes |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | cb5614ccde77ea9789a0cfb300cd5a16 |
SHA1 | fc269cc45480142120ec11949f89cb2212df60d4 |
SHA256 | b41d9ecf30c02199cb5a3a954da1e80457b643d85622c0790ed9ee6ba1eb7a97 |
SHA512 | 8270540462fb03756b02bc6f6d4af37eef7c033c45664b05649c8aaf7484b1ae20ddbe707a6261a4b71c017372b916b835694e16138b6732807db2f42acdf4f9 |
Role-based auth support for SAML & LDAP
Fixed in this release
Summary
Config variable AUTO_CREATE_USER_ON_SUCCESSFULL_LOGIN renamed to (the correctly spelled) AUTO_CREATE_USER_ON_SUCCESSFUL_LOGIN.

GELF over HTTP support. Note that this format is a good fit for uncommon events, but due to its lack of bulk support it is not efficient for high-traffic streams. Authentication is required using basic auth with an ingest token (or a personal API token, though that is not recommended).
Role-based access control is now supported for on-prem when using SAML or LDAP for authentication.
Set thread priorities on internal threads.
Functions
Extended the session() function to accept an array of functions instead of only one.
Humio Server 1.1.35 Archive (2018-11-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.35 | Archive | 2018-11-27 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 72a131e4c17781a44b449624dd2c6c5f |
SHA1 | 831ce32e06c543a661e91f3f52ee577db2aaa0d3 |
SHA256 | 20a57eb91dbf23a1aef0b6fce605ac238536881b4f4f7a7eb2b178a444797fe9 |
SHA512 | 0d148f4a13c104bf70d403ba948f4723074d5cf6e15dd59e69d5f2195f8697f56f6e26146361e21831b35c5b14455c3c644da7718379c3c5e7adeb1f321185d5 |
Graylog compatible ingest support.
Fixed in this release
Summary
Allow ingest in "GELF" v1.1 format. See the GELF Payload Specification. Humio supports ingest using the UDP and UDP chunked encodings, both of which may optionally be compressed using ZLIB. (Gzip is not supported yet.) TCP is supported as zero-byte-delimited uncompressed.
Automation and Alerts
Alerts did not properly encode all parts of the query in the URL that is sent in the notification.
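An illustrative GELF v1.1 payload, following the GELF Payload Specification referenced above, can be built like this. It is a sketch, not a full client; custom fields get a leading underscore and ZLIB compression is optional:

```python
import json
import zlib

# Illustrative GELF v1.1 message per the GELF Payload Specification:
# custom fields require a leading underscore; ZLIB compression is optional.
def gelf_message(host, short_message, **extra):
    msg = {"version": "1.1", "host": host, "short_message": short_message}
    msg.update({f"_{key}": value for key, value in extra.items()})
    return zlib.compress(json.dumps(msg).encode())

packet = gelf_message("web-1", "disk almost full", disk_used_pct=97)
```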
Humio Server 1.1.34 Archive (2018-11-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.34 | Archive | 2018-11-22 | Cloud | 2020-11-30 | No | 1.1.0 | Yes |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | ab1eafa8442d6ed275c94527ed96ed19 |
SHA1 | 7a48b49cba75653d610cc700fab5237707fada44 |
SHA256 | 4aa7256991fbc5275ede446402ae9540281bcf39ba8a4503a0510d559ddcd45f |
SHA512 | 2c4b1ff36ee3d4409f0ae142cfd749d1ada4228b4a6c533685ff476093117f3a472ecc8c5b3e7354ab4412406d488488cc6ffb5f4ca905c8bd5144a128925d1a |
Improved LDAP support
Fixed in this release
Summary
By default, users must be added inside Humio before they can log in using external authentication methods like LDAP and SAML. This can be controlled using the configuration flag AUTO_CREATE_USER_ON_SUCCESSFUL_LOGIN=false. If users are auto-created in Humio when they successfully log in for the first time, they will not have access to any repositories unless explicitly granted. A new user will only be able to access their personal sandbox.
Functions
Bug Fix for
match()
function. In some cases it did not match quoted strings.
Humio Server 1.1.33 Archive (2018-11-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.33 | Archive | 2018-11-15 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 5a840e455815bf24e87ae39348effcfe |
SHA1 | af4b4f35e2269622f4caaa37c3cc498029bd17bc |
SHA256 | 85ca4356d4ec798ed19fa36c647701a316b6d6b83bc5d1685741c96e1cbe951c |
SHA512 | 7153862acd7603bc213137e42a1bc28f2ed1d80a057dec7513d8a65b0fdece6e7d87e6145744bbeabdd3bd87966cfb81f507c76b6c4ea8961eaadf239a6524ac |
Bug Fix and digest performance
Fixed in this release
Summary
Digest throughput improvements
Fixed: Parsers page did not show the built-in parsers.
Humio Server 1.1.32 Archive (2018-11-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.32 | Archive | 2018-11-15 | Cloud | 2020-11-30 | No | 1.1.0 | Yes |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | d33e3e5efff00f0b1d85d385ea269462 |
SHA1 | 4f31157403313e8b17137366b3a8328773f026f5 |
SHA256 | 3c1122d4bea20462bbbabcd0dc42650756c7da2dd90c6460d6de304d3c3bf210 |
SHA512 | 0e0e0182678ae1b6ce1f46b15a9d2fb2818d3ac625920feca372d024510d3cecef0dd79dcd46a3839e4123211d834d5124cc019cbb43c8d6fea24f075deb52f2 |
stripAnsiCodes() function, top() on multiple fields, default repository query, and Bug Fixes
Fixed in this release
Summary
Repositories' default search interval has been replaced with the possibility to choose a default repository query. All default search intervals will be migrated to default queries. A default query can be set by saving a query and checking the "Use as default" checkbox.
Added support for Java 11. Humio can now be run with Java 9 or Java 11. Humio's Docker images are updated to use Java 11 and we encourage people to update to Java 11 and use Azul's OpenJDK Zulu builds.
Configuration
Bug Fix for connecting to Kafka when using the property EXTRA_KAFKA_CONFIGS_FILE.
Functions
New range() function: finds the numeric range between the smallest and largest numbers for the specified field over a set of events.

New stripAnsiCodes() function: strips ANSI color codes from a field.

top() function now supports grouping on a combination of more than one field.
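The new range() function can be expressed as a conceptual Python equivalent (not Humio's implementation): the difference between the largest and smallest numeric values of a field over a set of events.

```python
# Conceptual Python equivalent of range(): max minus min of a field's
# numeric values over a set of events. Not Humio's implementation.
def field_range(events, field):
    values = [e[field] for e in events
              if isinstance(e.get(field), (int, float))]
    return max(values) - min(values) if values else None

spread = field_range([{"ms": 12}, {"ms": 5}, {"ms": 30}, {"other": 1}], "ms")
```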
Humio Server 1.1.31 Archive (2018-11-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.31 | Archive | 2018-11-09 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 60e957fae4f61ea8176aa2575546d400 |
SHA1 | eae8ff212e0f53a72a202358510fc91e33a022f3 |
SHA256 | 2db5c9571a7c7ea60587ecb96af1b427db7171b9fa8a81a50bf6e80bb51d478e |
SHA512 | 53f21f1352d9df0f7c0e13ab3e41f19b708129065cbbfa2b0c98054b49f6fac1d6ee3a198e56d0a3809f82e32fec9191f02ddc11f8827c8dd186adb1c9a66382 |
Reduce latency of incoming events.
Fixed in this release
Summary
Improved built-in dashboards, allowing them to be shared using share links like any other dashboard.
The latency measured from an event arriving at Humio until live queries have been updated with that event has been reduced by approximately 1 second and is now measured in milliseconds.
It is now possible to block ingestion in a repository. It can be done from the repository's settings page.
Humio Server 1.1.30 Archive (2018-11-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.30 | Archive | 2018-11-04 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | 32a2d83501170709e29b4773382ed9da |
SHA1 | 56b88c67f4c46fe9353d52992935340e7654c1d8 |
SHA256 | 04c3cc3efdeb71d79865081a9fee70acc0751af27bceb1efb80188ac1fb0a0ca |
SHA512 | 0e6bbedaac7ccec7fbe163ad23522be0cb00bdf391716b75cedab92426bdd4deec864ce6ae8cadf4d0b4e1a23a7f43d611ebbe7023c4f237b34b5b3c295f4d2a |
'Create Parser' button opened a beta page for creating parsers
Fixed in this release
Summary
'Create Parser' button opened a beta page for creating parsers.
Handle clients posting empty bulks of events.
Humio Server 1.1.29 Archive (2018-11-02)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.29 | Archive | 2018-11-02 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Hide file hashes
JAR Checksum | Value |
---|---|
MD5 | bfd7b069334c3e9fcc3f669ef7108284 |
SHA1 | 249dabc85c1d0846dbd92e5c12c2fbcbeaae61ba |
SHA256 | 26071929b87a33ac4ae913521b16e8d9791358a615db84194a6b63e195106329 |
SHA512 | 7967a83ba7157b9e22e73a1fa19d69f40bc3269ff27bbd13f367e3f691f5df5ed3a29a3715ca84cad9e3e3b105d6e254545982564e92f635561d13f349961489 |
Bug Fixes
Fixed in this release
Summary
Allow '.' in S3 paths.
Live queries could get false sharing of
eval()
results.
Configuration
QUERY_EXECUTOR_CORES
allows setting cores for query engine individually.
Humio Server 1.1.28 Archive (2018-10-31)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.28 | Archive | 2018-10-31 | Cloud | 2020-11-30 | No | 1.1.0 | Yes |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 3e64aa4a09bef57eea35d35f1b4fa33e |
SHA1 | 02c468696787e5720abb007ce79976256a8d368c |
SHA256 | 5859b357a306c1acf606a6e5fce75cc28d5891a950a320e3522c25998298d777 |
SHA512 | 95b9ff0929f7f80941586311e3f2623f7d8357fdb2ef2b9d537c8f1672a987afcd5dd1bce7873ab9a08f4353b5bbebd7c2bbfc98ff90afd94bbea748c99dd3f4 |
Improved SAML authentication and digest performance
Fixed in this release
Summary
When zooming to a wider time range on a timechart with a fixed "span" parameter, widen the span and add a warning to allow the chart to work instead of failing with "too many buckets".
Back-pressure on ingest should not be applied to internal log lines, such as the internal debug and audit log entries.
The first search to hit a repository in a cluster with millions of segments would fail while listing those files.
Dashboard searches are kept running for 3 days, when they are not polled. After that they are not kept alive on the server. This is configurable using the config
IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES
. This replacesIDLE_POLL_TIME_BEFORE_LIVE_QUERY_IS_CANCELLED_MINUTES
.Performance improvements for digest on systems with many sparse datasources.
Humio Server 1.1.27 Archive (2018-10-24)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.27 | Archive | 2018-10-24 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 8517a2486d8c6dcc9219b186c26c2f7f |
SHA1 | cf1fb05f32d5675cdb3faca53416c0de3faab22c |
SHA256 | d29498ec6e65c0a38d3c6ba50ef0c6d9c106df32aa96dba237b52d418d8809e3 |
SHA512 | 4045bf8dfde298fee12b8f37abc0ea3a0ec37733c47582342450bd59e5b4f7a8a783878926dfe1d2605e9d27fe9bbc98f7fbf907125ce54aa600e88f74ebbb1b |
Back-pressure on ingest overload and Bug Fixes
Fixed in this release
Summary
When more data flows into Humio than it can keep up with, apply back-pressure by responding with status code 503 and the header
Retry-After: .0
.The event list on the search page now correctly resets the widget when a new search is started
The max value of the y-axis of timecharts is now correctly updated on new results
Many changes internally to prepare for having more than one node in the "Digest rules" for fail-over handling of ingest traffic.
Pagination now works for tables on dashboards
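The back-pressure entry above implies that ingest clients should honor the Retry-After header. A minimal client-side sketch of that behavior (the function name and call shape are illustrative, not part of Humio's API):

```python
import time

def send_with_backoff(send, payload, max_attempts=5, sleep=time.sleep):
    """Retry an ingest request while the server applies back-pressure.

    `send` is any callable returning (status_code, headers); on HTTP 503
    the Retry-After header (seconds) is honored before retrying.
    """
    for _ in range(max_attempts):
        status, headers = send(payload)
        if status != 503:
            return status
        # Server is overloaded; wait as instructed, defaulting to 1 second.
        sleep(float(headers.get("Retry-After", 1)))
    return 503
```

A client that retries without a delay like this would only make the overload worse, which is why the server advertises an explicit wait time.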
Humio Server 1.1.26 Archive (2018-10-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.26 | Archive | 2018-10-15 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 38fb6d4e13106b90ee91060def4741a9 |
SHA1 | b196f3f6337b3e5b519ed8e95c694b11ddecda1f |
SHA256 | 4804482761ce478d55d30226790c2ed3ee5fb4266940810663cab39971a394e8 |
SHA512 | f9f6368ad401811ff7d2af18995e5f025822d3a5d8d3f69d5da50a0aa8fcf7cb246168311828dc87a3bef2770700796a46e3f36c0390208085edb2df3f464a36 |
Minor Release
Fixed in this release
Summary
Performance improvement in table, sort and tail especially when using a large limit.
Using the field
message
instead oflog
(as described in v1.1.25) did not work properly.
Functions
Humio Server 1.1.25 Archive (2018-10-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.25 | Archive | 2018-10-12 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Adds World Map
and
Sankey
visualizations and SAML
authentication support.
Fixed in this release
Summary
S3 archiving now handles grouped tags properly, generating one file for each tag combination also for grouped tags.
The new visualizations require a change to the CSP. If you have your own CSP, you need to add
'unsafe-eval'
to thescript-src
key.Importing repositories from another Humio instance used the repository ID where the repository name was required.
New visualization helper functions
geohash()
,worldMap()
,sankey()
.The update services widget that "phones home" to update.humio.com can now only be disabled if you have a license installed.
Support using filebeat to ship logs from Helm chart for ingest logs from a Kubernetes cluster. The message can be in the field
log
ormessage
.Searching using
... | *foo*** | ...
is identical to... | foo | ...
since plain text searches are always substring matches. But the former got turned into a full-string regex match for^.*foo.*$
which is 10-30 times slower compared to the fast substring search in Humio.New query syntax:
match
on a field that eases matching for several cases on a single field.
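The match syntax above replaces chains of per-value filters on one field. As a rough analogue in Python rather than Humio's query language (the field name and labels are hypothetical), matching several cases on a single field is a value-to-result dispatch:

```python
def classify_status(event):
    """Python analogue of matching several cases on a single field:
    each branch tests one value of `statuscode` and adds a label."""
    cases = {"200": "ok", "404": "not_found", "500": "server_error"}
    event["class"] = cases.get(event.get("statuscode"), "other")
    return event
```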
Functions
Humio Server 1.1.24 Archive (2018-10-05)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.24 | Archive | 2018-10-05 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Maintenance Build
Fixed in this release
Summary
Segments deleted by retention-by-size would sometimes get left behind in global, adding warnings to users searching at intervals including deleted segments.
Reorder query prefixes to execute queries more efficiently. Moves tags to the front of the query string to allow better start-of-query datasource filtering.
Humio Server 1.1.23 Archive (2018-10-01)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.23 | Archive | 2018-10-01 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Update Kafka server version to 1.100.0
Fixed in this release
Summary
Do not reassign partitions in Kafka when there are already sufficient replicas (only applied when
KAFKA_MANAGED_BY_HUMIO=true
, the default).Handle empty uploaded files.
Humio's Kafka and ZooKeeper Docker images have been upgraded to use Kafka 1.100.0. We recommended to keep the update procedure simple and not do a rolling upgrade. Instead shutdown Humio Kafka and ZooKeeper. Then fetch the new images and start ZooKeeper, Kafka and Humio. For details see Kafka's documentation for upgrading. (Note: This change was listed in release notes for v1.1.20 even though it was applied only to the kafka client there, and not to the server).
Improved performance of parsers that have
(?<@timestamp>\S+)
as their timestamp extractor regex.The query planner has been improved, so it can more precisely limit which data to search based on tags.
Configuration
Do not remove other topic configs in Kafka when setting those needed by Humio (only applied when
KAFKA_MANAGED_BY_HUMIO=true
, the default).
Humio Server 1.1.22 Archive (2018-09-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.22 | Archive | 2018-09-27 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Fix Kafka prefix configuration problem. Faster percentiles.
Allow globbing when using
match()
.
Fixed in this release
Summary
The key/value parser now (also) considers characters below 0x20 to be separators. Good for e.g. FIX-format messages.
The UI for setting node ID on an ingest listener did not work.
Add flag
match(..., glob=true|false)
allowing the key column of a CSV file to include globbing with *.If using Kafka prefix configuration, the server would always assume Kafka has been reset. Releases 1.1.20 introduced this problem.
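The glob=true flag above makes the CSV key column act as patterns rather than literal values. A rough Python sketch of first-match-wins glob lookup (the table layout is an assumption for illustration, not Humio's file format):

```python
from fnmatch import fnmatchcase

def lookup_glob(table, value):
    """Sketch of glob-style key matching: the first row whose key
    pattern (possibly containing '*') matches `value` wins."""
    for key_pattern, row in table:
        if fnmatchcase(value, key_pattern):
            return row
    return None
```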
Functions
percentile()
function changed to use 32-bit precision floating point, making it ~3x faster.
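The speedup above trades precision for throughput: 32-bit floats carry a 24-bit significand, so sufficiently large integers round to the nearest representable value. A quick way to observe the effect (illustrative, not Humio code):

```python
import struct

def to_float32(x):
    """Round-trip a Python float through 32-bit storage, as a sketch of
    the precision trade-off behind a faster percentile implementation."""
    return struct.unpack("f", struct.pack("f", x))[0]
```

For percentile estimates over large event counts, this rounding is typically far below the sampling error already present in the estimate.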
Humio Server 1.1.21 Archive (2018-09-25)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.21 | Archive | 2018-09-25 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Bug fix release.
Fixed in this release
Summary
The implicit
tail(.0)
that is applied when no aggregate function is in the query input did not sort properly in certain cases.Query Monitor
now also shows CPU time spent in the last 5 seconds.Timecharts on views broke in the previous version, 1.1.20.
Humio Server 1.1.20 Archive (2018-09-24)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.20 | Archive | 2018-09-24 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Use Kafka version 1.100.0
Fixed in this release
Summary
Humio's Kafka and ZooKeeper Docker images have been upgraded to use Kafka 1.100.0. (Update: see 1.1.23.)
Added the possibility to add extra Kafka configuration properties to Kafka consumers and producers by pointing to a properties file using
EXTRA_KAFKA_CONFIGS_FILE
. This makes it possible to connect to a Kafka cluster using SSL and SASL.Humio is upgraded to use the Kafka 2.0 client. It is still possible to connect to a Kafka running version 1.X
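A properties file referenced by EXTRA_KAFKA_CONFIGS_FILE for an SSL/SASL cluster might look like the following sketch (values are placeholders; consult the Kafka client configuration documentation for the full set of options):

```properties
# Illustrative Kafka client properties for SSL + SASL (placeholder values)
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=/path/to/truststore.jks
```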
Humio Server 1.1.19 Archive (2018-09-21)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.19 | Archive | 2018-09-21 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Cluster administration, dashboard clone/export/import and faster startup on large datasets.
Fixed in this release
Summary
Auto-nice long-running queries, giving them lower priority than younger queries, measured by CPU time spent.
Allow negative index in splitString, which then selects from the end instead of from the start.
-1
is the last element.Fix #2263, support for !match().
Generate pretty
@display
value insplit()
function.HUMIO_KAFKA_TOPIC_PREFIX
was not applied to all topics used by Humio, only where the name matched global-*
.Startup of the server is now much faster on large datasets.
Setting
INGEST_QUEUE_INITIAL_PARTITIONS
in config decides the initial number of partitions in the ingest queue. This only has effect when starting a fresh Humio cluster with no existing data.
Faster response on cluster management and entering the search page.
Upgrading to this version requires running at least v1.1.0. If you run an older version, upgrade to v1.1.18, then v1.1.19.
New cluster management actions for reassigning partitions to hosts and moving existing data to other hosts.
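The negative-index behavior of splitString described above mirrors ordinary negative sequence indexing, where -1 selects the last element. A Python sketch of the semantics (not Humio's implementation):

```python
def split_select(value, by, index):
    """Sketch of splitString with an index: as in Python sequence
    indexing, a negative index counts from the end, so -1 is the
    last element."""
    parts = value.split(by)
    return parts[index]
```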
Humio Server 1.1.18 Archive (2018-09-13)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.18 | Archive | 2018-09-13 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Ease using your own Kafka including older versions of Kafka.
Fixed in this release
Summary
Ease using your own Kafka including older versions of Kafka
Added
MAX_HOURS_SEGMENT_OPEN
to set the number of hours after which you want a segment closed and a new one started, even if it has not filled up. Note that you may want to disable segment merging in this case to preserve these smaller segment files by also setting ENABLE_SEGMENT_MERGING=false
KAFKA_MANAGED_BY_HUMIO=false
to stop Humio from increasing the replication of the topics in Kafka.
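Putting the two settings above together, a configuration that closes segments daily and preserves the resulting smaller files might look like this (the value 24 is illustrative):

```
MAX_HOURS_SEGMENT_OPEN=24
ENABLE_SEGMENT_MERGING=false
```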
Humio Server 1.1.17 Archive (2018-09-10)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.17 | Archive | 2018-09-10 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Show Events Per Second (EPS) when searching.
Fixed in this release
Summary
Improve search performance when adding fields to events.
Dashboards can now be copied to other views.
Show Events Per Second (EPS) when searching.
Humio Server 1.1.16 Archive (2018-09-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.16 | Archive | 2018-09-06 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Add CRC to data files. Migrates data to support upcoming features to later serve as a potential roll back point.
Fixed in this release
Summary
New 'cluster overview' tab in admin page (this is work in progress, feedback appreciated).
Bug Fix. Regular expressions using
/.../
syntax sometimes matched incorrectly.Scheduling of queries now takes CPU time spent in each into account, allowing new queries to get more execution time than long-running queries.
Adds CRC32c to the segment file contents.
Support CSV downloads. End the query with
| table([...])
or| select([...])
to choose columns.Note! v1.1.15 is able to read the files generated by v1.1.16. Rolling back to version 1.1.14 or earlier is not possible, as those versions cannot read the files that have CRC.
Regular expression matching with a 'plain' prefix is now faster.
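The CRC entry above guards segment contents against silent corruption. A sketch of the append-and-verify pattern (using zlib's CRC32 as a stand-in, since the Python standard library has no CRC32c; Humio uses CRC32c):

```python
import zlib

def append_checksum(data: bytes) -> bytes:
    """Append a 4-byte CRC32 of `data` to the data itself."""
    return data + zlib.crc32(data).to_bytes(4, "big")

def verify_checksum(blob: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the stored value."""
    data, stored = blob[:-4], int.from_bytes(blob[-4:], "big")
    return zlib.crc32(data) == stored
```

This also explains the rollback note: older versions that do not expect the trailing checksum cannot read the new file layout.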
Humio Server 1.1.15 Archive (2018-09-03)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.15 | Archive | 2018-09-03 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Support for setting parsers in our Kubernetes integration and a
new parseHexString()
function.
Fixed in this release
Summary
Updated instructions for configuring the PagerDuty notifier.
Bug Fix. Tables now sort globally instead of per page.
Our Helm chart for ingesting logs from a Kubernetes cluster now supports setting a parser using the pod label
humio-parser
.For more information, see Use Case: Migrating from Helm Chart to Operator.
Functions
New function
parseHexString()
.
Humio Server 1.1.14 Archive (2018-08-21)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.14 | Archive | 2018-08-21 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Starring alerts and Introduce a displaystring for formatting log strings.
Fixed in this release
Summary
Bug Fix. Slack notifier had the message twice in the request.
Improve Netflow parser to handle packets coming out of order.
Introduced
@displaystring
.
Automation and Alerts
Starring alerts. Get your favorite alerts to the top of the list.
Humio Server 1.1.13 Archive (2018-08-16)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.13 | Archive | 2018-08-16 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Improve LDAP support.
Fixed in this release
Summary
Enable logging in with LDAP without providing domain name. Domain name can be set as a config using
LDAP_DOMAIN_NAME
. See Authenticating with LDAP.Enforce an upper bound on the number of fields allowed for one event. The limit is .0. If an event has too many fields, .0 are included.
Humio Server 1.1.12 Archive (2018-08-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.12 | Archive | 2018-08-15 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
SMTP support for sending emails, new Files UI and support for self-signed certs for ldap.
Fixed in this release
Summary
New Files UI. Possible to manage files for use in the
lookup()
function.LDAP users must now be added with their domain. For example add
<user@myorganisation.com>
(instead of just user). Existing users are migrated by the system, so no action is required.Segment file replication did not (re-)fetch a segment file if the file was missing on disk while the "global" state claimed it was present.
Eliminated the backtick syntax from
eval()
, the same effect can be obtained withtranspose()
.Add operators
>
,<
,>=
,<=
, and%
to eval expressions.SMTP support for Email Configuration
LDAPS can use a self-signed certificate through config.
Functions
Humio Server 1.1.11 Archive (2018-08-03)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.11 | Archive | 2018-08-03 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor Release
Fixed in this release
Summary
"Export to file" did not work in the UI.
Performance improvement in the internal logging to the Humio dataspace.
Eliminated a race condition in the ingest pipeline that could drop data in overload conditions.
Humio Server 1.1.10 Archive (2018-08-02)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.10 | Archive | 2018-08-02 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor Release
Fixed in this release
Summary
Now using Google's RE2/j as the default, not the JDK's. Can be configured using
USE_JAVA_REGEX
.Autosharding now happens after tag grouping. This improves performance in cases where some datasources are slow and others very fast, when those are grouped.
Humio Server 1.1.9 Archive (2018-07-30)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.9 | Archive | 2018-07-30 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor Release
Fixed in this release
Summary
Bug Fix. When encountering a broken segment file, let the server start and ignore the broken file.
Humio Server 1.1.8 Archive (2018-07-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.8 | Archive | 2018-07-26 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor Release
Fixed in this release
Summary
Bug Fix. Autosharded tags should not get tag-grouped.
Improve handling of color codes in Humio's built-in key-value parser
Humio Server 1.1.7 Archive (2018-07-05)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.7 | Archive | 2018-07-05 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor Release
Fixed in this release
Summary
Bug Fix. Remove race condition that could create duplicate events on restart.
Update embedded GeoLite2 database to 20180703 version.
Verify Java version requirement on startup.
Datasource autosharding is now able to reduce the number of shards.
Functions
Humio Server 1.1.6 Archive (2018-07-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.6 | Archive | 2018-07-04 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor Release
Fixed in this release
Summary
Bug Fix. Log rotation of Humio's own log files. Files were not deleted; now they are.
Improved datasource autosharding to be less eager.
Bug Fix. Viewing details of a logline while doing a live query did not pause the stream. This resulted in the details view being closed when the logline went out of scope.
Restructured documentation.
Humio Server 1.1.5 Archive (2018-06-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.5 | Archive | 2018-06-28 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor Release
Fixed in this release
Summary
Remove supervisors from the Docker image humio/humio-core
Humio Server 1.1.4 Archive (2018-06-28)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.4 | Archive | 2018-06-28 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor Release
Fixed in this release
Summary
Bug Fix. Repo admins that are allowed to delete data can now delete datasources.
Humio Server 1.1.3 Archive (2018-06-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.3 | Archive | 2018-06-27 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor Release
Fixed in this release
Summary
Fix memory problem when streaming events using the query endpoint.
Rename a widget on dashboard directly from the dashboard itself.
Supporting links in tables.
Humio Server 1.1.2 Archive (2018-06-25)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.2 | Archive | 2018-06-25 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor Release
Fixed in this release
Summary
Fix clock on dashboards page.
Fix creating the sandbox dataspace in the signup flow.
Automation and Alerts
Allow fields in alert webhooks.
Humio Server 1.1.1 Archive (2018-06-21)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.1 | Archive | 2018-06-21 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor Release
Fixed in this release
Summary
Fix fullscreen mode for read-only dashboards
Humio Server 1.1.0 Archive (2018-06-21)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.1.0 | Archive | 2018-06-21 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor Release
Fixed in this release
Summary
Added documentation for Kafka Connect Log Format.
It is not possible to roll back to previous versions when upgrading. Back up global data by copying the file
/data/humio-data/global-data-snapshot.json
. Then it will be possible to roll back (with the possibility of losing new datasources, users, dashboards, etc. that were created while running this version).Moved some of the edit options from the dashboard list to the dashboard itself.
Amazon AMI available in the Amazon marketplace.
Improved Fluentbit integration to better support ingesting logs from Kubernetes.
Dataspaces have been split into views and repositories. This allows searching across multiple repositories and adds support for fine-grained access permissions. Read the introduction in this blogpost and check out the Repositories & Views documentation.
Functions
Added query function
parseUrl()
.
Humio Server 1.0.69 Archive (2018-06-12)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.69 | Archive | 2018-06-12 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Hotfix Release
Fixed in this release
Summary
Canceling a query in 1.0.68 would consume resources, blocking worker threads for a long time. Please upgrade.
Humio Server 1.0.68 Archive (2018-06-11)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.68 | Archive | 2018-06-11 | Cloud | 2020-11-30 | No | 1.1.0 | Yes |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Changes to Humio's logging. Humio now logs to 2 files
/data/logs/humio-debug.log
and/data/logs/humio_std_out.log
. Stdout has become less noisy and is mostly error logging. This is only relevant for on-prem installations.The ingest queue replication factor in Kafka is now by default set to 2 (was 1). If it is currently set to 1, Humio will increase it to 2. The configuration parameter
INGEST_QUEUE_REPLICATION_FACTOR
can be used to control the replication factor.Deeplinking did not work in combination with having to log in.
Humio Server 1.0.67 Archive (2018-06-01)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.67 | Archive | 2018-06-01 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Support for GDPR: Hardened Audit Logging.
Improved search performance when reading data from spinning disk
Humio Server 1.0.66 Archive (2018-05-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.66 | Archive | 2018-05-23 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Summary
Added "prune replicas" method to the on-premises HTTP API to remove extra copies when reducing the replica count in a cluster.
Increased default thread pool sizes a bit, but still only 1/4 of what they were before 1.0.65
Humio Server 1.0.65 Archive (2018-05-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.65 | Archive | 2018-05-22 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Summary
Search performance improvement: Reduce GC-pressure from reading files.
Reduced default thread pool sizes.
Importing a dataspace from another Humio instance did not handle multi-node clusters properly
Humio Server 1.0.64 Archive (2018-05-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.64 | Archive | 2018-05-15 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Summary
Search scheduling is more fair in cases with multiple heavy searches.
Humio Server 1.0.63 Archive (2018-05-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.63 | Archive | 2018-05-15 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Read segment files using read instead of mmap.
Automation and Alerts
Bug Fix. Alerts could end up not being run after restarting a query.
Humio Server 1.0.62 Archive (2018-05-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.62 | Archive | 2018-05-09 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Summary
Increase timeout for http query requests.
Humio Server 1.0.61 Archive (2018-05-08)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.61 | Archive | 2018-05-08 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Summary
Improved HTTP request handling so that requests are not starved under load.
Improved "connect points" option in timecharts.
Humio Server 1.0.60 Archive (2018-05-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.60 | Archive | 2018-05-04 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Summary
Time out idle HTTP connections after 60 seconds.
Removed logging of verbose data structure when querying.
Increase maximum allowed HTTP connections to 2.00.
Fix dashboard links on frontpage.
Removed error logging when tokens have expired.
Possible to expose an Elastic-compatible endpoint on port 9.0, which is the Elastic default. Use the configuration parameter
ELASTIC_PORT
Humio Server 1.0.59 Archive (2018-04-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.59 | Archive | 2018-04-26 | Cloud | 2020-11-30 | No | 1.1.0 | Yes |
Available for download two days after release.
Regular update release.
Requires data migration and configuration changes — Auth0 changes.
Deprecation
Items that have been deprecated and may be removed in a future release.
The configuration options
AUTH0_API_CLIENT_ID
andAUTH0_API_CLIENT_SECRET
have been deprecated in favor ofAUTH0_CLIENT_ID
andAUTH0_CLIENT_SECRET
respectively - the old names will continue to work as aliases.
Behavior Changes
Scripts or environment which make use of these tools should be checked and updated for the new configuration:
Summary
If you are using Auth0 in your on-prem installation of Humio, you must update your Auth0 Application configuration and re-configure Humio (or start using your OAuth identity provider directly). We at Humio will be happy to help. The configuration changes below are only relevant if Auth0 is used for authentication:
Fixed in this release
Summary
New convenience syntax for passing the
as
parameter using assignment syntax.minx := min(x)
is equivalent tomin(x, as=minx)
. This can be used at top-level|
between bars|
, or within[
array blocks]
The parser handles left and right double quotes, which can easily occur if you edit your queries in a word processor, e.g.,
Protocol := "UDP - 17"
The Auth0 configuration properties
AUTH0_WEB_CLIENT_ID
andAUTH0_WEB_CLIENT_SECRET
have been removed. You can safely delete the associated Auth0, as Humio only requires on Auth0 Application in the future.New syntax for computing multiple aggregates for example, to compute both min and max
... | [min(foo), max(foo)] | ...
. This syntax is shorthand for thestats()
function.Existing users on cloud.humio.com will need to re-authenticate the application 'humio' to use their account information.
Users that are authenticated through Auth0 will need to configure the
PUBLIC_URL
option and must add $PUBLIC_URL/auth/auth0
to the list of callback URLs in your Auth0 Application.
New convenience syntax for passing the field= parameter to a function using curly assignment syntax.
ip_addr =~ cidr("127.0.0.1/24")
is equivalent tocidr("127.0.0.1/24", field=ip_addr)
. This can also be used for regex i.e.,name =~ regex("foo.*")
.The configuration option
AUTH0_WEB_CLIENT_ID_BASE64ENC
has been removed.Humio's Auth0 integration no longer requires the grant
read:users
, you can safely disable that on your Auth0 Application - or just leave it.New naming convention for function names is
camelCase()
which is now reflected in documentation and examples. Functions are matched case-insensitively, so the change is backwards compatible.Humio now support authenticating with Google, GitHub and Atlassian/Bitbucket directly (see Authenticating with OAuth Protocol), without the need to go through Auth0. This is part of our GDPR efforts for our customers on cloud.humio.com, so as to avoid more third parties involved with your data than necessary.
Renamed the `alt` keyword to `case`. `alt` will still work for a few releases but is now deprecated.
Depending on how you set up your Auth0 application, you may need to update your Auth0 Application Type to "Regular Web Application" in your Auth0 account; more details can be found in our Authenticating with OAuth Protocol documentation.
The `head()` function allows you to do deduplication by using `groupBy([field1, field2, ...], function=head(1))`.
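A hedged sketch of deduplication with `head()` (the field names `user` and `session_id` are illustrative):

```logscale
groupBy([user, session_id], function=head(1))
```

This keeps a single event per unique combination of the grouped fields.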
Functions
The query language has several new functions: `formatTime()`, `ipLocation()`, `rename()`, `splitString()`, `parseInt()` (an updated version of the `to_radix_10` function), `stats()`, and `head()`. See the Query Functions documentation for details of each one.
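A hedged sketch using two of the new functions (the output field names are illustrative):

```logscale
formatTime("%Y-%m-%d %H:%M:%S", field=@timestamp, as=fmttime)
| rename(field=fmttime, as=eventtime)
```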
Humio Server 1.0.58 Archive (2018-04-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.58 | Archive | 2018-04-19 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Improved versioning. The version now starts with an actual version number. This version matches the version in Docker Hub.
Documentation has moved into its own project online at https://docs.humio.com.
JSON parsers can be configured to parse nested JSON. That means it will look at all strings inside the JSON and check whether they are actually JSON.
Small improvements to Grafana plugin.
New on-boarding flow supporting downloading and running Humio.
Humio is available as a downloadable Docker image. It can be used in trial mode for a month. After that a license is required.
Humio Server 1.0.57 Archive (2018-04-16)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.57 | Archive | 2018-04-16 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Cloud-only release.
Fixed in this release
Summary
Added an update service widget to the menu bar that will announce new updates and give access to release notes directly in Humio. The service contacts a remote service: update.humio.com. If you do not want to allow this communication you can disable it from the Root Administration interface.
Updated Humio and Kafka Docker images to use Java 9.
New Query coordinator for handling distributed queries. This should improve the error messages on communication problems.
Humio Server 1.0.56 Archive (2018-03-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.56 | Archive | 2018-03-26 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Bug Fix Release.
Fixed in this release
Summary
Race condition in segment merging code. Could lead to loss of data when changing the size of segment files. The problem was introduced in the previous release as part of the out-of-order processing fix.
Auto suggestions selection using mouse.
Humio Server 1.0.55 Archive (2018-03-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.55 | Archive | 2018-03-22 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
JSON is not pretty printed when showing the details for an event in the message tab.
Improved Grafana integration.
Added a JSON tab when showing event details. The tab pretty prints the event and is only visible for JSON data.
When the system got overloaded, events could get lost if processed out of order in a datasource.
Improved ingest performance by tuning LZ4 compression.
Humio Server 1.0.54 Archive (2018-03-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.54 | Archive | 2018-03-15 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Data migrations are required, but compatible both ways: users on a dataspace can now have multiple roles.
Fixed in this release
Summary
Audit Logging BETA feature. There is now a humio-audit dataspace with audit log of user actions on Humio.
"Export to file" failed on Sandbox dataspaces.
In uncommon cases when ingesting a large bulk of events that were not compressible at all, the non-compression could fail.
License keys in UI now ignore whitespace for ease of inserting keys with line breaks.
Humio Server 1.0.53 Archive (2018-03-13)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.53 | Archive | 2018-03-13 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
In some scenarios the browser's back button had to be clicked twice or more to go back.
Pressing Enter did not start a search after navigating using the browser's back button.
Introduced License Installation. Humio requires a license to run. It can run in trial mode with all features enabled for a month.
Humio Server 1.0.52 Archive (2018-03-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.52 | Archive | 2018-03-06 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Cloud-only release.
Fixed in this release
Summary
Make /regex/ work with AND and OR combinators.
Disconnect points on timecharts if there are empty buckets between them.
Labeling dashboards. Put labels on dashboards to organize them.
gzipping of HTTP responses could hit an infinite loop, burning CPU until the process was restarted.
Starring dashboards. They will go to the top of the dashboard list and there is a section with starred dashboards on the frontpage.
Humio Server 1.0.51 Archive (2018-02-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.51 | Archive | 2018-02-23 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor Update Release
Fixed in this release
Summary
Fix bug: Retention was not deleting anything.
Humio Server 1.0.50 Archive (2018-02-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.50 | Archive | 2018-02-22 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor Update Release
Fixed in this release
Summary
Clustered on-premises installs could stall in the copying of completed segment files inside the cluster.
Fixed an issue with `:` occurring in certain query expressions, introduced with the new `:=` syntax. A query such as `foo:bar | ...` using an unquoted string would fail to parse.
Allow `|` before and after a query.
Allow saving dashboards with queries that do not parse. This allows editing dashboards where another widget is failing.
Humio Server 1.0.49 Archive (2018-02-21)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.49 | Archive | 2018-02-21 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
New features and improvements
Summary
Show Widget Queries on Dashboards. You can toggle displaying the queries that drive the widgets by clicking the "Code" button on dashboards. This makes it easier to write filters because you can peek at what fields are being used in your widgets.
Dashboard Filters. Dashboard Filters allow you to filter the data set across all widgets in a dashboard. This effectively means that you can use dashboards for drill-down and reuse dashboards with several configurations. Currently filters support writing filter expressions that are applied as prefixes to all your widgets' queries. We plan to extend this to support more complex parameterized set-ups soon - but for now, prefixing is a powerful tool that is adequate for most scenarios. Filters can be named and saved so you can quickly jump from e.g. Production Data to Data from your Staging Environment. You can also mark a filter as "Default". This means that the filter will automatically be applied when opening a dashboard.
Better URL handling in dashboards. The URL of a dashboard now includes more information about the current state or the UI. This means you can copy the URL and share it with others to link directly to what you are looking at. This includes dashboard time, active dashboard filter, and fullscreen parameters. This will make it easy to have wall monitors show the same dashboard but with different filters applied, and allow you to send links when you have changed the dashboard search interval.
Fixed in this release
Summary
Improvements to the query optimizer. Data source selection (choosing which data files to scan from disk) can now deal with more complex tag expressions. For instance, queries involving `OR`, such as `#tag1=foo OR #tag2=bar`, are now processed more efficiently. The query analyzer is also able to identify `#tag=value` elements everywhere in the query, not only at the beginning of the query.
Improvement: Better handling of reconnecting dashboards when updating a Humio instance.
Configure when Humio stops updating live queries (queries on dashboards) that are not viewed (not polled). This is now possible with the config option `IDLE_POLL_TIME_BEFORE_LIVE_QUERY_IS_CANCELLED_MINUTES`. The default is 1 hour.
Improvement: Better and faster query input field. We are using a new query input field where you should experience less "input lag" when writing queries. At the same time, syntax highlighting has been tweaked, and while it still does not support some things like array notation, it is better than previous versions.
Clock on Dashboards. Making it easier to know what time/timezone Humio is displaying result for.
New `alt` language construct. This allows alternatives similar to `case` or `cond` in other languages:
`... | alt { <query>; <query>; ...; * } | ...`
Every event passing through will be tried against the alternatives in order until one emits an event. If you add `; *` at the end, events will pass through unchanged even if no other queries match. Aggregate operators are not allowed in the alternative branches.
New `eval` syntax. As a shorthand for `... | eval(foo=expr) | ...` you can now write `... | foo := expr | ...`. Also, on the left-hand side in an eval, you can write `att := expr`, which assigns to the field that is the current value of `att`.
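A hedged sketch combining the `alt` construct with the `:=` shorthand (the `status` and `severity` fields are illustrative):

```logscale
... | alt { status >= 500 | severity := "error";
            status >= 400 | severity := "warn";
            * } | ...
```

The trailing `*` lets events that match neither branch pass through unchanged.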
Humio Server 1.0.48 Archive (2018-02-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.48 | Archive | 2018-02-19 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release. Data migration is required: the backups are incompatible.
New features and improvements
Summary
Export to file. It is now possible to export the results of a query to a file. When exporting, the result set is not limited for filter queries, making it possible to export large amounts of data. Plain text, JSON and ND-JSON (Newline Delimited JSON) formats are supported in this version.
Functions
Fixed in this release
Summary
global-snapshots topic in Kafka: Humio now deletes the oldest snapshot after writing a new one, keeping only the latest 10.
The backup feature (using `BACKUP_NAME` in the environment) now stores files in a new format. If using this, you must either move the old files out of the way, or set `BACKUP_NAME` to a new value, thus pointing to a new backup directory. The new backup system will proceed to write a fresh backup in the designated folder. The new backup system no longer requires use of "JCE policy files". Instead, it needs to run on Java "1.8.0_161" or later. The current Humio Docker images include "1.8.0_162".
Performance improvement for searches using, in particular, "expensive" aggregate functions such as groupby and percentile.
Humio Server 1.0.47 Archive (2018-02-07)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.47 | Archive | 2018-02-07 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Log4j2 updated from 2.9.1 to 2.10.0. If you are using a custom logging configuration, you may need to update your configuration accordingly.
To eliminate GC pauses caused by compression in the Kafka client in Humio, Humio now disables compression on all topics used by Humio. Humio compresses internally before writing to Kafka on messages where compression is required (ingest is compressed). This release of Humio enforces this setting on the topics used by Humio. This is the list of topics used by Humio (assuming you have not configured a prefix, which would otherwise be used on all of them):
global-events global-snapshots humio-ingest transientChatter-events
You can check the current non-default settings using this command:
cd SOME_KAFKA_INSTALL_DIR
./bin/kafka-configs.sh --zookeeper localhost:2181 \
  --entity-type topics --entity-name humio-ingest --describe
Removed GC pauses caused by java.util.zip.* native calls from compressed http-traffic triggering "GCLocker initiated GC", which could block the entire JVM for many seconds.
Reduced query state size for live queries decreasing memory usage.
Added
concat()
function.
Humio Server 1.0.46 Archive (2018-02-02)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.46 | Archive | 2018-02-02 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Functions
The `rdns()` function now runs asynchronously, looking up names in the background and caching the responses. Fast static queries may complete before the lookup completes. Push `rdns()` as far right as possible in your queries, and avoid filtering events based on its result, as rdns is non-deterministic.
Humio Server 1.0.45 Archive (2018-02-01)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.45 | Archive | 2018-02-01 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Summary
Improved performance on live queries with large internal states
Humio Server 1.0.44 Archive (2018-01-30)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.44 | Archive | 2018-01-30 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release. Data migration is required. Rollback to previous version is supported with no actions required.
Fixed in this release
Summary
If the "span" for a timechart is wider than the search interval, the default span is used and a warning is added. This improves zooming in time on dashboards.
Fix bug in live queries after restarting a host.
OnPrem: Configuration obsolete: the `KAFKA_HOST`/`KAFKA_PORT` configuration parameters are no longer supported. Use the `KAFKA_SERVERS` configuration instead.
Added VictorOps notifier.
Regular expression parsing limit is increased from 4K to 64K when ingesting events.
Added PagerDuty notifier
On timechart, mouse-over now displays series sorted by magnitude, and pretty-prints the numbers.
OnPrem: Size of query states are now bounded by the
MAX_INTERNAL_STATESIZE
, which defaults to MaxHeapSize/128.
Humio Server 1.0.43 Archive (2018-01-25)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.43 | Archive | 2018-01-25 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Summary
Stop queries and warn if too-large query states are detected.
Warnings are less intrusive in the UI.
Humio Server 1.0.42 Archive (2018-01-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.42 | Archive | 2018-01-23 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Summary
Ingesting to a personal sandbox dataspace using ingest token was not working.
Firefox is now supported.
Added "tags" to "ingest-messages" endpoint to allow the source to add tags to the events. It is still possible and recommended to add the tags using the parser.
Added OpsGenie notification template
Support ANSI colors
Documentation
Added documentation of the file formats that the
lookup()
function is able to use.
Automation and Alerts
An alert could fire a notification on a partial query result, resulting in extra alerts being fired.
Humio Server 1.0.41 Archive (2018-01-19)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.41 | Archive | 2018-01-19 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Fix bug #35, which prevented you from doing e.g. `groupby` on fields containing spaces or quotes in their field name.
New front page. You can now jump directly to a dashboard from the front page using the dropdown on each list item. All dashboards can also be filtered and accessed from the "Dashboards Tab" on the front page.
For on-prems: You can now adjust `BLOCKS_PER_SEGMENT` from the default of .0 to influence the size of segment files.
New implementation of the Query API for integration purposes.
Added suggestions on sizing of hardware to run Humio: Instance Sizing.
Better Page Titles for Browser History.
Startup time reduced when running on large datasets.
Multiple problems on the Parsers page have been fixed.
The `replace()` function on `@rawstring` now also works for the live part of a query.
More guidance for new users in the form of help messages and tooltips.
If Kafka did not respond for 5 seconds, ingested events could get duplicated inside Humio.
Cancelled queries influenced performance after they were cancelled.
Renewing your API token from your account settings page.
Functions
Humio Server 1.0.40 Archive (2018-01-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.40 | Archive | 2018-01-09 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Summary
Added option to do authentication in a http proxy in front of Humio, while letting Humio use the username provided by the proxy.
Fixed a performance regression in the latest release when querying, which hit in particular data sources with small events.
Functions
The `percentile()` function now accepts the `as` parameter, allowing you to plot multiple series as percentiles in a timechart.
Humio Server 1.0.39 Archive (2018-01-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.39 | Archive | 2018-01-04 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Filebeat now utilises tags in parsers. The Filebeat configuration is still backward compatible.
Netflow support for on premises customers. It is now possible to send Netflow data directly to Humio. It is configured using Ingest listeners.
Tags can be defined in parsers (see Event Tags).
Tag sharding. A tag with many different values would result in a lot of small datasources, which will hurt performance. A tag will be sharded if it has many different values. For example, having a field user as a tag and having .0.0 different users could result in .0.0 datasources. Instead the tag will be sharded and allowed to have 16 different values (by default). In general, do not use a field with high cardinality as a tag in Humio.
Root user management in the UI. A gear icon has been added next to the "Add Dataspace" button, if you are logged in as a root user. Press it and it is possible to manage users.
Better Zeek (Bro) Network Security Monitor integration.
Datasources are autosharded into multiple datasources if they have huge ingest loads. This is mostly an implementation detail.
Functions
Humio Server 1.0.38 Archive (2017-12-18)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.38 | Archive | 2017-12-18 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Summary
Fixed a bug where, on the "parsers" page, the fields found during parsing were hidden.
Fixed a bug that leaked Kafka-connections.
Turned off LZ4 on the connection from Humio to Kafka. Note: storage of data in Kafka is controlled by broker settings, although having "producer" there will now turn compression off. The suggested Kafka broker (or topic) configuration is "compression.type=lz4".
Humio Server 1.0.37 Archive (2017-12-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.37 | Archive | 2017-12-15 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Summary
Set default
timechart(limit=20)
. This can cause some dashboards to display warnings.
Humio Server 1.0.36 Archive (2017-12-14)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.36 | Archive | 2017-12-14 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Different View Modes have been made more prominent in the Search View by the addition of tabs at the top of the result view. As we extend the visualization to be more specialized for different types of logs we expect to add more Context Aware tabs here, as well as in the inspection panel at the bottom of the screen.
Styling improvement on several pages.
Event List Results are now horizontally scrollable, though limited in length for performance reasons.
Typo Corrections in the Tutorial
Performance improvements in timecharts.
New Search View functionality allows you to sort the event list to show newest events at the end of the list.
Syntax highlighting in the event list for certain formats including JSON.
Scrolling the event list or selecting an event will pause the result stream while you inspect the events, this especially makes it easier to look at Live Query results. Resume a stream by hitting
Esc
or clicking the button.
Humio Server 1.0.35 Archive (2017-12-13)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.35 | Archive | 2017-12-13 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
New parameter `timechart(limit=N)` chooses the "top N charts", selected as the charts with the most area under them. When unspecified, the `limit` value defaults to .0, and a warning is produced if it is exceeded. When specified explicitly, no warning is issued.
Filter functions can now generically be negated: `!/foo/`, `!cidr(...)`, `!in(...)`, etc.
Upgraded to Kafka 1.0. This is IMPORTANT for on-premises installations: it requires updating the Kafka Docker image before updating the Humio Docker image.
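A hedged sketch of the generic negation (the `client_ip` field is illustrative): keep events whose client address is outside a CIDR range and whose raw text does not match a pattern:

```logscale
!cidr("10.0.0.0/8", field=client_ip) | !/timeout/i
```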
Humio Server 1.0.34 Archive (2017-12-11)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.34 | Archive | 2017-12-11 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Summary
Tags can now be sharded, allowing you to add e.g. IP addresses as tags. (Only for root users; ask your admin.)
Support datasources with large data volumes by splitting them into multiple internal datasources. (Only for root users, ask your admin.)
Humio Server 1.0.33 Archive (2017-12-07)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.33 | Archive | 2017-12-07 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Summary
Kafka topic configuration defaults changed and documented. If running on-premises, please inspect and update the retention settings on the Kafka topics created by Humio to match your Kafka . See Kafka Configuration.
Humio Server 1.0.32 Archive (2017-12-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.32 | Archive | 2017-12-06 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Improved ingest performance by batching requests more efficiently to the Kafka ingest queue. The queue serialization format changed as well.
Fixed a bug with some tables having narrow columns, making text span many lines.
Fixed a bug in timechart graphs: the edge buckets made the graph go too far back in time and also into the future.
New implementation of the `timeChart()` function with better performance.
When saving queries/alerts, the query currently in the search field is saved - not the last one that ran.
Humio Server 1.0.31 Archive (2017-11-26)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.31 | Archive | 2017-11-26 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Summary
Fixed: a failure to compile a regex in a query was reported as an internal server error.
Made Kafka producer settings relative to the Java max heap size.
Humio now sets a CSP header by default. You can still replace this header in your proxy if needed.
Humio Server 1.0.30 Archive (2017-11-24)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.30 | Archive | 2017-11-24 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor update release.
Fixed in this release
Summary
Improve support for running Humio behind a proxy with CSP
Possible to specify tags for ingest listeners in the UI
Fix links to documentation when running behind a proxy
Humio Server 1.0.29 Archive (2017-11-21)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.29 | Archive | 2017-11-21 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
New sandbox dataspaces. Every user gets their own sandbox dataspace. It is a personal dataspace, which can be handy for testing or quickly uploading some data.
New interactive tutorial.
UI for adding ingest listeners (only for root users).
Added pagination to tables
Fixed a couple of issues regarding syntax highlighting in the search field
Humio Server 1.0.28 Archive (2017-11-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.28 | Archive | 2017-11-15 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Add documentation for new regular expression syntax
Fix bug with "save as" menu being hidden behind event distribution graph
Fix bug where Humio ignored the default search range specified for the dataspace
Humio Server 1.0.27 Archive (2017-11-14)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.27 | Archive | 2017-11-14 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Possible to specify tags when using ingest listeners
Grafana integration. Check it out
Alerts are out of beta.
Improved error handling when a host is slow. Should decrease the number of warnings.
Humio Server 1.0.26 Archive (2017-11-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.26 | Archive | 2017-11-09 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
When no field is named, as in `/err/i`, then `@rawstring` is searched.
When such a regex-match expression appears at top level, e.g. between two bars (`|`), named capturing groups also cause new fields to be added to the output event, as for the `regex()` function.
Performance has improved for most usages of regex (we have moved to use `RE2/J` rather than Java's `java.util.regex`).
New syntax `field = /regex/idmg` for matching. Optional flags: `i` = ignore case, `m` = multiline (changes the semantics of `$` and `^` to match each line, not just start/end), `d` = dotall (`.` includes `\n`), and `g` = same as `repeat=true` for the `regex()` function. For example, to case-insensitively find all log lines containing `err` (or `ERR`, or `Err`) you can now search `/err/i`.
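A hedged sketch (the pattern and field name are illustrative): a top-level field match with a named capturing group, which also extracts the group into a field:

```logscale
@rawstring = /user=(?<user>\S+)/i | groupBy(user)
```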
A bug has been fixed where searching for unicode characters could cause false positives.
Improve syntax highlighting in search field
Humio Server 1.0.25 Archive (2017-11-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.25 | Archive | 2017-11-06 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Anonymous composite function calls can now make use of filter expressions: `#type=accesslog | groupby(function={ uri=/foo* | count() })`
Support for C-style comments: `// single line` or `/* multi line */`.
New HTTP Ingest API supporting parsers.
Saved queries can be invoked as a macro (see User Functions) using the following syntax:
$"name of saved query"()
or$nameOfSavedQuery()
. Saved queries can declare arguments using?{arg=defaultValue}
syntax. Such arguments can be used where ever a string, number or identifier is allowed in the language. When calling a saved query, you can specify values for the arguments with a syntax like:$savedQuery(arg=value, otherArg=otherValue)
.
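The saved-query macro syntax above could be used like this (the saved query name and argument are hypothetical):

```
// Assuming a saved query named "top uris" declaring ?{max=10}:
// #type=accesslog | groupby(field=uri, limit=?{max=10})

// Invoke it as a macro, overriding the argument:
$"top uris"(max=25)
```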
Humio Server 1.0.24 Archive (2017-11-01)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.24 | Archive | 2017-11-01 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Event timestamps are set to Humio's current time at ingestion if they are in the future. These events are also annotated with the fields @error=true and @error_msg='timestamp was set to a value in the future. Setting it to now'. Events are allowed to be at most 10 seconds into the future, to account for clock skew between machines.
Improved handling of server deployments in dashboards
Created a public GitHub repository with scripts to support on-premises Humio installation and configuration.
Timecharts are redrawn when series are toggled
Fix bug with headline texts animating forever
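Events adjusted by the future-timestamp rule above could afterwards be found by filtering on the annotation fields, e.g. (a sketch):

```
// Count events that were annotated at ingest time
@error = true | groupby(field=@error_msg, function=count())
```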
Humio Server 1.0.23 Archive (2017-10-23)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.23 | Archive | 2017-10-23 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor release.
Fixed in this release
Summary
Fixed bug in search field when pasting formatted text
Fixed session timeout bug when logging in with LDAP
Better support for busting the browser's local cache on new releases
Humio Server 1.0.22 Archive (2017-10-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.22 | Archive | 2017-10-17 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Added time range parameterization to dashboards
Fixed visual bug in the event distribution graph
Functions
The `in()` function now allows wildcards in its `values` parameter
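A minimal sketch of wildcard values in `in()` (the field name and exact parameter spelling are assumptions):

```
// Match any 4xx or 5xx status code
in(field=statuscode, values=["4*", "5*"])
```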
Humio Server 1.0.21 Archive (2017-10-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.21 | Archive | 2017-10-17 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Added syntax highlighting of the query in the search field.
Allow resizing the search field.
Humio Server 1.0.20 Archive (2017-10-13)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.20 | Archive | 2017-10-13 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Minor release.
Fixed in this release
Summary
A system job now periodically compacts and merges small segment files (caused by low-volume data sources), improving performance and reducing storage requirements.
Fixed a bug showing the basic authentication dialogue in the browser when the login token expires
Added a `negate` parameter to `cidr()`: `cidr(negate=true|false)`
Added IPv6 support to `cidr()`
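A sketch of the new `negate` flag (the field name is made up):

```
// Keep only events whose client_ip falls OUTSIDE the 10.0.0.0/8 range
cidr(field=client_ip, subnet="10.0.0.0/8", negate=true)
```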
Humio Server 1.0.19 Archive (2017-10-11)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.19 | Archive | 2017-10-11 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Mouse over in timecharts now displays values for all series in hovered bucket
Since the ingest queue is now used by default, make sure to update the ingest partition assignments if running a clustered setup. At the very least, reset them to the defaults (see Cluster Management API).
Humio Server 1.0.18 Archive (2017-10-10)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.18 | Archive | 2017-10-10 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Cloud-only release.
Fixed in this release
Summary
Ingest queue is used by default (if not disabled)
Events are highlighted in the event distribution graph when they are hovered.
Improved Auth0 on-prem support.
Possible to migrate dataspaces from one Humio to another.
Improved query scheduling for dashboards starting many queries at the same time.
Functions
New query functions: `in()`, `length()`, `sample()` and `lowercase()`.
Humio Server 1.0.17 Archive (2017-09-29)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.17 | Archive | 2017-09-29 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
This is a release for Humio Cloud users only.
Fixed in this release
Summary
Cloud-only release.
Humio Server 1.0.16 Archive (2017-09-06)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.16 | Archive | 2017-09-06 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Fixed a bug with the `Events list` view for aggregate queries
Generic UDP/TCP ingest added (e.g. for syslog). Configured with the HTTP/JSON API only; no GUI yet.
UI improvements with auto-suggest / pop-up documentation.
New LDAP config option: add `ldap-search` to `AUTHENTICATION_METHOD` to use a bind user.
Fixed a bug with the combination of add-cluster-member and real-time-backup-enabled.
Functions
New function: `shannonEntropy()`.
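Assumed usage of the new function (the field names are hypothetical):

```
// Compute the entropy of a domain name, e.g. to spot random-looking values
entropy := shannonEntropy(domain)
| entropy > 3.5
```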
Humio Server 1.0.15 Archive (2017-08-30)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.15 | Archive | 2017-08-30 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Copy dashboard feature
Improve Auth0 dependencies. (Better handling of communication problems)
Change styling of list widgets
Syslog ingestion (line ingestion) in beta for on-premises installations
Humio Server 1.0.14 Archive (2017-08-17)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.14 | Archive | 2017-08-17 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Make it possible to show event details, when looking at raw events inside a timechart (1438)
Show warning when there are too many points to plot in a timechart and some are discarded (1444)
Fix scrolling in safari for tables (1308)
Humio Server 1.0.13 Archive (2017-08-16)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.13 | Archive | 2017-08-16 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Remember which tab to show in event details drawer (Same as the last one)
Documentation for cluster management operations
Dataspace type ahead filter on frontpage
Ingest requests now wait for 1 Kafka server to acknowledge the request by default (improves data loss scenarios when machines fail)
Widget options now use radio buttons for many options
Humio Server 1.0.12 Archive (2017-08-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.12 | Archive | 2017-08-04 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Background tabs are only updated minimally, resulting in much less CPU usage.
Fix an issue with scrollbars appearing in dashboards. (1403)
Various minor UI changes.
Fixed a bug that would prevent wiping the Kafka instance used to run Humio. (1347, 1408)
New 'server connection indicator' shows that the server is currently reachable from the browser.
Humio Server 1.0.11 Archive (2017-07-09)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.11 | Archive | 2017-07-09 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Cloud-only release.
Fixed in this release
Summary
Improved the update logic for read-only dashboards (#1341)
Fix an issue where login fails and the UI hangs w/auth0. (#1368)
Improved rendering performance for dashboards (#1360)
When running an aggregate query (such as a groupby), the UI now shows an `Events list` tab to see the events that were selected as input to the aggregate.
Humio Server 1.0.10 Archive (2017-06-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.10 | Archive | 2017-06-22 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Support for LDAP authentication for on-premises installations. (#1222)
For calculations on events containing numbers, the query engine now maintains a higher precision in intermediate results. Previously, numbers were limited to two decimal places, so now smaller numbers can show up in the UI. (#603)
Certain long queries could crash the system. (#781)
The event distribution graph is now aligned better with the graphs shown below it.
The `limit` parameter on the `table()` and `sort()` functions now only issues a warning if the system limit is reached, not when the explicitly specified limit is reached. (#1323)
Various improvements in the scale-out implementation. Contact us for more detail if relevant.
Ingest requests are no longer rejected with an error when incoming events contain fields reserved for Humio (like @timestamp). Instead, an @ is prepended to the field name and extra fields describing the problem (`@error=true`) are added to the event. (#1320)
Humio Server 1.0.9 Archive (2017-06-15)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.9 | Archive | 2017-06-15 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
The event details view has been improved in various ways: remember height, new buttons for 'groupby attribute' and 'filter without'. (#1277)
In certain cases, live queries producing a warning would add the same warning repeatedly for every poll. (#1255)
While running a query, the UI will now indicate progress from 0-100%. (#1262)
The scale-out implementation is improved in several ways. Most significantly, functionality adding a node to a cluster has been added. Contact us for more detail if relevant.
Timecharts with `span=1d` now use the browser timezone to determine the day boundary. (#1250)
Fixed a bug where read-only dashboards allowed dragging/resizing widgets. (#1274)
Humio can optionally use re2j (Google's regular expression implementation), which is slightly slower than the default Java version but avoids some corner cases that can cause stack overflows. Controlled with `USE_JAVA_REGEX`; defaults to `true`.
For UI queries (and those using the `queryjob` API), the limit on the result set has been lowered. This avoids the UI freezing in cases where a very large result set is generated. To get larger result sets, the query HTTP endpoint has to be used. (#1281, #960)
Added parameters `unit` and `buckets` to `timeChart()`. The `buckets` parameter lets users specify the number of buckets to split the query interval into, as an alternative to the `span` parameter, which has issues when the query interval is resized. The `unit` parameter lets you convert rates, e.g. by passing `unit="bytes/bucket to Mibytes/hour"`. As the bucket (or span) value changes, the output is converted to the given output unit. (#1295)
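The new `timeChart()` parameters might be combined like this (a sketch; the field name is made up):

```
// 50 buckets regardless of interval length; report summed bytes as a rate
timeChart(buckets=50, function=sum(bytes), unit="bytes/bucket to Mibytes/hour")
```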
Humio Server 1.0.8 Archive (2017-05-22)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.8 | Archive | 2017-05-22 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Major release includes early access for new multi-host scale-out functionality. See separate documentation for how to install and configure these functions.
Fixed in this release
Summary
Fixed a bug with time charts that did not always include the Plotline Y. (#1111)
Fixed a bug which made docs not redirect properly for on-prem installations (#1112)
Dashboards now indicate errors in the underlying queries with a transparent overlay (#775)
Fixed minor bug in parser selection (only used in undocumented tags selection mechanism)
For aggregate queries running longer than 2 seconds, the order in which logs are processed is shuffled. This lets the user get an early rough estimate of the nature of the data, which works well for queries using e.g. avg or percentile aggregates. (#1227)
Fixed a bug with live aggregate queries which could cause results to inflate over time. (#1213)
Dashboards can now be reconfigured by dragging and resizing widgets (#1205)
Humio Server 1.0.7 Archive (2017-05-04)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.7 | Archive | 2017-05-04 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release.
Fixed in this release
Summary
Improve scroll behavior in tables on dashboards (#1190)
Added UI to allow `root` users to set the retention on data spaces (#502)
New flag `groupby(limit=N)` allows specifying the maximum number of groups. If more than `N` groups are present, elements not matching one of the existing groups are ignored and a warning is issued. The system also enforces a hard limit, which the operator can remove by setting `ALLOW_UNLIMITED_GROUPS=true` in the Humio configuration file (environment file for Docker). (#1199)
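A sketch of the new `limit` flag on `groupby()`:

```
// At most 100 groups; additional uri values are dropped with a warning
#type=accesslog | groupby(field=uri, limit=100, function=count())
```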
Humio Server 1.0.6 Archive (2017-04-27)
Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Config. Changes? |
---|---|---|---|---|---|---|---|
1.0.6 | Archive | 2017-04-27 | Cloud | 2020-11-30 | No | 1.1.0 | No |
Available for download two days after release.
Regular update release
Fixed in this release
Summary
Fixes for logarithmic scale graphs (#1111)
In the event-list view, a toggle has been added to enable line wrapping. (#1121)
Dashboard settings have been moved to the dataspace page, rather than on the front page (#1125)
Save metadata locally to the file `global-data-snapshot.json` rather than to the Kafka topic global-snapshots. This file should only be edited while the server is down, and even then with care.
Allow configuring a standard search interval other than 24h (#1149)