Latest LTS Release
Falcon LogScale 1.165.1 LTS (2024-12-17)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.165.1 | LTS | 2024-12-17 | Cloud, On-Prem | 2025-12-31 | Yes | 1.112 | No |
Filename | Hashtype | File Hash |
---|---|---|
server-alpine_x64 | SHA256 | f8a30db3009f7fb34d5d4fc23e12bbdd4b90b913931369b6146808b29541b79b |
server-linux_x64 | SHA256 | 6509786ea0df0c87fb4712e3bb92c96252c2438e110f376d8ce12fb5453a0ac7 |
Docker Image | SHA256 Checksum |
---|---|
humio-core | f0fe82c6e6f3d9560a9c1b928393345c3471f72dbdc0832f429cc6719f84ec7a |
humio-single-node-demo | 17c4dbb564ce98e73cbda45eea0e13609e8a83b509cc7670667c105afbf2ecb1 |
Bug fixes and updates.
Removed
Items that have been removed as of this release.
GraphQL API
Removed the following deprecated fields from the Cluster GraphQL type:

- ingestPartitionsWarnings
- suggestedIngestPartitions
- storagePartitions
- storagePartitionsWarnings
- suggestedStoragePartitions
Configuration
The dynamic configuration AstDepthLimit and its related GraphQL API have been removed.

The UNSAFE_ALLOW_FEDERATED_CIDR, UNSAFE_ALLOW_FEDERATED_MATCH, and ALLOW_MULTI_CLUSTER_TABLE_SYNCHRONIZATION environment variables have been removed; the system now behaves as if they are always enabled.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The JDK has been upgraded to 23.0.1.
New features and improvements
Security
Users can now view actions in restricted read-only mode when they have the Data read access permission on the repository or view.

Users can now see and use saved queries without needing the CreateSavedQueries and UpdateSavedQueries permissions.

Users can now see actions in restricted read-only mode when they have the ReadAccess permission on the repository or view.
Installation and Deployment
Bumped the lowest compatible version for UNSAFE_RELAX_FEDERATED_PROTOCOL_VERSION_CHECK to 1.163.0. Multi-Cluster Search can only be used when all clusters are running 1.163 or above.
UI Changes
PDF Render Service now supports proxy communication between the service and LogScale. Adding the environment variable http_proxy or https_proxy to the PDF Render Service environment adds a proxy agent to all requests from the service to LogScale.

Documentation is now displayed on hover in the LogScale query editor within Falcon. The full syntax usage and a link to the documentation are now visible for any keyword in a query.

The Files page now features a new table view with enhanced search and filtering, making it easier to find and manage your files. You can now import multiple files at once. For more information, see Lookup Files.
When saving queries, saved queries now appear in sorted order and are also searchable.
Users with the ReadAccess permission on the repository or view can now view scheduled reports in read-only mode.

Files grouped by package are now displayed again on the Files page, including the Package Name column, which was temporarily unavailable after the recent page overhaul.

A custom dialog now helps users save their widget changes on the Dashboard page before continuing to the Search page.
Automation and Alerts
In the activity logs, the exception field now only contains the name of the exception class, as the remainder of what used to be there is already present in the exceptionMessage field.
Three alert messages were deprecated and replaced with new, more accurate alert messages.
For Legacy Alerts: The query result is currently incomplete. The alert will not be polled in this loop replaces Starting the query for the alert has not finished. The alert will not be polled in this loop.
For Filter Alerts and Aggregate Alerts: The query result is currently incomplete. The alert will not be polled in this run replaces Starting the alert query has not finished. The alert will not be polled in this run in some situations where it is more correct.
The alert message was updated for filter and aggregate alerts in some cases where the live query was stopped due to the alert being behind.
For more information, see Monitoring Alert Execution through the humio-activity Repository.
The queryStart and queryEnd fields have been added for two aggregate alert log lines:
Alert found results, but no actions were invoked since the alert is throttled
Alert found no results and will not trigger
and removed for three others as they did not contain the correct value:
Alert is behind. Will stop live query and start running historic queries to catch up
Alert query took too long to start and the result are now too old. LogScale will stop the live query and start running historic queries to catch up
Running a historic query to catch up took too long and the result is now outside the retry limit. LogScale will skip this data and start a query for events within the retry limit
The Alerts page now shows the following UI changes:

- A new Last modified column has been added to the Alerts overview to display when the alert was last updated and by whom.
- The same column has been added both in the alert properties side panel and on the Search page.
- The Package column is no longer displayed by default on the Alerts overview page.
For more information, see Creating an Alert from the Alerts Overview.
GraphQL API
The disableFieldAliasSchemaOnViews GraphQL mutation has been added. This mutation allows you to disable a schema on multiple views or repositories at once, instead of running multiple disableFieldAliasSchemaOnView mutations.
For more information, see disableFieldAliasSchemaOnViews() .
New yamlTemplate fields have been created for the Dashboard and SavedQuery datatypes. They replace the deprecated templateYaml fields. For more information, see Dashboard and SavedQuery.

GraphQL introspection queries now require authentication. Setting the configuration parameter API_EXPLORER_ENABLED to false will still reject all introspection queries.

Added the permissionType field to the Group GraphQL type. This field identifies the level of permissions the group has (view, organization, or system).

Added the following mutations:
createSystemPermissionsTokenV2
These mutations extend the functionality of the previous versions (without the V2 suffix) by returning additional information about the token, such as the id, name, permissions, expiry, and IP filters.
Storage
The WriteNewSegmentFileFormat feature flag has been removed and the feature is now enabled by default, improving compression of segment files.

The number of autoshard increase requests allowed has been reduced, to lower the pressure these requests put on global traffic.
API
Implemented support for returning a result over 1GB in size on the /api/v1/globalsubset/clustervhost endpoint. The size of the returned result is now limited to 8GB.
Configuration
A new boolean dynamic configuration parameter, DisableNewRegexEngine, has been added for disabling the LogScale Regular Expression Engine V2 globally on the cluster. This parameter does not stop queries that are already running and using the engine, but prevents the submission of new ones. See Setting a Dynamic Configuration Value for an example of how to set dynamic configurations.

The default value of the INGEST_OCCUPANCY_QUERY_PERMIT_LIMIT variable has been changed from 90% to 20%.

The default value for MINISEGMENT_PREMERGE_MIN_FILES has been increased from 4 to 12. This results in less global traffic from merges, and reduces churn in bucket storage from mini-segments being replaced.
Dashboards and Widgets
Numbers in the Table widget can now be displayed with trailing zeros to maintain a consistent number of decimal places.

When configuring series for a widget, suggestions for series are now available in a dropdown list, rather than having to type the series out.

The Bar Chart widget can now be configured in the style panel with a horizontal or vertical orientation.
Ingestion
Query resources will now also account for reading segment files in addition to scanning files. This will enable better control of CPU resources between search and the data pipeline operations (ingest, digest, storage).
Increased a timeout for loading new CSV files used in parsers to reduce the likelihood of having the parser fail.
The way query resources are handled with respect to ingest occupancy has changed. If the maximum occupancy over all the ingest readers is less than the limit set (90% by default), LogScale will not reduce resources for queries. The new configuration variable INGEST_OCCUPANCY_QUERY_PERMIT_LIMIT allows changing this default limit of 90% to adjust how busy ingest readers should be before query resources are limited.

The toolbar of the Parser editor has been modified to be more in line with the design of the LogScale layout. Some buttons are now found under the ellipsis menu. For more information, see Parsing Data.
Added logging when a parser fails to build and ingest defaults to ingesting without parsing. The log lines start with Failed compiling parser.
Log Collector
LogScale Collector can now enable internal logging of instances through Fleet Management. For more information, see Fleet Management Internal Logging.
Queries
LogScale Regular Expression Engine V2 is now optimized to support character matching within a single line, e.g. /.*/s.

The ad-hoc tables feature is introduced for easier joins. Use the defineTable() function to define temporary lookup tables, then join them with the results of the primary query using the match() function. The feature offers several benefits:

- An intuitive approach that allows writing join-like queries in the order of execution
- A step-by-step workflow to create complex, nested joins easily
- A workflow that is consistent with the model used when working with Lookup Files
- Easy troubleshooting while building queries, using the readFile() function
- Expanded join use cases, providing support for:
  - inner joins with match(... strict=true)
  - left joins with match(... strict=false)
  - right joins with readFile() | match(... strict=false)
  - join capabilities in LogScale Multi-Cluster Search environments (Self-Hosted users only)
When match() or similar functions are used, additional tabs from the files and/or tables used in the primary query now appear in order in Search next to the Results tab. The tab names are prefixed with "Table: " to make it clearer what they refer to. For more information, see Using Ad-hoc Tables.
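The ad-hoc table workflow can be sketched as a single query. This is a minimal, hypothetical example: the table name, the fields ip and status, and the sub-query are illustrative, not taken from the release notes.

```logscale
// Define a temporary lookup table of IPs that produced errors (sub-query is illustrative)
defineTable(name="error_ips", query={status >= 400 | groupBy([ip])}, include=[ip])
// Inner join: keep only primary-query events whose ip appears in the table
| match(table="error_ips", field=ip, strict=true)
```

Swapping strict=true for strict=false turns this inner join into a left join, per the list above.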
Changed the internal submit endpoint so that requests log correct information on whether the request is internal or not.
Functions
Improvements in the sort(), head(), and tail() functions: the error message when entering an incorrect value in the limit parameter now mentions both the minimum and the maximum configured value for the limit.

Introducing the new query function array:rename(). This function renames all consecutive entries of an array starting at index 0. For more information, see array:rename().

A new parameter trim has been added to the parseCsv() function to ignore whitespace before and after values. In particular, it allows quotes to appear after whitespace. This is a non-standard extension useful for parsing data created by sources that do not adhere to the CSV standard.

The following new functions have been added:
- bitfield:extractFlagsAsString() collects the names of the flags appearing in a bitfield in a string.
- bitfield:extractFlagsAsArray() collects the names of the flags appearing in a bitfield in an array.
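As a hedged sketch of how these functions might be called — the field name flags, the flag names, and the flagNames parameter shape are assumptions modeled on the existing bitfield:extractFlags() function, not confirmed signatures:

```logscale
// Collect the names of the set bits of the numeric field "flags" into one string
bitfield:extractFlagsAsString(field=flags, flagNames=[[0, READ], [1, WRITE], [2, EXEC]], as=setFlags)
```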
bitfield:extractFlags() can now handle unsigned 64-bit input. It can also handle larger integers, but only the lowest 64 bits will be extracted.

The wildcard() function has an additional parameter: includeEverythingOnAsterisk. When this parameter is set to true and pattern is set to *, the function will also match events that are missing the field specified in the field parameter. For more information, see wildcard().

The following query function limits now have their minimum value set to 1. In particular:

- The bucket() and timeChart() query functions now require that the value given as their buckets argument is at least 1. For example, bucket(buckets=0) will produce an error.
- The collect(), hash(), readFile(), selfJoin(), top() and transpose() query functions now require their limit argument to be at least 1. For example, top([aid], limit=0) will produce an error.
- The series() query function now requires the memlimit argument to be at least 1, if provided. For example, | series(collect=aid, memlimit=0) will produce an error.
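As a sketch of the new includeEverythingOnAsterisk parameter on wildcard() described above (the field name hostname is illustrative):

```logscale
// With pattern="*" and includeEverythingOnAsterisk=true, events that lack the
// "hostname" field entirely are matched as well
wildcard(field=hostname, pattern="*", includeEverythingOnAsterisk=true)
```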
The new query functions crypto:sha1() and crypto:sha256() have been added. These functions compute a cryptographic SHA hash of the given fields and output a hex string as the result.
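A minimal sketch of the new hashing functions (the input fields and output name are illustrative):

```logscale
// Compute a SHA-256 hex digest over the given field values
crypto:sha256(field=[user, session_id], as="fingerprint")
```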
Fixed in this release
Security
OIDC authentication would fail if certain characters in the
state
variable were not properly URL-encoded when redirecting back to LogScale. This issue has been fixed.
UI Changes
The Event List would not take sorting from the query API into consideration when sorting events based on the UI configuration. This issue has been fixed.
The red border appearing in the Table widget when invalid changes are made to a dashboard interaction would not display correctly. This issue has been fixed.

Dragging would stop working on the Dashboard page in cases where invalid changes were made and saved to a widget and the user would then click. This issue has been fixed and dragging now also works correctly in this case.
Automation and Alerts
Fixed an issue where the
Action
overview page would not load if it contained a large number of actions.
GraphQL API
The role.users query would return duplicate users in some cases. This issue has been fixed.
Storage
Mini-segments would not be prioritized correctly when fetching them from bucket storage. This issue has now been fixed.
Segments were not being fetched on an owner node. This issue could lead to temporary under-replication and keeping events in Kafka.
Resolved a defect that could lead to corrupted JSON messages on the internal Kafka queue.
A NullPointerException error occurring since version 1.156.0 when closing segment readers during redactEvent processing has now been fixed.

Several issues have been fixed which could cause LogScale to replay either too much or too little data from Kafka if segments with topOffsets were deleted at inopportune times. LogScale will now delay deleting newly written segments, even if they violate retention, until the topOffsets field has been cleared, which indicates that the segments cannot be replayed from Kafka later. Segment bytes being held onto in this way are logged by the RetentionJob as part of the periodic logging.
as part of the periodic logging.An extremely rare data loss issue has been fixed: file corruption on a digester could cause the cluster to delete all copies of the affected segments, even if some copies were not corrupt. When a digester detects a corrupt recently-written segment file during bootup, it will no longer delete that segment from Global. It will instead only remove the local file copy. If the segment needs to be deleted in Global because it's being replayed from Kafka, the new digest leader will handle that as part of taking over the partition.
Recently ingested data could be lost when the cluster has bucket storage enabled, USING_EPHEMERAL_DISKS is set to false, and a recently ingested segment only exists in bucket storage. This issue has now been fixed.

LogScale could spuriously log Found mini segment without replacedBy and a merge target that already exists errors when a repository is undeleted. This issue has been fixed.
API
An issue has been fixed in the computation of the digestFlow property of the query response. The information contained there would be stale in cases where the query started from a cached state or there were digest leadership changes (for example, in case of node restarts). For more information, see Polling a Query Job.
Dashboards and Widgets
Long values rendered in the Single Value widget would overflow the widget container. This issue has now been fixed.

Dashboard parameter values were mistakenly not used by saved queries in scenarios with parameter naming overlap and no saved query arguments provided.
Ingestion
Some Parser Assertions would be marked as passing even though they should be failing. This issue has been fixed.
An erroneous array gap detection has been fixed, as it would detect gaps where there were none.
An error is no longer returned when running parser tests without test cases.
An issue has been fixed that could cause the starting position for digest to get stuck in rare cases.
Queries
Backtracking checks have been added to the optimized instructions for (?s).*? in the LogScale Regular Expression Engine V2. This prevents regexes of this type from getting stuck in infinite loops, which are detrimental to a cluster's health.

Fixed an issue which could cause live query results from some workers to be temporarily represented in the final result twice. The situation was transient and could only occur during digester changes.
Fixed an issue where a query would fail to start in some cases when the query cache was available. The user would see the error Recent events overlap span excluded from query using historicStartMin.
Stopping alerts and scheduled searches could create a Could not cancel alert query entry in the activity logs. This issue has now been fixed. The queries were still correctly stopped previously, but this bug led to incorrect logging in the activity log.
The query scheduler has been fixed for an issue that could cause queries to get stuck in rare cases.
Functions
In defineTable(), the start and end parameters did not work correctly when the primary query's end time was a relative timestamp: the sub-query's time was relative to now. It has now been fixed to be relative to the primary query's end time.

Error messages produced by the match() function could reference the wrong file. This issue has now been fixed.
Other
Query result highlighting would crash cluster nodes when getting filter matches for some regexes. This issue has been fixed.
Known Issues
Functions
A known issue in the implementation of the defineTable() function means it is not possible to transfer generated tables larger than 128MB. The user receives an error if the generated table exceeds that size.

The match() function misses some matching rows when matching on multiple rows in glob mode. This happens in cases where there are rows with different glob patterns matching the same event. For example, using a file example.csv:

```
column1,column2
ab*,one
a*,two
a*,three
```

and the query:

```logscale
match(example.csv, field=column1, mode=glob, nrows=3)
```

An event with the field column1=abc will only match on the last two rows.
The match() function misses some matching rows when matching on multiple rows in cidr mode. This happens in cases where there are rows with different subnets matching the same event. For example, using a file example.csv:

```
subnet,value
1.2.3.4/24,monkey
1.2.3.4/25,horse
```

and the query:

```logscale
match(example.csv, field=subnet, mode=cidr, nrows=3)
```

An input event with ip = 1.2.3.10 will only output:

```
ip,value
1.2.3.10,horse
```

whereas the correct output should actually be:

```
ip,value
1.2.3.10,horse
1.2.3.10,monkey
```
Improvement
UI Changes
Improved the information messages displayed in the query editor when errors occur with lookup files used in queries.

Improved the warnings given when performing multi-cluster searches across clusters running on different LogScale versions.
API
Improved the efficiency of the autosharding rules store.
Queries
Worker query prioritization is improved in specific cases where a query starts off highly resource-consuming but becomes more efficient as it progresses. In such cases, the scheduler could severely penalize the query, leading to it being unfairly deprioritized.
Queries that refer to fields in the event are now more efficient due to an improvement made in the query engine.