The groupBy() query function is used to group together events by one or more specified fields. This is similar to the GROUP BY clause in SQL databases.

Parameter | Type | Required | Default Value | Description
field[a] | array of strings | required | | Specifies which field or fields to group by.
function | array of aggregate functions | optional[b] | count(as=_count) | Specifies which aggregate functions to use with each group. If several aggregators are listed for the function parameter, their outputs are combined using the rules described for stats().
limit | string | optional[b] | 20,000 | Sets the limit for the number of group elements. The default value is controlled by the GroupDefaultLimit dynamic configuration, and the maximum value by GroupMaxLimit. If the argument is max (limit=max), then the value of GroupMaxLimit is used. The limit prioritizes the top-N series, where the top N are the series with the highest numerical value attributed to them by the subquery across all fields. Maximum: 1,000,000. Controlling variable: GroupMaxLimit (variable default: 1,000,000 group elements).

[a] The parameter name field can be omitted.

[b] Optional parameters use their default value unless explicitly set.
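
For example, because the field parameter name can be omitted, the following two forms of the same query are equivalent:

logscale
// equivalent to groupBy(field=status_code, function=count())
groupBy(status_code, function=count())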


groupBy() Function Operation

The groupBy() function has specific implementation and operational considerations, outlined below.

  • When groupBy() reaches its limit, each group may not have processed all relevant events. Consequently, when using count as the subaggregate function, groupBy() will produce a lower bound estimate.

Series Selection in groupBy()

The selection is based on the aggregate numerical output across all specified functions and all time buckets, not the series identifiers themselves.

The limit prioritizes the top-N series, where the top N are the series with the highest numerical value attributed to them by the subquery across all fields.

Series Selection Process:

  • The selection is based on the numerical values produced by the subquery/function.

  • It is not based on the series names.

  • When multiple functions are used, the function considers all values produced.

For different examples of top N series selection, see Find Top N Value of Series - Example 1 and Find Top N Value of Series - Example 2.

Grouping in groupBy()

The groupBy() function can be used to execute aggregate functions on each group. For each aggregate function, the generated fields are grouped by the field or fields given in the field parameter. By default, the count() function is used and the grouped count is returned in the _count field.
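
For example, a minimal sketch combining several aggregate functions in a single grouping (host is used in the examples below; response_time is an assumed illustrative field):

logscale
// one row per host, with an event count and an average response time
groupBy(host, function=[count(as=events), avg(response_time, as=avg_response)])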

The default is to use count() as an aggregate function. However, if the goal is to only find the unique values of a field or combinations of multiple fields, then groupBy() can be provided with an empty list as its aggregate function.

When showing time series data, the timeChart() and bucket() functions are extensions of groupBy() that group by time.
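
For example, a sketch of a time-based equivalent that counts events per host in one-hour buckets (the field and span values are illustrative):

logscale
// like groupBy(host), but bucketed on time
timeChart(host, function=count(), span=1h)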

Limits when using groupBy()

Warning

The groupBy() function may return incomplete results, limited to a restricted subset of groups and their related events, in cases where the limit parameter is exceeded.

The groupBy() function is limited in the number of groups it will handle. This limit is set in the LogScale configuration using GraphQL mutations and is bound by the GroupMaxLimit dynamic configuration.

Starting from version 1.127, the maximum value of the limit parameter is 1,000,000 by default (adjustable by the corresponding GroupMaxLimit dynamic configuration), which means it is possible to create up to 1 million groups.

When the limit parameter is not supplied explicitly, its default value is determined by the GroupDefaultLimit dynamic configuration, which has a default value of 20,000.

GroupDefaultLimit cannot be higher than GroupMaxLimit.
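
For example, the limit can be raised for a single query, either to an explicit number below GroupMaxLimit or to the configured maximum with limit=max. A sketch with an assumed url field:

logscale
groupBy(url, function=count(), limit=max)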

Attention

The limit parameter is governed by the dynamic configurations GroupMaxLimit and GroupDefaultLimit, which are set to the same value for a seamless upgrade.

Additionally, like all LogScale functions, groupBy() has an internal memory limit determined by the dynamic configuration QueryCoordinatorMemoryLimit. This means that it is possible for the function to collect fewer groups than the specified limit, if the total amount of data collected by the function exceeds this limit. The internal memory limit is in place to protect cluster health and ensure sufficient memory for other queries to run. Should the memory limit be reached, no more groups are added. This may lead to results that are inconsistent with the results shown prior to the limit being reached. Groups that were present in prior results may be pushed out of the result set during merges between nodes, to ensure that memory usage is kept within bounds. If a group is shown, the results for that group will be correct. In situations where you reach the memory limit, you may want to use the top() function to narrow the list of groups.
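
As a sketch of that approach, assuming a host field, top() keeps only the most common values instead of every group:

logscale
// keep the 100 most common hosts rather than all groups
top(host, limit=100)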

One of the reasons for this limitation is that groupBy() is implemented to work entirely in memory and cannot spill to disk. The limit helps to prevent searches from consuming too much memory. This method and limitation may be changed in future versions of LogScale.

groupBy() Syntax Examples

Since the most common use-case is to count the distinct values of a field, the default behavior is to use count() to aggregate the events. The simplest use-case of groupBy() is, therefore, to count the occurrence of the distinct values of a field:

logscale
groupBy(status_code)

Given input:

status_code ip
440 1.111.111.111
500 1.111.111.111
440 1.111.111.111
440 2.222.222.222

The output is:

status_code _count
440 3
500 1

If the task is only to find the unique values, groupBy() can be provided with an empty list as the aggregate function, implying that nothing will be aggregated:

logscale
groupBy(status_code, function=[])

The output is:

status_code
440
500

Similarly, groupBy() can be used to find the unique combinations of field values present in the data:

logscale
groupBy([status_code, ip], function=[])

The output is:

status_code ip
440 1.111.111.111
500 1.111.111.111
440 2.222.222.222

A usage of the function such as the one above is also very useful for query-based parameters in dashboards; see Query Parameter for more explanations.

As an example of how to use this query function, suppose you have a LogScale repository that is ingesting data from one or more web servers. Now suppose you want to get a list of the HTTP status codes returned as users access the server, for instance HTTP code 404 (Not Found) for web pages that could not be found. You could get a list of all of the HTTP codes returned, and a count of each, by executing a query that counts the different HTTP status codes, like this:

logscale
groupBy(field=statuscode, function=count())
| sort(statuscode, order=asc)

In addition to the groupBy(), the results here are piped to the sort() function to sort the results from the lowest HTTP code to the highest.

statuscode | _count
200 | 7485
301 | 2314
304 | 65
400 | 101
404 | 6425
405 | 2
408 | 9

With the groupBy() function, you can have LogScale group by more than one field. You would just give the fields in an array, a comma-separated list within square brackets. It would look something like this query, in which events are grouped on HTTP method and status code:

logscale
groupBy(field=[method, statuscode], function=count())
| sort([method, statuscode], order=asc)

This will produce these results:

method | statuscode
CONNECT | 301
GET | 200
GET | 301
GET | 304
GET | 400
GET | 404
HEAD | 200
HEAD | 301
HEAD | 400
HEAD | 404
OPTIONS | 200
OPTIONS | 301
POST | 200
POST | 301
POST | 400
POST | 404

Although this query works, you may also want to show the total for each method, not just the total for each status code of each method. To do that, you'll have to adjust the query to look like this:

logscale
groupBy(method, function=[count(as=method_total), groupBy(statuscode, function=count(as=method_status_count))])

In this query, a groupBy() is nested within another groupBy(). The results will look like this:

method | method_total | statuscode | method_status_count
GET | 14078 | 200 | 7326
GET | 14078 | 301 | 1246
GET | 14078 | 304 | 65
GET | 14078 | 400 | 70
GET | 14078 | 404 | 5371
HEAD | 139 | 200 | 14
HEAD | 139 | 301 | 22
HEAD | 139 | 400 | 6
HEAD | 139 | 404 | 97
POST | 2005 | 200 | 14
POST | 2005 | 301 | 1002
POST | 2005 | 400 | 25
POST | 2005 | 404 | 964

These results still might not be as tidy as you like. You might do better with separate queries for each method or each status code — assuming you're interested in only a few specific ones. Another possibility might be to use an API to be able to assemble the results in a manner that you prefer.
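
For example, a sketch of a separate query for a single method, reusing the fields from the examples above:

logscale
method=GET
| groupBy(statuscode, function=count())
| sort(statuscode, order=asc)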

Instead of a single function, additional query functions can also be applied, allowing for calculations on the count that is obtained from groupBy(). The following query contains an embedded expression within groupBy(), as a demonstration of formatting/calculating the value within a groupBy() expression:

logscale
groupBy(host, function=[{count()
| esp:=_count/300}])

The query uses count() to count the events for each host in the aggregation, which creates a field _count that is then divided by 300 and placed into the field esp.

groupBy() Examples


Hourly Data Events

Summarize events by providing a count of the number of data events per hour using the time:hour() function

Query
logscale
hr := time:hour(field="@ingesttimestamp")
| groupBy(hr)
Introduction

In this example, the time:hour() function is used with groupBy() to count the number of data events per hour.

Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    hr := time:hour(field="@ingesttimestamp")

    Gets the hour (24-hour clock) of the values in the @ingesttimestamp and returns the results in a new field named hr.

  3. logscale
    | groupBy(hr)

    Groups the returned results by hr field and provides a count of the number of data events per hour in a _count field.

  4. Event Result set.

Summary and Results

The query is used to count the number of data events per hour. The results can be plotted onto a bar chart.

Sort Timestamps With groupBy()

Sorting fields based on aggregated field values

Query
logscale
timestamp := formatTime(format="%H:%M")
| groupBy([thread],
function=[{sort("timestamp")
| collect("timestamp")}])
Introduction

When using aggregation, you may want to sort on a field that is part of the aggregated set but not the main feature of the aggregated value. For example, you may want to sort the values by their timestamp rather than by the aggregated value. To achieve this, use a sort on the field to be used as the sort field, and then use collect() so that the value from before the aggregation can be displayed in the generated event set. This query can be executed in the humio repository.

Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    timestamp := formatTime(format="%H:%M")

    Creates a new field, timestamp, formatted as HH:MM.

  3. logscale
    | groupBy([thread],

    Groups the events by the name of the thread; the formatted timestamp is then handled within each group by the aggregate function.

  4. logscale
    function=[{sort("timestamp")
    | collect("timestamp")}])

    Uses sort() combined with collect() as the method of aggregation. As an embedded expression for the function, this sorts the events on the timestamp field and then collects that field, as it would otherwise be removed as part of the aggregation process.

  5. Event Result set.

Summary and Results

The result set will contain a list of the aggregated thread names sorted by the timestamp:

thread | timestamp
BootstrapInfoJob | 10:09
DataSynchJob | 10:09
Global event loop | 10:10
LocalLivequeryMonitor | 10:09
LogCollectorManifestUpdate | 10:09
TransientChatter event loop | 10:10
aggregate-alert-job | 10:09
alert-job | 10:09
block-processing-monitor-job | 10:09
bloom-scheduler | 10:09
bucket-entity-config | 10:09
bucket-overcommit-metrics-job | 10:09
bucket-storage-download | 10:09
bucket-storage-prefetch | 10:09
chatter-runningqueries-logger | 10:09
chatter-runningqueries-stats | 10:09

Deduplicate Content by Field

Deduplicating content based on a specific field

Query
logscale
groupBy(field, function=tail(1))
Introduction

If you want to deduplicate events by a given field, for example to identify a unique list of events for further processing, you can use an aggregate function. In this example, the groupBy() function is used with tail() to use the last value in a sequence of events.

Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    groupBy(field, function=tail(1))

    Groups all events by a specific field, and reduces the results using tail() to take only the last value.

  3. Event Result set.

Summary and Results

The query is used to deduplicate events by a given field. This is useful if you want to create a unique list of events for further processing.
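
For example, a minimal sketch that deduplicates on an assumed ip field, keeping only the most recent event for each address:

logscale
groupBy(ip, function=tail(1))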

Alert Query For Parsers Issues

Reporting errors

Query
logscale
#type=humio #kind=logs
| loglevel=WARN
| class = c.h.d.ParserLimitingJob
| "Setting reject ingest for"
| groupBy(id, function=[count(), min(@timestamp), max(@timestamp)] )
| timeDiff:=_max-_min
| timeDiff > 300000 and _count > 10
Introduction

This alert query tries to balance reacting when there are problems with parsers, without being too restrictive.

Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    #type=humio #kind=logs

    Filters on all logs across all hosts in the cluster.

  3. logscale
    | loglevel=WARN

    Filters for all events where the loglevel is equal to WARN.

  4. logscale
    | class = c.h.d.ParserLimitingJob

    Filters for events where the class field has the value c.h.d.ParserLimitingJob, among the logs with the loglevel value WARN.

  5. logscale
    | "Setting reject ingest for"

    Filters for events containing the string Setting reject ingest for. This is the error message generated when ingested events are rejected.

  6. logscale
    | groupBy(id, function=[count(), min(@timestamp), max(@timestamp)] )

    Groups the returned result by the field id, makes a count on the events and returns the minimum timestamp and maximum timestamp. This returns a new event set, with the fields id, _count, _min, and _max.

  7. logscale
    | timeDiff:=_max-_min

    Calculates the time difference between the maximum timestamp values and the minimum timestamp values and returns the result in a new field named timeDiff.

  8. logscale
    | timeDiff > 300000 and _count > 10

    Returns all events where the value of timeDiff is greater than 300000 and where there are more than 10 occurrences.

  9. Event Result set.

Summary and Results

This query is used to set up alerts for parsers issues. Setting up alerts for parsers issues will allow you to proactively reach out to customers whose queries are being throttled and help them.

Calculate Events per Second by Host

Determine event rate for each host over a 5-minute period using an embedded expression within the groupBy() function

Query
logscale
groupBy(host, function=[{count() | esp:=_count/300}])
Introduction

In this example, the groupBy() function is used with an embedded expression to calculate the total event count and events per second for each host over a 5-minute period.

Example incoming data might look like this:

@timestamp | host | service | status | response_time
2025-08-06T10:00:00Z | server1 | web | ok | 120
2025-08-06T10:00:01Z | server2 | database | ok | 85
2025-08-06T10:00:02Z | server1 | web | ok | 95
2025-08-06T10:00:03Z | server3 | cache | ok | 45
2025-08-06T10:00:04Z | server2 | database | error | 250
2025-08-06T10:00:05Z | server1 | web | ok | 110
2025-08-06T10:00:06Z | server3 | cache | ok | 40
2025-08-06T10:00:07Z | server2 | database | ok | 90
2025-08-06T10:00:08Z | server1 | web | error | 300
2025-08-06T10:00:09Z | server3 | cache | ok | 42
Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    groupBy(host, function=[{count() | esp:=_count/300}])

    Groups events by the host field and uses an embedded expression to count the number of events per host and calculate events per second in one operation.

    The count() function returns the count in a field named _count by default. The embedded expression then divides this value by 300 and stores the result in a new field named esp. This calculation provides the average events per second for each host over the time period.

    Using an embedded expression within the groupBy() function performs both the count and the calculation as part of the same aggregation. Both the original _count and the calculated esp field are included in the results.

  3. Event Result set.

Summary and Results

The query is used to analyze event frequency patterns by calculating both the total event count and the average events per second for each host.

Note that the query aggregates against both the original count and the count/300 as one aggregate set. Using an embedded expression is more efficient for larger event sets.

This query is useful, for example, to monitor system load distribution across hosts, identify hosts with unusual event rates, or establish baseline activity patterns for capacity planning.

Sample output from the incoming example data:

host | _count | esp
server1 | 4 | 0.013333
server2 | 3 | 0.010000
server3 | 3 | 0.010000

Note that the _count field shows the total number of events per host, and the esp field shows the calculated events per second (total events divided by 300 seconds).

This data is ideal for visualization using a Time Chart widget to show event rates over time. A Bar Chart widget could compare event rates across hosts, while a Gauge widget could show current event rates against predefined thresholds. Consider creating a dashboard that combines these visualizations with alerts for when event rates exceed normal ranges.

Calculate Query Cost for All Users by Repository

Search across multiple repositories to calculate query costs for all users by repository using sort() and groupBy() functions

Query
logscale
#type=humio #kind=logs class=c.h.j.RunningQueriesLoggerJob message="Highest Cost query"
 | groupBy(repo, initiatingUser, totalLiveCost, totalStaticCost)
 | sort([totalLiveCost, totalStaticCost])
Introduction

In this example, the query uses the sort() and groupBy() functions to find query costs. The query filters logs in the humio repository that are tagged with kind equal to logs and then returns the events where the class field has values containing c.h.j.RunningQueriesLoggerJob, searching for the specific value Highest Cost query.

Example incoming data might look like this:

#type | #kind | class | message | timestamp | dataspace | initiatingUser | totalLiveCost | totalStaticCost | deltaTotalCost | repo
humio | logs | c.h.j.RunningQueriesLoggerJob | Highest Cost query | 2025-03-26T09:30:00Z | production | john.doe | 1500 | 800 | 2300 | security-logs
humio | logs | c.h.j.RunningQueriesLoggerJob | Highest Cost query | 2025-03-26T09:31:00Z | development | jane.smith | 2000 | 1200 | 3200 | app-logs
humio | logs | c.h.j.RunningQueriesLoggerJob | Highest Cost query | 2025-03-26T09:32:00Z | staging | bob.wilson | 1000 | 500 | 1500 | infra-logs
humio | logs | c.h.j.RunningQueriesLoggerJob | Highest Cost query | 2025-03-26T09:33:00Z | production | john.doe | 1800 | 900 | 2700 | security-logs
humio | logs | c.h.j.RunningQueriesLoggerJob | Highest Cost query | 2025-03-26T09:34:00Z | development | jane.smith | 2500 | 1300 | 3800 | app-logs
humio | logs | c.h.j.RunningQueriesLoggerJob | Highest Cost query | 2025-03-26T09:35:00Z | staging | alice.cooper | 1200 | 600 | 1800 | infra-logs
Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    #type=humio #kind=logs class=c.h.j.RunningQueriesLoggerJob message="Highest Cost query"

    Filters for Humio internal logs containing c.h.j.RunningQueriesLoggerJob in the class field and where the value in the message field is equal to Highest Cost query.

  3. logscale
    | groupBy(repo, initiatingUser, totalLiveCost, totalStaticCost)

    Groups the results by the repo field, the initiatingUser field, and by both cost types (the fields totalLiveCost, totalStaticCost), and returns a count in a field named _count.

  4. logscale
    | sort([totalLiveCost, totalStaticCost])

    Sorts the results by both the totalLiveCost field and the totalStaticCost field, in descending order by default.

  5. Event Result set.

Summary and Results

The query is used to search across multiple repositories and output query costs for all users by repository. The query returns the count in a field named _count. Use this query to focus on live and static costs separately.

Sample output from the incoming example data:

repo | initiatingUser | totalLiveCost | totalStaticCost | _count
app-logs | jane.smith | 2000 | 1200 | 1
security-logs | john.doe | 1500 | 800 | 1
infra-logs | bob.wilson | 1000 | 500 | 1

Detect All Occurrences of Event A Before Event B

Detect all occurrences of event A before event B (brute force attack) using the partition() function combined with groupBy()

Query
logscale
head()
| groupBy(
      key,
      function = partition(
          condition=test(status=="success"),
          split="after",
          [
              { status="failure" | count(as=failures) }, 
              range(@timestamp, as=timespan), 
              selectLast(status)
          ]
      )
  )
| failures >= 3
| status = "success"
Introduction

In this example, the partition() function is used with the groupBy() function to detect all occurrences of event A before event B (brute force attack).

The query will detect instances where there were 3 or more failed attempts followed by a successful attempt, regardless of the time between them.

Note that the partition() function must be used after an aggregator function to ensure event ordering. Also note that the events must be sorted in order by timestamp to prevent errors when running the query. It is possible to select any field to use as a timestamp.

Example incoming data might look like this:

@timestamp | key | status
1451606300200 | c | failure
1451606300400 | c | failure
1451606300600 | c | failure
1451606301000 | a | failure
1451606302000 | a | failure
1451606302200 | a | failure
1451606302300 | a | failure
1451606302400 | b | failure
1451606302500 | a | failure
1451606302600 | a | success
1451606303200 | b | failure
1451606303300 | c | success
1451606303400 | b | failure
1451606304500 | a | failure
1451606304600 | a | failure
1451606304700 | a | failure
1451606304800 | a | success
Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    head()

    Selects the oldest events ordered by time.

  3. logscale
    | groupBy(
          key,
          function = partition(
              condition=test(status=="success"),
              split="after",
              [
                  { status="failure" | count(as=failures) }, 
                  range(@timestamp, as=timespan), 
                  selectLast(status)
              ]
          )
      )

    Groups the events by a specified key (for example, a user ID or IP address), filters for successful events (filters for events that meet the defined condition for the field status that must contain the value success), then splits the data after each successful event. Notice how the condition is provided as a non-aggregate subquery.

    Furthermore, it filters all the failed attempts where the field status contains the value failure.

    Makes a count of all the failed attempts and returns the results in a field named failures, calculates the timespan of the failures, and selects the status of the last event. Calculating the timespan of the failure sequence is useful for analysis.

  4. logscale
    | failures >= 3

    Filters for partitions with 3 or more failures.

  5. logscale
    | status = "success"

    Filters for partitions containing the value success in the status field.

  6. Event Result set.

Summary and Results

The query is used to detect all occurrences of potential brute force attack patterns. It looks for instances where there were 3 or more failed attempts (event A) followed by a successful attempt (event B), regardless of the time between failures. The timespan between each attempt is reported, which could be used to identify brute force attacks.

Sample output from the incoming example data:

key | failures | timespan | status
a | 5 | 1600 | success
a | 3 | 300 | success
c | 3 | 3100 | success

Detect Event A Happening X Times Before Event B

Detect event A happening X times before event B (brute force attack) using the partition() function combined with groupBy()

Query
logscale
head()
| groupBy(
    key,
    function = partition(
        condition=test(status=="success"),
        split="after",
        [
            { status="failure" | count(as=failures) }, 
            range(@timestamp, as=timespan), 
            max(@timestamp), 
            selectLast(status)
        ]
    )
)
| failures >= 3
| status = "success"
Introduction

In this example, the partition() function is used with the groupBy() function to detect event A happening X times before event B (brute force attack).

The query will detect instances where there were 3 or more failed attempts followed by a successful attempt.

Note that the partition() function must be used after an aggregator function to ensure event ordering. Also note that the events must be sorted in order by timestamp to prevent errors when running the query. It is possible to select any field to use as a timestamp.

Example incoming data might look like this:

@timestamp | key | status
1451606300200 | c | failure
1451606300400 | c | failure
1451606300600 | c | failure
1451606301000 | a | failure
1451606302000 | a | failure
1451606302200 | a | failure
1451606302300 | a | failure
1451606302400 | b | failure
1451606302500 | a | failure
1451606302600 | a | success
1451606303200 | b | failure
1451606303300 | c | success
1451606303400 | b | failure
1451606304500 | a | <no value>
1451606304600 | c | failure
1451606304700 | c | failure
1451606304800 | c | failure
Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    head()

    Selects the oldest events ordered by time.

  3. logscale
    | groupBy(
        key,
        function = partition(
            condition=test(status=="success"),
            split="after",
            [
                { status="failure" | count(as=failures) }, 
                range(@timestamp, as=timespan), 
                max(@timestamp), 
                selectLast(status)
            ]
        )
    )

    Groups the events by a specified key (for example, a user ID or IP address), then splits the sequence of events after each successful event (where the condition status=="success").

    For each partition, it counts the number of failure values in status and stores the count in the field failures, finds the range of timestamps in the partition, finds the newest timestamp, and finds the latest status to show whether the partition ended with a success.

  4. logscale
    | failures >= 3

    Filters for partitions that contained 3 or more failures.

  5. logscale
    | status = "success"

    Filters for partitions with the value success in the status field to ensure that the final status is a success.

  6. Event Result set.

Summary and Results

The query is used to detect instances where there are 3 or more failed attempts followed by a successful attempt. The query can be used to detect a brute force attack where an attacker tries multiple times before succeeding. Note that the effectiveness of this query depends on the nature of your data and the typical patterns in your system.

Sample output from the incoming example data:

key | failures | timespan | status
a | 5 | 1600 | success
a | 3 | 300 | success
c | 3 | 3100 | success

Filter Out Fields With No Value

Filter out fields with no values from search results using groupBy() with sort()

Query
logscale
method=GET
groupBy(field=[method, statuscode], function=count(as=method_total))
sort([method, statuscode], order=asc)
FieldName!=""
Introduction

It is possible to filter out fields with no values in a given returned search result. In this example, all statuscode fields containing no value are filtered out from the final search result.

Example incoming data might look like this:

method | statuscode | method_total
GET | <no value> | 10
GET | 200 | 32492
GET | 301 | 1
GET | 304 | 113
GET | 403 | 9
GET | 404 | 132
Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    method=GET

    Filters for all events with methods of the type GET.

  3. logscale
    groupBy(field=[method, statuscode], function=count(as=method_total))

    Groups the returned results into a method field and a statuscode field and makes a count of the events in a new field named method_total.

  4. logscale
    sort([method, statuscode], order=asc)

    Sorts the returned results in ascending order.

  5. logscale
    FieldName!=""

    Excludes all events where the specified field (here, statuscode) does not contain a value.

  6. Event Result set.

Summary and Results

The query is used to filter out fields not containing any values from the returned search result.

Sample output from the incoming example data:

method | statuscode | method_total
GET | 200 | 32492
GET | 301 | 1
GET | 304 | 113
GET | 403 | 9
GET | 404 | 132

Find Overlapping User Sessions

Detect when user sessions overlap in time using the neighbor() function within groupBy()

Query
logscale
groupBy(user.id, function=neighbor(include=[startTime, endTime], prefix=previous))
| groupBy(user.id, function=neighbor(direction=succeeding, include=[startTime, endTime], prefix=next))
| case {
        test(startTime <= previous.endTime) | overlaps := "previous";
        test(endTime >= next.startTime) | overlaps := "next";
        }
| select([user.id, startTime, endTime, overlaps, previous.startTime, previous.endTime, next.startTime, next.endTime])
Introduction

In this example, the neighbor() function is used to identify overlapping user sessions by comparing session start and end times with both previous and next sessions for each user. The function is implemented as part of groupBy() operations for optimal performance, as this approach processes the data in a single pass.

Example incoming data might look like this:

@timestamp | user.id | startTime | endTime
2025-08-06T10:00:00Z | user1 | 2025-08-06T10:00:00Z | 2025-08-06T10:30:00Z
2025-08-06T10:15:00Z | user1 | 2025-08-06T10:15:00Z | 2025-08-06T10:45:00Z
2025-08-06T10:45:00Z | user1 | 2025-08-06T10:45:00Z | 2025-08-06T11:15:00Z
2025-08-06T10:00:00Z | user2 | 2025-08-06T10:00:00Z | 2025-08-06T10:20:00Z
2025-08-06T10:30:00Z | user2 | 2025-08-06T10:30:00Z | 2025-08-06T10:50:00Z
2025-08-06T11:00:00Z | user2 | 2025-08-06T11:00:00Z | 2025-08-06T11:20:00Z
Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    groupBy(user.id, function=neighbor(include=[startTime, endTime], prefix=previous))

    Groups events by user.id and uses neighbor() to access the previous session's startTime and endTime. The prefix parameter adds previous. to the field names in the output. Using neighbor() as part of the groupBy() function is significantly more efficient than using it separately, as it allows the operation to be performed during group creation, eliminating the need for additional filtering and transformation steps to manually find sessions for particular users.

  3. logscale
    | groupBy(user.id, function=neighbor(direction=succeeding, include=[startTime, endTime], prefix=next))

    Groups events by user.id again and uses neighbor() to access the next session's startTime and endTime. The direction parameter is set to succeeding to look at the next event, and the prefix adds next. to the field names. This second grouping operation maintains the performance benefits of integrating neighbor() within groupBy().

  4. logscale
    | case {
            test(startTime <= previous.endTime) | overlaps := "previous";
            test(endTime >= next.startTime) | overlaps := "next";
            }

    Uses a case statement to evaluate two conditions and returns the results in a new field named overlaps. The field indicates whether a session overlaps with either its previous or next session:

    • First test: test(startTime <= previous.endTime) checks if the current session starts before or at the same time as when the previous session ends. If true, it assigns previous to the overlaps field.

    • Second test: test(endTime >= next.startTime) checks if the current session ends after or at the same time as when the next session starts. If true, it assigns next to the overlaps field.

    Note that if a session has no overlaps, the overlaps field will not be created for that event.

  5. logscale
    | select([user.id, startTime, endTime, overlaps, previous.startTime, previous.endTime, next.startTime, next.endTime])

    Explicitly selects and orders the output fields for clarity. This ensures consistent output formatting and removes any additional fields that might have been created during the processing steps.

  6. Event Result set.

Summary and Results

The query is used to identify user sessions that overlap in time with either their previous or next sessions.

This query is useful, for example, to detect potential security issues where a user appears to be logged in from multiple locations simultaneously, or to identify problems with session management in applications.

Note

Note that this query is optimized for performance by incorporating the neighbor() function within the groupBy() operations. This approach is significantly more efficient than applying neighbor() separately after grouping, as it reduces the number of processed events by the neighbor() function and leverages LogScale's internal optimizations for grouped operations.

Sample output from the incoming example data:

user.id | startTime | endTime | overlaps | previous.startTime | previous.endTime | next.startTime | next.endTime
user1 | 2025-08-06T10:00:00Z | 2025-08-06T10:30:00Z | next | <no value> | <no value> | 2025-08-06T10:15:00Z | 2025-08-06T10:45:00Z
user1 | 2025-08-06T10:15:00Z | 2025-08-06T10:45:00Z | previous | 2025-08-06T10:00:00Z | 2025-08-06T10:30:00Z | 2025-08-06T10:45:00Z | 2025-08-06T11:15:00Z
user1 | 2025-08-06T10:45:00Z | 2025-08-06T11:15:00Z | previous | 2025-08-06T10:15:00Z | 2025-08-06T10:45:00Z | <no value> | <no value>

The output demonstrates the overlap detection: sessions are marked as overlapping with either their previous session or their next session. The overlaps field contains either previous or next depending on which neighboring session overlaps with the current session.

The results from this query would be well-suited for visualization in a time chart widget, showing the overlapping sessions across time.

Find Processes with Low Execution Count

Group processes by hash and name to identify rarely executed ones using the groupBy() function

Query
logscale
#event_simpleName=ProcessRollup2 OR #event_simpleName=SyntheticProcessRollup2
aid=?aid
groupBy([SHA256HashData, ImageFileName], limit=max)
_count < 5
sort(_count, limit=1000)
Introduction

In this example, the groupBy() function is used to identify processes that have been executed only a few times on a specific host, which could be useful for detecting unusual or potentially suspicious activity.

Example incoming data might look like this:

@timestamp | event_simpleName | aid | SHA256HashData | ImageFileName | CommandLine
2025-10-06T10:00:00Z | ProcessRollup2 | 12345abc | a1b2c3d4e5f6... | chrome.exe | C:\Program Files\Google\Chrome\Application\chrome.exe
2025-10-06T10:05:00Z | ProcessRollup2 | 12345abc | a1b2c3d4e5f6... | chrome.exe | C:\Program Files\Google\Chrome\Application\chrome.exe
2025-10-06T10:10:00Z | SyntheticProcessRollup2 | 12345abc | f6e5d4c3b2a1... | suspicious.exe | C:\Users\Admin\Downloads\suspicious.exe
2025-10-06T10:15:00Z | ProcessRollup2 | 12345abc | 98765432dcba... | notepad.exe | C:\Windows\System32\notepad.exe
2025-10-06T10:20:00Z | ProcessRollup2 | 12345abc | 98765432dcba... | notepad.exe | C:\Windows\System32\notepad.exe
2025-10-06T10:25:00Z | ProcessRollup2 | 12345abc | 11223344aabb... | calc.exe | C:\Windows\System32\calc.exe
Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    #event_simpleName=ProcessRollup2 OR #event_simpleName=SyntheticProcessRollup2

    Filters events to include only process execution events with event_simpleName equal to ProcessRollup2 or SyntheticProcessRollup2.

  3. logscale
    aid=?aid

    Filters events for a specific host using the aid (agent ID) parameter.

  4. logscale
    groupBy([SHA256HashData, ImageFileName], limit=max)

    Groups events by both the SHA256HashData and ImageFileName fields. The limit parameter is set to max to ensure all groups are included.

    By default the count() function is used and the grouped count returned in a field named _count.

  5. logscale
    _count < 5

    Filters the groups to show only those with fewer than 5 executions, using the built-in _count field that is automatically created by groupBy().

  6. logscale
    sort(_count, limit=1000)

    Sorts the results by execution count in ascending order, limiting the output to 1000 results.

  7. Event Result set.

Summary and Results

The query is used to identify processes that have been executed infrequently on a specific host by grouping them based on their hash value and image name.

This query is useful, for example, to detect potentially suspicious or unusual processes that do not run often, which could indicate malicious activity or unauthorized software installations.

Sample output from the incoming example data:

SHA256HashData | ImageFileName | _count
f6e5d4c3b2a1... | suspicious.exe | 1
11223344aabb... | calc.exe | 1
98765432dcba... | notepad.exe | 2

The results are sorted by execution count, showing the least frequently executed processes first. Each row represents a unique combination of process hash and name, along with how many times it was executed.

Processes with the same name but different hashes are treated as separate entries, helping identify potentially malicious files masquerading as legitimate processes.

Group Events by Single Field

Basic grouping of events by status_code field using the groupBy() function

Query
logscale
groupBy(status_code)
Introduction

In this example, the groupBy() function is used to group events by their status codes to analyze the distribution of different response statuses.

Example incoming data might look like this:

@timestamp | status_code | endpoint | response_time
1686837825000 | 200 | /api/users | 145
1686837826000 | 404 | /api/products | 89
1686837827000 | 200 | /api/orders | 167
1686837828000 | 500 | /api/payment | 890
1686837829000 | 200 | /api/users | 156
1686837830000 | 404 | /api/items | 78
1686837831000 | 200 | /api/orders | 178
1686837832000 | 500 | /api/checkout | 923
1686837833000 | 200 | /api/products | 134
1686837834000 | 404 | /api/users | 92
1686837834000404/api/users92
Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    groupBy(status_code)

    Groups events by unique values in the status_code field. When no aggregate function is specified, groupBy() defaults to count() and creates a field named _count showing the number of events for each unique value.

    It is the same as: groupBy(status_code, function=count())

  3. Event Result set.

Summary and Results

The query is used to analyze the distribution of status codes across all events.

The _count field is automatically added to show the number of events in each group.

This query is useful, for example, to monitor system health, identify error patterns, or track the frequency of different response types in a service.

For other examples with groupBy(), see groupBy() Syntax Examples.

Sample output from the incoming example data:

status_code | _count
200 | 5
404 | 3
500 | 2

Group Events by Single Field Without Count

Basic grouping of events by status_code field with explicit empty function parameter using the groupBy() function

Query
logscale
groupBy(status_code, function=[])
Introduction

In this example, the groupBy() function is used to group events by their status codes without calculating a count.

Example incoming data might look like this:

@timestamp | status_code | endpoint | response_time
1686837825000 | 200 | /api/users | 145
1686837826000 | 404 | /api/products | 89
1686837827000 | 200 | /api/orders | 167
1686837828000 | 500 | /api/payment | 890
1686837829000 | 200 | /api/users | 156
1686837830000 | 404 | /api/items | 78
1686837831000 | 200 | /api/orders | 178
1686837832000 | 500 | /api/checkout | 923
1686837833000 | 200 | /api/products | 134
1686837834000 | 404 | /api/users | 92
Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    groupBy(status_code, function=[])

    Groups events by unique values in the status_code field. The empty function array (function=[]) prevents automatic counting.

    This approach helps conserve memory while identifying the unique status codes in the events.

  3. Event Result set.

Summary and Results

The query is used to identify unique field values (in this case different status codes) while minimizing memory usage.

This query is useful, for example, to quickly discover unique values in large event sets and support initial data exploration before detailed analysis.

For other examples with groupBy(), see groupBy() Syntax Examples.

Sample output from the incoming example data:

status_code
200
404
500

Group First Events by Log Level

Limit and group events using head() and groupBy() functions

Query
logscale
head(limit=10)
groupBy(loglevel)
Introduction

In this example, the head() function is used to limit the result set to the first 10 events, which are then grouped by their log level using the groupBy() function.

Example incoming data might look like this:

@timestamp | loglevel | service | message | status_code
2025-09-01T10:00:00Z | ERROR | authentication | Failed login attempt | 401
2025-09-01T10:00:05Z | INFO | authentication | Successful login | 200
2025-09-01T10:00:10Z | ERROR | database | Connection timeout | 503
2025-09-01T10:00:15Z | WARN | api | Rate limit approaching | 429
2025-09-01T10:00:20Z | ERROR | authentication | Invalid token | 401
2025-09-01T10:00:25Z | INFO | api | Request processed | 200
2025-09-01T10:00:30Z | DEBUG | database | Query executed | 200
2025-09-01T10:00:35Z | ERROR | api | Internal error | 500
2025-09-01T10:00:40Z | INFO | authentication | User logout | 200
2025-09-01T10:00:45Z | WARN | database | High CPU usage | 200
2025-09-01T10:00:50Z | DEBUG | api | Cache hit | 200
2025-09-01T10:00:55Z | ERROR | authentication | Session expired | 401
Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    head(limit=10)

    Returns the first 10 events from the dataset. The limit parameter explicitly specifies the number of events to return. The events are returned in the order they were received, starting from the oldest event in the time range.

  3. logscale
    groupBy(loglevel)

    Groups the events by the values in the loglevel field. The groupBy() function creates buckets for each unique value and counts the number of events in each bucket. By default, it creates a field named _count containing the number of events in each group.

  4. Event Result set.

Summary and Results

The query is used to analyze the distribution of log levels across the first 10 events in the dataset. If head(limit=100) had been used, it would have returned the first 100 events.

This query is useful, for example, to quickly assess the proportion of different log levels in a sample of events or to identify if there is an unusual distribution of log severities.

Sample output from the incoming example data:

loglevel | _count
ERROR | 5
INFO | 3
WARN | 2
DEBUG | 2

Note that the output shows the count of events for each log level found within the first 10 events, providing a quick overview of the log level distribution in the sample.

Group HTTP Methods and Count Status Codes

Analyze HTTP traffic patterns using nested groupBy() function

Query
logscale
groupBy(method, function=[count(as=method_total),
        groupBy(statuscode, function=count(as=method_status_count))])
Introduction

In this example, the groupBy() function is used to analyze HTTP traffic patterns by grouping requests first by HTTP method and then by status code, providing counts at both levels.

Example incoming data might look like this:

@timestamp | method | statuscode | path | bytes
2025-08-06T10:00:00Z | GET | 200 | /index.html | 1024
2025-08-06T10:00:01Z | POST | 201 | /api/users | 512
2025-08-06T10:00:02Z | GET | 404 | /missing.html | 256
2025-08-06T10:00:03Z | GET | 200 | /about.html | 768
2025-08-06T10:00:04Z | POST | 400 | /api/users | 128
2025-08-06T10:00:05Z | PUT | 200 | /api/users/1 | 896
2025-08-06T10:00:06Z | GET | 200 | /contact.html | 645
2025-08-06T10:00:07Z | POST | 201 | /api/orders | 789
2025-08-06T10:00:08Z | GET | 404 | /old-page.html | 234
2025-08-06T10:00:09Z | DELETE | 204 | /api/users/2 | 0
Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    groupBy(method, function=[count(as=method_total),
            groupBy(statuscode, function=count(as=method_status_count))])

    Groups events first by the method field and performs two functions:

    • Counts the total number of events for each HTTP method using count() and returns the result in a new field named method_total.

    • Creates a nested grouping by statuscode within each method group, counting occurrences using count() and returns the result in a new field named method_status_count.

  3. Event Result set.

Summary and Results

The query is used to analyze HTTP traffic patterns by providing a hierarchical view of request methods and their associated status codes.

This query is useful, for example, to identify patterns in API usage, detect potential issues with specific HTTP methods, or monitor the distribution of success and error responses across different request types.

Sample output from the incoming example data:

method | method_total | statuscode | method_status_count
GET | 5 | 200 | 3
GET | 5 | 404 | 2
POST | 3 | 201 | 2
POST | 3 | 400 | 1
PUT | 1 | 200 | 1
DELETE | 1 | 204 | 1

Note that the output shows both the total count per method (method_total) and the breakdown of status codes (method_status_count) within each method, providing a comprehensive view of the HTTP traffic distribution.

This data would be effectively visualized using a Sankey diagram widget to show the flow from HTTP methods to status codes, or a nested pie chart to display the distribution.

Group HTTP Methods and Status Codes Using Nested groupBy()

Analyze HTTP traffic patterns by method and status code using the groupBy() function

Query
logscale
groupBy(method, function=[count(as=method_total), groupBy(statuscode, function=count(as=method_status_count))])
Introduction

In this example, the groupBy() function is used to analyze HTTP traffic patterns by grouping requests first by HTTP method and then by status code within each method.

Example incoming data might look like this:

@timestamp | method | statuscode | path | bytes
2025-08-06T10:00:00Z | GET | 200 | /index.html | 1024
2025-08-06T10:00:01Z | POST | 201 | /api/users | 512
2025-08-06T10:00:02Z | GET | 404 | /missing.html | 256
2025-08-06T10:00:03Z | GET | 200 | /about.html | 768
2025-08-06T10:00:04Z | POST | 400 | /api/users | 128
2025-08-06T10:00:05Z | PUT | 200 | /api/users/1 | 384
2025-08-06T10:00:06Z | GET | 200 | /contact.html | 896
2025-08-06T10:00:07Z | DELETE | 204 | /api/users/2 | 0
2025-08-06T10:00:08Z | GET | 500 | /error.html | 1024
2025-08-06T10:00:09Z | POST | 201 | /api/orders | 756
Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    groupBy(method, function=[count(as=method_total), groupBy(statuscode, function=count(as=method_status_count))])

    Groups events by the method field and performs two aggregations:

    • Counts total events for each HTTP method using count(), and returns the result in a new field named method_total.

    • Creates a nested grouping by statuscode within each method group, counting occurrences using count() and returns the result in a new field named method_status_count.

  3. Event Result set.

Summary and Results

The query is used to analyze HTTP traffic patterns by providing a hierarchical view of request methods and their associated status codes.

This query is useful, for example, to identify patterns in API usage, detect potential issues with specific HTTP methods, or monitor the distribution of success and error responses across different request types.

Sample output from the incoming example data:

method | method_total | statuscode | method_status_count
GET | 5 | 200 | 3
GET | 5 | 404 | 1
GET | 5 | 500 | 1
POST | 3 | 201 | 2
POST | 3 | 400 | 1
PUT | 1 | 200 | 1
DELETE | 1 | 204 | 1

Note that the output shows the total count for each HTTP method in method_total and a breakdown of status codes and their counts within each method in method_status_count.

This data is well-suited for visualization using a Sankey diagram widget, which can effectively show the flow from HTTP methods to status codes.

List All EC2 Hosts With FirstSeen Data Within 14 Days

List all the EC2 hosts with FirstSeen data within 14 days using the groupBy() function with selectLast()

Query
logscale
#repo=sensor_metadata #data_source_name=aidmaster cloud.provider = "AWS_EC2_V2"
| groupBy([aid], function=(selectLast([event_platform, aid, ComputerName, AgentVersion, FirstSeen])), limit=max)
| FirstSeen := formatTime("%FT%T%z", field=FirstSeen)
| TimeDelta := now() - duration("14d")
Introduction

In this example, the groupBy() function is used with selectLast() to retrieve the latest information about AWS EC2 instances running CrowdStrike sensors, showing their platform, hostname, agent version, and when they were first seen, with a 14-day reference point for age comparison.

Example incoming data (raw data in sensor_metadata) might look like this:

@timestamp | aid | cloud.provider | event_platform | ComputerName | AgentVersion | FirstSeen
2025-05-20T10:00:00Z | 1234abcd | AWS_EC2_V2 | Windows | ec2-web-01 | 6.45.15678 | 2025-01-15T08:30:00Z
2025-05-21T11:00:00Z | 1234abcd | AWS_EC2_V2 | Windows | ec2-web-01 | 6.45.15679 | 2025-01-15T08:30:00Z
2025-05-22T12:00:00Z | 5678efgh | AWS_EC2_V2 | Linux | ec2-app-02 | 6.45.15678 | 2025-02-01T14:45:00Z
2025-05-23T13:00:00Z | 5678efgh | AWS_EC2_V2 | Linux | ec2-app-02 | 6.45.15679 | 2025-02-01T14:45:00Z
2025-05-24T14:00:00Z | 90123ijk | AWS_EC2_V2 | Windows | ec2-db-03 | 6.45.15678 | 2025-03-10T09:15:00Z
2025-05-25T15:00:00Z | 90123ijk | AWS_EC2_V2 | Windows | ec2-db-03 | 6.45.15679 | 2025-03-10T09:15:00Z
Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    #repo=sensor_metadata #data_source_name=aidmaster cloud.provider = "AWS_EC2_V2"

    Searches in the sensor_metadata repository, and filters for #data_source_name fields containing the value aidmaster, looking for cloud.provider of the type AWS_EC2_V2 only.

  3. logscale
    | groupBy([aid], function=(selectLast([event_platform, aid, ComputerName, AgentVersion, FirstSeen])), limit=max)

    Groups results by the field aid (Agent ID). Then, for each unique group, selects the most recent values for the fields: event_platform, aid, ComputerName, AgentVersion, FirstSeen.

    Using the selectLast() within the groupBy() is what actually selects the most recent record for each group.

  4. logscale
    | FirstSeen := formatTime("%FT%T%z", field=FirstSeen)

    Formats the timestamp in the FirstSeen field into ISO 8601 format. The result is stored back in the FirstSeen field.

  5. logscale
    | TimeDelta := now() - duration("14d")

    Calculates timestamp from 14 days ago, and returns the results into a new field named TimeDelta. The calculation is done by subtracting a 14-day duration from the current time using duration().

    This new TimeDelta field, which represents a timestamp from 14 days ago, can be used for filtering or comparing against the FirstSeen timestamps.

  6. Event Result set.

Summary and Results

The query is used to retrieve the latest information about AWS EC2 instances running CrowdStrike sensors, showing their platform, hostname, agent version, and when they were first seen, with a 14-day reference point for age comparison.

The query is useful, for example, for auditing EC2 instance coverage, identifying newly added EC2 instances within the last two weeks, monitoring sensor versions or identifying aging or outdated installations.

Sample output from the incoming example data:

aid | event_platform | ComputerName | AgentVersion | FirstSeen | TimeDelta
1234abcd | Windows | ec2-web-01 | 6.45.15679 | 2025-01-15T08:30:00+0000 | 2025-05-12T13:06:56+0000
5678efgh | Linux | ec2-app-02 | 6.45.15679 | 2025-02-01T14:45:00+0000 | 2025-05-12T13:06:56+0000
90123ijk | Windows | ec2-db-03 | 6.45.15679 | 2025-03-10T09:15:00+0000 | 2025-05-12T13:06:56+0000

Each aid appears only once with its most recent values. Note that the TimeDelta value is based on the current date provided (Mon, 26 May 2025 13:06:56 GMT).