Determines the number of bytes that this event internally uses in disk storage for the values, not counting the bytes used to store the field names. This measure does not include the RAM usage of an event during a query; events produced by an aggregation are not stored on disk and therefore have a size of zero.
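For instance, the stored size of each event can be exposed under a custom field name and used directly in a filter. A minimal sketch (the field name storedBytes and the 1024-byte threshold are illustrative, not required names):

logscale
eventSize(as=storedBytes)
| storedBytes > 1024

This annotates each event with its stored size in bytes and keeps only events larger than 1024 bytes.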

Parameter | Type | Required | Default Value | Description
as | string | optional[a] | _eventSize | Name of output field.

[a] Optional parameters use their default value unless explicitly set.

Note

The eventSize() function must be used before any aggregate function; otherwise the event size will be returned as zero, because aggregated events have no disk storage.
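For example, placing eventSize() first captures the stored size before aggregation, so the aggregate operates on meaningful values (a sketch; max() is just one possible aggregate):

logscale
eventSize()
| max(_eventSize)

If the order were reversed, eventSize() would measure the aggregated event, which has no disk storage and therefore a size of zero.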

eventSize() Examples


Search For Events by Size in Repository

Search for events of a certain size in a repository using the eventSize() function

Query
logscale
eventSize()
| _eventSize > 10000
Introduction

The eventSize() function is used to search for events based on their internal disk storage usage. The function augments the event data with the event size information.

Example incoming data might look like this:

@timestamp | message | user | ip_address
2025-10-31T10:00:00.000Z | Short log message | alice | 192.168.1.100
2025-10-31T10:01:00.000Z | Very long detailed error message with stack trace: Error at line 1234\nStack trace:\ncom.example.Class.method(Class.java:100)\ncom.example.OtherClass.otherMethod(OtherClass.java:200)\ncom.example.MainClass.main(MainClass.java:300)\nCaused by: java.lang.NullPointerException\nat com.example.Class.method(Class.java:100) | bob | 192.168.1.101
2025-10-31T10:02:00.000Z | Medium length message with some details about user activity and system status | charlie | 192.168.1.102
2025-10-31T10:03:00.000Z | Another very long message containing detailed system metrics: CPU usage: 85%, Memory: 16GB used of 32GB total, Disk usage: 75% on /dev/sda1, Network: IN=1.2GB/s OUT=800MB/s, Active connections: 1250, Thread count: 500, Active users: 3500, Cache hit ratio: 95%, Database connections: 100/150 | david | 192.168.1.103
2025-10-31T10:04:00.000Z | Brief status update | eve | 192.168.1.104

Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    eventSize()

    Determines the number of bytes that events internally use in disk storage for the values (not counting the bytes for storing the field names), and returns the results in a field named _eventSize.

  3. logscale
    | _eventSize > 10000

    Searches for events that take up more than 10000 bytes of internal disk storage. Notice that this comparison cannot be done on its own: eventSize() must run first, because the function augments each event with its size information in the _eventSize field rather than returning data directly.

  4. Event Result set.

Summary and Results

The query is used to get an overview of the disk storage usage of the different events and, in this example, to filter on the largest ones. High disk storage usage can cause performance issues, depending on the time range searched.

Sample output from the incoming example data:

message | user | ip_address | _eventSize
Very long detailed error message with stack trace: Error at line 1234\nStack trace:\ncom.example.Class.method(Class.java:100)\ncom.example.OtherClass.otherMethod(OtherClass.java:200)\ncom.example.MainClass.main(MainClass.java:300)\nCaused by: java.lang.NullPointerException\nat com.example.Class.method(Class.java:100) | bob | 192.168.1.101 | 12500
Another very long message containing detailed system metrics: CPU usage: 85%, Memory: 16GB used of 32GB total, Disk usage: 75% on /dev/sda1, Network: IN=1.2GB/s OUT=800MB/s, Active connections: 1250, Thread count: 500, Active users: 3500, Cache hit ratio: 95%, Database connections: 100/150 | david | 192.168.1.103 | 11200

Note that only events with an _eventSize greater than 10000 bytes are included in the results. The _eventSize field shows the internal storage size in bytes for each event.
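A related pattern (a sketch, not part of the example above) is to rank events by size instead of filtering on a fixed threshold, for instance to list the ten largest events:

logscale
eventSize()
| sort(_eventSize, order=desc, limit=10)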

Track Event Size Within a Repository

Calculate the event size and report the relative size statistics for each event using the eventSize() function

Query
logscale
eventSize(as=eventSize)
| timeChart(function=[max(eventSize),percentile(field=eventSize,percentiles=[50,75,90,99])])
Introduction

This query shows how statistical information about events can first be determined, and then converted into a graph that shows the relative sizes.

Example incoming data might look like this:

@timestamp | message | user | ip_address
2025-10-31T10:00:00.000Z | Short log message | alice | 192.168.1.100
2025-10-31T10:01:00.000Z | Very long detailed error message with stack trace | bob | 192.168.1.101
2025-10-31T10:02:00.000Z | Medium length message with details | charlie | 192.168.1.102
2025-10-31T10:03:00.000Z | Another very long message with metrics | david | 192.168.1.103
2025-10-31T10:04:00.000Z | Brief status | eve | 192.168.1.104
2025-10-31T10:05:00.000Z | Standard length log entry | frank | 192.168.1.105
2025-10-31T10:06:00.000Z | Extensive system report with details | grace | 192.168.1.106
2025-10-31T10:07:00.000Z | Quick update | henry | 192.168.1.107
2025-10-31T10:08:00.000Z | Detailed performance metrics and analysis | ivan | 192.168.1.108
2025-10-31T10:09:00.000Z | System notification | julia | 192.168.1.109
Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    eventSize(as=eventSize)

    Extracts the information about the size of each individual event using the eventSize() function.

  3. logscale
    | timeChart(function=[max(eventSize),percentile(field=eventSize,percentiles=[50,75,90,99])])

    Calculates the percentile() for the eventSize field, determining the event sizes below which 50%, 75%, 90%, and 99% of the overall event set fall, then finds the maximum size for the specified field over the set of events, and displays the returned results in a timechart.

  4. Event Result set.

Summary and Results

The query is used to show how statistical information about events can first be determined, and then converted into a graph that shows the relative sizes.

Sample output from the incoming example data:

_bucket | _max | _50 | _75 | _90 | _99
1698743400000 | 12500.0000000000000 | 3200.4022591419678 | 5800.0773414212163 | 8900.077513854012 | 11200.634566556551
1698743460000 | 11800.0000000000000 | 3100.6808775630843 | 5600.4994371767473 | 8700.077513854012 | 11000.634566556551
1698743520000 | 12200.0000000000000 | 3300.7503431765824 | 5900.4994371767473 | 9100.077513854012 | 11500.586660467782

Note that the output shows the maximum event size (_max) and different percentiles (_50, _75, _90, _99) for the events in each time bucket. The _bucket field contains epoch timestamps in milliseconds representing the start of each time interval.
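Beyond time-based percentiles, the same size field can feed other aggregations. As an illustrative sketch (assuming the events carry a user field, as in the sample data), the average stored size per user could be computed with:

logscale
eventSize(as=eventSize)
| groupBy(user, function=avg(eventSize))

This can help identify which sources produce unusually large events.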