Draws a Time Chart where the x-axis is time. Time is grouped into buckets.
Parameter | Type | Required | Default Value | Description |
---|---|---|---|---|
buckets | number | optional[a] | | Defines the number of buckets. The time span is defined by splitting the query time interval into this many buckets. Minimum: 1, Maximum: 1500. |
function | array of aggregate functions | optional[a] | count() | Specifies which aggregate functions to perform on each group. If several aggregators are listed for the function parameter, their outputs are combined using the rules described for stats(). |
limit | number | optional[a] | 10 | Defines the maximum number of series to produce. A warning is produced if this limit is exceeded, unless the parameter is specified explicitly. Maximum: 500. |
minSpan | string | optional[a] | | Determines the minimum span or size of the buckets that can be produced by timeChart(): for example, if set to 5h, a query duration of 1 day (24 hours) can only be split into 5 buckets, with the last bucket covering an additional hour into the future. Relative Time Syntax values are valid values for this parameter. |
series[b] | string | optional[a] | | Each value in the field specified by this parameter becomes a series on the graph. |
span | string | optional[a] | auto | Defines the time span for each bucket. The time span is defined as a Relative Time Syntax value such as 1hour or 3 weeks. If not provided or set to auto, the search time interval (and thus the number of buckets) is determined dynamically. |
timezone | string | optional[a] | | Defines the time zone for bucketing. This value overrides timeZoneOffsetMinutes, which may be passed in the HTTP/JSON query API. For example: timezone=UTC or timezone='+02:00'. |
unit | string | optional[a] | No conversion | Each value is a unit conversion for the given column. For instance: bytes/span to Kbytes/day converts a sum of bytes into Kbytes/day, automatically taking the time span into account. If present, this array must be either length 1 (applying to all series) or have the same length as the function parameter. See the reference at Relative Time Syntax. |

[a] Optional parameters use their default value unless explicitly set.
[b] Omitted Argument Names: The argument name for series can be omitted; the following forms of this function are equivalent:

```logscale
timeChart("value")
```

and:

```logscale
timeChart(series="value")
```
These examples show basic structure only.
timeChart() Syntax Examples
Show the number of events per hour over the last 24 hours. We do this by selecting to search over the last 24 hours in the time selector in the UI, and then we tell the function to make each time bucket one hour long (with span=1h):

```logscale
timeChart(span=1h, function=count())
```
The above creates 24 time buckets when we search over the last 24 hours, and all searched events get sorted into groups depending on the bucket they belong to (based on their @timestamp value). When all events have been divided up by time, the count() function is run on each group, giving us the number of events per hour.

Instead of counting all events together, you can also count different kinds of events. For example, you may want to count different kinds of HTTP methods used for requests in the logs. If those are stored in a field named method, you can use this field as a series:

```logscale
timeChart(span=1h, function=count(), series=method)
```

Instead of having one group of events per time bucket (as in the previous example), we will now get multiple groups: one group for every value of method that exists in the timespan we're searching in. So if we are still searching over a 24 hour period, and we have received only GET, PUT, and POST requests in that timespan, we will get three groups of events per bucket (because we have three different values for method). This means we end up with 72 groups of events, and every group contains only events which correspond to some time bucket and a specific value of method. Then count() is run on each of these groups, to give us the number of GET events per hour, PUT events per hour, and POST events per hour.

Figure 114. Counting Events Divided Into Buckets
Show the number of different HTTP methods by dividing events into time buckets of 1 minute and counting the HTTP methods (GET, POST, PUT, etc.). As in the previous example, the timechart will have a line for each HTTP method:

```logscale
timeChart(span=1min, series=method, function=count())
```
Use the number of buckets (instead of the time span) to show the number of different HTTP methods over time:

```logscale
timeChart(buckets=1000, series=method, function=count())
```
Get a graph with the response time percentiles:

```logscale
timeChart(function=percentile(field=responsetime, percentiles=[50, 75, 90, 99, 99.9]))
```
Use an array of functions in function to get a graph with the response time average as well as the percentiles:

```logscale
timeChart(function=[avg(responsetime), percentile(field=responsetime, percentiles=[50, 75, 90, 99, 99.9])])
```
Use Coda Hale metrics to print rates of various events once per minute. Such lines include 1-minute average rates m1=N, where N is some number. This example displays all such meters (which are identified by the field name), converting the rates from events/sec to Ki/day:

```logscale
type=METER rate_unit=events/second
| timeChart(name, function=avg(m1), unit="events/sec to Ki/day", span=5m)
```
Upon completion of every LogScale request, we issue a log entry which (among other things) prints the size=N of the result. When summing such sizes you would need to be aware of the span, but using a unit conversion, we can display the number in Mbytes/hour, and the graph will be agnostic to the span:

```logscale
timeChart(function=sum(size), unit="bytes/bucket to Mbytes/hour", span=30m)
```
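When several aggregate functions are given, the unit parameter can be an array of the same length, pairing each conversion with the corresponding function. A hedged sketch of that form (the fields bytesIn and bytesOut are hypothetical, not from this reference):

```logscale
timeChart(function=[sum(bytesIn), sum(bytesOut)],
          unit=["bytes/bucket to Mbytes/hour", "bytes/bucket to Mbytes/hour"],
          span=30m)
```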
timeChart() Examples
Calculate Ingest Queue Compression
Determine the ingest queue compression size
Query

```logscale
#type=humio #kind=metrics
| name=/^ingest-writer-(?<un>un)?compressed-bytes$/
| case {
    un=* | un := m1;
    comp := m1 }
| timeChart(function=[sum(un,as=un),sum(comp,as=comp)], minSpan=1m)
| ratio:=un/comp
| drop([un,comp])
```
Introduction
This query is used to calculate the ingest queue average compression. A compression ratio expresses the amount of data saved by compressing: a 10x ratio means that 100 GB of data is compressed down to 10 GB. The ratio is calculated by dividing the initial data size by the compressed data size, for example 100/10.
Step-by-Step
Starting with the source repository events.
- logscale
  ```
  #type=humio #kind=metrics
  ```
  Filters for all humio records that are tagged as metrics within the cluster.

- logscale
  ```
  | name=/^ingest-writer-(?<un>un)?compressed-bytes$/
  ```
  Filters for events where the name field starts with ingest-writer and ends with compressed-bytes. The regular expression capture creates a new field named un when the data is uncompressed, that is, when the name contains 'uncompressed-bytes'.

- logscale
  ```
  | case { un=* | un := m1; comp := m1 }
  ```
  Creates two fields using the same number from the m1 field: un contains the uncompressed values, and comp contains the compressed values. This is achieved by using a case statement to look for the un field created in the previous step. In each case the value of the resultant field is the value of the m1 field, which is the size of the compressed or uncompressed data.

- logscale
  ```
  | timeChart(function=[sum(un,as=un),sum(comp,as=comp)], minSpan=1m)
  ```
  Shows the calculated sum of the values in the fields un and comp in buckets of at least 1 minute (minSpan=1m) in a timechart. This shows the comparison between the compressed and uncompressed data, since the incoming data is reported in the humio repo in pairs of events.

- logscale
  ```
  | ratio:=un/comp
  ```
  Calculates the compression ratio by dividing the sum in the un field by the sum in the comp field.

- logscale
  ```
  | drop([un,comp])
  ```
  Discards the un and comp fields from the results.
Event Result set.
Summary and Results
The query is used to calculate the ingest queue average compression by taking the ratio of the sums of two fields. Using the right compression method is vital for reducing network traffic, CPU, and memory usage.
Calculate a Percentage of Successful Status Codes Over Time
Query
```logscale
| success := if(status >= 500, then=0, else=1)
| timeChart(series=customer, function=[
    { [sum(success,as=success), count(as=total)]
    | pct_successful := (success/total)*100
    | drop([success,total]) }
  ], span=15m, limit=100)
```
Introduction
Calculate a percentage of successful status codes inside the timeChart() function field.
Step-by-Step
Starting with the source repository events.
- logscale
  ```
  | success := if(status >= 500, then=0, else=1)
  ```
  Adds a success field based on the following condition: if the value of the status field is greater than or equal to 500, the value of success is set to 0, otherwise to 1.

- logscale
  ```
  | timeChart(series=customer, function=[ { [sum(success,as=success), count(as=total)]
  ```
  Creates a new timechart, generating a new series, customer, that uses a compound function. In this example, the embedded function is generating an array of values, but the array values are generated by an embedded aggregate. The embedded aggregate (defined using the {} syntax) creates a sum() and count() value across the events grouped by the value of the success field generated from the filter query. This counts the 1 or 0 generated by the if() function, counting all the values and adding up the ones for successful values. These values are assigned to the success and total fields. Note that at this point we are still within the aggregate, so the two new fields are within the context of the aggregate, with each field being created for a corresponding success value.

- logscale
  ```
  | pct_successful := (success/total)*100
  ```
  Calculates the percentage that are successful. We are still within the aggregate, so the output of this step is an embedded set of events with the total and success values grouped by each original HTTP response code.

- logscale
  ```
  | drop([success,total]) } ], span=15m, limit=100)
  ```
  Still within the embedded aggregate, drops the total and success fields from the array generated by the aggregate. These fields were temporary, used to calculate the percentage of successful results, and are not needed in the array for generating the result set. Then sets a span of 15 minutes for the buckets and limits the output to 100 series overall.
Event Result set.
Summary and Results
This query shows how an embedded aggregate can be used to generate a sequence of values that can be formatted (in this case to calculate percentages) and generate a new event series for the aggregate values.
Call Named Function on a Field - Example 2
Calls the named function (count()) on a field over a set of events.
Query
```logscale
timeChart(function=[callFunction(?{function=count}, field=value)])
```
Introduction
In this example, the callFunction() function is used to call the named function (count()) on a field over a set of events using the query parameter ?function.
Step-by-Step
Starting with the source repository events.
- logscale
  ```
  timeChart(function=[callFunction(?{function=count}, field=value)])
  ```
  Counts the events in the value field, and displays the results in a timechart. Notice how the query parameter ?function is used to select the aggregation function for timeChart().

Event Result set.
Summary and Results
The query is used to count events and chart them over time. Because we are using callFunction(), it could be a different function based on the dashboard parameter.

Using a query parameter (for example, ?function) to select the aggregation function for a timeChart() is useful for dashboard widgets. Using callFunction() allows for using a function based on the data or a dashboard parameter instead of writing the query directly.
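As an illustration, the same pattern can point the parameter default at a different aggregate; a hedged sketch (avg() is only an example of another function the parameter could name):

```logscale
timeChart(function=[callFunction(?{function=avg}, field=value)])
```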
Compute Cumulative Aggregation Across Buckets
Compute a cumulative aggregation across buckets using the accumulate() function with timeChart()

Query

```logscale
timeChart(span=1000ms, function=sum(value))
| accumulate(sum(_sum, as=_accumulated_sum))
```
Introduction
In this example, the accumulate() function is used with timeChart() to accumulate values across time intervals.

Note that the accumulate() function must be used after an aggregator function to ensure event ordering.
Example incoming data might look like this:
@timestamp | key | value |
---|---|---|
1451606301001 | a | 5 |
1451606301500 | b | 6 |
1451606301701 | a | 1 |
1451606302001 | c | 2 |
1451606302201 | b | 6 |
Step-by-Step
Starting with the source repository events.
- logscale
  ```
  timeChart(span=1000ms, function=sum(value))
  ```
  Groups data into 1-second buckets over a 4-second period, sums the field value for each bucket, and returns the results in a field named _sum. The result is displayed in a timechart.

- logscale
  ```
  | accumulate(sum(_sum, as=_accumulated_sum))
  ```
  Calculates a running total of the sums in the _sum field, and returns the results in a field named _accumulated_sum.
Event Result set.
Summary and Results
The query is used to accumulate values across time intervals/buckets. The query is useful for tracking cumulative metrics or identifying trends in the data.
Sample output from the incoming example data:
_bucket | _sum | _accumulated_sum |
---|---|---|
1451606300000 | 0 | 0 |
1451606301000 | 12 | 12 |
1451606302000 | 8 | 20 |
1451606303000 | 0 | 20 |
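The same shape works with other aggregators inside accumulate(); a hedged sketch (not from the reference) that tracks a running maximum of the bucket sums instead of a running total:

```logscale
timeChart(span=1000ms, function=sum(value))
| accumulate(max(_sum, as=_running_max))
```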
Create Time Chart Widget for All Events
Query
```logscale
timeChart(span=1h, function=count())
```
Introduction
The Time Chart Widget is the most commonly used widget in LogScale. It displays bucketed time series data on a timeline. The timeChart() function is used to create time chart widgets; in this example, a timechart that shows the number of events per hour over the last 24 hours. We do this by selecting to search over the last 24 hours in the time selector in the UI, and then we tell the function to make each time bucket one hour long (with span=1h).
Step-by-Step
Starting with the source repository events.
- logscale
  ```
  timeChart(span=1h, function=count())
  ```
  Creates 24 time buckets when we search over the last 24 hours, and all searched events get sorted into groups depending on the bucket they belong to (based on their @timestamp value). When all events have been divided up by time, the count() function is run on each group, giving us the number of events per hour.

Event Result set.
Summary and Results
The query is used to create timechart widgets showing the number of events per hour over the last 24 hours. The timechart shows one group of events per time bucket. When viewing and hovering over the buckets within the time chart, the display will show the precise value and time for the displayed bucket, with the time showing the point where the bucket starts.
Create Time Chart Widget for Different Events
Query
```logscale
timeChart(span=1h, function=count(), series=method)
```
Introduction
The Time Chart Widget is the most commonly used widget in LogScale. It displays bucketed time series data on a timeline. The timeChart() function is used to create time chart widgets; in this example, a timechart that shows the number of the different events per hour over the last 24 hours. For example, you may want to count different kinds of HTTP methods used for requests in the logs. If those are stored in a field named method, you can use this field as a series. Furthermore, we select to search over the last 24 hours in the time selector in the UI, and also add a function to make each time bucket one hour long (with span=1h).
Step-by-Step
Starting with the source repository events.
- logscale
  ```
  timeChart(span=1h, function=count(), series=method)
  ```
  Creates 24 time buckets when we search over the last 24 hours, and all searched events get sorted into groups depending on the bucket they belong to (based on their @timestamp value). When all events have been divided up by time, the count() function is run on each group within each series to return the number of each different kind of event per hour.

Event Result set.
Summary and Results
The query is used to create timechart widgets showing the number of different kinds of events per hour over the last 24 hours. In this example we do not just have one group of events per time bucket, but multiple groups: one group for every value of method that exists in the timespan we are searching in. So if we are still searching over a 24 hour period, and we have received only GET, PUT, and POST requests in that timespan, we will get three groups of events per bucket (because we have three different values for method). Therefore, we end up with 72 groups of events, and every group contains only events which correspond to some time bucket and a specific value of method. Then count() is run on each of these groups, to give us the number of GET events per hour, PUT events per hour, and POST events per hour. When viewing and hovering over the buckets within the time chart, the display will show the precise value and time for the displayed bucket, with the time showing the point where the bucket starts.
Make Data Compatible with Time Chart Widget - Example 1
Make data compatible with Time Chart Widget using the timeChart() function with window() and the span parameter
Query
```logscale
timeChart(host, function=window( function=avg(cpu_load), span=15min))
```
Introduction
In this example, the timeChart() function is used to create the required input format for the Time Chart Widget, and the window() function is used to compute the running aggregate (avg()) for the cpu_load field over a sliding window of data in the time chart. The span width, for example 15 minutes, is defined by the span parameter. This defines the duration over which the average of the input data is calculated: the average value over 15 minutes. The number of buckets created will depend on the time interval of the query; a 2 hour time interval would create 8 buckets.
Step-by-Step
Starting with the source repository events.
- logscale
  ```
  timeChart(host, function=window( function=avg(cpu_load), span=15min))
  ```
  Groups by host, and calculates the average CPU load per 15 minutes over the last 24 hours for each host, displaying the results in a Time Chart Widget. The running average of CPU load is grouped into spans of 15 minutes. Note that the time interval of the query must be larger than the window span to produce any result.
Event Result set.
Summary and Results
Selecting the number of buckets or the timespan of each bucket enables you to show a consistent view either by time or by number of buckets independent of the time interval of the query. For example, the widget could show 10 buckets whether displaying 15 minutes or 15 days of data; alternatively the display could always show the data for each 15 minutes.
The query is used to make CPU load data compatible with the Time Chart Widget. This query is, for example, useful for CPU load monitoring to identify sustained high CPU usage over specific time periods.
For an example of dividing the input data by the number of buckets, see Make Data Compatible with Time Chart Widget - Example 2.
Make Data Compatible with Time Chart Widget - Example 2
Make data compatible with Time Chart Widget using the timeChart() function with window() and the buckets parameter
Query
```logscale
timeChart(host, function=window( function=[avg(cpu_load), max(cpu_load)], buckets=3))
```
Introduction
In this example, the window() function uses the number of buckets to calculate average and maximum CPU load. The timespan for each bucket will depend on the time interval of the query. The number of buckets is defined by the buckets parameter. The timeChart() function is used to create the required input format for the Time Chart Widget.
The query calculates both average AND maximum values across the requested timespan. In this example, the number of buckets is specified, so the events will be distributed across the specified number of buckets using a time span calculated from the time interval of the query. For example, a 15 minute time interval with 3 buckets would use a timespan of 5 minutes per bucket.
Step-by-Step
Starting with the source repository events.
- logscale
  ```
  timeChart(host, function=window( function=[avg(cpu_load), max(cpu_load)], buckets=3))
  ```
  Groups by host, and calculates both the average and the maximum CPU load (using the aggregates avg() and max() on the cpu_load field), displaying the results in 3 buckets as a stacked graph for each host using a Time Chart Widget.

Event Result set.
Summary and Results
Selecting the number of buckets or the timespan of each bucket enables you to show a consistent view either by time or by number of buckets independent of the time interval of the query. For example, the widget could show 10 buckets whether displaying 15 minutes or 15 days of data; alternatively the display could always show the data for each 15 minutes.
The query is used to make CPU load data compatible with the Time Chart Widget. This query is, for example, useful for CPU load monitoring to compare intervals, compare hourly performance etc.
For an example of dividing the input data by the timespan of each bucket, see Make Data Compatible with Time Chart Widget - Example 1.
Match Field to Timespan
Match a field to a timespan using the eval() function with timeChart()
Query
```logscale
timechart(method, span=5min)
| eval(_count=_count/5)
```
Introduction
In this example, the eval() function is used with timeChart() to match a field to the timespan, dividing the count by 5 to convert from a 5-minute count to a per-minute count.
Step-by-Step
Starting with the source repository events.
- logscale
  ```
  timechart(method, span=5min)
  ```
  Creates a timechart based on the values of the method field, and groups data into 5 minute buckets (span=5min). By default, it counts events in each bucket and returns the result in a field named _count.

- logscale
  ```
  | eval(_count=_count/5)
  ```
  Divides the count by 5 to convert from a 5-minute count to a per-minute count, and returns the new value in the _count field. This approach is useful when you want to display per-minute rates but also want to benefit from the reduced data points and improved performance of larger time buckets.
Event Result set.
Summary and Results
The query is used to match a field to a timespan. It summarizes the count into 5-minute blocks and then displays those using the timeChart() span parameter to display the value in those increments. The eval() function then divides each 5-minute count by 5 to provide a per-minute value for each 5-minute timespan. You can, for example, use it to test a complex function or expression with different inputs and quickly check the output in the returned values.
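The same technique generalizes to other spans. A hedged sketch, assuming the same method field, that converts hourly counts into per-minute rates:

```logscale
timechart(method, span=1h)
| eval(_count=_count/60)
```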
Parsers Throttling
Query
```logscale
#kind=logs class=/ParserLimitingJob/ "Top element for parser id"
| pct:=100*costSum/threshold
| timeChart(function=max(pct), minSpan=10s)
```
Introduction
Throttling is used to maintain the optimal performance and reliability of the system, as throttling limits the number of API calls or operations within a time window to prevent the overuse of resources.
In this example, the timeChart() function is used to show how close (in percentage) the system has come to throttling any parser.
Step-by-Step
Starting with the source repository events.
- logscale
  ```
  #kind=logs class=/ParserLimitingJob/ "Top element for parser id"
  ```
  Filters on all logs in humio that are tagged with kind equal to logs, and then returns the events where the class field has values containing ParserLimitingJob and where the logline contains the string Top element for parser id.

- logscale
  ```
  | pct:=100*costSum/threshold
  ```
  Calculates the value of the costSum field as a percentage of the threshold field and returns the result in a new field named pct.

- logscale
  ```
  | timeChart(function=max(pct), minSpan=10s)
  ```
  Displays the maximum value of the pct field per bucket, with buckets of at least 10 seconds (minSpan=10s), in a timechart.
Event Result set.
Summary and Results
The query is used to show how close (in percentage) the system has come to throttling any parser.
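To focus only on the moments that approach throttling, the query could be extended with a filter; a hedged sketch, where the 80 percent threshold is an arbitrary illustration:

```logscale
#kind=logs class=/ParserLimitingJob/ "Top element for parser id"
| pct:=100*costSum/threshold
| pct > 80
| timeChart(function=max(pct), minSpan=10s)
```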
Rounding Within a Timechart
Round down a number in a field and display the information in a timechart using the round() and timeChart() functions
Query
```logscale
timeChart(function={max(value) | round(_max, how=floor)})
```
Introduction
In this example, the round() function is used with the how parameter set to floor to round a field value down to an integer (whole number) and display the information within a timechart.
Step-by-Step
Starting with the source repository events.
- logscale
  ```
  timeChart(function={max(value) | round(_max, how=floor)})
  ```
  Creates a time chart using max() as the aggregate function on a field named value to find the highest value in each time bucket, and returns the result in a field named _max. Then rounds the implied field _max from the aggregate max() using the floor option to round the value down. Example of original _max values: 10.8, 15.3, 20.7. After floor: 10, 15, 20.

Event Result set.
Summary and Results
The query is used to round down maximum values over time to nearest integer (whole value). This is useful when displaying information in a time chart. Rounding to nearest integer will make it easier to distinguish the differences between values when used on a graph for time-based visualization. The query simplifies the data presentation.
Note
To round to a specific decimal accuracy, the format() function must be used.
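A hedged sketch of that alternative, assuming the same value field, rounding the maximum to two decimal places with format():

```logscale
timeChart(function={max(value) | format("%.2f", field=[_max], as=_max)})
```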
S3 Archiving Backlog
Determine the backlog for an S3 Archiving job to identify tasks affecting merges and potential disk overflow
Query
```logscale
#kind=logs #vhost=* /S3Archiving/i "Backlog for dataspace"
| timeChart(#vhost, function=max(count))
```
Introduction
Falcon LogScale supports S3 archiving set up per repository. This query shows a continuously increasing backlog for the S3 Archiving job. Since an S3 archiving job can postpone merges, archiving ingested logs can result in disk overflow.
Step-by-Step
Starting with the source repository events.
- logscale
  ```
  #kind=logs #vhost=* /S3Archiving/i "Backlog for dataspace"
  ```
  Filters on all logs that carry the #vhost tag, where the logline matches S3Archiving (case-insensitively) and contains the string Backlog for dataspace. This way you can identify the different tasks.

- logscale
  ```
  | timeChart(#vhost, function=max(count))
  ```
  Formats the result in a timechart with one series per #vhost, showing the maximum value of the count field, that is, the backlog of jobs/tasks to be archived, per bucket.
Event Result set.
Summary and Results
The query is used to determine the backlog for an S3 Archiving job in order to identify tasks affecting merges and potential disk overflow.
Show Offline Nodes
Show the list of available nodes currently in an offline state
Query
```logscale
#type=humio #kind=logs class=/ClusterHostAliveStats/ "AliveStats on me"
| age > 7200000 /* =2hours */
| timeChart(hostId, function=count(hostId,distinct=true), limit=50, minSpan=4h)
```
Introduction
"Node Offline" events within LogScale are generated when a node is reported offline by the other nodes in the cluster. This query shows Offline Nodes.
Step-by-Step
Starting with the source repository events.
- logscale
  ```
  #type=humio #kind=logs class=/ClusterHostAliveStats/ "AliveStats on me"
  ```
  Filters on all logs in the humio repository that are tagged with kind equal to logs, and then returns the events where the class field has values containing ClusterHostAliveStats and where the logline contains the string AliveStats on me.

- logscale
  ```
  | age > 7200000 /* =2hours */
  ```
  Returns all events where the value of the field age is greater than 7200000 ms. Notice that this example uses a multi-line comment /* =2hours */ to describe the value, which can be derived by looking at each stage of the calculation:
  ```
  7200000 ms / 1000 = 7200 seconds
  7200 s / 60       = 120 minutes
  120 min / 60      = 2 hours
  ```

- logscale
  ```
  | timeChart(hostId, function=count(hostId,distinct=true), limit=50, minSpan=4h)
  ```
  Counts the distinct hostId values, grouped by the field hostId, limiting the output to 50 series with buckets of at least 4 hours, and displays the results in a Time Chart.

Event Result set.
Summary and Results
The query is used to show a list of available nodes currently in an offline state.