Parses a string into a timestamp.

This function is important for creating parsers, as it is used to parse the timestamp for an incoming event.

addErrors (boolean, optional[a], default: true)
   Whether to add an error field to the event if it was not possible to find a timestamp.

as (string, optional[a], default: @timestamp)
   Name of the output field that will contain the parsed timestamp. The timestamp is represented as milliseconds since 1970 in UTC. LogScale expects to find the timestamp in the field @timestamp, so do not change this when creating parsers.

caseSensitive (boolean, optional[a], default: true)
   Whether the timestamp format pattern is case sensitive. For example, the format LLL will accept Feb but not feb in case-sensitive mode, while both are accepted in case-insensitive mode.
   Values:
      false: Pattern is not case sensitive
      true: Pattern is case sensitive

field (string, required)
   The field holding the timestamp to be parsed.

format[b] (string, optional[a], default: yyyy-MM-dd'T'HH:mm:ss[.SSSSSSSSS]XXXXX)
   Pattern used to parse the timestamp: either a format string as specified in Java's DateTimeFormatter, or one of the special format specifiers (these specifiers are not case sensitive, so MilliSeconds works as well).
   Values:
      millis: Epoch time in milliseconds (UTC)
      milliseconds: Epoch time in milliseconds (UTC)
      nanos: Epoch time in nanoseconds (UTC)
      seconds: Epoch time in seconds (UTC)
      unixTimeMillis: Epoch time in milliseconds (UTC)
      unixTimeSeconds: Epoch time in seconds (UTC)
      unixtime: Epoch time in seconds (UTC)

timezone (string, optional[a])
   If the timestamp does not contain a timezone, it can be specified using this parameter. Examples are Europe/London, America/New_York, and UTC. See the full list of timezones supported by LogScale at Supported Time Zones. Note that if the timestamp does not contain a timezone and none is specified here, an error is generated. If a timezone is specified here and one also exists in the timestamp, this parameter overrides the timezone in the event.

timezoneAs (string, optional[a], default: @timezone)
   Name of the output field that will contain the parsed timezone. LogScale expects to find the timezone in the field @timezone, so do not change this when creating parsers.

[a] Optional parameters use their default value unless explicitly set.

[b] The parameter name format can be omitted.
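
For illustration, assuming a hypothetical field ts holding the raw timestamp, parsing can be made non-fatal by disabling the error field:

logscale
parseTimestamp("yyyy-MM-dd HH:mm:ss", field=ts, timezone="UTC", addErrors=false)

With addErrors=false, events where ts does not match the pattern pass through without an added error field.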


Before parsing the timestamp, the part of the log containing the timestamp should have been captured in a field. Typically this is done during parsing, but the timestamp can also be extracted during queries using functions like regex() and parseJson() before calling parseTimestamp().

The parseTimestamp() function formats times using a subset of Java's DateTimeFormatter.

LogScale also supports some special format strings like seconds, milliseconds, and unixtime (see the description of the format parameter in the parameter table above for a full list of options).

  • unixtimeMillis UTC time since 1970 in milliseconds

  • unixtime UTC time since 1970 in seconds

For the special formats that specify seconds (that is, seconds, unixtime, and unixTimeSeconds), the function also supports specifying milliseconds using floating point numbers.

For example, 1690480444.589 means 2023-07-27 17:54:04 UTC and 589 milliseconds.
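
This floating point form can be sketched as follows, assuming a hypothetical field ts containing the value 1690480444.589:

logscale
parseTimestamp("seconds", field=ts)

The resulting @timestamp holds the epoch time in milliseconds, including the 589 milliseconds from the fractional part.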

If the timestamp is parsed successfully, the function creates a field @timestamp containing the parsed timestamp as milliseconds since 1970 in UTC, and a field @timezone containing the original timezone.
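
Outside of parser creation, the output field names can be redirected with the as and timezoneAs parameters. A sketch, using the hypothetical field ts and illustrative output names eventTime and eventZone:

logscale
parseTimestamp(field=ts, as="eventTime", timezoneAs="eventZone")

When creating parsers, keep the default names @timestamp and @timezone.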

It is possible to parse time formats leaving out the year designator, as is sometimes seen in time formats from Syslog. For example, Mar 15 07:48:13 can be parsed using the format MMM d HH:mm:ss. In this case LogScale will guess the year.

The logic used for guessing the year is as follows: if the date (without a specified year) is in the past, or less than 8 days into the future, the current year is used; otherwise the previous year is used. For example, if the current date is March 10 2025 06:00:00, then the inferred year of Mar 18 00:00:00 is 2025. If the current date is March 7 2025, then the inferred year is 2024.

parseTimestamp() Syntax Examples

Extract a timestamp that is using millisecond precision embedded in a JSON value:

logscale
parseJson()
| parseTimestamp("millis", field=timestamp)

Events having a timestamp in ISO8601 format that includes a timezone offset can be parsed using the default format:

logscale
expiryTime := "2018-09-08 17:51:04.777Z"
| parseTimestamp(field=expiryTime)

Another example is a timestamp like 2017-12-18T20:39:35-04:00:

logscale
/(?<timestamp>\S+)/
| parseTimestamp(field=timestamp)

Parse timestamps in an accesslog where the timestamp includes an explicit timezone offset like 192.168.1.19 [02/Apr/2014:16:29:32 +0200] GET /hello/test/123 ...

logscale
/(?<client>\S+) \[(?<timestamp>.+)\] (?<method>\S+) (?<url>\S+)/
| parseTimestamp("dd/MMM/yyyy:HH:mm:ss Z", field=timestamp)

When parsing a timestamp without a timezone, such as 2015-12-18T20:39:35, you must specify the timezone using the timezone parameter, as shown in the following example:

logscale
parseTimestamp("yyyy-MM-dd'T'HH:mm:ss", field=timestamp, timezone="America/New_York")

Important

If the timestamp does not contain a timezone, then one must be specified using the timezone parameter, otherwise an error is generated.

Parse an event with a timestamp not containing year, like Feb 9 12:22:44 hello world

logscale
/(?<timestamp>\S+\s+\S+\s+\S+)/
| parseTimestamp("MMM [ ]d HH:mm:ss", field=timestamp, timezone="Europe/London")


Bucket Counts When Using bucket()

Query

Search Repository: humio-metrics

logscale
bucket(buckets=24, function=sum("count"))
| parseTimestamp(field=_bucket,format=millis)
Introduction

When generating a list of buckets using the bucket() function, the output always contains one more bucket than the number defined in buckets. The extra bucket accommodates the values that fall outside the given timeframe across the requested number of buckets: because events are bound to the bucket in which they were stored, bucket() selects the buckets for the given time range plus any remainder. For example, when requesting 24 buckets over a period of one day in the humio-metrics repository:

Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    bucket(buckets=24, function=sum("count"))

    Buckets the events into 24 groups, using the sum() function on the count field.

  3. logscale
    | parseTimestamp(field=_bucket,format=millis)

Extracts the timestamp from the generated bucket and converts it to a date-time value; in this example, the bucket outputs the timestamp as an epoch value in the _bucket field.

  4. Event Result set.

Summary and Results

The resulting output shows 25 buckets: the original 24 requested plus one additional bucket that contains all the data after the requested timespan for the requested number of buckets.

_bucket        _sum           @timestamp
1681290000000  1322658945428  1681290000000
1681293600000  1879891517753  1681293600000
1681297200000  1967566541025  1681297200000
1681300800000  2058848152111  1681300800000
1681304400000  2163576682259  1681304400000
1681308000000  2255771347658  1681308000000
1681311600000  2342791941872  1681311600000
1681315200000  2429639369980  1681315200000
1681318800000  2516589869179  1681318800000
1681322400000  2603409167993  1681322400000
1681326000000  2690189000694  1681326000000
1681329600000  2776920777654  1681329600000
1681333200000  2873523432202  1681333200000
1681336800000  2969865160869  1681336800000
1681340400000  3057623890645  1681340400000
1681344000000  3144632647026  1681344000000
1681347600000  3231759376472  1681347600000
1681351200000  3318929777092  1681351200000
1681354800000  3406027872076  1681354800000
1681358400000  3493085788508  1681358400000
1681362000000  3580128551694  1681362000000
1681365600000  3667150316470  1681365600000
1681369200000  3754207997997  1681369200000
1681372800000  3841234050532  1681372800000
1681376400000  1040019734927  1681376400000

Bucket Events Into Groups

Bucket events into 24 groups using the bucket() function and the sum() function

Query
logscale
bucket(buckets=24, function=sum("count"))
| parseTimestamp(field=_bucket,format=millis)
Introduction

In this example, the bucket() function is used to request 24 buckets over a period of one day in the humio-metrics repository.

Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    bucket(buckets=24, function=sum("count"))

    Buckets the events into 24 groups spanning over a period of one day, using the sum() function on the count field.

  3. logscale
    | parseTimestamp(field=_bucket,format=millis)

    Extracts the timestamp from the generated bucket and converts the timestamp to a date time value. In this example, the bucket outputs the timestamp as an epoch value in the _bucket field. This results in an additional bucket containing all the data after the requested timespan for the requested number of buckets.

  4. Event Result set.

Summary and Results

The query is used to optimize data storage and query performance by making it easier to manage and locate data subsets when performing analytics tasks. Note that the resulting output shows 25 buckets: the 24 requested buckets plus one additional bucket for the data after the requested timespan.

Sample output from the incoming example data:

_bucket        _sum           @timestamp
1681290000000  1322658945428  1681290000000
1681293600000  1879891517753  1681293600000
1681297200000  1967566541025  1681297200000
1681300800000  2058848152111  1681300800000
1681304400000  2163576682259  1681304400000
1681308000000  2255771347658  1681308000000
1681311600000  2342791941872  1681311600000
1681315200000  2429639369980  1681315200000
1681318800000  2516589869179  1681318800000
1681322400000  2603409167993  1681322400000
1681326000000  2690189000694  1681326000000
1681329600000  2776920777654  1681329600000
1681333200000  2873523432202  1681333200000
1681336800000  2969865160869  1681336800000
1681340400000  3057623890645  1681340400000
1681344000000  3144632647026  1681344000000
1681347600000  3231759376472  1681347600000
1681351200000  3318929777092  1681351200000
1681354800000  3406027872076  1681354800000
1681358400000  3493085788508  1681358400000
1681362000000  3580128551694  1681362000000
1681365600000  3667150316470  1681365600000
1681369200000  3754207997997  1681369200000
1681372800000  3841234050532  1681372800000
1681376400000  1040019734927  1681376400000

Make Copy of Events

Make an extra copy of the event to be parsed along with the original event using the copyEvent() function

Query
logscale
copyEvent("arrivaltime")
| case { #type=arrivaltime
| @timestamp:=now() ; *
| parseTimestamp(field=ts) }
Introduction

In this example, an event is stored with both the timestamp from the event and a separate stream based on arrival time (assuming the event has a type that is not arrivaltime).

Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    copyEvent("arrivaltime")

    Creates a copy of the current event, and assigns the type arrivaltime to the copied event.

  3. logscale
    | case { #type=arrivaltime

Returns a specific value when the defined condition is met. In this case, it checks whether the event type is arrivaltime, and applies the following assignment only to those events.

  4. logscale
    | @timestamp:=now() ; *

Sets the @timestamp field to the current time now() for all events of the type arrivaltime, and adds the ; separator and * to ensure that all other events are kept unchanged. Because now() is placed after the first aggregate function, it is evaluated continuously and returns the live value of the current system time, which can differ between LogScale nodes.

  5. logscale
    | parseTimestamp(field=ts) }

    As the original events keep the original timestamp, it parses the timestamp from a field named ts for events that are not of type arrivaltime.

  6. Event Result set.

Summary and Results

The query is used to make an extra copy of an event; when parsed, both copies will be visible in the pipeline. The query creates a copy with type arrivaltime and sets its timestamp to the current time, while the original event retains its original timestamp. This allows tracking both when an event occurred (original timestamp) and when it was received and processed (arrival time). The query is useful in log processing and data management.