Specify a set of fields to select from each event and include in the resulting event set.

An aggregate function, such as table() or groupBy(), may be more suitable for summarizing and selecting the fields that you want displayed.
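
For example, if you only need a bounded, sortable result set rather than the raw events, table() covers the same field selection as an aggregate (the field names here are illustrative, not part of this function's reference):

logscale
table([statuscode, responsetime])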

A use case for select() is when you want to export a few fields from a large number of events into a CSV file without aggregating the values. Because an implicit tail(200) function is appended to non-aggregating queries, only 200 events might be shown in those cases; when exporting the result, however, you get all matching events.
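
As a sketch, the implicit limit can also be overridden in the query itself by appending an explicit tail() with a higher value (the field names here are illustrative):

logscale
select([@timestamp, statuscode])
| tail(1000)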

| Parameter | Type             | Required | Default Value | Description                      |
|-----------|------------------|----------|---------------|----------------------------------|
| fields[a] | array of strings | required |               | The names of the fields to keep. |

[a] The parameter name fields can be omitted.


select() Examples


Calculate and Sort Ingest Lag Times

Analyze the time difference between event occurrence and ingestion using the select() function with sort()

Query
logscale
select([#repo, #vendor, #type, @timestamp, @ingesttimestamp])
| ingest_lag_in_mins := ((@ingesttimestamp-@timestamp)/1000)/60
| sort(ingest_lag_in_mins, limit=20000)
Introduction

In this example, the select() function is used to analyze the time difference between when events occurred and when they were ingested into LogScale, helping identify potential ingestion delays or performance issues.

Example incoming data might look like this:

| @timestamp               | @ingesttimestamp         | #repo          | #vendor     | #type         |
|--------------------------|--------------------------|----------------|-------------|---------------|
| 2025-11-05T10:00:00.000Z | 2025-11-05T10:01:30.000Z | windows-events | Microsoft   | SecurityEvent |
| 2025-11-05T10:00:15.000Z | 2025-11-05T10:02:45.000Z | linux-syslog   | Linux       | SystemLog     |
| 2025-11-05T10:00:30.000Z | 2025-11-05T10:01:15.000Z | network-logs   | Cisco       | FirewallLog   |
| 2025-11-05T10:00:45.000Z | 2025-11-05T10:05:45.000Z | endpoint-logs  | CrowdStrike | ProcessCreate |
| 2025-11-05T10:01:00.000Z | 2025-11-05T10:01:45.000Z | cloud-logs     | AWS         | CloudTrail    |
| 2025-11-05T10:01:15.000Z | 2025-11-05T10:04:15.000Z | database-logs  | Oracle      | AuditLog      |
| 2025-11-05T10:01:30.000Z | 2025-11-05T10:02:00.000Z | windows-events | Microsoft   | LoginEvent    |
| 2025-11-05T10:01:45.000Z | 2025-11-05T10:03:45.000Z | linux-syslog   | Linux       | AuthLog       |
Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    select([#repo, #vendor, #type, @timestamp, @ingesttimestamp])

    Selects the relevant fields for analysis: #repo, #vendor, #type, @timestamp, and @ingesttimestamp.

  3. logscale
    | ingest_lag_in_mins := ((@ingesttimestamp-@timestamp)/1000)/60

    Creates a new field named ingest_lag_in_mins that calculates the time difference between @ingesttimestamp and @timestamp in minutes. The calculation first converts the millisecond difference to seconds (divided by 1000) and then to minutes (divided by 60).

  4. logscale
    | sort(ingest_lag_in_mins, limit=20000)

    Sorts the results by the ingest_lag_in_mins field, in descending order by default, so the events with the largest ingest lag appear first. The limit parameter is set to 20000 to ensure all relevant events are included in the analysis.

  5. Event Result set.
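
As a worked example of the conversion in step 3, using the first input row above (timestamps are in milliseconds):

logscale
// @ingesttimestamp - @timestamp = 10:01:30.000 - 10:00:00.000 = 90000 ms
// 90000 / 1000 = 90 seconds; 90 / 60 = 1.5 minutes
| ingest_lag_in_mins := ((@ingesttimestamp-@timestamp)/1000)/60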

Summary and Results

The query is used to calculate and analyze the time difference between when events occur and when they are ingested into LogScale, providing visibility into potential ingestion delays or performance issues.

This query is useful, for example, to troubleshoot correlation rule effectiveness, monitor data pipeline health, ensure real-time analysis capabilities, and identify potential bottlenecks in the data ingestion process.
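
If a per-repository summary is enough, the same calculation can feed an aggregate instead of listing every event. A minimal sketch, assuming an average per repository is the statistic you want:

logscale
ingest_lag_in_mins := ((@ingesttimestamp-@timestamp)/1000)/60
| groupBy([#repo], function=avg(ingest_lag_in_mins))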

Sample output from the incoming example data:

| #repo          | #vendor     | #type         | @timestamp               | @ingesttimestamp         | ingest_lag_in_mins |
|----------------|-------------|---------------|--------------------------|--------------------------|--------------------|
| endpoint-logs  | CrowdStrike | ProcessCreate | 2025-11-05T10:00:45.000Z | 2025-11-05T10:05:45.000Z | 5.0                |
| database-logs  | Oracle      | AuditLog      | 2025-11-05T10:01:15.000Z | 2025-11-05T10:04:15.000Z | 3.0                |
| linux-syslog   | Linux       | SystemLog     | 2025-11-05T10:00:15.000Z | 2025-11-05T10:02:45.000Z | 2.5                |
| linux-syslog   | Linux       | AuthLog       | 2025-11-05T10:01:45.000Z | 2025-11-05T10:03:45.000Z | 2.0                |
| windows-events | Microsoft   | SecurityEvent | 2025-11-05T10:00:00.000Z | 2025-11-05T10:01:30.000Z | 1.5                |
| network-logs   | Cisco       | FirewallLog   | 2025-11-05T10:00:30.000Z | 2025-11-05T10:01:15.000Z | 0.75               |
| cloud-logs     | AWS         | CloudTrail    | 2025-11-05T10:01:00.000Z | 2025-11-05T10:01:45.000Z | 0.75               |
| windows-events | Microsoft   | LoginEvent    | 2025-11-05T10:01:30.000Z | 2025-11-05T10:02:00.000Z | 0.5                |

Note that the ingest lag is calculated in minutes for easier analysis. Lower values indicate better ingestion performance.

Reduce Large Event Sets to Essential Fields

Reduce large datasets to essential fields using the select() function

Query
logscale
method=GET
| select([statuscode, responsetime])
Introduction

The select() function reduces a large event set to the essential fields, producing an unsorted table that contains only the specified fields.

In this example, the statuscode field and the responsetime field are selected.

Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    method=GET

    Filters for all events where the HTTP request method is GET.

  3. logscale
    | select([statuscode, responsetime])

    Creates an unsorted table showing the statuscode field and the responsetime field.

  4. Event Result set.

Summary and Results

The query is used to filter specific fields from an event set and create a table showing only those fields (a focused event set). In this example, the table shows the HTTP response status code and the time taken to respond to the request, which is useful for analyzing HTTP performance, monitoring response codes, and identifying slow requests. The select() function is useful when you want to export a few fields from a large number of events into a .CSV file without aggregating the values. For more information about export, see Export Data.

Note that whereas the LogScale UI can only show up to 200 events, the exported .CSV file contains all results.

An aggregate function, such as table() or groupBy(), may be more suitable for summarizing and selecting the fields to be displayed.
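
For instance, a sketch of the same selection using table(), which produces a sortable, aggregated result instead of raw events:

logscale
method=GET
| table([statuscode, responsetime])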

Select Fields to Export

Select fields to export as .CSV file using the select() function

Query
logscale
select([@timestamp, @rawstring])
Introduction

The select() function reduces a large event set to the essential fields, producing an unsorted table that contains only the specified fields.

In this example, the @timestamp field and the @rawstring field are selected.

Step-by-Step
  1. Starting with the source repository events.

  2. logscale
    select([@timestamp, @rawstring])

    Creates an unsorted table showing the @timestamp field and the @rawstring field.

  3. Event Result set.

Summary and Results

The query is used to filter specific fields from an event set and create a table showing only those fields (a focused event set). In this example, the table shows the timestamp of each event and the complete raw log entry, which is useful for full log analysis and data backup. The select() function is useful when you want to export a few fields from a large number of events into a .CSV file without aggregating the values. For more information about export, see Export Data.

Note that whereas the LogScale UI can only show up to 200 events, an exported .CSV file contains all results.