Specify a set of fields to select from each event and include in the resulting event set.
It is possible that an aggregate function, such as
table() or groupBy(),
may be more suitable for summarizing and selecting the fields
that you want to be displayed.
A use case for select() is when you want to
export a few fields from a large number of events into a CSV
file without aggregating the values. Because an implicit
tail(200) function is appended to
non-aggregating queries, only 200 events might be shown in those
cases; however, when exporting the result, you get all matching
events.
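For example, the following is a minimal sketch of such a non-aggregating export query (the field names src_ip and action are hypothetical, used only to illustrate the pattern):
logscale
src_ip=*
| select([src_ip, action])
Replacing select() with the aggregate function table([src_ip, action]) would instead summarize the selected fields into a result table.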
Omitted Argument Names
The argument name for fields can be omitted; the following forms of this function are equivalent:
logscale Syntax
select(["value"])
and:
logscale Syntax
select(fields=["value"])
These examples show basic structure only.
select() Examples
Calculate and Sort Ingest Lag Times
Analyze the time difference between event occurrence and ingestion
using the select() function with
sort()
Query
select([#repo, #vendor, #type, @timestamp, @ingesttimestamp])
| ingest_lag_in_mins := ((@ingesttimestamp-@timestamp)/1000)/60
| sort(ingest_lag_in_mins, limit=20000)
Introduction
In this example, the select() function is used to
analyze the time difference between when events occurred and when they
were ingested into LogScale, helping identify potential
ingestion delays or performance issues.
Example incoming data might look like this:
| @timestamp | @ingesttimestamp | #repo | #vendor | #type |
|---|---|---|---|---|
| 2025-11-05T10:00:00.000Z | 2025-11-05T10:01:30.000Z | windows-events | Microsoft | SecurityEvent |
| 2025-11-05T10:00:15.000Z | 2025-11-05T10:02:45.000Z | linux-syslog | Linux | SystemLog |
| 2025-11-05T10:00:30.000Z | 2025-11-05T10:01:15.000Z | network-logs | Cisco | FirewallLog |
| 2025-11-05T10:00:45.000Z | 2025-11-05T10:05:45.000Z | endpoint-logs | CrowdStrike | ProcessCreate |
| 2025-11-05T10:01:00.000Z | 2025-11-05T10:01:45.000Z | cloud-logs | AWS | CloudTrail |
| 2025-11-05T10:01:15.000Z | 2025-11-05T10:04:15.000Z | database-logs | Oracle | AuditLog |
| 2025-11-05T10:01:30.000Z | 2025-11-05T10:02:00.000Z | windows-events | Microsoft | LoginEvent |
| 2025-11-05T10:01:45.000Z | 2025-11-05T10:03:45.000Z | linux-syslog | Linux | AuthLog |
Step-by-Step
Starting with the source repository events.
- logscale
select([#repo, #vendor, #type, @timestamp, @ingesttimestamp])
Selects the relevant fields for analysis: #repo, #vendor, #type, @timestamp, and @ingesttimestamp.
- logscale
| ingest_lag_in_mins := ((@ingesttimestamp-@timestamp)/1000)/60
Creates a new field named ingest_lag_in_mins that calculates the time difference between @ingesttimestamp and @timestamp in minutes. The calculation first converts the millisecond difference to seconds (divided by 1000) and then to minutes (divided by 60); a worked example follows this list.
- logscale
| sort(ingest_lag_in_mins, limit=20000)
Sorts the results by the ingest_lag_in_mins field. The limit parameter is set to 20000 to ensure all relevant events are included in the analysis.
Event Result set.
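As a worked example of the lag calculation, take the FirewallLog row from the sample data above: the difference between @ingesttimestamp (10:01:15.000Z) and @timestamp (10:00:30.000Z) is 45,000 ms; dividing by 1000 gives 45 seconds, and dividing by 60 gives 0.75 minutes, matching the ingest_lag_in_mins value shown for that event in the output below.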
Summary and Results
The query is used to calculate and analyze the time difference between when events occur and when they are ingested into LogScale, providing visibility into potential ingestion delays or performance issues.
This query is useful, for example, to troubleshoot correlation rule effectiveness, monitor data pipeline health, ensure real-time analysis capabilities, and identify potential bottlenecks in the data ingestion process.
Sample output from the incoming example data:
| #repo | #vendor | #type | @timestamp | @ingesttimestamp | ingest_lag_in_mins |
|---|---|---|---|---|---|
| network-logs | Cisco | FirewallLog | 2025-11-05T10:00:30.000Z | 2025-11-05T10:01:15.000Z | 0.75 |
| windows-events | Microsoft | SecurityEvent | 2025-11-05T10:00:00.000Z | 2025-11-05T10:01:30.000Z | 1.5 |
| cloud-logs | AWS | CloudTrail | 2025-11-05T10:01:00.000Z | 2025-11-05T10:01:45.000Z | 0.75 |
| windows-events | Microsoft | LoginEvent | 2025-11-05T10:01:30.000Z | 2025-11-05T10:02:00.000Z | 0.5 |
| linux-syslog | Linux | SystemLog | 2025-11-05T10:00:15.000Z | 2025-11-05T10:02:45.000Z | 2.5 |
| linux-syslog | Linux | AuthLog | 2025-11-05T10:01:45.000Z | 2025-11-05T10:03:45.000Z | 2.0 |
| database-logs | Oracle | AuditLog | 2025-11-05T10:01:15.000Z | 2025-11-05T10:04:15.000Z | 3.0 |
| endpoint-logs | CrowdStrike | ProcessCreate | 2025-11-05T10:00:45.000Z | 2025-11-05T10:05:45.000Z | 5.0 |
Note that the ingest lag is calculated in minutes for easier analysis. Lower values indicate better ingestion performance.
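As an illustrative extension of this example (the 5-minute threshold is an arbitrary value chosen here), events with an unusually large lag could be isolated by adding a test() filter after the calculation:
logscale
select([#repo, #vendor, #type, @timestamp, @ingesttimestamp])
| ingest_lag_in_mins := ((@ingesttimestamp-@timestamp)/1000)/60
| test(ingest_lag_in_mins >= 5)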
Reduce Large Event Sets to Essential Fields
Reduce large datasets to essential fields using the
select() function
Query
method=GET
| select([statuscode, responsetime])
Introduction
The select() function reduces a large event set to
essential fields. By default, the select() statement creates a
table, copying the selected fields from the events into the result.
In this example, an unsorted table is created containing the statuscode field and the responsetime field.
Step-by-Step
Starting with the source repository events.
- logscale
method=GET
Filters for all HTTP request methods of the type GET.
- logscale
| select([statuscode, responsetime])
Creates an unsorted table showing the statuscode field and the responsetime field.
Event Result set.
Summary and Results
The query is used to filter specific fields from an event set and create
a table showing these fields (focused event set). In this example, the
table shows the HTTP response status and the time taken to respond to
the request which is useful for analyzing HTTP performance, monitoring
response codes, and identifying slow requests. The
select() function is useful when you want to export
a few fields from a large number of events into a .CSV file without
aggregating the values. For more information about export, see
Export Data.
Note that whereas the LogScale UI can only show up to 200 events, the exported .CSV file contains all results.
It is possible that an aggregate function, such as
table() or groupBy() may be
more suitable for summarizing and selecting the fields to be displayed.
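For example, an aggregated alternative to the query above could look like the following sketch, returning a result table instead of the individual events:
logscale
method=GET
| table([statuscode, responsetime])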
Select Fields to Export
Select fields to export as .CSV file using the
select() function
Query
select([@timestamp, @rawstring])
Introduction
The select() function reduces a large event set to
essential fields. By default, the select() statement creates a
table, copying the selected fields from the events into the result.
In this example, an unsorted table is created containing the @timestamp field and the @rawstring field.
Step-by-Step
Starting with the source repository events.
- logscale
select([@timestamp, @rawstring])
Creates an unsorted table showing the @timestamp field and the @rawstring field.
Event Result set.
Summary and Results
The query is used to filter specific fields from an event set and create
a table showing these fields (focused event set). In this example, the
table shows the timestamp of the events and the complete raw log entry,
which is useful for full log analysis and data backup. The
select() function is useful when you want to export a
few fields from a large number of events into a .CSV file without
aggregating the values. For more information about export, see
Export Data.
Note that whereas the LogScale UI can only show up to 200 events, an exported .CSV file contains all results.
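As an illustrative sketch (the search term "error" is an arbitrary example), the exported event set can be narrowed by placing a filter before select():
logscale
"error"
| select([@timestamp, @rawstring])
Only the matching events are then included in the exported .CSV file.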