Matches or joins data from query results with a table. The table can be provided either as a lookup file (a CSV file or a limited form of JSON file, uploaded using Lookup Files) or, from LogScale 1.163, as an ad-hoc table (see Using Ad-hoc Tables).

If you are looking for match expressions, see Match Statements.

Parameters

column (string, optional[a]; default: the value of the field parameter)
    Which column in the file to use for the match. A single column or an array of columns can be specified.

field (string, required)
    Which field in the event (log line) must match the given column value. A single field or an array of fields can be specified. Fields and columns must have the same length, are matched in order, and must all match.

file[b] (string, required)
    Specifies the source file (when using Lookup Files) or the name of the ad-hoc table. The file name should be specified with a .csv or .json suffix.

glob (boolean, optional[a]; default: false; deprecated in 1.23.0)
    This parameter is deprecated. Use mode=glob instead.

ignoreCase (boolean, optional[a]; default: false)
    If true, ignore case when matching against the CSV data.

include (string or array, optional[a])
    The columns to include. If no argument is given, include all columns from the corresponding row in the output event.

mode (string, optional[a]; default: string)
    The function to use when matching against keys. Valid values:
      cidr   - The key is interpreted as a CIDR subnet and the event is matched if the field contains an IP within the subnet. If multiple subnets match, the most specific one is selected, or an arbitrary one if there are multiple equally specific subnets.
      glob   - The key is interpreted as a globbing pattern with * and matched accordingly; for example, a CSV key value of *thisMatch* would match the field value 123thisMatch456.
      string - The matching is done using exact string matching.

strict (boolean, optional[a]; default: true)
    If true (the default), selects only the events that match a key in the file; if false, lets all events through (works like the deprecated lookup()).

[a] Optional parameters use their default value unless explicitly set.

[b] The parameter name file can be omitted.
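
As a sketch of the array form of field and column described above (the file name and field names here are hypothetical), fields and columns are paired positionally and must all match:

logscale
match(file="users.csv", field=[id, region], column=[userid, region])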


When lookup information is loaded from a file in a package, the package name should be specified in addition to the filename. For example:

logscale
match("falcon/investigate/logoninfo.csv", field="lookupname")

For more information on referring to package resources, see Referencing Package Assets.

The default behavior of this function — when strict is set to true — works like an INNER JOIN. When strict is set to false the function enriches events.
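
A minimal sketch of the two modes, assuming a hypothetical users.csv lookup file with a userid column and events carrying an id field:

logscale
// strict=true (the default): only events whose id matches a userid in users.csv pass through
match(file="users.csv", column=userid, field=id)

logscale
// strict=false: all events pass through; matching events are enriched with the row's other columns
match(file="users.csv", column=userid, field=id, strict=false)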

Note

Up until LogScale version 1.140, the file is limited to 1,000,000 rows/lines for exact matching using mode=string.

When using mode=glob, the underlying CSV is limited to 20,000 rows/lines.

For self-hosted customers, the maximum value for glob matches is configurable using GLOB_MATCH_LIMIT.

Using match() with Lookup Files

When using match() for joining with Lookup Files, two file formats are supported:

  • CSV. The recommended format: it ensures better performance, allows additional functionality (through parameters), and is compatible with related functions such as readFile(), as shown in the sketch after this list. See CSV File Formats for more information.

  • JSON. Supports object and array-based formats. See JSON File Formats for more information.
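
As a quick, hedged sketch of inspecting a lookup file's contents before matching against it (the file name users.csv is hypothetical), readFile() can read the file's rows as events:

logscale
readFile("users.csv")
| limit(5)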

CSV File Formats

For Comma Separated Values (CSV) files, whitespace is included in the keys and values. To include the separator "," in a value, quote the value using the " character. The following is a valid CSV file:

csv
userid,name
1,chr
2,krab
"4","p,m"
7,mgr

The first line is interpreted as the column titles. When querying, the column parameter should be used to identify which column to match against.

When using match() with a single column:

logscale
match("test.csv", field=somefield, column=column1)

if multiple rows in test.csv match the event, the last matching row is used.
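
A minimal sketch of this behavior, assuming a hypothetical test.csv in which two rows share the same column1 value:

csv
column1,column2,column3
c,d,a
c,b,a

An event with somefield=c is enriched from the last matching row, so it receives column2=b and column3=a.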


JSON File Formats

For JSON files, two formats are supported:

  • Object-based, where the lookup field does not have an explicit name

  • Array-based, where the information is an array of objects

In the object-based variant, the lookup values are declared as an object with a key and embedded fields; the key field does not have a name.

json
{
  "1": { "name": "chr" },
  "2": { "name": "krab" },
  "4": { "name": "pmm" },
  "7": { "name": "mgr" }
}

When matching against a file in this case, the name of the field in the JSON object does not need to be used; the key for each value is used instead. For example:

logscale
groupBy(@timezone)
| count(@timezone)
| match(file="short.json",field=_count)

In the above, the value of _count will be matched, outputting the match value:

_count  name
2       krab

In the array-based variant, the lookup values are declared as an array of objects; you select which field is the key using the column parameter in match().

json
[
  { "userid": "1", "name": "chr" },
  { "userid": "2", "name": "krab" },
  { "userid": "4", "name": "pmm" },
  { "userid": "7", "name": "mgr" }
]

When using this version, the name of the column to be matched must be specified using the column argument to match():

logscale
groupBy(@timezone)
| count(@timezone)
| match(file="long.json",field=_count,column="userid")

This behavior also means that any field in the JSON file can be used as the match value. For example:

logscale Syntax
...
| match(file="long.json",field=codename,column="name")

This can be useful if you have a JSON file that contains multiple possible lookup values for given records.


Important

The match() function does not report an error if the file format cannot be parsed.

match() Examples

Matches events for which the id field matches the value of the userid column in the table users.csv. Does not add any columns to the events.

logscale
match(file="users.csv", column=userid, field=id, include=[])

Matches events for which the id field is matched case-insensitively by the glob pattern in the userid column in the table users.csv, and adds all other columns of the first matching row to those events.

logscale
id =~ match(file="users.csv", column=userid, mode=glob, ignoreCase=true)

Lets all events pass through, but events for which the id field matches the value of the userid column in the table users.csv are enriched with all other columns of the matching row.

logscale
id =~ match(file="users.csv", column=userid, strict=false)

Matches events for which the ip field falls within a CIDR subnet given in the cidr-block column in the table cidr-file.csv. Only adds the columns info and type from the first matching row.

logscale
match(file="cidr-file.csv", column="cidr-block", field=ip, mode=cidr, include=["info","type"])