Lookup Files
Security Requirements and Controls
- Change files permission
- Data read access permission
Lookup files are used to add additional context to data, enabling you to attach or replace text from events recorded in a repository when searched.
To add a lookup file, create or import a CSV (comma-separated value) or JSON file and upload it to the repository.
These files can be used together with query functions to provide lookups and matching using the match() function. The feature also works with the readFile() function, which reads a file to be used as the data input for your query.
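For instance, a couple of hypothetical sketches (the file name host_names.csv and the field name host are assumptions used only for illustration):

```logscale
// Enrich events by matching the event field "host" against the "host" column
// of an uploaded lookup file; the remaining columns are added to the event.
| match(file="host_names.csv", field=host, column="host")
```

```logscale
// Use an uploaded lookup file as the data input for a query.
readFile("host_names.csv")
```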
The following operations are available: creating a file, uploading files, and exporting or deleting a file.
For information on how Lookup files interact with the rest of the system, see Lookup Files Operations.
Supported File Types and Formats
LogScale supports two file formats for uploaded lookup files: CSV and JSON.
|  | CSV Files | JSON Files |
|---|---|---|
| Viewable within LogScale UI | Yes | No |
| Editable within LogScale UI | Yes | No |
| File Size Limit | 200 MB | 100 MB |
Lookup Files using CSV Format
When using CSV for lookup files, the following rules apply:
- Individual fields should be separated by a comma (,).
- Whitespace is always included in the imported fields; the input takes the literal contents split by the comma character.
- Fields can optionally be quoted with double quotes, for example to include commas in the imported values.
- The first line of the CSV is interpreted as the column header and can be used as the field name when looking up values with functions like match().
For example, the CSV file:
```csv
number,code,description
17,udp,UDP
80,http,HTTP Service
ip,"Internet Protocol, pseudo protocol"
```
Would be interpreted as:
| number | code | description |
|---|---|---|
| 17 | udp | UDP |
| 80 | http | HTTP Service |
| ip | Internet Protocol, pseudo protocol |  |
CSV files can be viewed within the Files interface to confirm how the information has been interpreted.
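As a sketch, assuming the file above has been uploaded as protocols.csv (a hypothetical name), events carrying a number field could be enriched with the remaining columns of the matching row:

```logscale
// Match the event field "number" against the "number" column of protocols.csv;
// the code and description columns of the matching row are added to the event.
| match(file="protocols.csv", field=number, column="number")
```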
Lookup Files using JSON Format
When using JSON files, two different formats are supported: object-based and array-based.
Important
Once uploaded, JSON files cannot be viewed or updated. They can be exported to confirm the file format.
Object-based
In the object-based format, the JSON should be formatted as a hash or associative array, with a single key and corresponding object. For example:
json{ "1": { "name": "chr" }, "2": { "name": "krab" }, "4": { "name": "pmm" }, "7": { "name": "mgr" } }
When performing a lookup, match() will return the object (as an event with multiple fields) based on the matching key.

Array-based
In the array-based format, the JSON should be formatted as an array of objects. In this model, the keys for each individual object become fields that can be matched when performing a lookup. For example, in the file:
```json
[
  { "userid": "1", "name": "chr" },
  { "userid": "2", "name": "krab" },
  { "userid": "4", "name": "pmm" },
  { "userid": "7", "name": "mgr" }
]
```
The userid and name fields in the JSON objects can be used to look up and return other key/value pairs as event fields/values. For example, the fragment:

```logscale
... | match(file="long.json", field=codename, column="name")
```

Would return the userid field for objects within the lookup file array.
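Conversely, a hypothetical sketch could match on the userid column instead (the event field name user_id is an assumption):

```logscale
// Match the event field "user_id" against the "userid" column of the file;
// the other keys of the matching object (such as name) become event fields.
| match(file="long.json", field=user_id, column="userid")
```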
For both formats, the following common rules apply:

- JSON must be formatted in strict notation. This means no trailing commas (where there is no additional value).
- Individual keys and values should be quoted, even when the value is a number.
Important
Nested objects (that is, an object within the returned object) are not supported. For example:
```json
{
  "1": { "name": "chr", "roles": { "user": true } },
  "2": { "name": "krab" },
  "4": { "name": "pmm" },
  "7": { "name": "mgr" }
}
```
Would return only the simple field, name, when used with match(); the remainder of the embedded object would not be returned or included in the events. LogScale does not reject files in this format.
Create a File
1. Click Files → → .
2. Specify a name for the file, then select either to create an empty file to populate, or to use a template from a previously installed package.
3. Click .
4. If you've created an empty file, click to add rows and columns.
5. Click to save the changes.

If you have many changes to make, editing a data table through the Files interface page can be tedious: click and then edit the table in a spreadsheet program or a simple text editor.
Note
Files larger than 100 MB cannot be viewed in the UI.
Figure 33. Create New CSV File
Figure 34. File Tab in Search View
Upload Files
Go to the
Files
interface → → .Drag and drop your file or browse for the file to upload.
You can upload a CSV file containing text like what you see below, which is essentially a lookup table that you can use for labels or value lookups.
```csv
userid,ip,username,region
1,"212.12.31.23","pete","EU"
2,"212.12.31.231","bob","EU"
3,"98.12.31.21","anders","EU"
4,"121.12.31.23","jeff","US"
5,"82.12.31.23","ted","AU"
6,"62.12.31.23","annie","US"
7,"122.12.31.23","joe","CH"
8,"112.11.11.21","alice","CH"
9,"212.112.131.22","admin","RU"
10,"212.12.31.23","wendy","EU"
```
Once it has been uploaded, it will look like what you see in the figure below.
Figure 35. Import CSV File
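As a sketch, assuming the file above were uploaded as users.csv (a hypothetical name), a query could enrich events that carry a userid field with the matching ip, username, and region values:

```logscale
// Match the event field "userid" against the "userid" column of users.csv;
// the remaining columns of the matching row are added to the event.
| match(file="users.csv", field=userid, column="userid")
```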
Typically, the content is used with the match() function to look up fixed reference information. Notice that the values are in quotes, except for the ones for userid, which are integers. See the Lookup API reference page for more information on this topic.

Once created or uploaded, the file can be edited and updated within the user interface. Additional columns and rows can be added to the file using the button. Clicking the information icon next to the file name displays metadata about the file (who created it, when it was created, and so on).

Important
Once uploaded, only CSV files can be edited within the user interface.
Once you have finished editing, click , or click if you wish to download the edited file.
Export or Delete a File
Files can be managed by clicking the menu icon next to each file. You can either export or delete a file:
Figure 36. Manage CSV Files
Warning
Deleting a file that is actively used by live queries will stop those queries.
Lookup Files Operations
When using Lookup files and match() functionality, consider the following:
- Lookup files use server memory proportional to the size of the file on disk; at least as much and typically more. If you have a 1 GB lookup file, it will take up at least 1 GB of memory on some, and potentially all, hosts within the cluster. Take this requirement into account when uploading files and when sizing the nodes within the cluster.
- From LogScale v1.108 on, the content of the file is shared among all queries that use match(); that is, the included columns that are common among match() functions can be reused among queries.
- From LogScale v1.117 on, whenever a file is updated, live queries and alert queries that use that file will seamlessly continue to run with the newly updated file, making little difference whether you have many small files to update or one large file. Since the file is swapped while the query is running, this also means that events can be queried with different versions of the file.
- From LogScale v1.90, if you have large lookup files, wrap the uses of match() in saved queries (as sketched below) rather than using them directly across multiple different queries, to ensure you don't accidentally pass slightly different arguments in different queries. However, due to the improved reuse of files introduced in LogScale v1.108, this practice is no longer necessary starting from that version.
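As a loose sketch of that practice (the saved query name protocolLookup and its contents are assumptions), the saved query contains the match() call, and other queries invoke it instead of repeating the arguments:

```logscale
// Contents of the saved query "protocolLookup" (hypothetical):
| match(file="protocols.csv", field=number, column="number")
```

```logscale
// Invoking the saved query from other queries keeps the arguments identical:
$"protocolLookup"()
```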