Package File Formats

LogScale packages are Zip files containing asset files (e.g. dashboards and alerts), a README file, and a manifest.yaml file.

The file structure of a typical package looks something like this:

├── manifest.yaml
├── dashboards/
│   ├──
│   └──
├── alerts/
│   ├──
│   └──
├── queries/
│   ├── query-1.csv
│   └── query-2.json
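Since a package is just a Zip archive with this layout, it can be assembled programmatically. Below is a minimal, standard-library-only sketch; all file names and contents are illustrative placeholders, not a real package:

```python
import zipfile

# Assemble a minimal LogScale-style package zip with the layout above.
# File names and contents are placeholders for illustration only.
with zipfile.ZipFile("example-package.zip", "w") as zf:
    zf.writestr("manifest.yaml", "name: acme/example # <scope>/<name>\n")
    zf.writestr("README.md", "## Example Package\n\nWhat it does and how to use it.\n")
    zf.writestr("queries/query-1.csv", "field,value\n")

# Inspect the resulting archive.
with zipfile.ZipFile("example-package.zip") as zf:
    print(zf.namelist())
```

Unzipping an exported package and rebuilding it this way can be useful when maintaining packages in source control.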

Below is a description of each file (or file type) as well as a link to the JSON schema defining the file format.

Getting Started

We recommend you use LogScale's export package feature to get started with creating your own packages. See Create a Package. Fill out the form and examine the content of the zip file. It can be a good idea to check your package into source control and maintain it from there, rather than maintaining everything in LogScale and exporting it every time.

Understanding Package Scopes

As part of the export process you will be asked to input a "scope". In LogScale's package system, a scope is essentially a namespace that allows multiple parties to create a package with the same name while avoiding naming collisions. If, for instance, you create a package called accesslog-parsers, it would be namespaced under your registry scope (essentially your username), allowing packages such as acme/accesslog-parsers and othercorp/accesslog-parsers to coexist.


File Formats

The rest of this page describes the respective files that are part of a package. Most files are written in YAML and are best described by example. We also provide a JSON schema for each file (files may be written in JSON as well as YAML); you can find the schemas further down this page.


manifest.yaml

The manifest.yaml file contains meta information about the package itself, such as author, license, description, and other meta tags. If you are familiar with NodeJS, this file can be compared to the package.json in the NPM package system. Some of this data is exposed to users when browsing the Marketplace.

  • Author - author

    It is suggested that you enter the Author as your company name, although you can use your individual name. It is preferred NOT to include email addresses here. The schema supports email and contributors fields, but these should not be used, as they are not shown in the product UI. Example: Acme

  • Minimum LogScale Version - minLogScaleVersion

    This is the minimum LogScale version required for the package to work. Example: 1.30.1

  • License

    For inclusion in the Marketplace, LogScale requires the package to be usable under the Apache 2.0 license. This is the default choice when creating a package using the LogScale UI, so it does not need to be added to the manifest file.


    Any package submitted to the Marketplace without a license, or with a license that is not Apache 2.0, will require amending to include the Apache 2.0 license prior to review.


name: nginx/nginx # <scope>/<name>
description: |
  This is a brief description of the package. One or two lines.
author:
  name: Thomas Anagrius
contributors:
  - Peter Johnson <>
  - John Doe <>
logoUrl: data:image/png;base64,...

The logoUrl should be a base64 encoded version of the logo for the package (usually technology logo).
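The data URL can be produced from the logo's raw bytes with standard base64 encoding. A minimal stdlib sketch follows; the eight stand-in bytes below are just the PNG file signature, and in practice you would read your real logo file from disk:

```python
import base64

# Sketch: build a logoUrl-style data URL from logo bytes.
# Stand-in bytes (just the PNG file signature); read your real logo instead:
#   png_bytes = open("logo.png", "rb").read()
png_bytes = bytes.fromhex("89504e470d0a1a0a")
logo_url = "data:image/png;base64," + base64.b64encode(png_bytes).decode("ascii")
print(logo_url)
```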

README Files

Each package must provide a README.md file in the root directory. You can write it in Markdown format, and as a minimum it should contain information about what the package does and how you are expected to use it. The README should contain the following sections:

  • Title

    This should be the name of the package, but may include a couple of extra words if they help provide a meaningful title; it must be less than 40 characters.

Underneath the title there should be a description of what the package does and why. Focus on which log sources it applies to, what data it uses, and what it shows from that data. Make the purpose of the package really clear. You can include links to external websites as long as they are directly relevant and help explain the package or add context. You can include links to LogScale's documentation library to explain concepts and provide technical details or helpful tips.

  • Package Contents

List which of the below are included. You can include a description of each component if you wish, but limited to 100 characters each. All packages need to include the required parsers. This may change in the future, but for now a working parser must be included (it can be subject to dependencies as listed in the Dependencies section below).

  • Parser

  • Queries

  • Alerts

  • Dashboards

  • Use Case

List all that apply from the selection of DevOps, SecOps, ITOps, IoT/OT:

  • DevOps - used by devops teams to provide observability of applications and the platforms the code runs on, including containerized and micro-services type environments.

  • SecOps - useful for security operations teams, typically to provide security monitoring and SIEM like functionality.

  • ITOps - used by IT operations teams to monitor IT systems and to provide investigations and root cause analysis.

  • IoT/OT - used to provide security and/or operational management of Internet of Things devices or Operational Technology, such as SCADA and industrial process control and monitoring.

  • Technology Vendor

List the vendors supported by the package, e.g. AWS (Amazon Web Services) or Cisco.

  • Support

Describe the level of support offered for this package. The default would be: "If you have any issues implementing or running this package, please go to Support for assistance." If the package creator is not CrowdStrike/LogScale, describe what support is offered by the package creator instead.


  • Dependencies

List or describe the dependencies required for your package to work. These will typically include versions or configurations of the log sources, and often the log collection method and formatting of the logs. You can include a sample line of log data to illustrate the required format.


  • Installation

Instructions on how to ingest the data and how to install and use the package. These must include any required configuration in external systems such as log sources, as well as an explanation of any optional configurations or choices that need to be made. These should be summary instructions, with a link to the LogScale documentation library and the specific integration's step-by-step instructions.

If your package is a collection of template files for a specific technology, a short description is usually enough.

README Files for Apps

If your package is an application, it should also explain:

  • Any additional pre- or post-installation steps users need to take.

  • A general introduction about how to get started, e.g. including configuration files.

README File Restrictions

The README cannot currently contain images or HTML tags. We plan on easing this restriction to allow images that have been vetted by LogScale and hosted on servers operated by LogScale.

Examples
## Corelight Threat Hunting Guide

This package contains alerts and saved searches that use Zeek (aka
Bro) data from Corelight network sensors. The package contains around
60 different alerts/searches that are useful when threat hunting
within Corelight data to detect potentially suspicious activity. This
package is intended to accompany the Corelight Threat Hunting Guide.

The guide provides an excellent introduction to threat hunting with
Corelight data and is aligned to the Mitre ATT&CK framework
(v6). Corelight provides a rich source of logs for threat hunters and
is capable of detecting a large range of attacker tactics and
techniques. The name of each alert/search includes the relevant Mitre
ATT&CK tactic and techniques (as names and reference codes). When
threat hunters are reading the Corelight guide they can search the
LogScale package based on tactic and technique names or numbers to
find the relevant LogScale searches/alerts and simply click to run
those searches.

The Corelight Threat Hunting Guide explains what to look for; this
package shows you how. For more information on how Corelight can
provide valuable data for your security operations please visit

## Package Contents
- Parser for Corelight JSON data
- Saved Searches
- Alerts (with no action defined)

## Use Case
- SecOps

## Technology Vendor
- Corelight

## Support
Supported by LogScale and Corelight.

## Dependencies
> The parser included assumes data is sent from Corelight in JSON format.

The package uses logs from the following log files from Corelight (all
suffixed with .log): `conn`, `dce_rpc`, `dnp3`, `dns`, `files`,
`http`, `kerberos`, `modbus`, `notice`, `rdp`, `smb_files`,
`smb_mapping`, `smtp`, `ssh`, `ssl`, `weird`. Please make sure all
these log types are sent to LogScale as otherwise some of the
searches/alerts may not work.

## Installation

The preferred option for sending logs from Corelight to LogScale is to
choose 'Export to Splunk HEC' (available under the Sensor-> Export
menu in the Corelight management interface). For 'HEC URI', enter your
LogScale service URL appended with `/api/v1/ingest/hec`. For further
information on using the LogScale HEC, see the LogScale documentation.

Installing the package will automatically deploy the `corelight-json`
parser, saved searches, and alerts directly into the repository you
install the package into.

You will need to link the parser to an ingest token within LogScale.

If you have already installed the `corelight-json` parser as part of
the `corelight/sensor` package then you can use this existing parser
as it is identical to the `corelight-json` parser in this package.


Note that the README provides details to ensure a user knows at least how their ingested data should arrive so that it is compatible with the parsers in the package. It is highly desirable that the README provide the exact procedures for data collection (i.e. a how-to on getting the specific data source(s) into LogScale).

Asset Files (Alerts, Dashboards, etc)

The best way to get started with asset files is to export assets you have authored in LogScale and tweak them in a text editor.

Each file contains a $schema field which references the JSON schema that it is validated against.


All schemas are hosted at:


Many editors have support for JSON Schemas and will give you suggestions and validate the files on the fly. One such editor is Visual Studio Code.
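For example, with a YAML extension that supports the `yaml.schemas` setting (such as Red Hat's YAML extension for Visual Studio Code), you can map package files to their schemas in settings.json. The schema URL below is a placeholder; substitute the actual URL from the schema listing on this page:

```json
{
  "yaml.schemas": {
    "https://example.com/schemas/manifest.schema.json": "manifest.yaml"
  }
}
```

With a mapping like this in place, the editor can flag unknown or malformed fields as you type.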

Lookup Files

Packages can also contain lookup files. These are either CSV or JSON files, which LogScale can use to look up data while searching. These files do not have any schemas and are stored in the data folder. Once a file has been added to the data folder of a package, it can be referred to from queries as packageScope/packageName/nameOfFile.csv, as opposed to just nameOfFile.csv, which is how you would normally refer to such a file.
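As an illustration, suppose a package acme/webshop ships a lookup file data/status-codes.csv (the package name, file name, and fields here are all hypothetical):

```csv
statuscode,description
200,OK
404,Not Found
500,Internal Server Error
```

A query could then enrich events from this file using LogScale's `match()` function, referring to the file by its package-scoped name, e.g. `match(file="acme/webshop/status-codes.csv", field=statuscode)` (a sketch; check the `match()` documentation for the exact parameters).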