Darktrace Detect (Preview)

Darktrace Detect is a network solution for detecting and investigating emerging cyber-threats that evade traditional security tools. Use Darktrace Detect to monitor alert logs, collect and parse data via Syslog, and then visualize the data.

This package provides a preview parser for Darktrace Detect events in JSON format.

The parser normalizes data to a common schema based on the OpenTelemetry standard. With this schema you can search the data without knowing Darktrace's field names specifically, relying on the common schema instead. It also makes it easier to combine the data with other data sources that conform to the same schema.

Preview Status

Note that this package is considered a PREVIEW. This means we are seeking feedback on the package, and may make breaking changes to the parser in the future.

Configuration and Sending the Logs to LogScale

See the Darktrace Syslog Specification manual for information on how to send Darktrace logs to the Falcon LogScale Collector.

Installing the Darktrace Detect Package in LogScale

Find the repository where you want to send the logs, or create a new one.

  1. Navigate to your repository in the LogScale interface, click Settings and then Packages on the left.

  2. Click Marketplace and install the LogScale package for Darktrace Detect (i.e. darktrace/detect).

  3. When the package has finished installing, click Ingest tokens on the left (still under Settings; see Ingest Tokens).

  4. In the right panel, click + Add Token to create a new token. Give the token an appropriate name (e.g. the name of the server the token is ingesting logs for), and leave the parser unassigned. You can assign the parser in the LogScale Collector configuration as described in the Sources & Examples documentation.

    Before leaving this page, view the ingest token and copy it to your clipboard so you can save it temporarily elsewhere.

    Now that you have a repository set up in LogScale along with an ingest token you're ready to send logs to LogScale.

  5. Next, configure the Falcon LogScale Collector to ship the logs from your syslog server into LogScale. Follow Installing the LogScale Collector and Configuring LogScale Collector. The LogScale Collector documentation also provides an example of how to configure your syslog data source; see this syslog example.
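The steps above can be sketched as a minimal LogScale Collector configuration. This is an illustrative fragment, not the official example: the source name, port, URL, and token placeholder are assumptions and should be replaced with the values for your environment.

```yaml
# Minimal LogScale Collector configuration (sketch) for receiving
# Darktrace syslog and forwarding it to LogScale.
sources:
  darktrace_syslog:
    type: syslog
    mode: udp          # or tcp, matching the Darktrace forwarder setup
    port: 514          # assumed port; use the port you configured
    sink: logscale
sinks:
  logscale:
    token: <your-ingest-token>        # the ingest token created above
    url: https://<your-logscale-host>
```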

Setting Up Syslog in the Darktrace Threat Visualizer Dashboard

When setting up the Darktrace Threat Visualizer dashboard, follow these steps:

  1. Install the Darktrace Detect package in the relevant repository.

  2. Create three ingest tokens, then assign them to their respective parsers.

  3. Set up syslog in the Darktrace Threat Visualizer Dashboard to send logs to LogScale Collector, which acts as the syslog receiver.

  4. To send the logs to LogScale, enroll the LogScale Collector using this syslog example.

  5. Open the Darktrace Threat Visualizer Dashboard and navigate to the System Config page (Main menu › Admin).

  6. From the left-side menu, select Modules, then navigate to the Workflow Integrations section and choose Syslog.

  7. Select the Syslog JSON tab and click New to set up a new Syslog Forwarder.

  8. Enter the IP address and port of the LogScale Collector that is running the integration in the Server and Server Port fields, respectively.

Note: For Darktrace, you need to create a separate syslog forwarder, each with its own port, for each data stream.
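Because each data stream arrives on its own port, the collector needs one syslog source per stream. A hedged sketch follows; the source names, ports, sink names, and token placeholders are assumptions for illustration, and you would add one source (and one token) per stream you forward.

```yaml
# One syslog source per Darktrace data stream, each on its own port,
# so each stream can use its own ingest token (and thus its own parser).
sources:
  darktrace_stream_a:
    type: syslog
    mode: tcp
    port: 6514        # assumed port for the first data stream
    sink: logscale_a
  darktrace_stream_b:
    type: syslog
    mode: tcp
    port: 6515        # assumed port for the second data stream
    sink: logscale_b
sinks:
  logscale_a:
    token: <ingest-token-for-stream-a>
    url: https://<your-logscale-host>
  logscale_b:
    token: <ingest-token-for-stream-b>
    url: https://<your-logscale-host>
```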

Verify Data is Arriving in LogScale

Once you have completed the above steps, the data should be arriving in your LogScale repository.

You can verify this by doing a simple search for #Vendor = "darktrace" | Product = "detect" to see the events.

Package Contents Explained

This package is only for parsing incoming data and normalizing it as part of that parsing. The parser normalizes the data to a subset of a schema based on OpenTelemetry standards, while still preserving the original data.

If you want to search using the original field names and values, you can access those in the fields whose names are prefixed with the word "Vendor". Fields which are not prefixed with "Vendor" are standard fields which are either based on the schema (e.g. source.ip) or on LogScale conventions (e.g. @rawstring).

The fields the parser currently maps the data to are chosen based on what seems most relevant, and the mapping will potentially be expanded in the future. The parser won't necessarily normalize every field that has the potential to be normalized.
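For example, you can search on a normalized field and still display the original vendor data alongside it. The field values and the Vendor.* field name below are hypothetical, for illustration only; the actual vendor field names depend on the original Darktrace event.

```
#Vendor = "darktrace"
| source.ip = "10.1.2.3"
| table([@timestamp, source.ip, rule.name])
```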

Event Categorisation

As part of the schema, events are categorized by different fields, including:

  • event.category

  • event.kind

  • event.type

event.category is an array, so needs to be searched like so:

array:contains("event.category[]", value="threat")

This will find events where some event.category[n] field contains the value "threat", regardless of what `n` is. Note that not all events will be categorized to this level of detail.
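A minimal sketch of combining the category search with the other categorization fields (the values shown here are examples, not a complete list of what the parser emits):

```
array:contains("event.category[]", value="threat")
| event.kind = "alert"
| groupBy(event.type)
```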

Normalized Fields

Here are some of the normalized fields which are being set by this parser:

  • event.* (e.g. event.type, event.kind, event.category, event.url, event.severity)

  • host.* (e.g. host.ip, host.hostname)

  • rule.* (e.g. rule.category, rule.name, rule.author)

  • related.* (e.g. related.ip, related.user)