Forcepoint Data Loss Prevention Data (DLP)(Preview)

Correlate Forcepoint DLP data with endpoint or SSO data to identify insider threats or identity theft more quickly. LogScale also offers longer retention of sensitive data access logs for compliance purposes (Forcepoint's native default is 60 days).

The parser included in this package supports two types of logs in CEF format: single events, and reports, which are sets of events grouped together because they may indicate a specific breach, e.g. downloading files from a server.

The parser normalizes data to a common schema based on the OpenTelemetry standard. This schema allows you to search the data without knowing its source-specific fields, relying on the common schema instead. It also makes it easier to combine the data with other data sources that conform to the same schema.
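For example, a query on a normalized field works the same way across every data source that conforms to the schema. A minimal sketch, assuming both your Forcepoint DLP events and your endpoint logs populate the source.ip and user.name fields (the IP address is a placeholder):

logscale
source.ip = "192.0.2.10"
| groupBy([user.name])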

Preview Status

Note that this package is considered a PREVIEW. This means we are seeking feedback on the package and may make breaking changes to the parser in the future. It also means the package may not include all of the documentation we usually publish for packages.

Given the preview status of this package, there are some limitations to what the parser is able to do. Check the package on the Marketplace to see the most recent list of limitations.

Installing the Package in LogScale

Find the repository where you want to send the Forcepoint Data Loss Prevention Data (DLP)(Preview) events, or create a new one; see Creating a Repository or View.

  1. Navigate to your repository in the LogScale interface, click Settings and then Packages on the left.

  2. Click Marketplace and install the LogScale package for Forcepoint Data Loss Prevention Data (DLP)(Preview) (i.e. forcepoint/dlp).

  3. When the package has finished installing, click Ingest tokens on the left (still under Settings).

  4. In the right panel, click + Add Token to create a new token. Give the token an appropriate name (e.g. the name of the log source it will collect logs from), and assign the parser dlp-cef to it.

    Before leaving this page, view the ingest token and copy it to your clipboard, or save it temporarily elsewhere.

    Now that you have a repository set up in LogScale along with an ingest token, you're ready to send logs to LogScale.

Configuring and Sending the Logs to LogScale

Follow the instructions provided here to save the logs as syslog, which can then be shipped using the LogScale Collector; see Sources & Examples.

Verify Data is Arriving in LogScale

Once you have completed the above steps, the Forcepoint Data Loss Prevention Data (DLP)(Preview) data should be arriving in your LogScale repository.

You can verify this by doing a simple search for the events:

logscale
#Vendor = "forcepoint" 
| Product="dlp"

Package Contents Explained

This package parses incoming data, normalizing it as part of that parsing. The parser normalizes the data to a subset of a schema based on OpenTelemetry standards, while still preserving the original data.

If you want to search using the original field names and values, you can access those in the fields whose names are prefixed with the word Vendor. Fields which are not prefixed with Vendor are standard fields which are either based on the schema (e.g. source.ip) or on LogScale conventions (e.g. @rawstring).
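For example, you can filter on an original vendor field and a normalized field in the same query. A hypothetical sketch, assuming the CEF action key act is preserved as Vendor.act:

logscale
// Original vendor field (assumes the CEF key act is kept as Vendor.act)
Vendor.act = *
// Normalized field set by the parser
| user.name = *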

The fields which the parser currently maps the data to are chosen based on what seems most relevant, and the mapping may be expanded in the future. The parser won't necessarily normalize every field that could be normalized.

Event Categorization

As part of the schema, events are categorized by fields:

  • event.category

event.category is an array, so it needs to be searched using the following syntax:

logscale
array:contains("event.category[]", value="network")

The query above will find events where some event.category[n] field contains the value network, regardless of what n is.

Note that not all events will be categorized to this level of detail.

Normalized Fields

Here are some of the normalized fields set by this parser:

  • event.* (e.g. event.id)

  • source.* and destination.* (e.g. source.ip, destination.domain)

  • user.* (e.g. user.name)

  • threat.* (e.g. threat.id)

  • agent.* (e.g. agent.type)

  • risk.* (e.g. risk.id)
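
Putting these together, here is a sketch of a search over the normalized fields, assuming events populate user.name as listed above:

logscale
#Vendor = "forcepoint"
| Product = "dlp"
| groupBy([user.name], function=count(as="events"))
| sort(events, order=desc)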