Zscaler ZIA

Monitor Zscaler™ for suspicious activity more efficiently by correlating Zscaler™ events with other sources in LogScale.

Quickly find early indicators of attack or insider threats by looking at proxy traffic summaries per user, file downloads directly from an IP address, and more.

Breaking Changes

This update includes parser changes, which means that data ingested after upgrade will not be backwards compatible with logs ingested with the previous version.

Updating to version 1.0.0 or newer will therefore cause issues with existing queries, for example in dashboards or alerts created prior to this version.

See CrowdStrike Parsing Standard (CPS) for more details on the new parser schema.

Installing the Package in LogScale

Find the repository where you want to send the Zscaler events, or create a new one (see Creating a Repository or View).

  1. Navigate to your repository in the LogScale interface, click Settings and then Packages on the left.

  2. Click Marketplace and install the LogScale package for Zscaler (i.e. zscaler/zia).

  3. When the package has finished installing, click Ingest tokens on the left (still under Settings; see Figure 99, “Ingest Token”).

  4. In the right panel, click + Add Token to create a new token. Give the token an appropriate name (e.g. the name of the Event Hub it will collect logs from), and assign the relevant parser that was installed with the package (for example, the parser for the feed type this token will receive).

    Ingest token

    Figure 99. Ingest Token

    Before leaving this page, view the ingest token, copy it to your clipboard, and save it somewhere temporarily; you will need it when configuring the NSS feed.

    Now that you have a repository set up in LogScale along with an ingest token, you are ready to send logs to LogScale.

Configurations and Sending the Logs to LogScale

To get logs from Zscaler ZIA into LogScale, you can use Zscaler Nanolog Streaming Service (NSS), which comes in two variants:

  • Cloud NSS, which allows you to send logs directly to LogScale

  • VM-based NSS, which allows you to collect logs on a VM, where they can be sent to LogScale via syslog

Cloud NSS

Configure the NSS feeds as follows:

  1. Set SIEM Type to "Other".

  2. Set the API URL to the URL of your LogScale cluster with the text /api/v1/ingest/hec/raw appended, so that it points to the ingest API.

  3. Add an HTTP header so that Key1 has the text Authorization and Value1 has the value Bearer TOKEN, where TOKEN is the ingest token for the given feed.

    If you are creating a Web feed, use the token that you assigned the zscalernss-web parser to in the previous step, and likewise for the other feed types.

  4. Select JSON as the Feed Output Type, and copy the given format for your feed type from the sections below.

  5. In the Feed Escape Character list, enter these characters: ",\ (that is: double quote, comma, backslash)
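The Cloud NSS steps above boil down to a URL and an authorization header. The following Python sketch makes both explicit; the cluster hostname and token are placeholders, not real values:

```python
# Sketch of the two pieces Cloud NSS needs: the HEC raw ingest URL
# (step 2) and the Authorization header (step 3). The base URL and
# token below are placeholders for illustration only.

def hec_raw_url(base_url: str) -> str:
    """Append the raw HEC ingest path to a LogScale cluster URL."""
    return base_url.rstrip("/") + "/api/v1/ingest/hec/raw"

def auth_header(ingest_token: str) -> dict:
    """The Key1/Value1 header pair from the NSS feed configuration."""
    return {"Authorization": "Bearer " + ingest_token}

url = hec_raw_url("https://your-cluster.example.com/")
headers = auth_header("YOUR-INGEST-TOKEN")
print(url)      # https://your-cluster.example.com/api/v1/ingest/hec/raw
print(headers)  # {'Authorization': 'Bearer YOUR-INGEST-TOKEN'}
```

Each feed uses its own ingest token, so the header value differs per feed while the URL stays the same.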

VM-Based NSS

For VM-based NSS, you need the following:

  1. NSS servers up and running.

  2. A Falcon LogScale Log Collector that is reachable from the NSS server, configured to receive syslog events over TCP and forward them to LogScale. You can find an example syslog configuration for the Log Collector in Sources & Examples in the library.

  3. In the feed configuration inside NSS, for SIEM Destination Type, select IP Address.

  4. For SIEM IP Address and SIEM TCP Port, insert the IP address and port where your log collector is listening for data.

  5. For the Feed Output Type, select Custom and copy the given format for your feed type from the sections below.

  6. In the Feed Escape Character list, enter these characters: ",\ (that is: double quote, comma, backslash)
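In this setup the NSS server simply opens a TCP connection to the collector's IP and port and writes newline-delimited events in the configured feed format. The following self-contained Python sketch illustrates that flow, with a local thread standing in for the Log Collector (the addresses and the sample event are placeholders):

```python
# Illustration of syslog-over-TCP delivery as VM-based NSS does it.
# A local thread acts as a stand-in "log collector" so the example
# runs without any real NSS server or collector.
import socket
import threading

received = []

def collector(server_sock: socket.socket) -> None:
    # Accept one connection and record what arrives.
    conn, _ = server_sock.accept()
    with conn:
        received.append(conn.recv(4096).decode())

# Stand-in collector listening on an ephemeral local port.
srv = socket.create_server(("127.0.0.1", 0))
port = srv.getsockname()[1]
t = threading.Thread(target=collector, args=(srv,))
t.start()

# What the NSS server does: connect to the SIEM IP/port and write
# one newline-terminated event in the configured feed format.
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b'{"datetime":"Mon Jan 1 00:00:00 2024","action":"Allowed"}\n')

t.join()
srv.close()
print(received[0])
```

In production, the destination is the SIEM IP Address and SIEM TCP Port you configured in steps 3 and 4, and the Log Collector forwards each received line to LogScale.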


You can add any fields you like to these formats, and they will be present and usable in LogScale, but they will only be mapped to the schema if you manually extend the parsers to handle them.


DNS configuration:


Firewall configuration:


Web configuration:


Tunnel events have multiple possible event types, which each have their own formats. Configure them as follows:

Tunnel Events :


Tunnel Samples :


IKE phase 1 :

{"datetime":"%s{datetime}","recordtype":"%s{tunnelactionname}","tunneltype":"IPSEC IKEV %d{ikeversion}","vpncredentialname":"%s{vpncredentialname}","elocationname":"%s{elocationname}","sourceip":"%s{sourceip}","destvip":"%s{destvip}","srcport":"%d{srcport}","destinationport":"%d{dstport}","lifetime":"%d{lifetime}","ikeversion":"%d{ikeversion}","spi_in":"%lu{spi_in}","spi_out":"%lu{spi_out}","algo":"%s{algo}","authentication":"%s{authentication}","authtype":"%s{authtype}","recordid":"%d{recordid}"}

IKE phase 2 :

{"datetime":"%s{datetime}","tunnelactionname":"%s{tunnelactionname}","tunneltype":"IPSEC IKEV %d{ikeversion}","vpncredentialname":"%s{vpncredentialname}","elocationname":"%s{elocationname}","sourceip":"%s{sourceip}","destvip":"%s{destvip}","srcport":"%d{srcport}","srcportstart":"%d{srcportstart}","destportstart":"%d{destportstart}","srcipstart":"%s{srcipstart}","srcipend":"%s{srcipend}","destipstart":"%s{destipstart}","destipend":"%s{destipend}","lifetime":"%d{lifetime}","ikeversion":"%d{ikeversion}","lifebytes":"%d{lifebytes}","spi":"%d{spi}","algo":"%s{algo}","authentication":"%s{authentication}","authtype":"%s{authtype}","protocol":"%s{protocol}","tunnelprotocol":"%s{tunnelprotocol}","policydirection":"%s{policydirection}","recordid":"%d{recordid}"}
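In these feed formats, each %s{name}, %d{name}, or %lu{name} token is replaced by the value of that Zscaler field when the event is emitted. A small Python sketch of that expansion, using invented sample values for illustration:

```python
# Sketch of how an NSS feed output format expands: each %s{name},
# %d{name}, or %lu{name} token is replaced with that field's value.
# The sample field values below are invented for illustration.
import re

def render_feed_format(template: str, fields: dict) -> str:
    """Substitute %s{...} / %d{...} / %lu{...} tokens with field values."""
    return re.sub(r"%(?:s|d|lu)\{(\w+)\}",
                  lambda m: str(fields[m.group(1)]), template)

template = '{"datetime":"%s{datetime}","ikeversion":"%d{ikeversion}","spi_in":"%lu{spi_in}"}'
print(render_feed_format(template, {
    "datetime": "Mon Jan  1 00:00:00 2024",
    "ikeversion": 2,
    "spi_in": 1234567890,
}))
# {"datetime":"Mon Jan  1 00:00:00 2024","ikeversion":"2","spi_in":"1234567890"}
```

This is also why the Feed Escape Character setting matters: double quotes, commas, and backslashes inside field values must be escaped so the emitted line stays valid JSON.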

Verify Data is Arriving in LogScale

Once you have completed the steps above, the ZIA logs should be arriving in your LogScale repository.

You can verify this by doing a simple search for #vendor = "zscaler" | event.module="zia" to see the events.

Package Contents Explained

This package is only for parsing incoming data and normalizing it as part of that parsing. The parser normalizes the data to a subset of the CrowdStrike Parsing Standard (CPS) schema, which is based on OpenTelemetry standards, while still preserving the original data.

If you want to search using the original field names and values, you can access those in the fields whose names are prefixed with the word "Vendor". Fields which are not prefixed with "Vendor" are standard fields which are either based on the schema (e.g. source.ip) or on LogScale conventions (e.g. @rawstring).

The fields which the parser currently maps the data to are chosen based on what seems most relevant, and may be expanded in the future. The parser won't necessarily normalize every field that could be normalized.

Event Categorisation

As part of the schema, events are categorized by the following fields:

  • event.kind

  • event.category

  • #event.outcome (available as a tag)

event.kind and #event.outcome can be searched as normal fields, but event.category and event.type are arrays, so they need to be searched like this:

array:contains("event.category[]", value="network")

This will find events where some "event.category[n]" field contains the value "network", regardless of what n is.

Note that not all events will be categorized to this level of detail.

Normalized Fields

Here are some of the normalized fields which are being set by this parser:

  • event.* (e.g. event.kind, event.module, event.type, event.dataset, event.category )

  • destination.ip

  • http.* (e.g. http.request.method, http.request.referrer)

  • network.* (e.g. network.protocol)

  • ecs* (e.g. ecs.version )

  • Cps* (e.g. Cps.version )

Example Queries

To see only data from one of the feeds, you can search with the query:

#type = "zscalernss-web"

This only returns the data that was parsed with the parser for web events (and similarly for the other feed types and their parsers).

To see where your traffic is headed, you can search for: