Juniper SRX Series Firewall
Centralize and monitor your Juniper device logs by ingesting the relevant SRX syslog data into LogScale.
This package provides a parser for Juniper SRX Firewall events in JSON format.
Breaking Changes
This update includes parser changes, which means that data ingested after the upgrade will not be backwards compatible with logs ingested with the previous version.
Updating to version 1.0.0 or newer will therefore cause issues with existing queries, for example in dashboards or alerts created prior to this version.
See CrowdStrike Parsing Standard (CPS) 1.0 for more details on the new parser schema.
Follow the CPS Migration to update your queries to use the fields and tags that are available in data parsed with version 1.0.0.
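For example, a saved query or dashboard widget that filtered on a pre-1.0.0 field would need to be rewritten against the CPS fields. The old field name below is hypothetical, so compare against the fields your existing queries actually use:

// Hypothetical pre-1.0.0 field name:
// src_ip = "192.0.2.10"
// CPS 1.0 normalized equivalent:
source.ip = "192.0.2.10"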
Configurations and Sending the Logs to LogScale
See Juniper Documentation for information on how to send Juniper logs to Falcon LogScale Collector.
Installing the Juniper SRX Firewall Package in LogScale
Find the repository where you want to send the logs, or create a new one.
Navigate to your repository in the LogScale interface, click Settings and then Packages on the left.
Click Marketplace and install the LogScale package for Juniper SRX (i.e. juniper/srx).
When the package has finished installing, click Ingest Tokens on the left (still under Settings).
In the right panel, click Add Token to create a new token. Give the token an appropriate name (e.g. the name of the server the token is ingesting logs for), and leave the parser unassigned. You can assign the parser in the LogScale Collector configuration as described in the documentation, see Sources & Examples.
Before leaving this page, view the ingest token and copy it to your clipboard to save it temporarily elsewhere.
Now that you have a repository set up in LogScale along with an ingest token, you're ready to send logs to LogScale.
Next, configure the Falcon LogScale Collector to ship the logs from your syslog server into LogScale. Follow Install LogScale Collector and Configure LogScale Collector. The LogScale Collector documentation also provides an example of how you can configure your syslog data source, see Sources & Examples.
Enroll the LogScale Collector using the following configuration:
sources:
  syslog_tcp_514:
    type: syslog
    mode: tcp
    port: 514
    supportsOctetCounting: false
    strict: false
    sink: logscale
sinks:
  logscale:
    type: humio
    token: <ingest token assigned to the `srx-syslog` parser>
    url: <LogScale URL, for example https://cloud.community.humio.com>
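If you left the parser unassigned on the ingest token, the parser can instead be assigned in the Collector configuration. A minimal sketch, assuming the source-level parser field described in the Collector documentation:

sources:
  syslog_tcp_514:
    type: syslog
    mode: tcp
    port: 514
    sink: logscale
    # Assigns the package parser to events from this source (assumes
    # source-level parser assignment as described in the Collector docs):
    parser: srx-syslog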
Verify Data is Arriving in LogScale
Once you have completed the above steps, the data should be arriving in your LogScale repository.
You can verify this by doing a simple search for the events:

#Vendor = "juniper" | #event.module = "srx"
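For a quick overview of what is being ingested, you can also break the events down by dataset and outcome; this sketch only assumes the normalized fields listed under Package Contents Explained below:

#Vendor = "juniper" | #event.module = "srx"
| groupBy([event.dataset, event.outcome])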
Package Contents Explained
This package parses incoming data, and normalizes the data as part of that parsing. The parser normalizes the data to the CrowdStrike Parsing Standard (CPS) 1.0 schema, which is based on OpenTelemetry standards, while still preserving the original data.
If you want to search using the original field names and values, you can access those in the fields whose names are prefixed with the word "Vendor". Fields which are not prefixed with "Vendor" are standard fields which are either based on the schema (e.g. source.ip) or on LogScale conventions (e.g. @rawstring).
The fields which the parser currently maps the data to are chosen based on what seems most relevant, and the mapping will potentially be expanded in the future. However, the parser won't necessarily normalize every field that has the potential to be normalized.
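For example, a search on a source address can use either the normalized field or the preserved original field; the vendor field name below is illustrative and depends on the original event format:

// Normalized field from the CPS schema:
source.ip = "192.0.2.10"
// Original field preserved under the Vendor prefix (illustrative name):
Vendor.source-address = "192.0.2.10"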
Event Categorisation
As part of the schema, events are categorized by different fields, including:
event.category
event.type
event.kind
event.outcome
event.category is an array, so it needs to be searched like so:
array:contains("event.category[]", value="info")
This will find events where some event.category[n] field contains the value "info", regardless of what `n` is. Note that not all events will be categorized to this level of detail.
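The categorization fields can also be combined with other filters, for example to narrow network events down to a particular outcome (the values shown are examples, not a complete list):

array:contains("event.category[]", value="network")
| event.outcome = "failure"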
Normalized Fields
Here are some of the normalized fields which are being set by this parser:
event.* (e.g. event.module, event.kind, event.risk, event.outcome, event.reason, event.dataset, event.type, event.category, event.action)
ecs.* (e.g. ecs.version)
destination.* (e.g. destination.port, destination.ip, destination.nat.ip, destination.packets, destination.nat.port, destination.bytes)
file.* (e.g. file.name, file.hash.sha)
hash.* (e.g. hash.sha)
log.* (e.g. log.syslog.hostname, log.syslog.msgid, log.procid, log.header, log.level, log.syslog.priority, log.structured, log.msgid, log.syslog.procid, log.syslog.structured, log.message, log.hostname, log.syslog.version, log.syslog.appname)
network.* (e.g. network.bytes, network.iana, network.protocol, network.transport, network.packets)
observer.* (e.g. observer.egress.interface.name, observer.type, observer.ingress.zone, observer.egress.zone, observer.product, observer.ingress.interface.name)
rule.* (e.g. rule.name)
server.* (e.g. server.type, server.ingress.zone, server.nat.ip, server.egress.interface.name, server.bytes, server.egress.zone, server.ingress.interface.name, server.port, server.product, server.packets, server.nat.port, server.ip)
source.* (e.g. source.port, source.packets, source.address, source.user.name, source.bytes, source.domain, source.nat.ip, source.nat.port, source.ip)
url.* (e.g. url.domain, url.path)
user.* (e.g. user.name)
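As a sketch of how the normalized fields combine in practice, the following query aggregates traffic volume per source address; the aggregation itself is an example, not part of the package:

#Vendor = "juniper" | #event.module = "srx"
| groupBy(source.ip, function=sum(network.bytes, as=total_bytes))
| sort(total_bytes, order=desc, limit=10)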