Ingest Listeners
Security Requirements and Controls
Manage Cluster permission
ManageCluster API permission
Important
Due to the nature of TCP/UDP raw sockets, LogScale is unable to respond to log forwarders should any errors occur during ingest. This could result in data loss if ingestion errors occur after data arrives at LogScale using these methods.
Ingest listeners are a great way of shipping data to LogScale through raw sockets, using either UDP or TCP. Example use cases include accepting rsyslogd forward format and similar plain-text event streams.
An ingest listener binds a UDP or TCP port on a network interface to a repository through a parser (see Parsing Data). All data sent to the port is parsed before it is inserted into the repository.
Important
Ingest listeners are not available on system repositories including humio, humio-audit and others.
View Ingest Listeners
Go to the relevant repository and open its settings. On the side menu, click Ingest listeners.
On the Ingest listeners page you can see all configured listeners.
Figure 99. Ingest Listeners
Creating Ingest Listeners
Creating a new ingest listener maps a port on a network interface, through a parser, to a repository.
Figure 100. Create Listener
Go to the relevant repository and open its settings. On the side menu, click Ingest listeners.
On the Ingest listeners page, click to create a new listener and enter the following details:
Name
A name, usually describing the purpose of the ingest listener.
Protocol
Transport protocol for the ingest listener. This can be TCP, gelf/TCP, UDP, gelf/UDP, or Netflow/UDP.
Parser
A parser (see Parsing Data) to send each event on the socket through, extracting fields from the line, usually at least a timestamp. Netflow/UDP does not need a parser, as the format has a rather complex syntax and a built-in handler. The gelf variants currently use only the tags aspect of the parser, as the gelf format already specifies a timestamp.
Port
Network port to accept data on. If you are not running your Docker images with --net=host, this port needs to be exposed through the --publish Docker argument.
Bind Interface
The IP of the interface that this ingest listener should listen on.
Charset
The charset used to decode the event stream. The value must be a supported charset in the JVM that LogScale is running on.
Click Save.
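Once a TCP listener is saved, a quick way to check that it accepts data is to push a test line at it from a shell. The following is a minimal sketch, assuming a hypothetical listener bound to port 7777 on a host named logscale.example.com; behaviour of nc varies slightly between netcat variants:

# Send one newline-terminated test event to the TCP listener
echo "test event from netcat" | nc logscale.example.com 7777

The line should then show up as an event in the repository after passing through the parser configured for the listener.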
Reducing Packet Loss from Bursts on UDP
To reduce packet loss in bursts of UDP traffic, increase the maximum allowed receive buffer size for UDP.
LogScale will try to increase the buffer up to 128 MB, but will accept whatever the system sets as the maximum.
# Get the current limit from the kernel (in bytes)
$ sysctl net.core.rmem_max
# Set to 16 MB. Decide on a value of roughly 0.5 to 2 seconds' worth of inbound UDP packets.
$ sudo sysctl net.core.rmem_max=16777216
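To see whether datagrams are actually being dropped because the receive buffer fills up, you can inspect the kernel's UDP counters. A sketch for Linux; the exact wording of the counters may vary between kernel and net-tools versions:

# "receive buffer errors" counts datagrams dropped due to a full receive buffer
$ netstat -su | grep -i 'receive buffer errors'
# The same counter (RcvbufErrors) is also available without net-tools:
$ grep 'Udp:' /proc/net/snmp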
Important
Increasing the maximum allowed receive buffer size for UDP must take place before LogScale is started, so you will typically want it applied when the system boots. On Debian (Ubuntu) you can achieve this by creating a file in /etc/sysctl.d/ with a name such as raise_rmem_max.conf and the following contents:
net.core.rmem_max=16777216
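To make the new setting take effect immediately, without waiting for a reboot, the file can also be loaded by hand. A sketch, assuming the file name used above:

# Apply the sysctl settings from the new file right away
$ sudo sysctl -p /etc/sysctl.d/raise_rmem_max.conf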
Adding an Ingest Listener Endpoint
You can ingest events using one of the many Integrations, but when none of them matches your requirements, you can supply a stream of events on TCP, separated by line feeds. This API allows you to create and configure a TCP listener for such events.
Deprecated: Ingest Listeners REST API
The ingest listener REST API is deprecated and replaced by a GraphQL API.
Use cases include accepting rsyslogd forward format and similar plain-text event streams.
GET /api/v1/listeners
POST /api/v1/listeners
GET /api/v1/listeners/$ID
DELETE /api/v1/listeners/$ID
If you use rsyslog for the transport of logs, this example serves as a starting point:
# Example input line on the wire: |<14>2017-08-07T10:57:04.270540-05:00 mgrpc kernel: [ 17.920992] Bluetooth: Core ver 2.22 |
Using a simple text editor, create a file named create-rsyslogd-rfc3339-parser.json and copy the following lines into it:
{
  "parser": "^<(?<pri>\\d+)>(?<datetimestring>\\S+) (?<host>\\S*) (?<syslogtag>\\S*): ?(?<message>.*)",
  "kind": "regex",
  "parseKeyValues": true,
  "dateTimeFormat": "yyyy-MM-dd'T'HH:mm:ss[.SSSSSS]XXX",
  "dateTimeFields": ["datetimestring"]
}
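As an illustration of what this regex does, the example input line above would be split into fields roughly as follows (the assignments below are derived from the pattern, not sample output from LogScale):

# pri            -> 14
# datetimestring -> 2017-08-07T10:57:04.270540-05:00  (parsed via dateTimeFormat)
# host           -> mgrpc
# syslogtag      -> kernel
# message        -> [ 17.920992] Bluetooth: Core ver 2.22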
Then execute the following from the command-line:
curl -XPOST \
-d @create-rsyslogd-rfc3339-parser.json \
-H "Authorization: Bearer $TOKEN" \
-H 'Content-Type: application/json' \
"$YOUR_LOGSCALE_URL/api/v1/repositories/$REPOSITORY_NAME/parsers/rsyslogd-rfc3339"
The following example sets up a listener using the rsyslogd forward format parser added above.
Using a simple text editor, create a file named create-rsyslogd-listener.json and copy the following lines into it:
{
  "listenerPort": 7777,
  "kind": "tcp",
  "dataspaceID": "$REPOSITORY_NAME",
  "parser": "rsyslogd-rfc3339",
  "bindInterface": "0.0.0.0",
  "name": "my rsyslog input",
  "vhost": 1
}
The bindInterface setting is optional. If set, it selects the local network interface to bind the listener to. The vhost setting is also optional. If set, only the cluster node with that index binds the port.
Execute the following from the command-line:
curl -XPOST \
-d @create-rsyslogd-listener.json \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer $TOKEN" \
"$YOUR_LOGSCALE_URL/api/v1/listeners"
curl -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/listeners"
curl -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/listeners/tcp7777"
Listeners also support UDP by setting kind to udp. For UDP, each datagram is ingested as a single log line (it is not split by newlines).
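With a listener for the rsyslogd forward format in place, a client can be pointed at it using rsyslog's forwarding actions. A minimal sketch for /etc/rsyslog.conf; the host name is a placeholder, the port matches the listener created above, and the RSYSLOG_ForwardFormat template produces the RFC 3339 timestamps the parser expects:

# Forward all logs over TCP (@@); use a single @ instead for a UDP listener
*.* @@logscale.example.com:7777;RSYSLOG_ForwardFormat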
It is possible to specify that fields in the incoming events should be turned into tags. This can be done by setting "tagFields": ["fielda", "fieldb"] when creating a listener. Only use tags like this if you really need them.
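Combining the two options above, a UDP variant of the earlier listener that tags events by the host field extracted by the parser could look like the following. This is a sketch only; any field listed in tagFields must be produced by the parser:

{
  "listenerPort": 7777,
  "kind": "udp",
  "dataspaceID": "$REPOSITORY_NAME",
  "parser": "rsyslogd-rfc3339",
  "name": "my rsyslog udp input",
  "tagFields": ["host"]
}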
To reduce packet loss in bursts of UDP traffic, increase the maximum allowed receive buffer size for UDP as described in Reducing Packet Loss from Bursts on UDP above.