Ingest Data from Azure Event Hubs

Available: Azure Ingest v1.189.0

Azure Ingest is available from v1.189.0 on self-hosted deployments and from v1.195.0 on cloud.

Falcon LogScale can ingest logs from Azure Event Hubs. Once ingested, the data can be managed in Falcon LogScale and leveraged in queries and alerts.

This section walks through the configuration required to ingest this data.

Falcon LogScale ingests from the Azure Event Hub and scales ingest based on the number of partitions configured in the event hub. There is typically some latency between when events occur and when they become available, both on the side producing the events (for example, Azure Monitor) and on the Falcon LogScale consumer side.
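Because ingest scales with the partition count, it can be useful to check how many partitions an event hub has. A hedged example using the Azure CLI, with `<rg>`, `<namespace>`, and `<hub>` as placeholders for your own resource names:

```shell
# Show the partition count that LogScale ingest will scale against.
az eventhubs eventhub show \
  --resource-group <rg> \
  --namespace-name <namespace> \
  --name <hub> \
  --query partitionCount
```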

Azure log data is a valuable data source that comes in several varieties depending on the services you want visibility into. Some of the most common sources include Microsoft Defender, Azure Monitor, and Microsoft Entra ID.

These logs can be directed to an Azure Event Hub, from which Falcon LogScale ingests them. Falcon LogScale continuously polls the Azure Event Hub in batches, then processes and ingests the data.

Important

Using event hubs is charged based on Azure Event Hub pricing, see https://azure.microsoft.com/en-us/pricing/details/event-hubs/.

Prerequisites for Ingesting Azure Data

To ingest data from an Azure Event Hub, configure the following.

Assign the following roles to your App Service Principal:

  • Event Hub Namespace

    • Role: Contributor (on the Event Hub Namespace)

    • Required Permissions:

      • List partitions

      • Get Event Hub properties

      • Consume events from Event Hub

  • Storage Account

    • Role: Storage Blob Data Contributor (on the Storage Account)

    • Required Permissions:

      • Read blobs

      • Write blobs

      • Update blobs
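The role assignments above can be made with the Azure CLI. A sketch with placeholder values (`<app-id>`, `<subscription>`, `<rg>`, `<namespace>`, `<account>` are stand-ins for your own identifiers):

```shell
# Contributor on the Event Hub Namespace for the app's service principal.
az role assignment create \
  --assignee <app-id> \
  --role "Contributor" \
  --scope "/subscriptions/<subscription>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>"

# Storage Blob Data Contributor on the Storage Account (checkpoints and locks).
az role assignment create \
  --assignee <app-id> \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"
```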

Note

Checkpoint Storage (Progress Tracking)

Falcon LogScale writes checkpoints to Blob Storage after processing events from each partition. On restart or failover, Falcon LogScale reads these checkpoints to resume processing from the last committed position.

This prevents data loss and duplicate processing.
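The resume-from-checkpoint behavior can be illustrated with a minimal Python sketch. This is not LogScale's implementation: the function names are hypothetical and a plain dict stands in for the Blob Storage checkpoint store.

```python
def process_events(events, checkpoints, partition):
    """Process only events newer than the last committed checkpoint."""
    start = checkpoints.get(partition, -1)  # -1: no checkpoint yet
    processed = []
    for offset, event in events:
        if offset <= start:
            continue  # already committed; skip to avoid duplicate processing
        processed.append(event)
        checkpoints[partition] = offset  # commit progress after each event
    return processed

checkpoints = {}  # stand-in for the Blob Storage checkpoint store
batch = [(0, "a"), (1, "b"), (2, "c")]
process_events(batch, checkpoints, "partition-0")  # first run: all events
# Simulated restart: replaying the same batch processes nothing new,
# because the checkpoint already records offset 2 as committed.
process_events(batch, checkpoints, "partition-0")
```

The key property is that a restart replays from the last committed offset rather than from the beginning, which is what prevents both data loss and duplicates.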

Distributed Locking (Cluster Coordination)

In a Falcon LogScale cluster deployment, Blob Storage acts as a distributed lock manager, which helps to ensure that each Event Hub partition is consumed by exactly one Falcon LogScale node at a time.

This prevents duplicate event processing across cluster nodes.
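The single-consumer-per-partition guarantee can be sketched as lease-style ownership. This is an illustration only, with hypothetical names and a dict standing in for Blob Storage leases; the real mechanism uses blob leases:

```python
LEASE_SECONDS = 30  # illustrative lease duration

def try_acquire(leases, partition, node, now):
    """Acquire the partition lease unless another node holds a live one."""
    owner, expires = leases.get(partition, (None, 0.0))
    if owner not in (None, node) and expires > now:
        return False  # another node owns a live lease: do not consume
    leases[partition] = (node, now + LEASE_SECONDS)  # take or renew the lease
    return True

leases = {}  # stand-in for Blob Storage leases
try_acquire(leases, "partition-0", "node-a", 0.0)    # node-a wins the lease
try_acquire(leases, "partition-0", "node-b", 1.0)    # rejected: lease is live
try_acquire(leases, "partition-0", "node-b", 100.0)  # lease expired: node-b wins
```

Because only the lease holder consumes a partition, and leases expire, ownership fails over cleanly when a node dies without two nodes ever processing the same partition at once.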

App Configurations

Register an app on the Microsoft identity platform as described in Microsoft's app registration documentation.

  • Create a new App Registration.

  • Get the Client ID and Tenant ID for your application.

  • Generate a client secret and save the value immediately; it is shown only once.

  • Note the Secret ID for reference.
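The steps above can also be done with the Azure CLI. A hedged sketch, where `<display-name>` and `<app-id>` are placeholders:

```shell
# Register the app and capture the Client ID.
az ad app create --display-name <display-name> --query appId

# The Tenant ID of the signed-in account.
az account show --query tenantId

# Generate a client secret; the password is shown only once, so save it now.
az ad app credential reset --id <app-id> --query password
```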