Cluster topology

LogScale clusters are made up of multiple logical components. Nodes can be run for a single, specific purpose, or they can carry several responsibilities at once. The overall logical cluster topology looks like this:

graph LR;
  L1[Log shipper]
  L2[Log shipper]
  L3[Log Collector]
  C1[Client]
  C2[Client]
  subgraph LoadBalancer
    direction LR
    LB[LB Node]
    LBB[LB Node]
    LBC[LB Node]
  end
  IN[[Ingest Nodes]]
  L2 --Ingest Data--> LoadBalancer
  L1 --> LoadBalancer
  L3 --> LoadBalancer
  LoadBalancer --> IN
  subgraph "LogScale"
    direction LR
    KQ[Kafka Queue]
    QCN[Query Coordination Nodes]
    UI[UI/API Nodes]
    DN[Digest Nodes]
    SN[Storage Nodes]
    GD[Global Database]
  end
  BS((Bucket Storage))
  C1 --Query Requests--> LoadBalancer
  C2 --UI/API Requests--> LoadBalancer
  LoadBalancer --> QCN
  LoadBalancer --> UI
  IN --> KQ
  KQ --> DN
  QCN <--> UI
  QCN --Internal Query Requests--> DN & SN
  DN --Merged Segments--> SN
  DN <--Segments--> BS
  SN <--Segments--> BS
  QCN <--Segments--> BS
  click IN "#training-arch-in" "Ingest Nodes"
  click DN "#training-arch-dn" "Digest Nodes"
  click SN "#training-arch-sn" "Storage Nodes"
  click QCN "#training-arch-qcn" "Query Coordination Nodes"
  click UI "#training-arch-ui" "UI Nodes"
  click GD "#training-arch-fd" "Global Database"
  click BS "#training-arch-bs" "Bucket Storage"

Figure 1. LogScale Cluster Topology


The individual components are:

  • Ingest

    Ingest processes receive requests containing events from log shippers (including the Falcon Log Collector and third-party log shippers) through several supported ingest APIs. Events are parsed using system- or user-defined parsers (see Parsing Data) and are then placed on a Kafka queue for further processing by the digest processes.
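
    As an illustration of what a log shipper or API client sends, the following Python sketch posts one event batch to the structured ingest API. The hostname, ingest token, tags, and field values are placeholders, and the exact payload accepted can vary by version, so treat this as a sketch rather than a reference.

        import requests

        LOGSCALE_URL = "https://logscale.example.com"  # placeholder cluster address
        INGEST_TOKEN = "YOUR-INGEST-TOKEN"             # placeholder repository ingest token

        # One batch: tags select the data source, attributes become event fields.
        payload = [
            {
                "tags": {"host": "web-01", "source": "app"},
                "events": [
                    {
                        "timestamp": "2024-01-01T12:00:00+00:00",
                        "attributes": {"message": "user login", "status": "ok"},
                    }
                ],
            }
        ]

        resp = requests.post(
            f"{LOGSCALE_URL}/api/v1/ingest/humio-structured",
            json=payload,
            headers={"Authorization": f"Bearer {INGEST_TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()  # success means the events reached the ingest queue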

  • Digest

    Digest processes read events from the Kafka ingest queue and build data files called segments. Queries for recent data in segment files are handled by the digest nodes; this includes data pushed to LogScale's live queries, which run continuously and aggregate data as it arrives. Once segment files are completed, they are placed in bucket storage, if configured, and future queries against them are serviced by the storage processes.
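
    A live query can be exercised through the query jobs API. This is a minimal sketch: the hostname, API token, and repository name are placeholders, and the queryjobs endpoint and response shape should be verified against the API documentation for your version.

        import time
        import requests

        LOGSCALE_URL = "https://logscale.example.com"  # placeholder
        API_TOKEN = "YOUR-API-TOKEN"                   # placeholder user API token
        REPO = "sandbox"                               # placeholder repository name
        headers = {"Authorization": f"Bearer {API_TOKEN}"}

        # Start a live query job; "isLive" keeps the query running so the digest
        # nodes continuously feed it newly ingested events.
        job = requests.post(
            f"{LOGSCALE_URL}/api/v1/repositories/{REPO}/queryjobs",
            json={"queryString": "count()", "start": "5m", "isLive": True},
            headers=headers,
            timeout=10,
        ).json()

        # Poll the job a few times; each poll returns the current aggregate state.
        for _ in range(3):
            state = requests.get(
                f"{LOGSCALE_URL}/api/v1/repositories/{REPO}/queryjobs/{job['id']}",
                headers=headers,
                timeout=10,
            ).json()
            print(state.get("events"))
            time.sleep(2)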

  • Storage

    Storage processes store segment files and process queries for the segment files they are assigned. If an older segment no longer resides on a storage node, the node will download it from bucket storage, if configured. In most cases, it is recommended that digest nodes also be configured as storage nodes.
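
    The read path for older data can be pictured with a purely illustrative sketch (this is not LogScale's code; load_segment and the bucket object are hypothetical): serve a segment from local disk when present, otherwise fetch it from bucket storage and cache it locally.

        from pathlib import Path

        def load_segment(segment_id: str, local_dir: Path, bucket) -> bytes:
            """Return segment bytes, preferring local disk over bucket storage.

            `bucket` is any object with a download(key) -> bytes method; it is
            an illustrative stand-in, not a LogScale API.
            """
            local_path = local_dir / f"{segment_id}.seg"
            if local_path.exists():
                return local_path.read_bytes()
            # Segment has aged out of local storage: fetch it from bucket
            # storage and cache it for subsequent queries.
            data = bucket.download(f"segments/{segment_id}.seg")
            local_path.write_bytes(data)
            return data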

  • Query Coordination

    Query coordination processes receive queries from users, dashboards, alerts, and scheduled searches, and create a query plan that sends internal queries to the digest and storage processes that own the segment files required for the query. These nodes do not need to be reachable via the load balancer; they can be reached via the UI/API nodes.
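
    Conceptually the coordinator performs a scatter-gather: it maps each required segment to a node that holds it, issues the internal queries in parallel, and merges the partial results. The following Python sketch is illustrative only (coordinate_query and run_on_node are hypothetical, and the merge assumes a simple count aggregation):

        from concurrent.futures import ThreadPoolExecutor

        def coordinate_query(query, segment_owners, run_on_node):
            """Fan a query out to the nodes owning each segment, merge the results.

            segment_owners maps segment id -> node address; run_on_node is a
            callable (node, query, segments) -> dict of partial counts.
            """
            # Group segments by owning node so each node is queried only once.
            by_node = {}
            for segment, node in segment_owners.items():
                by_node.setdefault(node, []).append(segment)

            # Scatter: run the internal query on every involved node in parallel.
            with ThreadPoolExecutor() as pool:
                partials = pool.map(
                    lambda item: run_on_node(item[0], query, item[1]),
                    by_node.items(),
                )

            # Gather: merge the partial aggregates into the final result.
            merged = {}
            for partial in partials:
                for key, count in partial.items():
                    merged[key] = merged.get(key, 0) + count
            return merged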

  • UI and API

    UI and API processes handle requests from clients using a browser and from clients making API requests against the LogScale cluster.
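
    As a small example, a client can check cluster health over plain HTTP; the hostname below is a placeholder, and the status endpoint shown is the commonly used REST status check, so confirm it against your version's API documentation.

        import requests

        LOGSCALE_URL = "https://logscale.example.com"  # placeholder

        # Status check; GraphQL and the rest of the API are served by the
        # same UI/API nodes behind the load balancer.
        resp = requests.get(f"{LOGSCALE_URL}/api/v1/status", timeout=10)
        print(resp.json())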

  • Kafka and ZooKeeper

    Kafka is used by LogScale as an internal cluster communication mechanism and as a queue for ingested events. Depending on the Kafka version in use, ZooKeeper may also be required. More recent versions of Kafka remain backward compatible with ZooKeeper but do not require it; Kafka administrators should review and follow the Kafka project's guidance for migrating away from ZooKeeper.
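
    Because the ingest queue is an ordinary Kafka topic, standard Kafka tooling can inspect it. Below is a minimal sketch using the kafka-python package; the broker address is a placeholder, and the topic names LogScale creates (for example, an ingest topic conventionally named humio-ingest) depend on the configured prefix.

        from kafka import KafkaConsumer

        # Connect to the Kafka cluster that LogScale is configured against.
        consumer = KafkaConsumer(bootstrap_servers="kafka.example.com:9092")

        # List all topics; LogScale's queues, such as the ingest queue,
        # appear here alongside any other topics on the cluster.
        for topic in sorted(consumer.topics()):
            print(topic)
        consumer.close()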

  • Bucket Storage

    Bucket storage relies on a compatible object storage system such as Google Cloud Storage, MinIO, S3, or another system exposing an S3-compatible API (support may be limited). When bucket storage is in use, segment files completed by the digest processes are uploaded to object storage. This feature provides redundancy for the segment files in the event of server failure, and it allows querying of segment files that are no longer held on storage nodes due to age or storage capacity.

    The bucket storage functionality assumes it can read objects back, so it is not compatible with write-only object storage systems. Every segment file uploaded to bucket storage is encrypted and decrypted on the LogScale nodes, even if the bucket storage system has a built-in encryption feature. More details can be found in Bucket Storage.
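
    Objects in the bucket can be listed with standard S3 tooling, though segment contents are encrypted by LogScale before upload and are therefore only readable through LogScale itself. A minimal boto3 sketch with a placeholder bucket name follows (the key layout inside the bucket is version-dependent):

        import boto3

        s3 = boto3.client("s3")  # credentials come from the usual AWS config chain

        # List a few objects; contents are encrypted client-side by LogScale,
        # so this shows only that uploads are present, not what they contain.
        resp = s3.list_objects_v2(Bucket="logscale-segments-example", MaxKeys=10)
        for obj in resp.get("Contents", []):
            print(obj["Key"], obj["Size"])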