Additional Considerations
Prometheus Endpoint
It is possible to configure LogScale to expose an endpoint that serves LogScale metrics in a format supported by Prometheus, a popular metrics solution in the Kubernetes ecosystem. The most common method is to make sure pods expose the metrics on a defined port, and then configure Prometheus to automatically discover pods that have explicitly marked a certain port to be scraped. Using the basic cluster example, this is how we would achieve that:
Add PROMETHEUS_METRICS_PORT to the "environmentVariables" list in the HumioCluster resource:

```yaml
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
...
spec:
  ...
  environmentVariables:
    - name: PROMETHEUS_METRICS_PORT
      value: "8401"
  ...
```
Add pod annotations to the HumioCluster pods, which Prometheus can discover and use to automatically start scraping the metrics endpoint on each pod:
```yaml
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
...
spec:
  ...
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8401"
  ...
```
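On the Prometheus side, annotation-based pod discovery must be in place for these annotations to have any effect. A minimal sketch of such a scrape configuration, assuming a plain Prometheus deployment (the job name is arbitrary; Prometheus Operator installations typically use PodMonitor resources instead of this):

```yaml
scrape_configs:
  - job_name: logscale-pods   # arbitrary job name
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Only keep pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Rewrite the scrape address to use the port from prometheus.io/port
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```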
Service Mesh
If LogScale cluster pods are added to a service mesh, there are a couple of items worth highlighting.
If the service mesh already provides mutual TLS, it is recommended to disable TLS connectivity on the LogScale side and rely on the service mesh to handle it. Letting the service mesh handle TLS also means its built-in observability features work as intended.
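As a sketch of what disabling LogScale-managed TLS could look like, the HumioCluster resource has a tls section (verify against the CRD for the operator version in use):

```yaml
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
...
spec:
  ...
  tls:
    # Disable TLS termination in LogScale itself; the service mesh
    # sidecars provide mutual TLS between pods instead.
    enabled: false
```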
If the service mesh relies on injecting a proxy as an additional container/sidecar into the pods, it is important to ensure network connectivity keeps working during the entire shutdown sequence of LogScale cluster pods. If the service mesh proxy starts shutting down before LogScale is done shutting down, it may impact LogScale's ability to handle data safely. This includes connectivity to Kafka, bucket storage, and any other important components.
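How to delay the proxy shutdown depends on the mesh. As an Istio-specific sketch, the sidecar's drain window can be extended through a pod annotation; the duration below is illustrative and must be tuned to fit within the pod's termination grace period:

```yaml
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
...
spec:
  ...
  podAnnotations:
    # Keep the Istio sidecar draining long enough for LogScale to finish
    # flushing data to Kafka and bucket storage before the proxy exits.
    proxy.istio.io/config: |
      terminationDrainDuration: 300s
```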
Logging with the humio Helm Chart
As shown above, the humio-operator project takes care of LogScale cluster management, but it does not solve the task of shipping logs from Kubernetes containers to a LogScale cluster. To solve that, there is a separate Helm chart located at https://github.com/humio/humio-helm-charts/ which installs a log shipper and sends container logs to the specified LogScale cluster.
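As a sketch of what configuring that chart could look like, assuming its fluent-bit sub-chart and the parameter names below (these are assumptions; verify them against the chart's own values.yaml before use):

```yaml
# values.yaml for humio/humio-helm-charts (parameter names are assumptions;
# check the chart's values.yaml for the authoritative ones)
humio-fluentbit:
  enabled: true
  # Hostname of the LogScale cluster that should receive container logs
  humioHostname: basic-cluster-1.logscale.local
  # Ingest token for the target repository (placeholder)
  token: <ingest-token>
```

The values file would then be applied with an ordinary helm install -f values.yaml after adding the chart repository.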
Horizontal Pod Autoscaler
The built-in Kubernetes resource type HorizontalPodAutoscaler is not supported.
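Cluster size is instead set declaratively and reconciled by the operator. A minimal sketch, assuming the nodeCount field used in the operator's basic cluster examples:

```yaml
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
...
spec:
  # Scaling is done by editing nodeCount and re-applying the resource;
  # the operator adjusts the number of LogScale pods to match.
  nodeCount: 3
```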
Traefik Ingress Controller Example
An example of a more complex ingress setup using the Traefik ingress controller:
```yaml
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: basic-cluster-1-externally-trusted-certificate
  namespace: example-clusters
spec:
  commonName: basic-cluster-1.logscale.local
  secretName: basic-cluster-1-externally-trusted-certificate
  dnsNames:
    - basic-cluster-1.logscale.local
  issuerRef:
    name: letsencrypt-traefik-prod
    kind: ClusterIssuer
---
apiVersion: traefik.containo.us/v1alpha1
kind: ServersTransport
metadata:
  name: basic-cluster-1-transportconfig
  namespace: example-clusters
spec:
  disableHTTP2: true
  insecureSkipVerify: false
  rootCAsSecrets:
    - basic-cluster-1
  serverName: basic-cluster-1.example-clusters
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: basic-cluster-1
  namespace: example-clusters
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`basic-cluster-1.logscale.local`)
      kind: Rule
      services:
        - kind: Service
          name: basic-cluster-1
          port: 8080
          scheme: https
          serversTransport: basic-cluster-1-transportconfig
  tls:
    secretName: basic-cluster-1-externally-trusted-certificate
```