Kafka Configuration Migration Guide

LogScale 1.173.0 deprecated the EXTRA_KAFKA_CONFIGS_FILE environment variable in favor of individual Kafka environment variables. Starting with LogScale 1.225.0, LogScale will not start if EXTRA_KAFKA_CONFIGS_FILE is present.

The Humio Operator handles this transition by detecting deprecated configurations and providing clear migration guidance to prevent startup failures.

Automatic Operator Behavior

The operator automatically manages the deprecation based on your LogScale version:

LogScale < 1.173.0

Uses EXTRA_KAFKA_CONFIGS_FILE (existing behavior) with no deprecation warnings.

LogScale 1.173.0 - 1.224.x

Uses EXTRA_KAFKA_CONFIGS_FILE with deprecation warning. Logs: "EXTRA_KAFKA_CONFIGS_FILE is deprecated, consider migrating..."

LogScale 1.225.0+ with Operator 0.34.0

Automatically skips EXTRA_KAFKA_CONFIGS_FILE to prevent startup failure. Logs: "Skipping EXTRA_KAFKA_CONFIGS_FILE for LogScale 1.225.0+ to prevent startup failure"

LogScale 1.225.0+ with Operator 0.34.1+

Places HumioCluster in ConfigError state when deprecated extraKafkaConfigs are detected, providing clear migration guidance instead of allowing potential startup failures.
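
You can confirm this from the resource itself: the operator reports the cluster state in the HumioCluster status. A minimal check, assuming a cluster named my-humio-cluster in the current namespace:

shell
# Prints the cluster state reported by the operator, e.g. Running or ConfigError
kubectl get humiocluster my-humio-cluster -o jsonpath='{.status.state}'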

Migration Steps

LogScale 1.173.0+ provides individual Kafka environment variables with these prefixes:

  • KAFKA_ADMIN_* - Admin client configuration

  • KAFKA_CHATTER_CONSUMER_* - Chatter consumer configuration

  • KAFKA_CHATTER_PRODUCER_* - Chatter producer configuration

  • KAFKA_GLOBAL_CONSUMER_* - Global consumer configuration

  • KAFKA_GLOBAL_PRODUCER_* - Global producer configuration

  • KAFKA_INGEST_QUEUE_CONSUMER_* - Ingest queue consumer configuration

  • KAFKA_INGEST_QUEUE_PRODUCER_* - Ingest queue producer configuration

  • KAFKA_COMMON_* - Configuration applied to all clients (client-specific settings take precedence)

Convert Kafka property names using these rules:

  1. Uppercase the property name

  2. Replace dots (.) with underscores (_)

  3. Add the appropriate prefix

Examples:

bootstrap.servers=kafka:9092         → KAFKA_COMMON_BOOTSTRAP_SERVERS=kafka:9092
request.timeout.ms=30000             → KAFKA_COMMON_REQUEST_TIMEOUT_MS=30000
batch.size=16384                     → KAFKA_GLOBAL_PRODUCER_BATCH_SIZE=16384
auto.offset.reset=earliest           → KAFKA_GLOBAL_CONSUMER_AUTO_OFFSET_RESET=earliest
compression.type=gzip                → KAFKA_GLOBAL_PRODUCER_COMPRESSION_TYPE=gzip
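
If you have many properties to convert, a small shell helper can apply these rules mechanically. This is a hypothetical convenience function, not part of LogScale or the operator:

shell
# Hypothetical helper: replace dots with underscores, uppercase the property,
# then add the chosen prefix. Not shipped with LogScale or the operator.
to_kafka_env() {
  prefix="$1"   # e.g. KAFKA_COMMON
  prop="$2"     # e.g. request.timeout.ms
  echo "${prefix}_$(printf '%s' "$prop" | tr '.' '_' | tr '[:lower:]' '[:upper:]')"
}

to_kafka_env KAFKA_COMMON request.timeout.ms
# KAFKA_COMMON_REQUEST_TIMEOUT_MS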

HumioCluster Configuration Update

Before (Deprecated):

yaml
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
  name: my-humio-cluster
spec:
  # DEPRECATED: Will cause HumioCluster to go into ConfigError state in LogScale 1.225.0+
  extraKafkaConfigs: |
    bootstrap.servers=kafka:9092
    request.timeout.ms=30000
    compression.type=gzip

After (Recommended):

yaml
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
  name: my-humio-cluster
spec:
  # Use individual environment variables
  commonEnvironmentVariables:
    - name: KAFKA_COMMON_BOOTSTRAP_SERVERS
      value: kafka:9092
    - name: KAFKA_COMMON_REQUEST_TIMEOUT_MS
      value: "30000"
    - name: KAFKA_GLOBAL_PRODUCER_COMPRESSION_TYPE
      value: gzip

Configuration Options

Using spec.commonEnvironmentVariables (Recommended) - applies to all node pools in the cluster:

yaml
spec:
  commonEnvironmentVariables:
    - name: KAFKA_COMMON_BOOTSTRAP_SERVERS
      value: kafka:9092

Using spec.environmentVariables - applies to the main node pool only:

yaml
spec:
  environmentVariables:
    - name: KAFKA_COMMON_BOOTSTRAP_SERVERS
      value: kafka:9092

Using spec.nodePools[].environmentVariables - applies to specific node pools:

yaml
spec:
  nodePools:
    - name: ingest-nodes
      environmentVariables:
        - name: KAFKA_INGEST_QUEUE_PRODUCER_COMPRESSION_TYPE
          value: gzip
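
These options can be combined in a single HumioCluster spec. A sketch reusing the values above, which sets the connection cluster-wide and tunes only the ingest-nodes pool (adjust the pool name to your own layout):

yaml
spec:
  # Cluster-wide connection settings
  commonEnvironmentVariables:
    - name: KAFKA_COMMON_BOOTSTRAP_SERVERS
      value: kafka:9092
  nodePools:
    - name: ingest-nodes
      environmentVariables:
        - name: KAFKA_INGEST_QUEUE_PRODUCER_COMPRESSION_TYPE
          value: gzip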

Common Migration Examples

Basic Kafka Connection

yaml
# OLD
extraKafkaConfigs: |
  bootstrap.servers=kafka-1:9092,kafka-2:9092,kafka-3:9092
  request.timeout.ms=30000

# NEW
commonEnvironmentVariables:
  - name: KAFKA_COMMON_BOOTSTRAP_SERVERS
    value: kafka-1:9092,kafka-2:9092,kafka-3:9092
  - name: KAFKA_COMMON_REQUEST_TIMEOUT_MS
    value: "30000"

SSL/TLS Configuration

yaml
# OLD
extraKafkaConfigs: |
  security.protocol=SSL
  ssl.truststore.location=/etc/ssl/certs/truststore.jks
  ssl.truststore.password=changeit
  ssl.keystore.location=/etc/ssl/certs/keystore.jks
  ssl.keystore.password=changeit
  ssl.key.password=changeit

# NEW
commonEnvironmentVariables:
  - name: KAFKA_COMMON_SECURITY_PROTOCOL
    value: SSL
  - name: KAFKA_COMMON_SSL_TRUSTSTORE_LOCATION
    value: /etc/ssl/certs/truststore.jks
  - name: KAFKA_COMMON_SSL_TRUSTSTORE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: kafka-ssl-secrets
        key: truststore-password
  - name: KAFKA_COMMON_SSL_KEYSTORE_LOCATION
    value: /etc/ssl/certs/keystore.jks
  - name: KAFKA_COMMON_SSL_KEYSTORE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: kafka-ssl-secrets
        key: keystore-password
  - name: KAFKA_COMMON_SSL_KEY_PASSWORD
    valueFrom:
      secretKeyRef:
        name: kafka-ssl-secrets
        key: key-password
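
The example above references a Secret named kafka-ssl-secrets. One way to create it, assuming placeholder passwords that you replace with your own and a namespace you substitute for <logscale-namespace>:

shell
# Key names must match the secretKeyRef entries in the HumioCluster spec above
kubectl create secret generic kafka-ssl-secrets \
  --namespace <logscale-namespace> \
  --from-literal=truststore-password='changeit' \
  --from-literal=keystore-password='changeit' \
  --from-literal=key-password='changeit'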

Performance Tuning with Client-Specific Settings

yaml
# OLD
extraKafkaConfigs: |
  producer.batch.size=65536
  producer.linger.ms=10
  producer.compression.type=lz4
  consumer.fetch.min.bytes=1024
  consumer.fetch.max.wait.ms=500

# NEW - Using specific client prefixes
commonEnvironmentVariables:
  # Global producer settings
  - name: KAFKA_GLOBAL_PRODUCER_BATCH_SIZE
    value: "65536"
  - name: KAFKA_GLOBAL_PRODUCER_LINGER_MS
    value: "10"
  - name: KAFKA_GLOBAL_PRODUCER_COMPRESSION_TYPE
    value: lz4

  # Ingest queue producer (more specific)
  - name: KAFKA_INGEST_QUEUE_PRODUCER_COMPRESSION_TYPE
    value: gzip  # Overrides global setting for ingest queue

  # Global consumer settings
  - name: KAFKA_GLOBAL_CONSUMER_FETCH_MIN_BYTES
    value: "1024"
  - name: KAFKA_GLOBAL_CONSUMER_FETCH_MAX_WAIT_MS
    value: "500"

Validation

Check Operator Logs:

shell
kubectl logs -l app.kubernetes.io/name=humio-operator -n humio-operator-system

Look for messages like:

  • "EXTRA_KAFKA_CONFIGS_FILE is deprecated, consider migrating..."

  • "Skipping EXTRA_KAFKA_CONFIGS_FILE for LogScale 1.225.0+ to prevent startup failure" (Operator 0.34.0)

  • "HumioCluster placed in ConfigError state due to deprecated extraKafkaConfigs" (Operator 0.34.1+)

Verify Environment Variables:

shell
kubectl exec -it <pod-name> -- env | grep KAFKA_

Test Your Configuration: After migrating, verify that:

  • LogScale starts successfully

  • Kafka connectivity works as expected

  • Performance characteristics are maintained

  • Security settings are properly applied
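
A minimal check sequence for these points, assuming a cluster named my-humio-cluster and the labels the operator applies to LogScale pods (verify the label selector against your deployment):

shell
# Pod readiness for the cluster (label selector is an assumption; adjust as needed)
kubectl get pods -l app.kubernetes.io/instance=my-humio-cluster

# Scan recent LogScale logs for Kafka connection errors (pod name is a placeholder)
kubectl logs <pod-name> --since=10m | grep -i kafka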

Troubleshooting

LogScale Won't Start After Upgrade to 1.225.0+

With Operator 0.34.1+, the operator prevents this by placing the HumioCluster in ConfigError state when deprecated extraKafkaConfigs are detected. Check the HumioCluster status and operator logs for migration guidance.
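
To see the reported state and any status message, assuming a cluster named my-humio-cluster, and to pull the related operator log lines:

shell
# Shows the HumioCluster status, including its state and any message set by the operator
kubectl describe humiocluster my-humio-cluster

# Filter operator logs for the Kafka-related deprecation messages listed above
kubectl logs -l app.kubernetes.io/name=humio-operator -n humio-operator-system | grep -i kafka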

Missing Kafka Configuration

Verify environment variable names follow the transformation rules (uppercase, dots to underscores, proper prefix).

Configuration Precedence Issues

Remember that client-specific settings (e.g., KAFKA_INGEST_QUEUE_PRODUCER_*) take precedence over common settings (KAFKA_COMMON_*).

SSL/TLS Connection Issues

Ensure certificate paths are correct and passwords are properly referenced from secrets.

Understanding Client Types

When choosing the right prefix, understand what each client type handles:

  • KAFKA_COMMON_: Applied to all Kafka clients

  • KAFKA_ADMIN_: Administrative operations (topic creation, etc.)

  • KAFKA_CHATTER_CONSUMER_/KAFKA_CHATTER_PRODUCER_: Internal cluster communication

  • KAFKA_GLOBAL_CONSUMER_/KAFKA_GLOBAL_PRODUCER_: General data processing

  • KAFKA_INGEST_QUEUE_CONSUMER_/KAFKA_INGEST_QUEUE_PRODUCER_: Data ingestion pipelines

For most configurations, start with KAFKA_COMMON_ and use more specific prefixes only when you need different settings for different client types.