Troubleshooting
The following sections describe some common issues and how to resolve them. Before troubleshooting, first refer to Best Practice.
ConfigError States
Observing ConfigError states in Humio Operator typically indicates configuration-related issues after an upgrade. Some common causes of ConfigError include:
Version Compatibility Issues: a mismatch between the CRD version and the Humio Operator version, or configuration parameters that are incompatible with the upgraded version.
Resource Specification Problems: invalid or missing required fields, deprecated fields still in use, or incorrectly formatted values in the resource spec.
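One way to surface resource specification problems before they reach the Operator is a server-side dry run against the installed CRD schema. This is a general sketch; the manifest file name is a placeholder, and kubectl explain only works if the CRD is already installed:
# Validate the manifest against the installed CRD schema without persisting it
kubectl apply --dry-run=server -f humiocluster.yaml
# Inspect the fields the installed HumioCluster CRD accepts
kubectl explain humiocluster.spec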
To troubleshoot these errors, use the following steps:
Check the specific error message:
kubectl describe humiocluster <cluster-name>  # Look for Events and Status.Message fields
Verify CRD versions:
kubectl get crd humioclusters.core.humio.com -o yaml | grep versions
Check the Humio Operator logs:
kubectl logs -n <namespace> -l app=humio-operator
Some common solutions to these problems are:
For CRD version mismatches, update the CRDs to match the Operator version and verify that all Humio CRDs are at the correct version.
For spec issues, compare your HumioCluster spec with the documentation for the current version, remove deprecated fields, and add any newly required fields.
For general configuration issues, verify that all referenced secrets exist, check storage class availability, and validate network policies, as in the sketch below.
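A minimal set of checks along these lines might look like the following; the secret name and namespace are placeholders for whatever your HumioCluster spec actually references:
# Confirm referenced secrets exist
kubectl get secret <referenced-secret-name> -n <namespace>
# Confirm the storage class used by the cluster is available
kubectl get storageclass
# Review network policies in the cluster namespace
kubectl get networkpolicy -n <namespace>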
Version mismatches
If you get version mismatch errors when upgrading the Humio Operator, start by checking the current versions:
# Check CRD versions
kubectl get crd humioclusters.core.humio.com -o yaml | grep -A 5 "versions:"
# Check Operator version
kubectl get deployment humio-operator -o jsonpath='{.spec.template.spec.containers[0].image}'
To resolve errors:
Back up the existing state with the following command:
# Backup current CRDs
kubectl get crds -o yaml | grep -A1 "name: humio" > humio-crds-backup.yaml
Update CRDs to match Humio Operator version:
# Get correct CRDs for your operator version
kubectl apply -f https://raw.githubusercontent.com/humio/humio-operator/<version>/config/crd/bases/core.humio.com_humioclusters.yaml
kubectl apply -f https://raw.githubusercontent.com/humio/humio-operator/<version>/config/crd/bases/core.humio.com_humioexternalclusters.yaml
kubectl apply -f https://raw.githubusercontent.com/humio/humio-operator/<version>/config/crd/bases/core.humio.com_humioalerts.yaml
...
Verify and restart:
# Verify CRD versions
kubectl get crds | grep humio
# Restart operator pod
kubectl delete pod -l app=humio-operator -n <namespace>
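Deleting the pod causes the Deployment to recreate it with the new configuration. A quick way to confirm the Operator comes back up cleanly is to watch the rollout; this sketch assumes the deployment is named humio-operator, as in the commands above:
# Wait for the operator deployment to become available again
kubectl rollout status deployment/humio-operator -n <namespace>
kubectl get pods -l app=humio-operator -n <namespace>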
Timestamp Mismatch
If you see mixed CRD creation timestamps when upgrading the Humio Operator, first check for timestamp inconsistencies:
# View CRD timestamps
kubectl get crds -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp | grep humio
This produces output that can help you determine problems with timestamps:
NAME                                   CREATED
humioclusters.core.humio.com           2025-09-15T10:30:00Z
humioexternalclusters.core.humio.com   2025-09-15T10:30:00Z
humioalerts.core.humio.com             2025-10-01T14:45:00Z
All CRDs should typically have similar timestamps after a clean installation.
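As a rough cross-check, you can also compare these timestamps against the creation timestamp of the Operator deployment itself; a CRD that is noticeably newer or older than the rest (such as humioalerts.core.humio.com in the example above) is a candidate for cleanup. This is a sketch assuming the deployment name used elsewhere in this section:
# Creation timestamp of the operator deployment
kubectl get deployment humio-operator -n <namespace> -o jsonpath='{.metadata.creationTimestamp}'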
To resolve any issues:
Back up the existing state with the following commands:
# Backup CRDs and their configurations
kubectl get crds -o yaml | grep -A1 "name: humio" > humio-crds-backup.yaml
kubectl get humioclusters -A -o yaml > humioclusters-backup.yaml
Clean up inconsistent CRDs. Note that deleting a CRD also removes any existing custom resources of that kind, which is why the backup in the previous step matters. For example:
# Remove CRDs with mismatched timestamps
kubectl delete crd humioclusters.core.humio.com
kubectl delete crd humioexternalclusters.core.humio.com
kubectl delete crd humioalerts.core.humio.com
...
Apply fresh CRDs:
# Apply CRDs from correct version
kubectl apply -f <operator-version>/crds/
Verify consistency:
# Check new timestamps
kubectl get crds -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp | grep humio
All Humio CRD creation timestamps should now be consistent.
Helm ownership conflicts
If you get Helm ownership conflicts when upgrading, first check the current ownership status:
# View annotations on resources
kubectl get humioclusters -o yaml | grep "meta.helm.sh/release-name"
# Check Helm release status
helm list | grep humio-operator
There are three ways to resolve ownership conflicts:
Take a "clean slate" approach by removing and reinstalling:
# Backup existing resources
kubectl get humioclusters -A -o yaml > humioclusters-backup.yaml
# Remove Helm release
helm uninstall humio-operator
# Reinstall with correct configuration
helm install humio-operator humio-operator/humio-operator
Take over the resource management:
# Remove Helm ownership annotations
kubectl annotate humioclusters --all meta.helm.sh/release-name- meta.helm.sh/release-namespace-
Let Helm take control:
# Use --force flag (careful!)
helm upgrade humio-operator humio-operator/humio-operator --force
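If instead you want Helm to adopt resources it does not currently own rather than forcing the upgrade, a common pattern is to set the ownership metadata Helm checks before upgrading. This is a sketch assuming the release is named humio-operator, with the namespace as a placeholder:
# Mark existing resources as owned by the humio-operator release
kubectl annotate humioclusters --all meta.helm.sh/release-name=humio-operator meta.helm.sh/release-namespace=<namespace> --overwrite
kubectl label humioclusters --all app.kubernetes.io/managed-by=Helm --overwrite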
Recovering from failed upgrades
Here are some tips on recovering from a failed Humio Operator upgrade. First, check the current status:
# Check operator status
kubectl get pods -l app=humio-operator
kubectl describe deploy humio-operator
# Check CRDs
kubectl get crds | grep humio
# Check Helm release
helm status humio-operator
To recover from issues:
Roll back using Helm:
# List previous versions
helm history humio-operator
# Rollback to previous version
helm rollback humio-operator <revision-number>
Use manual recovery:
# Backup current state
kubectl get humioclusters -A -o yaml > humioclusters-backup.yaml
kubectl get crds -o yaml | grep -A1 "name: humio" > humio-crds-backup.yaml
# Remove problematic resources
kubectl delete deploy humio-operator
kubectl delete crds humioclusters.core.humio.com humioexternalclusters.core.humio.com
# Reinstall correct version
helm install humio-operator humio-operator/humio-operator --version <known-working-version>
Implement a CRD-specific recovery:
# Apply correct CRD versions
kubectl apply -f https://raw.githubusercontent.com/humio/humio-operator/<version>/config/crd/bases/core.humio.com_humioclusters.yaml
If problems persist, contact CrowdStrike Support.
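When contacting support, it helps to collect the relevant state up front. The commands below are a minimal sketch, reusing the label, namespace, and release name assumed elsewhere in this section:
# Collect diagnostics to attach to the support case
kubectl logs -n <namespace> -l app=humio-operator --tail=1000 > operator-logs.txt
kubectl describe humiocluster <cluster-name> > humiocluster-describe.txt
kubectl get crds | grep humio > humio-crds.txt
helm status humio-operator > helm-status.txt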