Troubleshooting: Longhorn Manager Stuck in CrashLoopBackOff State Due to Inaccessible Webhook

January 17, 2025

Applicable versions

Longhorn >= v1.5.0.

Symptoms

The webhook services were merged into Longhorn Manager in v1.5.0. Because of the merge, Longhorn Manager now initializes the admission and conversion webhook services first during startup. To ensure that these services are accessible, Longhorn sends a request to the webhook service URL before starting the Longhorn Manager service.

In certain situations, the webhook service may be inaccessible, causing the startup check to fail. Kubernetes then repeatedly restarts the Longhorn Manager pod, leaving it stuck in a CrashLoopBackOff state.
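You can typically confirm the symptom with kubectl. The following is a minimal sketch; the pod name placeholder is hypothetical, and the exact log messages depend on your Longhorn version:

# List the Longhorn Manager pods and confirm the CrashLoopBackOff status.
kubectl get pods -n longhorn-system -l app=longhorn-manager

# Inspect the logs of the crashed container; errors about the webhook
# service being unreachable point to this issue.
kubectl logs -n longhorn-system <longhorn-manager-pod-name> --previous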

The following sections outline the most common root causes for this issue and their corresponding solutions.

Root Cause 1: Misconfigured Firewall

Incorrect firewall configuration may block communication between pods on different nodes in your Kubernetes cluster. When this happens, Longhorn Manager is unable to reach the webhook service, resulting in the CrashLoopBackOff state.

Check your firewall rules and ensure that inter-pod communication is not blocked.
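As a quick check, you can test connectivity to the webhook port directly between pods on different nodes. This is a minimal sketch, not a definitive procedure; the pod name and pod IP placeholders are hypothetical, it assumes the conversion webhook port 9501 used elsewhere in this article, and -k skips TLS verification:

# Find the node and IP of each longhorn-manager pod.
kubectl get pods -n longhorn-system -l app=longhorn-manager -o wide

# From a pod on one node, try to reach the webhook port of a pod on
# another node. A timeout suggests a firewall is blocking the traffic.
kubectl exec -it <pod-on-node-a> -n longhorn-system -- curl -k https://<pod-ip-on-node-b>:9501/v1/healthz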

Root Cause 2: DNS Resolution Issues

DNS resolution is crucial for accessing services via their internal Kubernetes DNS names. When DNS resolution is not functioning as expected, Longhorn Manager may be unable to reach the webhook service via its DNS name.

Exec into a pod, and then check whether the webhook service is reachable via its DNS name by running the following commands (the -k flag skips TLS verification, because the webhook typically serves a self-signed certificate):

kubectl exec -it <pod-name> -- /bin/bash
curl -k https://longhorn-conversion-webhook.longhorn-system.svc:9501/v1/healthz

You can also check if either CoreDNS or Kube-DNS is running correctly. For more information, see Debugging DNS Resolution in the Kubernetes documentation.
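For example, you can confirm that the service name resolves at all and that the cluster DNS pods are healthy. This is a minimal sketch; it assumes the pod you exec into has nslookup available:

# Resolve the webhook service name from inside a pod.
kubectl exec -it <pod-name> -- nslookup longhorn-conversion-webhook.longhorn-system.svc

# Check that the cluster DNS pods (CoreDNS or kube-dns) are running.
kubectl get pods -n kube-system -l k8s-app=kube-dns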

Root Cause 3: Hairpinning Not Implemented Correctly

Hairpinning allows a pod to access itself via its service IP. Because Longhorn Manager's startup request may be routed back to the webhook service running in its own pod, the request must loop back through the service IP; if hairpinning is not implemented correctly, this loopback fails and the pod cannot reach the service via its internal DNS name. The issue is common in single-node clusters and may also occur in some multi-node clusters.

Verify that the hairpin-mode flag, which ensures that a pod can access itself via its service IP, is set correctly. For more information, see Edge case: A Pod fails to reach itself via the Service IP in the Kubernetes documentation.
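As a starting point, you can check which hairpin mode the kubelet on each node is using. This is a minimal sketch under the assumption that the flag is passed on the kubelet command line; depending on your distribution, the setting may instead live in a kubelet configuration file:

# Look for the hairpin-mode flag in the running kubelet process.
# Typical values are promiscuous-bridge or hairpin-veth; a value of
# "none" disables hairpin traffic and can trigger this issue.
ps aux | grep kubelet | grep -o 'hairpin-mode=[^ ]*'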
