Troubleshooting: Failure to delete orphaned Pod volume directory

July 20, 2023

Applicable versions

All Longhorn versions.

Kubernetes versions earlier than v1.28. A backport of the fix to v1.27 is awaiting merge.

Symptoms

When a worker node fails while hosting active Pods, the Pods are gracefully evicted while the node is down and awaiting recovery. During this period, the kubelet managing the node repeatedly logs the following error message at two-second intervals.

orphaned pod <pod-uid> found, but error not a directory occurred when trying to remove the volumes dir
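
To confirm the symptom, watch the kubelet log on the affected node. A minimal check, assuming the kubelet runs as a systemd service (adjust the command if your distribution manages the kubelet differently):

journalctl -u kubelet -f | grep -i "orphaned pod"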

Reason

This situation occurs when a node goes down and takes some time to recover. During the outage, the affected Pods are evicted and rescheduled to other nodes, and the disruption severs the connection between the kubelet and the longhorn-csi-plugin. As a result, a stale vol_data.json file is left behind that the kubelet cannot delete. The kubelet's housekeeping routine cleans up orphaned volume mount points left by evicted Pods, but it only removes directories, not individual files, so the leftover vol_data.json file prevents the volume directory from being cleaned up (source code).
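
For illustration, the leftover state on the crashed node typically looks like the following; the pod UID and volume directory name are placeholders, not values taken from this article:

ls /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv-name>/
vol_data.json

Because the directory still contains this file and the housekeeping routine only removes directories, the cleanup fails with the "not a directory" error shown above.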

Solution

Once the node and kubelet have been restored, the longhorn-csi-plugin will automatically restart, allowing the Pod to remount the volume and resume its running state.

However, if the Pod and its volume are rescheduled to a different node, a lingering vol_data.json file is left behind on the crashed node and manual intervention is required: delete the vol_data.json file located in the /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv-name>/ directory on that node (for dynamically provisioned Longhorn volumes, the <pv-name> directory is typically named pvc-<uid>).
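
A minimal sketch of the manual cleanup, assuming the stale vol_data.json is the only content left under the volume directory; run the commands on the recovered node and substitute the pod UID reported in the kubelet log:

# List leftover vol_data.json files under the orphaned Pod directory
find /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/ -name vol_data.json
# After verifying the path, delete the file so the kubelet can finish its cleanup
rm /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv-name>/vol_data.json

Once the file is removed, the kubelet's housekeeping can clean up the now-empty directory and the repeated error message stops.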

On the current Kubernetes master branch, the issue is fixed as of version 1.28.x, ensuring that orphaned Pod volume mount points are properly cleaned up in the reconciliation loop. A PR backporting the fix to version 1.27 is currently awaiting merge.
