Troubleshooting: Velero restores Longhorn PersistentVolumeClaim stuck in the Pending state when using the Velero CSI Plugin version before v0.4.0

Ray Chang | December 15, 2022

Applicable versions

All Longhorn versions.

Symptoms

PersistentVolumeClaim is stuck in the Pending state when restoring Longhorn volumes with Velero using a Velero CSI Plugin version before v0.4.0.

Reason

For Longhorn versions that ship longhornio/csi-provisioner:v2.1.2, the Longhorn CSI provisioner recognizes only the volume.beta.kubernetes.io/storage-provisioner annotation when processing a PVC to provision a volume. Kubernetes adds this annotation, together with volume.kubernetes.io/storage-provisioner, to each PVC after determining the storage provisioner, and Velero backs up the PVC with both annotations.

When the PVC is restored via Velero with a CSI plugin older than v0.4.0, the plugin removes only the volume.beta.kubernetes.io/storage-provisioner annotation and leaves volume.kubernetes.io/storage-provisioner intact, because it does not respect the generally available volume.kubernetes.io/storage-provisioner annotation. Since Kubernetes will not add the beta annotation back to a PVC that already carries the GA annotation, the restored PVC cannot be processed by the built-in Longhorn CSI provisioner and stays stuck in the Pending state.
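To confirm a PVC is affected, you can check which storage-provisioner annotations survived the restore. A minimal sketch, assuming a restored PVC named my-pvc in the default namespace (both names are placeholders for your own):

```shell
# Print the annotations on the restored PVC. An affected PVC shows only the
# GA annotation (volume.kubernetes.io/storage-provisioner) and is missing the
# beta annotation that csi-provisioner v2.1.2 requires.
kubectl get pvc my-pvc -n default \
  -o jsonpath='{.metadata.annotations}{"\n"}'
```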

This compatibility issue is caused by the Velero CSI plugin and has been fixed in v0.4.0: since that version, both annotations are respected to ensure the volume is provisioned by the correct provisioner.

Solution

It is recommended to use Velero CSI plugin version >= v0.4.0 for PVC backup and restore, because it is compatible with the different storage-provisioner annotations supported by different versions of the CSI provisioner.
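Upgrading the plugin is the proper fix, but if a PVC is already stuck after a restore, re-adding the beta annotation by hand lets the old provisioner pick it up. This is a hedged sketch, not taken from this article: the PVC name and namespace are placeholders, and you should verify that driver.longhorn.io matches the provisioner named in your StorageClass before applying it:

```shell
# Hypothetical manual workaround: restore the beta annotation that the old
# Velero CSI plugin stripped, so csi-provisioner v2.1.2 can process the PVC.
kubectl annotate pvc my-pvc -n default \
  volume.beta.kubernetes.io/storage-provisioner=driver.longhorn.io
```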


© 2019-2023 Longhorn Authors | Documentation Distributed under CC-BY-4.0