Upgrading Longhorn Manager
Prerequisite: Always back up volumes before upgrading. If anything goes wrong, you can restore the volume using the backup.
To upgrade with kubectl, run this command:
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v0.8.1/deploy/longhorn.yaml
To upgrade with Helm, run this command:
helm upgrade longhorn ./longhorn/chart
On Kubernetes clusters managed by Rancher 2.1 or newer, the steps to upgrade Longhorn manager are the same as the installation steps.
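After upgrading with either method, you can confirm the manager rollout finished before moving on; a minimal check, assuming the default longhorn-system namespace:

```shell
# Wait for the longhorn-manager DaemonSet rollout to complete
kubectl -n longhorn-system rollout status daemonset/longhorn-manager

# Confirm the running manager image is the expected version
kubectl -n longhorn-system get ds longhorn-manager \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```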
Next, upgrade Longhorn engine.
If a volume was created and used in Longhorn v0.6.2 or older, its related persistent volume (PV) and persistent volume claim (PVC) are still managed by the old CSI plugin, which will be deprecated in a later Longhorn version.
Therefore, the PVCs and PVs should be migrated to the new CSI plugin for the volume in Longhorn v0.8.1.
If you don’t know when the volumes were created, find out which volumes need to be migrated by running the following command:
kubectl get pv --output=jsonpath="{.items[?(@.spec.csi.driver==\"io.rancher.longhorn\")].spec.csi.volumeHandle}"
Remove the finalizer external-attacher/io-rancher-longhorn from the related PV:
kubectl edit pv <The corresponding PV of the volume found in step 1>
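As a non-interactive alternative to kubectl edit, the finalizer list can be cleared with kubectl patch; a minimal sketch, assuming external-attacher/io-rancher-longhorn is the only finalizer on the PV (the PV name is a placeholder):

```shell
# Clear the whole metadata.finalizers list on the PV in one step.
# WARNING: only use this when external-attacher/io-rancher-longhorn
# is the sole finalizer present, since all entries are removed.
kubectl patch pv <pv-name> --type=merge -p '{"metadata":{"finalizers":null}}'
```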
Shut down the related workloads and detach the volumes.
Run this script for each volume:
curl -s https://raw.githubusercontent.com/longhorn/longhorn/v0.8.1/scripts/migrate-for-pre-070-volumes.sh | bash -s -- <volume name>
Or run the script for all volumes at once:
curl -s https://raw.githubusercontent.com/longhorn/longhorn/v0.8.1/scripts/migrate-for-pre-070-volumes.sh | bash -s -- --all
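The discovery command and the per-volume script can also be combined into one loop; a sketch, assuming the jsonpath output is a space-separated list of volume names:

```shell
# Migrate every volume still handled by the old io.rancher.longhorn driver
for vol in $(kubectl get pv \
  --output=jsonpath='{.items[?(@.spec.csi.driver=="io.rancher.longhorn")].spec.csi.volumeHandle}'); do
  echo "Migrating volume: $vol"
  curl -s https://raw.githubusercontent.com/longhorn/longhorn/v0.8.1/scripts/migrate-for-pre-070-volumes.sh \
    | bash -s -- "$vol"
done
```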
Result: The volumes have been migrated to use the new CSI driver.
If the migration prerequisites are not satisfied and there is no error log of the form failed to delete then recreate PV/PVC, users need to manually check the current PVC/PV then recreate them if needed: <error log>, the script will do nothing for the PV and PVC. Users can check the migration prerequisites and steps, then retry.
If the migration fails and the error log mentioned above is printed out, users need to manually handle the migration for the failed volume:
Update spec.persistentVolumeReclaimPolicy to Retain and remove all finalizers in metadata.finalizers for the PV with this command:
kubectl edit pv <The PV name>
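The same two edits can be applied non-interactively with kubectl patch; a sketch, with the PV name as a placeholder:

```shell
# Set the reclaim policy to Retain so the underlying Longhorn volume
# survives the PV deletion in the next step
kubectl patch pv <pv-name> --type=merge \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# Clear all finalizers on the PV so the deletion can complete
kubectl patch pv <pv-name> --type=merge \
  -p '{"metadata":{"finalizers":null}}'
```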
Delete the PVC and PV with this command:
kubectl delete pvc <The PVC name> && kubectl delete pv <The PV name>
Use the Longhorn UI to recreate the PV and PVC. Make sure the options Create PVC and Use Previous PVC are checked.
Error log: failed to delete then recreate PV/PVC, users need to manually check the current PVC/PV then recreate them if needed: failed to wait for the old PV deletion complete
This error is caused by missing migration step 2 in the old doc. Users can follow the failure handling steps above to complete the migration manually.
Upgrading From v0.6.2 to v0.7.0
You will need to follow this guide to upgrade the Longhorn manager from v0.6.2 to v0.7.0.
Live upgrades are not supported from v0.6.2 to v0.7.0.
The Longhorn manager can be upgraded with kubectl or with the Rancher catalog app.
To upgrade with kubectl, delete the old storageClass, then apply the new manifest:
kubectl delete -f https://raw.githubusercontent.com/longhorn/longhorn/v0.7.0/examples/storageclass.yaml
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v0.7.0/deploy/longhorn.yaml
To upgrade with the Rancher catalog app, delete the old storageClass with the same kubectl delete command, then click the Upgrade button in the Rancher UI.
Wait for all the pods to become running and the Longhorn UI to be working:
$ kubectl -n longhorn-system get pod
NAME READY STATUS RESTARTS AGE
compatible-csi-attacher-69857469fd-rj5vm 1/1 Running 4 3d12h
csi-attacher-79b9bfc665-56sdb 1/1 Running 0 3d12h
csi-attacher-79b9bfc665-hdj7t 1/1 Running 0 3d12h
csi-attacher-79b9bfc665-tfggq 1/1 Running 3 3d12h
csi-provisioner-68b7d975bb-5ggp8 1/1 Running 0 3d12h
csi-provisioner-68b7d975bb-frggd 1/1 Running 2 3d12h
csi-provisioner-68b7d975bb-zrr65 1/1 Running 0 3d12h
engine-image-ei-605a0f3e-8gx4s 1/1 Running 0 3d14h
engine-image-ei-605a0f3e-97gxx 1/1 Running 0 3d14h
engine-image-ei-605a0f3e-r6wm4 1/1 Running 0 3d14h
instance-manager-e-a90b0bab 1/1 Running 0 3d14h
instance-manager-e-d1458894 1/1 Running 0 3d14h
instance-manager-e-f2caa5e5 1/1 Running 0 3d14h
instance-manager-r-04417b70 1/1 Running 0 3d14h
instance-manager-r-36d9928a 1/1 Running 0 3d14h
instance-manager-r-f25172b1 1/1 Running 0 3d14h
longhorn-csi-plugin-72bsp 4/4 Running 0 3d12h
longhorn-csi-plugin-hlbg8 4/4 Running 0 3d12h
longhorn-csi-plugin-zrvhl 4/4 Running 0 3d12h
longhorn-driver-deployer-66b6d8b97c-snjrn 1/1 Running 0 3d12h
longhorn-manager-pf5p5 1/1 Running 0 3d14h
longhorn-manager-r5npp 1/1 Running 1 3d14h
longhorn-manager-t59kt 1/1 Running 0 3d14h
longhorn-ui-b466b6d74-w7wzf 1/1 Running 0 50m
Next, upgrade Longhorn engine.
If you see the following error:
"longhorn" is invalid: provisioner: Forbidden: updates to provisioner are forbidden.
This means you need to clean up the old longhorn storageClass for the Longhorn v0.7.0 upgrade, since the provisioner has been changed from rancher.io/longhorn to driver.longhorn.io.
Note that the PVs created by the old storageClass will still use rancher.io/longhorn as the provisioner. Longhorn v0.7.0 supports attaching, detaching, and deleting PVs created by the previous version of Longhorn, but it doesn't support creating new PVs using the old provisioner name. Please use the new storageClass for new volumes.
If you are using the YAML file:
kubectl delete -f https://raw.githubusercontent.com/longhorn/longhorn/v0.7.0/examples/storageclass.yaml
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v0.7.0/deploy/longhorn.yaml
If you are using Rancher App:
kubectl delete -f https://raw.githubusercontent.com/longhorn/longhorn/v0.7.0/examples/storageclass.yaml
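Alternatively, instead of re-applying the full manifest just to recreate the storageClass, it can be recreated by hand with the new provisioner name; a minimal sketch (the parameters shown are assumed common defaults, not an exact copy of the shipped storageclass.yaml):

```shell
# Recreate the default "longhorn" storageClass under the new provisioner.
# numberOfReplicas / staleReplicaTimeout values are assumptions; adjust as needed.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
EOF
```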
If Helm reports an error like kind CustomResourceDefinition with the name "xxx" already exists in the cluster and wasn't defined in the previous release..., this is a Helm bug.
Before executing the following commands, make sure that you have not deleted the old Longhorn CRDs via the command curl -s https://raw.githubusercontent.com/longhorn/longhorn-manager/master/hack/cleancrds.sh | bash -s v062 or run the Longhorn uninstaller. Otherwise you MAY LOSE all the data stored in the Longhorn system.
Clean up:
kubectl -n longhorn-system delete ds longhorn-manager
curl -s https://raw.githubusercontent.com/longhorn/longhorn-manager/master/hack/cleancrds.sh | bash -s v070
Then click the Upgrade button in the Rancher UI.
Note: These steps should not be executed if you want to maintain the ability to roll back from a v0.7.0 installation.
Verify that no v0.6.2 longhorn-manager pods remain:
kubectl -n longhorn-system get pod -o yaml | grep "longhorn-manager:v0.6.2"
No results should appear.
Important: You must make sure all the v0.6.2 pods have been deleted, otherwise the data will be lost. Then clean up the v0.6.2 CRDs:
curl -s https://raw.githubusercontent.com/longhorn/longhorn-manager/master/hack/cleancrds.sh | bash -s v062
Since the CSI framework was upgraded from v0.4.2 to v1.1.0 in this release, rolling back from Longhorn v0.7.0 to v0.6.2 or lower means downgrading the CSI plugin. But Kubernetes does not support downgrading the CSI plugin, so restarting the kubelet is unavoidable. Please be careful, and follow the instructions exactly.
Prerequisite: To roll back from a v0.7.0 installation, you must not have cleaned up the v0.6.2 CRDs.
Steps to roll back:
Clean up the components introduced by the Longhorn v0.7.0 upgrade:
kubectl delete -f https://raw.githubusercontent.com/longhorn/longhorn/v0.7.0/examples/storageclass.yaml
curl -s https://raw.githubusercontent.com/longhorn/longhorn-manager/master/hack/cleancrds.sh | bash -s v070
Restart the kubelet container on all nodes, or restart all the nodes. This step WILL DISRUPT all the workloads in the system. Connect to each node, then run:
docker restart kubelet
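The docker restart command assumes the kubelet runs as a Docker container, as on RKE-provisioned nodes. On nodes where the kubelet is managed by systemd instead (an assumption about your node setup, so check how your distribution runs the kubelet first), the equivalent would be:

```shell
# Restart a systemd-managed kubelet (non-containerized setups)
sudo systemctl restart kubelet
```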
Rollback: Use kubectl apply or the Rancher catalog app to roll back Longhorn.
© 2019-2024 Longhorn Authors | Documentation Distributed under CC-BY-4.0