Important Notes
This page summarizes the key notes for Longhorn v1.10.0. For the full release notes, see here.
If your Longhorn cluster was initially deployed with a version earlier than v1.3.0, the Custom Resources (CRs) were created using the v1beta1 APIs. While the upgrade from Longhorn v1.8 to v1.9 automatically migrates all CRs to the new v1beta2 version, a manual CR migration is strongly advised before upgrading from Longhorn v1.9 to v1.10.
Certain operations, such as an etcd or CRD restore, may leave behind v1beta1 data. Manually migrating your CRs ensures that all Longhorn data is properly updated to the v1beta2 API, preventing potential compatibility issues and unexpected behavior with the new Longhorn version.
Following the manual migration, verify that v1beta1 has been removed from the CRD stored versions to confirm that the migration is complete and the upgrade can proceed safely.
For more details, see the Kubernetes documentation on CRD storage versions, and Issue #11886.
Before upgrading from Longhorn v1.9 to v1.10, perform the following manual CRD storage version migration.
Note: If your Longhorn installation uses a namespace other than longhorn-system, replace longhorn-system with your custom namespace throughout the commands.
# Temporarily disable the CR validation webhook to allow updating read-only settings CRs.
kubectl patch validatingwebhookconfiguration longhorn-webhook-validator \
--type=merge \
-p "$(kubectl get validatingwebhookconfiguration longhorn-webhook-validator -o json | \
jq '.webhooks[0].rules |= map(if .apiGroups == ["longhorn.io"] and .resources == ["settings"] then
.operations |= map(select(. != "UPDATE")) else . end)')"
# Migrate CRDs that ever stored v1beta1 resources
migration_time="$(date +%Y-%m-%dT%H:%M:%S)"
crds=($(kubectl get crd -l app.kubernetes.io/name=longhorn -o json | jq -r '.items[] | select(.status.storedVersions | index("v1beta1")) | .metadata.name'))
for crd in "${crds[@]}"; do
  echo "Migrating ${crd} ..."
  for name in $(kubectl -n longhorn-system get "$crd" -o jsonpath='{.items[*].metadata.name}'); do
    # Attach an annotation to each CR to trigger a rewrite of the v1beta1 resource in the latest storage version.
    kubectl patch "${crd}" "${name}" -n longhorn-system --type=merge -p='{"metadata":{"annotations":{"migration-time":"'"${migration_time}"'"}}}'
  done
  # Clean up the stored versions in the CRD status.
  kubectl patch crd "${crd}" --type=merge -p '{"status":{"storedVersions":["v1beta2"]}}' --subresource=status
done
# Re-enable the CR validation webhook.
kubectl patch validatingwebhookconfiguration longhorn-webhook-validator \
--type=merge \
-p "$(kubectl get validatingwebhookconfiguration longhorn-webhook-validator -o json | \
jq '.webhooks[0].rules |= map(if .apiGroups == ["longhorn.io"] and .resources == ["settings"] then
.operations |= (. + ["UPDATE"] | unique) else . end)')"
After running the script, verify the CRD stored versions using this command:
kubectl get crd -l app.kubernetes.io/name=longhorn -o=jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.storedVersions}{"\n"}{end}'
Crucially, all Longhorn CRDs MUST have only "v1beta2" listed in storedVersions (i.e., "v1beta1" must be completely absent) before proceeding to the v1.10 upgrade.
Example of successful output:
backingimagedatasources.longhorn.io: ["v1beta2"]
backingimagemanagers.longhorn.io: ["v1beta2"]
backingimages.longhorn.io: ["v1beta2"]
backupbackingimages.longhorn.io: ["v1beta2"]
backups.longhorn.io: ["v1beta2"]
backuptargets.longhorn.io: ["v1beta2"]
backupvolumes.longhorn.io: ["v1beta2"]
engineimages.longhorn.io: ["v1beta2"]
engines.longhorn.io: ["v1beta2"]
instancemanagers.longhorn.io: ["v1beta2"]
nodes.longhorn.io: ["v1beta2"]
orphans.longhorn.io: ["v1beta2"]
recurringjobs.longhorn.io: ["v1beta2"]
replicas.longhorn.io: ["v1beta2"]
settings.longhorn.io: ["v1beta2"]
sharemanagers.longhorn.io: ["v1beta2"]
snapshots.longhorn.io: ["v1beta2"]
supportbundles.longhorn.io: ["v1beta2"]
systembackups.longhorn.io: ["v1beta2"]
systemrestores.longhorn.io: ["v1beta2"]
volumeattachments.longhorn.io: ["v1beta2"]
volumes.longhorn.io: ["v1beta2"]
With these steps completed, the Longhorn upgrade to v1.10 should now proceed without issues.
If you did not apply the required pre-upgrade migration steps and the CRs are not fully migrated to v1beta2, the longhorn-manager Pods may fail to operate correctly. A common error message for this issue is:
Upgrade failed: cannot patch "backingimagedatasources.longhorn.io" with kind CustomResourceDefinition: CustomResourceDefinition.apiextensions.k8s.io "backingimagedatasources.longhorn.io" is invalid: status.storedVersions[0]: Invalid value: "v1beta1": missing from spec.versions; v1beta1 was previously a storage version, and must remain in spec.versions until a storage migration ensures no data remains persisted in v1beta1 and removes v1beta1 from status.storedVersions
To fix this issue, you must perform a forced downgrade back to the exact Longhorn v1.9.x version that was running before the failed upgrade attempt.
If Longhorn was installed using kubectl, you must patch the current-longhorn-version setting before downgrading. In the following commands, replace "v1.9.x" with the original version that was running before the upgrade.
# Attaching annotation to allow patching current-longhorn-version.
kubectl patch settings.longhorn.io current-longhorn-version -n longhorn-system --type=merge -p='{"metadata":{"annotations":{"longhorn.io/update-setting-from-longhorn":""}}}'
# Temporarily override current version to allow old version installation
# Replace the value "v1.9.x" with the original version before upgrade.
kubectl patch settings.longhorn.io current-longhorn-version -n longhorn-system --type=merge -p='{"value":"v1.9.x"}'
After modifying current-longhorn-version, you can proceed to downgrade to the original Longhorn v1.9.x deployment.
If Longhorn was installed using Helm, you can allow the downgrade by disabling the preUpgradeChecker.upgradeVersionCheck flag.
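For example, a minimal sketch of the Helm downgrade, assuming the standard longhorn/longhorn chart and release name (adjust both to match your installation):
# Downgrade while skipping the upgrade version check.
# Replace "1.9.x" with the exact version that was running before the failed upgrade.
helm upgrade longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --version 1.9.x \
  --set preUpgradeChecker.upgradeVersionCheck=false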
Once the downgrade is complete and the Longhorn system is stable on the v1.9.x version, you must immediately follow the steps outlined in the Manual CRD Migration Guide. This step is crucial to migrate all remaining v1beta1 CRs to v1beta2 before attempting the Longhorn v1.10 upgrade again.
Removal of the longhorn.io/v1beta1 API
The v1beta1 Longhorn API version was removed in v1.10.0.
For more details, see Issue #10249.
Removal of the replica.status.evictionRequested Field
The deprecated replica.status.evictionRequested field has been removed.
For more details, see Issue #7022.
Due to the upgrade of the CSI external snapshotter to v8.2.0, all clusters must be running Kubernetes v1.25 or later before you can upgrade to Longhorn v1.8.0 or a newer version.
During an upgrade, a new Longhorn manager may start before the Custom Resource Definitions (CRDs) are applied. This sequencing ensures the controller does not process objects containing deprecated data or fields. However, it can cause the Longhorn manager to fail during the initial upgrade phase if the CRD has not yet been applied.
If the Longhorn manager crashes during the upgrade, check the logs to determine if the failure is due to the CRD not being applied. In such cases, the logs may contain error messages similar to the following:
time="2025-03-27T06:59:55Z" level=fatal msg="Error starting manager: upgrade resources failed: BackingImage in version \"v1beta2\" cannot be handled as a BackingImage: strict decoding error: unknown field \"spec.diskFileSpecMap\", unknown field \"spec.diskSelector\", unknown field \"spec.minNumberOfCopies\", unknown field \"spec.nodeSelector\", unknown field \"spec.secret\", unknown field \"spec.secretNamespace\"" func=main.main.DaemonCmd.func3 file="daemon.go:94"
When upgrading via Helm or Rancher App Marketplace, Longhorn performs pre-upgrade checks. If a check fails, the upgrade stops, and the reason for the failure is recorded in an event.
For more details, see Upgrading Longhorn Manager.
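To see why a check failed, you can inspect the recorded events; a minimal sketch:
# List recent warning events in the Longhorn namespace, newest last.
kubectl get events -n longhorn-system --field-selector type=Warning --sort-by='.lastTimestamp'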
Automated pre-upgrade checks do not cover all scenarios. Manual checks via kubectl or the UI are recommended.
Settings have been consolidated for easier management across V1 and V2 Data Engines. Each setting now uses one of the following formats:
- A single value (e.g., 1024)
- Both v1 and v2 keys (e.g., {"v1": "value1", "v2": "value2"})
- A v1 key only (e.g., {"v1": "value1"})
- A v2 key only (e.g., {"v2": "value1"})
For more information, see Longhorn Settings.
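As a sketch of how a consolidated setting can be inspected and updated with kubectl (the setting name guaranteed-instance-manager-cpu is used purely for illustration; check Longhorn Settings for the settings that actually accept the dual-key format):
# Show the current value of a setting.
kubectl -n longhorn-system get settings.longhorn.io guaranteed-instance-manager-cpu -o jsonpath='{.value}'
# Set different values for the V1 and V2 Data Engines using the dual-key format.
kubectl -n longhorn-system patch settings.longhorn.io guaranteed-instance-manager-cpu --type=merge -p '{"value":"{\"v1\": \"12\", \"v2\": \"1250\"}"}'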
A new System Info category has been added to show cluster-level information more clearly.
For more details, see Issue #11656.
The UI now displays a summary of attachment tickets on each volume overview page for improved visibility into volume state.
For more details, see Issue #11400 and Issue #11401.
Longhorn now supports Kubernetes CSIStorageCapacity, which enables the scheduler to verify node storage before scheduling pods that use StorageClasses with WaitForFirstConsumer.
This reduces scheduling errors and improves reliability.
For more information, see GitHub Issue #10685.
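A minimal sketch of a StorageClass that benefits from this, plus a command to view the capacity objects Kubernetes publishes (the StorageClass itself is illustrative):
# Example StorageClass using WaitForFirstConsumer so the scheduler can consult CSIStorageCapacity.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-wffc
provisioner: driver.longhorn.io
volumeBindingMode: WaitForFirstConsumer
EOF
# Inspect the CSIStorageCapacity objects published for the driver.
kubectl get csistoragecapacities -A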
Starting in Longhorn v1.10.0, backup block size can be configured when creating a volume, allowing optimization for performance, efficiency, and cost.
For more information, see Create Longhorn Volumes.
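A sketch of configuring the block size through a StorageClass, assuming a backupBlockSize parameter (verify the exact parameter name and accepted values in Create Longhorn Volumes before use):
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-large-backup-blocks
provisioner: driver.longhorn.io
parameters:
  # Assumed parameter name; see Create Longhorn Volumes for the authoritative spelling and values.
  backupBlockSize: "16Mi"
EOF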
The backup sync agent exposes a pprof server for profiling runtime resource usage during backup sync operations.
For more information, see Profiling.
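Once the pprof endpoint is reachable, standard Go profiling tooling applies; a minimal sketch (the pod name and port are placeholders; see Profiling for the actual listen address):
# Forward the assumed pprof port from the backup sync agent pod.
kubectl -n longhorn-system port-forward pod/<backup-sync-agent-pod> 6060:6060
# Capture a 30-second CPU profile via the standard pprof endpoint.
go tool pprof 'http://localhost:6060/debug/pprof/profile?seconds=30'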
You can now configure the instance-manager pod liveness probes. This allows the system to better distinguish between temporary delays and actual failures, which helps reduce unnecessary restarts and improves overall cluster stability.
For more information, see Longhorn Settings.
Backing Image Manager CRs now use a compact, collision-resistant naming format to reduce conflict risk.
For details, see Issue #11455.
RBAC permissions have been refined to minimize privileges and improve cluster security.
For details, see Issue #11345.
V1 volumes now support single-stack IPv6 Kubernetes clusters.
Warning: Dual-stack Kubernetes clusters and V2 volumes are not supported in this release.
For details, see Issue #2259.
Live upgrades of V2 volumes are not supported. Ensure all V2 volumes are detached before upgrading.
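To confirm that no V2 volumes are still attached before upgrading, a sketch using the volume CRs (this assumes the spec.dataEngine field distinguishes the engine version, as in recent Longhorn releases):
# List any V2 volumes that are not yet detached.
kubectl -n longhorn-system get volumes.longhorn.io -o json | \
  jq -r '.items[] | select(.spec.dataEngine == "v2" and .status.state != "detached") | .metadata.name'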
The V2 Data Engine can run without Hugepage by setting data-engine-hugepage-enabled to {"v2":"false"}.
This reduces memory pressure on low-spec nodes and increases deployment flexibility. Performance may be lower compared to running with Hugepage.
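For example, applying the setting with kubectl, mirroring the settings patch pattern used earlier on this page:
# Disable Hugepage for the V2 Data Engine.
kubectl -n longhorn-system patch settings.longhorn.io data-engine-hugepage-enabled --type=merge -p '{"value":"{\"v2\":\"false\"}"}'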
Interrupt mode has been added to the V2 Data Engine to help reduce CPU usage. This feature is especially beneficial for clusters with idle or low I/O workloads, where conserving CPU resources is more important than minimizing latency.
While interrupt mode lowers CPU consumption, it may introduce slightly higher I/O latency compared to polling mode. In addition, the current implementation uses a hybrid approach, which still incurs a minimal, constant CPU load even when interrupts are enabled.
For more information, see Interrupt Mode.
Limitation: Interrupt mode currently supports only AIO disks.
Longhorn now supports volume and snapshot cloning for V2 data engine volumes. For more information, see Volume Clone Support.
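A minimal sketch of cloning via the standard Kubernetes PVC dataSource mechanism (all names and sizes are illustrative; the StorageClass must provision V2 volumes):
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-v2-volume
spec:
  storageClassName: longhorn-v2   # illustrative V2 StorageClass name
  dataSource:
    name: source-v2-volume        # existing PVC backed by a V2 volume
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF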
Longhorn now provides Quality of Service (QoS) control for V2 volume replica rebuilds. You can configure bandwidth limits globally or per volume to prevent storage throughput overload on source and destination nodes.
For more information, see Replica Rebuild QoS.
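As an illustrative sketch only (the setting name below is a hypothetical placeholder, not a confirmed Longhorn setting; see Replica Rebuild QoS for the actual global and per-volume configuration):
# Hypothetical global bandwidth limit for V2 replica rebuilds.
kubectl -n longhorn-system patch settings.longhorn.io <rebuild-bandwidth-setting> --type=merge -p '{"value":"100"}'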
Longhorn now supports volume expansion for V2 Data Engine volumes. Users can expand the volume through the UI or by modifying the PVC manifest.
For more information, see V2 Volume Expansion.
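For example, expanding through the PVC (standard Kubernetes volume expansion; the PVC name, namespace, and size are illustrative):
# Request a larger size on the PVC; Longhorn expands the underlying V2 volume.
kubectl patch pvc my-v2-pvc -n default --type=merge -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'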