Important Notes
This page summarizes the key notes for Longhorn v1.10.1. For the full release notes, see here.
longhorn.io/v1beta1 API
The v1beta1 Longhorn API version was removed in v1.10.0. For more details, see Issue #10249.
replica.status.evictionRequested Field
The deprecated replica.status.evictionRequested field has been removed. For more details, see Issue #7022.
Due to the upgrade of the CSI external snapshotter to v8.2.0, all clusters must be running Kubernetes v1.25 or later before you can upgrade to Longhorn v1.8.0 or a newer version.
During an upgrade, a new Longhorn manager may start before the Custom Resource Definitions (CRDs) are applied. This sequencing ensures the controller does not process objects containing deprecated data or fields. However, it can cause the Longhorn manager to fail during the initial upgrade phase if the CRD has not yet been applied.
If the Longhorn manager crashes during the upgrade, check the logs to determine if the failure is due to the CRD not being applied. In such cases, the logs may contain error messages similar to the following:
time="2025-03-27T06:59:55Z" level=fatal msg="Error starting manager: upgrade resources failed: BackingImage in version \"v1beta2\" cannot be handled as a BackingImage: strict decoding error: unknown field \"spec.diskFileSpecMap\", unknown field \"spec.diskSelector\", unknown field \"spec.minNumberOfCopies\", unknown field \"spec.nodeSelector\", unknown field \"spec.secret\", unknown field \"spec.secretNamespace\"" func=main.main.DaemonCmd.func3 file="daemon.go:94"
When upgrading via Helm or Rancher App Marketplace, Longhorn performs pre-upgrade checks. If a check fails, the upgrade stops, and the reason for the failure is recorded in an event.
For more details, see Upgrading Longhorn Manager.
Automated pre-upgrade checks do not cover all scenarios. Manual checks via kubectl or the Longhorn UI are recommended; one such check is sketched below.
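A minimal sketch of one manual check, assuming Longhorn is installed in the default longhorn-system namespace; it lists each volume's state and robustness so you can confirm that no volume is degraded before starting the upgrade:

  # List every Longhorn volume with its current state and robustness.
  kubectl -n longhorn-system get volumes.longhorn.io \
    -o custom-columns=NAME:.metadata.name,STATE:.status.state,ROBUSTNESS:.status.robustness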
Settings have been consolidated for easier management across V1 and V2 Data Engines. Each setting now uses one of the following formats:
- A single value (e.g., 1024)
- Both v1 and v2 keys (e.g., {"v1": "value1", "v2": "value2"})
- v1 key only (e.g., {"v1": "value1"})
- v2 key only (e.g., {"v2": "value1"})
For more information, see Longhorn Settings.
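As an illustration, a consolidated value lives in the value field of a Setting custom resource. The setting name below is hypothetical; substitute a real setting from the Longhorn Settings reference:

  apiVersion: longhorn.io/v1beta2
  kind: Setting
  metadata:
    name: example-setting             # hypothetical setting name
    namespace: longhorn-system
  value: '{"v1": "value1", "v2": "value2"}'   # separate values per data engine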
A new System Info category has been added to show cluster-level information more clearly.
For more details, see Issue #11656.
The UI now displays a summary of attachment tickets on each volume overview page for improved visibility into volume state.
For more details, see Issue #11400 and Issue #11401.
Longhorn now supports Kubernetes CSIStorageCapacity, which enables the scheduler to verify node storage before scheduling pods that use StorageClasses with WaitForFirstConsumer.
This reduces scheduling errors and improves reliability.
For more information, see Issue #10685.
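For context, a minimal sketch of a StorageClass that benefits from this capacity tracking; the name longhorn-wffc is illustrative:

  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: longhorn-wffc                     # illustrative name
  provisioner: driver.longhorn.io
  volumeBindingMode: WaitForFirstConsumer   # scheduler consults CSIStorageCapacity before placing the pod
  parameters:
    numberOfReplicas: "3"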
Starting in Longhorn v1.10.0, backup block size can be configured when creating a volume, allowing optimization for performance, efficiency, and cost.
For more information, see Create Longhorn Volumes.
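As a sketch, the block size could be set through a StorageClass parameter; the parameter name backupBlockSize and the value shown are assumptions based on this feature's description, so verify the exact syntax in Create Longhorn Volumes:

  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: longhorn-large-backup-blocks   # illustrative name
  provisioner: driver.longhorn.io
  parameters:
    backupBlockSize: "16Mi"              # assumed parameter name and value; check the docs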
The backup sync agent exposes a pprof server for profiling runtime resource usage during backup sync operations.
For more information, see Profiling.
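A brief sketch of inspecting such an endpoint with Go's standard pprof tooling; the host and port are placeholders, since the actual address depends on your deployment (see Profiling):

  # Fetch a 30-second CPU profile from the pprof server (address is a placeholder).
  go tool pprof "http://<backup-sync-agent>:<port>/debug/pprof/profile?seconds=30"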
You can now configure the instance-manager pod liveness probes. This allows the system to better distinguish between temporary delays and actual failures, which helps reduce unnecessary restarts and improves overall cluster stability.
For more information, see Longhorn Settings.
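For background, these are the standard Kubernetes liveness-probe knobs behind that trade-off. This is a generic pod-spec fragment for illustration only, not the Longhorn setting syntax (see Longhorn Settings for that):

  livenessProbe:
    tcpSocket:
      port: 8500            # placeholder port
    periodSeconds: 10       # how often the probe runs
    timeoutSeconds: 5       # how long a single probe may take
    failureThreshold: 6     # consecutive failures tolerated before a restart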
Backing Image Manager CRs now use a compact, collision-resistant naming format to reduce conflict risk.
For details, see Issue #11455.
RBAC permissions have been refined to minimize privileges and improve cluster security.
For details, see Issue #11345.
V1 volumes now support single-stack IPv6 Kubernetes clusters.
Warning: Dual-stack Kubernetes clusters and V2 volumes are not supported in this release.
For details, see Issue #2259.
Live upgrades of V2 volumes are not supported. Ensure all V2 volumes are detached before upgrading.
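One way to confirm this, assuming the default longhorn-system namespace: list each volume's data engine and state, and check that every v2 volume reports detached:

  kubectl -n longhorn-system get volumes.longhorn.io \
    -o custom-columns=NAME:.metadata.name,ENGINE:.spec.dataEngine,STATE:.status.state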
The V2 Data Engine can run without Hugepage by setting data-engine-hugepage-enabled to {"v2": "false"}. This reduces memory pressure on low-spec nodes and increases deployment flexibility, but performance may be lower than when running with Hugepage.
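One way to apply this, sketched with kubectl against the Setting custom resource in the default longhorn-system namespace:

  kubectl -n longhorn-system patch settings.longhorn.io data-engine-hugepage-enabled \
    --type merge -p '{"value": "{\"v2\": \"false\"}"}'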
Interrupt mode has been added to the V2 Data Engine to help reduce CPU usage. This feature is especially beneficial for clusters with idle or low I/O workloads, where conserving CPU resources is more important than minimizing latency.
While interrupt mode lowers CPU consumption, it may introduce slightly higher I/O latency compared to polling mode. In addition, the current implementation uses a hybrid approach, which still incurs a minimal, constant CPU load even when interrupts are enabled.
For more information, see Interrupt Mode.
Limitation: Interrupt mode currently supports only AIO disks.
Longhorn now supports volume and snapshot cloning for V2 data engine volumes. For more information, see Volume Clone Support.
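As an illustration, cloning follows the standard CSI pattern: a new PVC references an existing PVC as its dataSource. All names and sizes below are placeholders:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: cloned-pvc              # placeholder
  spec:
    storageClassName: longhorn    # a StorageClass provisioning V2 data engine volumes
    dataSource:
      name: source-pvc            # placeholder: the PVC to clone
      kind: PersistentVolumeClaim
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi              # must be at least the source volume's size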
Longhorn now provides Quality of Service (QoS) control for V2 volume replica rebuilds. You can configure bandwidth limits globally or per volume to prevent storage throughput overload on source and destination nodes.
For more information, see Replica Rebuild QoS.
Longhorn now supports volume expansion for V2 Data Engine volumes. Users can expand the volume through the UI or by modifying the PVC manifest.
For more information, see V2 Volume Expansion.
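A minimal sketch of the PVC route, using kubectl with a placeholder claim name and size:

  # Request a larger size on the PVC; the CSI driver expands the underlying volume.
  kubectl patch pvc my-pvc --type merge \
    -p '{"spec": {"resources": {"requests": {"storage": "10Gi"}}}}'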