Space consumption guideline

August 25, 2023

Applicable versions

All Longhorn versions; note that some of the features mentioned here were introduced in v1.4.0 or v1.5.0.

Volumes consume much more space than expected

Because Longhorn volumes can hold historical data as snapshots, a volume's actual size can be much greater than its spec size. For more details, see the volume size section for a better understanding of the concept.

In addition, some operations, such as backup, rebuilding, and expansion, create hidden system snapshots. Hence, a volume may contain snapshots even if users never create one manually.

To avoid wasting space on historical data/snapshots, we recommend applying a recurring job such as snapshot-delete, which limits the number of snapshots a volume retains (see the example below). You can check the recurring job section to see how it works.
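
As an illustration, such a job can be defined as a RecurringJob custom resource (the snapshot-delete task is available since Longhorn v1.5.0). This is a minimal sketch; the name, schedule, retain count, and group below are illustrative values, not recommendations:

apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: snapshot-cleanup
  namespace: longhorn-system
spec:
  name: snapshot-cleanup
  task: snapshot-delete    # delete snapshots exceeding the retain count
  cron: "0 2 * * *"        # run daily at 02:00
  retain: 5                # keep at most 5 snapshots per volume
  concurrency: 2           # number of volumes processed in parallel
  groups:
    - default              # apply to volumes in the default group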

Filesystem used size is much smaller than volume actual size

The reason for this symptom is explained in the volume size section as well. Briefly, a Longhorn volume is a block device that is unaware of the filesystem built on top of it. Deleting a file is a filesystem-layer operation that does not actually free the corresponding blocks in the underlying volume.

To ask the volume (the block device) to release the blocks of removed files, you can rely on fstrim. Trim support was introduced in Longhorn v1.4.0. Please see the trim section for details.

To automate trimming, you can apply filesystem-trim recurring jobs to volumes (see the example below). Note that this operation is similar to a write operation and may be resource-consuming, so do not trigger trim operations for many volumes at the same time.
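
Below is a sketch of such a recurring job, again with illustrative name, schedule, and group values; adjust them to your environment and stagger the schedule so that not all volumes are trimmed at once:

apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: fs-trim
  namespace: longhorn-system
spec:
  name: fs-trim
  task: filesystem-trim    # run a trim against the volume's filesystem
  cron: "0 4 * * 0"        # run weekly, Sunday at 04:00
  retain: 0                # snapshot retainage is not applicable to this task
  concurrency: 1           # trim one volume at a time to limit load
  groups:
    - default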

Disk exhaustion

In this case, the node is probably marked as NotReady due to disk pressure. Therefore, the most critical measure is to recover the node while avoiding the loss of volume data.

To recover the node and the disk, we recommend directly removing some redundant replica directories from the full disk. Here, a redundant replica means the corresponding volume has healthy replicas on other disks, so Longhorn will automatically rebuild a new replica on another disk later if possible. In addition, users may need to expand the existing disks or add more disks to avoid future disk exhaustion.

Note that disk exhaustion may also be caused by replicas being unevenly scheduled. Users can check the Replica Auto Balance setting for this scenario (see the sketch below).
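
Replica Auto Balance can be changed from the Longhorn UI settings page; it is also stored as a settings.longhorn.io custom resource, roughly as sketched below. The best-effort value shown here is only one option (least-effort and disabled are the other documented values):

apiVersion: longhorn.io/v1beta2
kind: Setting
metadata:
  name: replica-auto-balance
  namespace: longhorn-system
value: "best-effort"    # rebalance replicas across nodes/disks when possible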
