Troubleshooting: Longhorn volumes take a long time to finish mounting

Phan Le | February 26, 2021

Applicable versions

All Longhorn versions.

Symptoms

When starting a workload pod that uses Longhorn volumes, the Longhorn UI shows that the volumes attach quickly, but they take a long time to finish mounting, and the workload cannot start until the mount completes.

This issue only happens when a Longhorn volume contains many files/directories and securityContext.fsGroup is set in the workload pod, as shown below:

spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
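
For reference, here is a minimal, self-contained pod manifest that reproduces this condition. It is only a sketch: the busybox image and the longhorn-pvc claim name are placeholders, and the PVC is assumed to already exist and be backed by Longhorn.

apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: longhorn-pvc  # placeholder: an existing PVC provisioned by Longhorn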

Reason

By default, whenever the fsGroup field is set, Kubernetes recursively calls chown() and chmod() on every file and directory inside the volume each time the volume is mounted. This happens even if the group ownership of the volume already matches the requested fsGroup. For larger volumes with lots of small files these per-file calls are expensive (e.g., a volume with a million files requires a million chown() and chmod() invocations on every mount), which causes pod startup to take a long time.

Solution

There is no workaround for this problem in Kubernetes v1.19.x and earlier.

Since v1.20.x, Kubernetes has introduced a new beta feature: the fsGroupChangePolicy field. For example:

spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    fsGroupChangePolicy: "OnRootMismatch"

When fsGroupChangePolicy is set to OnRootMismatch, the recursive permission and ownership change is skipped if the root of the volume already has the correct ownership and permissions. This means that as long as users don't change pod.spec.securityContext.fsGroup between pod startups, Kubernetes only has to check the ownership and permissions of the volume's root directory, so the mounting process is much faster than always recursively changing the volume's ownership and permissions.
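
Putting it together, for a managed workload the field goes in the pod template's securityContext. The following Deployment is a sketch (the name, image, and claimName are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fsgroup-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fsgroup-demo
  template:
    metadata:
      labels:
        app: fsgroup-demo
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
        fsGroupChangePolicy: "OnRootMismatch"
      containers:
        - name: app
          image: busybox
          command: ["sleep", "3600"]
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: longhorn-pvc  # placeholder: a PVC provisioned by Longhorn

With this policy in place, only the first mount after fsGroup changes pays the recursive chown/chmod cost; subsequent mounts skip it as long as the root ownership still matches.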
