Tip: Set Longhorn To Only Use Storage On A Specific Set Of Nodes

| November 15, 2021

Applicable versions

All Longhorn versions.

Background

Let’s say you have a cluster of 5 nodes (node-1, node-2, …, node-5). You have some fast disks on node-1, node-2, and node-3, so you want Longhorn to use storage only on those nodes. There are a few ways to do this, described below.

Tell Longhorn to create a default disk on a specific set of nodes

  • Label node-1, node-2, and node-3 with the label node.longhorn.io/create-default-disk=true (e.g., kubectl label nodes node-1 node.longhorn.io/create-default-disk=true)
  • Install Longhorn with the setting Create Default Disk on Labeled Nodes set to true.

Result: workloads that use Longhorn volumes can run on any node. Longhorn only uses storage on node-1, node-2, and node-3 for replica scheduling.
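
For example, assuming a Helm-based install, the two steps above can be scripted as below. The values key defaultSettings.createDefaultDiskLabeledNodes is an assumption here; verify it against your chart version, or set Create Default Disk on Labeled Nodes in the Longhorn UI instead.

    # Label the nodes that should provide Longhorn storage
    kubectl label nodes node-1 node-2 node-3 node.longhorn.io/create-default-disk=true

    # Install Longhorn with "Create Default Disk on Labeled Nodes" enabled.
    # Assumes the longhorn Helm repo is already added; the values key below
    # is an assumption to be checked against your chart version.
    helm install longhorn longhorn/longhorn \
      --namespace longhorn-system --create-namespace \
      --set defaultSettings.createDefaultDiskLabeledNodes=true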

Create a StorageClass that selects a specific set of nodes

  • Install Longhorn normally on all nodes
  • Go to the node page in the Longhorn UI and tag nodes node-1, node-2, and node-3 with a tag, e.g., storage
  • Create a new StorageClass that has the node selector nodeSelector: "storage". E.g.,
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: my-longhorn-sc
    provisioner: driver.longhorn.io
    allowVolumeExpansion: true
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    parameters:
      numberOfReplicas: "3"
      staleReplicaTimeout: "2880" # 48 hours in minutes
      fromBackup: ""
      fsType: "ext4"
      nodeSelector: "storage"
    
  • Use the StorageClass my-longhorn-sc for the PVCs of your workloads. E.g.,
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-longhorn-volv-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: my-longhorn-sc
      resources:
        requests:
          storage: 2Gi
    

Result: workloads that use Longhorn volumes can run on any node. Longhorn only schedules replicas of my-longhorn-volv-pvc on nodes node-1, node-2, and node-3.
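
If you prefer the CLI to the Longhorn UI, node tags can also be set on the Longhorn Node custom resource. A minimal sketch, assuming the longhorn-system namespace and a nodes.longhorn.io CRD that exposes spec.tags (verify against your Longhorn version):

    # Set node-1's tag list to ["storage"] by patching its Longhorn Node CR.
    # Note: a merge patch replaces the whole tags list, so include any
    # existing tags you want to keep.
    kubectl -n longhorn-system patch nodes.longhorn.io node-1 \
      --type merge -p '{"spec":{"tags":["storage"]}}'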

Deploy Longhorn components only on a specific set of nodes

  • Label node-1, node-2, and node-3 with the label storage=longhorn (e.g., kubectl label nodes node-1 storage=longhorn)
  • Set the node selector for the Longhorn components by following the instructions, so that Longhorn components are deployed only on nodes with the label storage=longhorn

Result: Longhorn components are only deployed on node-1, node-2, and node-3. Workloads that use Longhorn volumes can only be scheduled on node-1, node-2, and node-3.
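
If you install via Helm, the node selectors can be supplied as chart values, for example as below. This is a sketch only: the keys longhornManager.nodeSelector, longhornDriver.nodeSelector, longhornUI.nodeSelector, and defaultSettings.systemManagedComponentsNodeSelector are assumptions to be checked against your chart version.

    # values.yaml sketch: pin Longhorn components to labeled nodes.
    # All key names below are assumptions; verify against your chart version.
    longhornManager:
      nodeSelector:
        storage: longhorn
    longhornDriver:
      nodeSelector:
        storage: longhorn
    longhornUI:
      nodeSelector:
        storage: longhorn
    # System-managed components (e.g., instance managers) follow their own
    # node selector setting; the value format here is also an assumption.
    defaultSettings:
      systemManagedComponentsNodeSelector: "storage:longhorn"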
