Node Disk Support
Longhorn now supports the addition and management of various disk types (AIO, NVMe, and VirtIO) on nodes, enhancing filesystem operations, storage performance, and compatibility.
Enhanced Storage Performance
Utilizing NVMe and VirtIO disks allows for faster disk operations, significantly improving overall performance.
Filesystem Compatibility
Disks managed with NVMe or VirtIO drivers offer better filesystem support, including advanced operations like trimming.
Flexibility
Users can select the disk type that best fits their environment: AIO for traditional setups, NVMe for high-performance needs, or VirtIO for virtualized environments.
Ease of Management
Automatic detection of disk drivers simplifies the addition and management of disks, reducing administrative overhead.
Longhorn automatically detects the disk type when node.spec.disks[i].diskDriver is set to auto, selecting the driver that gives the best storage performance. Detection works as follows:
- For an NVMe disk added by its BDF, node.status.diskStatus[i].diskDriver is set to nvme.
- For a VirtIO disk added by its BDF, node.status.diskStatus[i].diskDriver is set to virtio-blk.
- For any other disk, including NVMe or VirtIO disks added by device path (for example, /dev/nvme1n1), node.status.diskStatus[i].diskDriver is set to aio.
Alternatively, users can manually set node.spec.disks[i].diskDriver to aio to force the use of the aio bdev driver.
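As an illustration of the detection rules above, here is a sketch of how a spec entry and its resulting status pair up (the disk name and values are examples, not taken from a real cluster):

```yaml
# node.longhorn.io resource (excerpt; disk name "nvme-disk" is an example)
spec:
  disks:
    nvme-disk:
      diskType: block
      diskDriver: auto      # let Longhorn detect the best bdev driver
      path: 0000:05:00.0    # a BDF, so the nvme bdev driver is selected
status:
  diskStatus:
    nvme-disk:
      diskDriver: nvme      # detected result; would be aio for a /dev/... path
```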
To use NVMe and VirtIO disks with these drivers, you need to find the BDF (Bus:Device.Function) address of the disk and use it, rather than a device path, as the disk path when adding the disk to the Longhorn node. The following examples show how to configure NVMe disks, VirtIO disks, and other disk types.
Note
Once these disks are managed by the NVMe bdev driver or VirtIO bdev driver instead of the Linux kernel driver, they will no longer be listed under /dev/nvmeXnY or /dev/vdX.
List the disks
First, identify the NVMe disks available on your system by running the following command:
# ls -al /sys/block/
Example output:
lrwxrwxrwx 1 root root 0 Jul 30 12:20 loop0 -> ../devices/virtual/block/loop0
lrwxrwxrwx 1 root root 0 Jul 30 12:20 nvme0n1 -> ../devices/pci0000:00/0000:00:01.2/0000:02:00.0/nvme/nvme0/nvme0n1
lrwxrwxrwx 1 root root 0 Jul 30 12:20 nvme1n1 -> ../devices/pci0000:00/0000:00:01.2/0000:05:00.0/nvme/nvme1/nvme1n1
Get the BDF of the NVMe disk
Identify the BDF of the NVMe disk /dev/nvme1n1. From the example above, the BDF is 0000:05:00.0.
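If you prefer not to pick the BDF out of the symlink by eye, it can be extracted from the /sys/block symlink target with a small shell helper (a sketch; the function name and regular expression are ours, not part of Longhorn):

```shell
# Extract the last PCI BDF (domain:bus:device.function) from a sysfs path.
bdf_from_sysfs_path() {
  echo "$1" | grep -oE '[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]' | tail -n1
}

# Demonstrated against the symlink target from the listing above:
bdf_from_sysfs_path '../devices/pci0000:00/0000:00:01.2/0000:05:00.0/nvme/nvme1/nvme1n1'

# On a live node, feed it the real link instead:
#   bdf_from_sysfs_path "$(readlink /sys/block/nvme1n1)"
```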
Add the NVMe disk to spec.disks of node.longhorn.io:
nvme-disk:
  allowScheduling: true
  diskType: block
  diskDriver: auto
  evictionRequested: false
  path: 0000:05:00.0
  storageReserved: 0
  tags: []
Check the status.diskStatus. The disk should be detected without errors, and the diskDriver should be set to nvme.
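To verify from the command line, you can read the reported driver out of the node resource. The kubectl invocation below is commented out because it needs a live cluster (the longhorn-system namespace and node name are assumptions); the awk filter is demonstrated against a sample status excerpt instead:

```shell
# On a live cluster (assumption: Longhorn runs in the longhorn-system namespace):
#   kubectl -n longhorn-system get nodes.longhorn.io <node-name> -o yaml \
#     | awk '/diskDriver:/ {print $2}'

# Demonstration of the same filter against a sample diskStatus excerpt:
cat <<'EOF' | awk '/diskDriver:/ {print $2}'
diskStatus:
  nvme-disk:
    diskDriver: nvme
EOF
```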
Note: Alternative Disk Configuration
If you add the disk using a different path, such as:
nvme-disk:
  allowScheduling: true
  diskType: block
  diskDriver: auto
  evictionRequested: false
  path: /dev/nvme1n1
  storageReserved: 0
  tags: []
In this case, the disk is managed by the aio bdev driver, and node.status.diskStatus[i].diskDriver is set to aio.
The steps for VirtIO disks are similar to those for NVMe disks.
List the disks
First, identify the VirtIO disks available on your system by running the following command:
# ls -al /sys/block/
Example output:
lrwxrwxrwx 1 root root 0 Jul 30 12:20 loop0 -> ../devices/virtual/block/loop0
lrwxrwxrwx 1 root root 0 Feb 22 14:04 vda -> ../devices/pci0000:00/0000:00:02.3/0000:04:00.0/virtio2/block/vda
lrwxrwxrwx 1 root root 0 Feb 22 14:24 vdb -> ../devices/pci0000:00/0000:00:02.6/0000:07:00.0/virtio5/block/vdb
Get the BDF of the VirtIO disk
Identify the BDF of the VirtIO disk /dev/vdb. From the example above, the BDF is 0000:07:00.0.
Add the VirtIO disk to spec.disks of node.longhorn.io:
virtio-disk:
  allowScheduling: true
  diskType: block
  diskDriver: auto
  evictionRequested: false
  path: 0000:07:00.0
  storageReserved: 0
  tags: []
Check the status.diskStatus. The disk should be detected without errors, and the diskDriver should be set to virtio-blk.
Note: Alternative Disk Configuration
If you add the disk using a different path, such as:
virtio-disk:
  allowScheduling: true
  diskType: block
  diskDriver: auto
  evictionRequested: false
  path: /dev/vdb
  storageReserved: 0
  tags: []
In this case, the disk is managed by the aio bdev driver, and node.status.diskStatus[i].diskDriver is set to aio.
When neither NVMe nor VirtIO drivers can manage a disk, Longhorn will default to using the aio bdev driver. Users can also manually configure this.
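The example below uses a loopback device. If you need one for testing, a file-backed loop device can be created like this (a sketch; the file name and size are assumptions, and attaching the device requires root):

```shell
# Create a sparse 1 GiB backing file for a test disk.
truncate -s 1G /tmp/longhorn-aio-disk.img

# Attach it to a free loop device (requires root); this prints the device
# that was picked, e.g. /dev/loop12, which you then use as the disk path.
#   sudo losetup --find --show /tmp/longhorn-aio-disk.img
```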
Add the disk to spec.disks of node.longhorn.io:
default-disk-loop:
  allowScheduling: true
  diskDriver: aio
  diskType: block
  evictionRequested: false
  path: /dev/loop12
  storageReserved: 0
  tags: []
Check node.status.diskStatus. The disk should be detected without errors, and node.status.diskStatus[i].diskDriver is set to aio.
© 2019-2024 Longhorn Authors | Documentation Distributed under CC-BY-4.0