OCP/OKD Support
To deploy Longhorn on a cluster provisioned with OpenShift 4.x, some additional configurations are required.
Note: OKD currently does not support the ARM platform. For more information, see the OKD website and GitHub issue #1165 (OKD in ARM platform).
Please refer to the Install with Helm section first.
Install Longhorn with the following settings:
| Setting | Value | Example |
|---|---|---|
| openshift.enabled | true | N/A |
| image.openshift.oauthProxy.repository | Upstream image | quay.io/openshift/origin-oauth-proxy |
| image.openshift.oauthProxy.tag | Version 4.1 or later | 4.15 |
helm install longhorn longhorn/longhorn \
--namespace longhorn-system \
--create-namespace \
--set openshift.enabled=true \
--set image.openshift.oauthProxy.repository=quay.io/openshift/origin-oauth-proxy \
--set image.openshift.oauthProxy.tag=4.15
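If you prefer to keep the configuration in a Helm values file instead of --set flags, the same settings from the table above can be expressed as follows (a minimal sketch; the file name values-openshift.yaml is only an example):
cat <<EOF >values-openshift.yaml
openshift:
  enabled: true
image:
  openshift:
    oauthProxy:
      repository: quay.io/openshift/origin-oauth-proxy
      tag: "4.15"
EOF
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --create-namespace \
  --values values-openshift.yaml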
Install with oc Command
Perform the following steps to install Longhorn on OKD clusters.
1. Download the longhorn-okd.yaml file:
   wget https://raw.githubusercontent.com/longhorn/longhorn/v1.7.2/deploy/longhorn-okd.yaml
2. Specify the target oauth-proxy container image in the longhorn-okd.yaml file (for example, quay.io/openshift/origin-oauth-proxy:4.15).
3. Run the following command:
   oc apply -f longhorn-okd.yaml
One way to monitor the progress of the installation is to watch pods being created in the longhorn-system namespace:
oc get pods \
--namespace longhorn-system \
--watch
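As an optional check (assuming every Longhorn pod is expected to reach the Ready condition once the installation settles), you can also wait for all pods in the namespace:
oc wait pods --all \
  --namespace longhorn-system \
  --for=condition=Ready \
  --timeout=10m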
For more information, see Install with Kubectl.
To learn more about configuring the disks for Longhorn, please refer to the section Configuring Defaults for Nodes and Disks.
Longhorn uses the directory /var/lib/longhorn as the default storage mount point, which means Longhorn uses the root device as the default storage. If you don't want to use the root device as Longhorn storage, set defaultSettings.createDefaultDiskLabeledNodes to true when installing Longhorn with Helm:
--set defaultSettings.createDefaultDiskLabeledNodes=true
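For example, combined with the OpenShift settings shown earlier, the flag can be passed at install time like this (a sketch that simply extends the install command above):
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --create-namespace \
  --set openshift.enabled=true \
  --set image.openshift.oauthProxy.repository=quay.io/openshift/origin-oauth-proxy \
  --set image.openshift.oauthProxy.tag=4.15 \
  --set defaultSettings.createDefaultDiskLabeledNodes=true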
Then add another device, formatted as described below, as the Longhorn storage.
Create the filesystem on the device with the label longhorn on the storage node. Get into the node using the oc command:
oc get nodes --no-headers | awk '{print $1}'
oc debug node/${NODE_NAME} -t -- chroot /host bash
Check that the device is present and format it with the longhorn label:
lsblk
sudo mkfs.ext4 -L longhorn /dev/${DEVICE_NAME}
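To confirm that the filesystem and label were created as expected, a quick check on the same node (reusing the device name from the previous step) is:
# The LABEL column should show "longhorn"
lsblk -o NAME,FSTYPE,LABEL /dev/${DEVICE_NAME}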
The secondary drive needs to be mounted automatically when the node boots up. This can be done with a MachineConfig, which can be created and deployed as follows:
cat <<EOF >>auto-mount-machineconfig.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 71-mount-storage-worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - name: var-mnt-longhorn.mount
          enabled: true
          contents: |
            [Unit]
            Before=local-fs.target
            [Mount]
            # Example mount point, you can change it to where you like for each device.
            Where=/var/mnt/longhorn
            What=/dev/disk/by-label/longhorn
            Options=rw,relatime,discard
            [Install]
            WantedBy=local-fs.target
EOF
oc apply -f auto-mount-machineconfig.yaml
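Applying the MachineConfig triggers a rolling update of the worker nodes. As a verification sketch (assuming the default worker MachineConfigPool and the example mount point above), you can watch the rollout and then check the mount on the node:
# Wait until the worker pool reports that it has finished updating
oc get machineconfigpool worker --watch
# After the node is back, confirm the device is mounted
oc debug node/${NODE_NAME} -t -- chroot /host findmnt /var/mnt/longhorn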
Please refer to the section Customizing Default Disks for New Nodes, then label and annotate the storage node where your device is attached using the following oc commands:
oc get nodes --no-headers | awk '{print $1}'
oc annotate node ${NODE_NAME} --overwrite node.longhorn.io/default-disks-config='[{"path":"/var/mnt/longhorn","allowScheduling":true}]'
oc label node ${NODE_NAME} --overwrite node.longhorn.io/create-default-disk=config
Note: You might need to reboot the node to validate the modified configuration.
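After the node is labeled and annotated (and rebooted if needed), you can check whether Longhorn picked up the disk. This sketch assumes the Longhorn node custom resource (nodes.longhorn.io) shares the Kubernetes node name, which is the default behavior:
# The spec.disks field should include a disk with path /var/mnt/longhorn
oc get nodes.longhorn.io ${NODE_NAME} \
  --namespace longhorn-system \
  --output yaml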