Troubleshooting: Orphan ISCSI Session Error

June 28, 2024

Applicable versions

  • All Longhorn versions


When an Instance Manager pod crashes, the Open-iSCSI daemon (iscsid) on the host might print error logs every few seconds. You can view these error logs using the command journalctl -u iscsid -f.

Example 1:

Dec 19 13:19:36 k3s-node-2 iscsid[3160778]: connect to failed (No route to host)

Example 2:

Jun 28 19:54:59 phan-v672-pool2-1967f397-tprqc iscsid[17303]: cannot make a connection to (-1,22)


When the engine process crashes without a chance to log out of the iSCSI session and delete the tgt target, it leaves an orphan/stale iSCSI session on the host. Furthermore, the Instance Manager pod has already been restarted, so its IP has changed. However, iscsid keeps trying to connect to the nonexistent IP recorded in the orphan/stale iSCSI session. As a result, the error logs above are printed every few seconds.

While annoying, the logs do not indicate any severe issues.
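On a busy node these messages are interleaved with other journal output. The helper below is a small sketch for pulling out just the portal IPs mentioned in connection failures; the grep patterns assume the message wording shown in the examples above, so adjust them if your iscsid logs phrase the errors differently.

```shell
# extract_iscsi_error_ips: read iscsid log lines on stdin and print the
# unique portal IPs mentioned in connection-failure messages.
extract_iscsi_error_ips() {
  grep -E 'connect to .* failed|cannot make a connection' |
    grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' |
    sort -u
}

# Typical use on a live node (requires access to the iscsid journal):
#   journalctl -u iscsid --no-pager | extract_iscsi_error_ips
```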


Workaround

  1. Identify the IP from the logs, and verify that it is not assigned to any Longhorn Instance Manager pod using the command kubectl get pods -o wide -n longhorn-system. Example:
    Jun 28 19:50:20 phan-v672-pool2-1967f397-tprqc iscsid[17303]: cannot make a connection to (-1,22)
  2. List all iSCSI node records and find those whose portal IP matches the IP shown in the logs. Example:
    root@phan-v672-pool2-1967f397-tprqc:~# iscsiadm -m node
  3. Log out of these node records. Example:
    # Replace the -T and -p values with your actual target name and portal IP
    root@phan-v672-pool2-1967f397-tprqc:~# iscsiadm -m node -T <target-name> -p <IP> --logout
    Logging out of session [sid: 7, target: <target-name>, portal: <IP>,3260]
    Logout of [sid: 7, target: <target-name>, portal: <IP>,3260] successful.
    root@phan-v672-pool2-1967f397-tprqc:~# iscsiadm -m node -T <target-name> -p <IP> --logout
    Logging out of session [sid: 8, target: <target-name>, portal: <IP>,3260]
    Logout of [sid: 8, target: <target-name>, portal: <IP>,3260] successful.
  4. Verify that the error logs are no longer printed.
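The find-and-logout loop in steps 2 and 3 can be sketched as a small helper. Everything here is illustrative: it assumes `iscsiadm -m node` prints one record per line in the standard `<IP>:<port>,<tpgt> <target-name>` form, and it only prints the logout commands so you can review them before running anything.

```shell
# stale_logout_cmds: read `iscsiadm -m node` output on stdin and print an
# iscsiadm logout command for every record whose portal IP matches $1.
stale_logout_cmds() {
  stale_ip="$1"
  # Each record looks like: <IP>:<port>,<tpgt> <target-name>
  while read -r portal target; do
    ip="${portal%%:*}"   # strip ":<port>,<tpgt>" to keep only the IP
    if [ "$ip" = "$stale_ip" ]; then
      echo "iscsiadm -m node -T $target -p $ip --logout"
    fi
  done
}

# On a live node, review the printed commands, then pipe them to a shell:
#   iscsiadm -m node | stale_logout_cmds <stale-IP> | sh
```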

© 2019-2024 Longhorn Authors | Documentation Distributed under CC-BY-4.0
