Troubleshooting: Instance Manager Pods Are Restarted

December 12, 2024

Applicable versions

All Longhorn versions.

Symptoms

The Instance Manager pods are restarted, which causes a large number of iSCSI connection errors. The Longhorn Engines are disconnected from their replicas, making the affected volumes unstable.

Example of iSCSI errors in the kernel log:

Nov 21 00:54:02 node-xxx kernel:  connection438:0: detected conn error (1020)
Nov 21 00:54:02 node-xxx kernel:  connection437:0: detected conn error (1020)
Nov 21 00:54:02 node-xxx kernel:  connection436:0: detected conn error (1020)
Nov 21 00:54:02 node-xxx kernel:  connection435:0: detected conn error (1020)
Nov 21 00:54:02 node-xxx kernel:  connection434:0: detected conn error (1020)
Nov 21 00:54:02 node-xxx kernel:  connection433:0: detected conn error (1020)
Nov 21 00:54:02 node-xxx kernel:  connection432:0: detected conn error (1020)
....
Nov 21 00:54:02 node-xxx kernel:  connection275:0: detected conn error (1020) 

Example of messages displayed when the Instance Manager container is suddenly terminated:

time="2024-11-21T06:12:20.651526777Z" level=info msg="shim disconnected" id=548c02c5bc17426da586373f902e8d5811d5efe4e45d5fbd0495920626d014d9 namespace=k8s.io
time="2024-11-21T06:12:20.651603253Z" level=warning msg="cleaning up after shim disconnected" id=548c02c5bc17426da586373f902e8d5811d5efe4e45d5fbd0495920626d014d9 namespace=k8s.io
time="2024-11-21T06:12:21.819863412Z" level=info msg="Container to stop \"548c02c5bc17426da586373f902e8d5811d5efe4e45d5fbd0495920626d014d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""

Root Cause

The Instance Manager pod is a critical component responsible for managing the engine and replica processes of Longhorn volumes. If an Instance Manager pod becomes unstable and then crashes or restarts unexpectedly, the volumes it manages also become unstable.

An Instance Manager pod can be restarted or deleted for various reasons, including the following:

High CPU Load

The Instance Manager pod has a liveness probe that periodically checks the health of servers in the pod. An excessive number of running replica or engine processes may overload the servers and prevent them from responding to the liveness probe in a timely manner. The delayed response may cause the liveness probe to fail, prompting Kubernetes to either restart the container or terminate the pod.
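
To verify that a liveness probe failure triggered the restart, inspect the pod events and the probe definition. This is a sketch that assumes the default longhorn-system namespace; replace the pod name with the one from your cluster:

# Look for "Unhealthy" or "Liveness probe failed" events.
kubectl -n longhorn-system get events --sort-by=.lastTimestamp | grep -i liveness

# Show the liveness probe configured for the Instance Manager container.
kubectl -n longhorn-system get pod <instance-manager-pod-name> \
  -o jsonpath='{.spec.containers[0].livenessProbe}'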

The solution is to monitor CPU, memory, and network usage in the Kubernetes cluster. When resource usage is high, consider adding nodes or increasing the CPU and memory resources of the existing nodes. You can also use the Replica Auto Balance setting to spread the replica load more evenly across nodes.
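
For example, you can check resource usage and enable replica auto-balancing with the commands below. This sketch assumes that metrics-server is installed and that Longhorn runs in the default longhorn-system namespace; least-effort is one of the supported values (disabled, least-effort, best-effort), and the setting can also be changed from the Longhorn UI:

# Check node-level CPU and memory usage (requires metrics-server).
kubectl top nodes

# Check the resource usage of the Instance Manager pods.
kubectl -n longhorn-system top pods | grep instance-manager

# Spread replicas more evenly across nodes by enabling replica auto-balancing.
kubectl -n longhorn-system patch settings.longhorn.io replica-auto-balance \
  --type merge -p '{"value": "least-effort"}'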

Old Instance Manager Pod Terminated

When you upgrade Longhorn, a new Instance Manager pod with the updated instance manager image and engine image is created. However, Longhorn does not delete the old Instance Manager pod until all volumes are upgraded to the new engine image and all replica and engine processes on the old pod are stopped. In this case, termination of the old Instance Manager pod is expected.
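
To check whether an old Instance Manager pod is still legitimately running during an upgrade, you can compare the engine images referenced by your volumes. The following is a sketch assuming the default longhorn-system namespace:

# List Instance Manager custom resources and the image each one runs.
kubectl -n longhorn-system get instancemanagers.longhorn.io

# List engine images; an old image (and its processes) is kept while any volume still references it.
kubectl -n longhorn-system get engineimages.longhorn.io

# Inspect a specific volume to see which engine image it is still using.
kubectl -n longhorn-system describe volumes.longhorn.io <volume-name>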

For more information about upgrading volumes and how the transition process works, see the Upgrade documentation.

Danger Zone Settings Updated

Changes to certain settings in the Danger Zone are applied only after the system-managed components (for example, Instance Manager, CSI Driver, and engine images) are restarted.

Longhorn waits until all volumes are detached before restarting the Instance Manager pods.
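
To see whether a pending Danger Zone change is waiting on attached volumes, you can list the volume states and inspect the setting itself. This is a sketch assuming the default longhorn-system namespace; taint-toleration is used here only as one example of a Danger Zone setting:

# List Longhorn volumes and their current state; the change is applied only after all volumes are detached.
kubectl -n longhorn-system get volumes.longhorn.io

# Inspect an individual Danger Zone setting.
kubectl -n longhorn-system get settings.longhorn.io taint-toleration -o yaml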


