Troubleshooting: Recurring job does not create new jobs after detaching and attaching volume

Chin-Ya Huang | April 21, 2021

Applicable versions

All Longhorn versions.

Symptoms

Recurring job does not create new jobs when the volume is attached after being detached for a long time.

According to Kubernetes CronJob limitations:

For every CronJob, the CronJob Controller checks how many schedules it missed in the duration from its last scheduled time until now. If there are more than 100 missed schedules, then it does not start the job and logs the error

Cannot determine if job needs to be started. Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew.

This means how long a detach/attach cycle the recurring job can tolerate depends on its scheduled interval.

For example, if the recurring backup job is scheduled to run every minute, the volume can be detached for up to 100 minutes before the CronJob stops creating new jobs.
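The rule of thumb above can be sketched as a quick calculation: with the 100-missed-schedule limit, the longest detachment a recurring job survives is roughly 100 times its schedule interval. A minimal sketch, where the one-minute interval is an assumption matching the `* * * * *` schedule shown below:

```shell
# Rough tolerance estimate: the CronJob controller gives up after more than
# 100 missed schedules, so a job scheduled every N minutes tolerates roughly
# N * 100 minutes of detachment.
interval_minutes=1   # assumed interval for a "* * * * *" (every minute) schedule
tolerance_minutes=$((interval_minutes * 100))
echo "tolerated detach duration: ${tolerance_minutes} minutes"
```

Running this prints `tolerated detach duration: 100 minutes`, matching the example above.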

Solution

Delete the stuck CronJob directly; Longhorn will recreate it.

ip-172-30-0-211:/home/ec2-user # kubectl -n longhorn-system get cronjobs
NAME                                                  SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
pvc-394e6006-9c34-47da-bf27-2286ae669be1-c-ptl8l1-c   * * * * *   False     1        47s             2m23s

ip-172-30-0-211:/home/ec2-user # kubectl -n longhorn-system delete cronjobs/pvc-394e6006-9c34-47da-bf27-2286ae669be1-c-ptl8l1-c
cronjob.batch "pvc-394e6006-9c34-47da-bf27-2286ae669be1-c-ptl8l1-c" deleted

ip-172-30-0-211:/home/ec2-user # kubectl -n longhorn-system get cronjobs
No resources found in longhorn-system namespace.

ip-172-30-0-211:/home/ec2-user # sleep 60

ip-172-30-0-211:/home/ec2-user # kubectl -n longhorn-system get cronjobs
NAME                                                  SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
pvc-394e6006-9c34-47da-bf27-2286ae669be1-c-ptl8l1-c   * * * * *   False     1        2s              3m21s