Strengthen Longhorn with Automated End-to-End Tests
Chris Chien | January 19, 2026
E2E testing is a core part of ensuring Longhorn remains stable, resilient, and production-ready. While integration tests validate individual components, E2E tests exercise the entire system under real-world conditions: tests derived from user-reported issues, disruptive operations such as node shutdowns or reboots, and scenarios repeated hundreds of times to uncover stability issues. Automating these scenarios helps the QA team quickly validate every Longhorn release, reducing manual workload and catching issues early.
Longhorn maintains two major public automated test suites: Integration Tests and End-to-End (E2E) Tests.
The integration test suite focuses on validating core Longhorn functionality at the component level.
The integration test suite is gradually being deprecated in favor of moving all test cases into the E2E test framework.
The E2E test suite is written in Robot Framework using a BDD-style structure. It focuses on validating Longhorn as a whole system under real-world conditions, including disruptive operations and scenarios derived from user-reported issues.
As part of ongoing improvements, integration tests are actively being migrated into the E2E suite. The goal is to eventually use one unified test framework. Since E2E tests are written in a BDD-style format, it’s much easier to understand the test steps from the logs and see exactly why a test failed, which greatly helps with debugging.
This is how an E2E test flows through the framework:
Robot Test Case (.robot) → Robot Keyword (.resource) → Python Backend Logic (_keywords.py)
Robot Framework test for recurring backup job interruptions
*** Test Cases ***
Recurring backup job interruptions when Allow Recurring Job While Volume Is Detached is disabled
    [Documentation]    https://longhorn.github.io/longhorn-tests/manual/release-specific/v1.1.0/recurring-backup-job-interruptions/
    ...    Scenario 1 - Allow Recurring Job While Volume Is Detached disabled, attached pod scaled down while the recurring backup was in progress.
    Given Setting allow-recurring-job-while-volume-detached is set to false
    And Create storageclass longhorn-test with dataEngine=${DATA_ENGINE}
    And Create statefulset 0 using RWO volume with longhorn-test storageclass and size 5 Gi
    And Write 4096 MB data to file data in statefulset 0
    When Create backup recurringjob 0
    ...    groups=["default"]
    ...    cron=*/2 * * * *
    ...    concurrency=1
    ...    labels={"test":"recurringjob"}
    Then Wait for backup recurringjob 0 started
    And Scale statefulset 0 to 0
    And Verify backup list contains backup no error for statefulset 0 volume
    And Wait for statefulset 0 volume detached
    When Sleep 180
    And Verify no new backup created
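The last two steps assert that, with the setting disabled, no new backup appears while the volume stays detached. As a rough illustration of how such a check could be implemented in the Python backend, here is a hedged sketch (not the framework's actual code) that assumes Longhorn Backup custom resources under group longhorn.io, version v1beta2, in the longhorn-system namespace:

# Hypothetical sketch, not the actual longhorn-tests implementation.
from kubernetes import client, config

def list_backup_names():
    """Return the names of all Longhorn Backup custom resources."""
    config.load_kube_config()
    api = client.CustomObjectsApi()
    backups = api.list_namespaced_custom_object(
        "longhorn.io", "v1beta2", "longhorn-system", "backups")
    return {item["metadata"]["name"] for item in backups.get("items", [])}

def assert_no_new_backup(backups_before):
    """Fail if any backup exists now that was not in backups_before."""
    new_backups = list_backup_names() - backups_before
    assert not new_backups, f"unexpected new backups created: {new_backups}"

A keyword like Verify no new backup created could record list_backup_names() once the volume detaches and call assert_no_new_backup after the 180-second sleep.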
Resource keywords act as reusable building blocks shared across test cases. They map human-readable test steps to the Python backend functions that implement them.
Example keyword:
*** Settings ***
Documentation       RecurringJob Keywords
Library             Collections
Library             ../libs/keywords/common_keywords.py
Library             ../libs/keywords/recurringjob_keywords.py

*** Keywords ***
Wait for ${job_task} recurringjob ${job_id} started
    ${job_name} =    generate_name_with_suffix    ${job_task}    ${job_id}
    ${job_pod_name} =    wait_for_recurringjob_pod_create    ${job_name}
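The keyword delegates to two Python helpers. generate_name_with_suffix comes from the shared common_keywords.py library; as a rough idea only (the real implementation may differ), such a helper simply combines the Robot arguments into one deterministic resource name:

# Hypothetical sketch; the real helper in e2e/libs/keywords/common_keywords.py may differ.
def generate_name_with_suffix(kind, suffix):
    """Map Robot arguments such as kind='backup', suffix='0' to a single resource name."""
    return f"{kind}-{suffix}"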
Longhorn’s E2E tests follow a layered design that keeps test cases readable, maintainable, and easy to extend. Each E2E test case is composed of BDD-style test cases in *.robot files, reusable keywords in *.resource files, and Python backend logic in *_keywords.py files. Example:
*** Test Cases ***
Example test
    Given Setting example-setting is set to false
    And Create statefulset 0 using RWO volume
    When Perform some operation
    Then Verify expected result
This structure allows contributors to write tests in a clean, readable format while relying on shared keywords and Python functions in the backend.
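Behind a generic step such as Perform some operation sits a resource keyword that forwards to a plain Python function in a *_keywords.py library. A minimal, hypothetical example of what that backend function could look like (the module and names below are illustrative, not part of the real framework):

# e2e/libs/keywords/example_keywords.py (hypothetical, for illustration only)
import time

def perform_some_operation(resource_name, timeout=300, interval=5):
    """Poll until the operation on resource_name completes, or fail after timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if operation_is_done(resource_name):  # hypothetical status check
            return
        time.sleep(interval)
    raise AssertionError(f"operation on {resource_name} did not complete within {timeout}s")

def operation_is_done(resource_name):
    """Placeholder for a real check against the cluster or the Longhorn API."""
    return True

Keeping retries, timeouts, and logging in the Python layer is what lets the Robot test case stay a short list of readable steps.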
Write readable, BDD-style steps:
When Create backup recurringjob 0
...
Then Wait for backup recurringjob 0 started
Place the keyword in a .resource file that matches the feature area (for example, recurringjob.resource):
Wait for ${job_task} recurringjob ${job_id} started
    ${job_name} =    generate_name_with_suffix    ${job_task}    ${job_id}
    ${job_pod_name} =    wait_for_recurringjob_pod_create    ${job_name}
Implement backend logic in Python when there is no existing keyword or library that you can reuse.
Example:
# e2e/libs/keywords/recurringjob_keywords.py
def wait_for_recurringjob_pod_create(self, job_name):
    logging(f'Waiting for recurringjob {job_name} pod start')
    start_time = datetime.now(timezone.utc)
    retry_count, retry_interval = get_retry_count_and_interval()
    for i in range(retry_count):
        pods = list_namespaced_pod("longhorn-system", f"recurring-job.longhorn.io={job_name}")
        for pod in pods:
            if pod.metadata.creation_timestamp and pod.metadata.creation_timestamp > start_time:
                logging(f"Pod {pod.metadata.name} created at {pod.metadata.creation_timestamp}")
                return
        time.sleep(retry_interval)
    assert False, f"No new pod of {job_name} is created after {start_time}"
The function monitors Longhorn recurring job pods using the label recurring-job.longhorn.io={job_name}.
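list_namespaced_pod used above is a shared helper; lookups like this are typically thin wrappers around the official Kubernetes Python client. A hedged sketch of such a wrapper (the function name matches the snippet above, but the body is an assumption, not the framework's actual code):

# Hypothetical wrapper; the real helper in longhorn-tests may differ.
from kubernetes import client, config

def list_namespaced_pod(namespace, label_selector):
    """Return the pods in the namespace that match the label selector."""
    config.load_kube_config()  # or load_incluster_config() when running inside a pod
    core_v1 = client.CoreV1Api()
    return core_v1.list_namespaced_pod(namespace, label_selector=label_selector).items

The keyword then compares each returned pod's creation_timestamp against the recorded start time, as the retry loop above shows.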
Many common operations already exist and should be reused:
e2e/libs/keywords/common_keywords.py
e2e/libs/keywords/*_keywords.py
e2e/keywords/*.resource
longhorn-tests/e2e/tests/negative/recurring_backup_job_interruptions.robot
└─ calls → Wait for backup recurringjob 0 started
(Robot test case step)
longhorn-tests/e2e/keywords/recurringjob.resource
├─ imports → Library longhorn-tests/e2e/libs/keywords/recurringjob_keywords.py
└─ defines → Wait for ${job_task} recurringjob ${job_id} started
(Keyword define calls Python)
longhorn-tests/e2e/libs/keywords/recurringjob_keywords.py
└─ implements → wait_for_recurringjob_pod_create
(Python function that waits for the recurring job pod to start)
Following the steps above, convert each step of the manual test case into a clear, human-readable Robot Framework test case.
From the longhorn-tests/e2e folder, execute the test with:
./run.sh -t "Recurring backup job interruptions when Allow Recurring Job While Volume Is Detached is disabled"
Robot Framework will then generate its standard log.html and report.html output for reviewing the run.
