
[tests] Fix flaky E2E timeouts, add diagnostics and retry cleanup #2255

Draft
kvaps wants to merge 1 commit into main from fix/e2e-flaky-timeouts

Conversation

@kvaps
Member

@kvaps kvaps commented Mar 20, 2026

What this PR does

E2E tests in PR CI regularly fail for two reasons: HelmRelease readiness
timeouts that are too short for a freshly started QEMU/Talos sandbox, and
retries that run against the dirty cluster state left by the previous attempt.

Changes:

  • Increase HelmRelease wait timeouts across all affected app tests:
    qdrant (60s→180s), redis (20s→120s), vminstance disk (5s→120s),
    vminstance vm (5s→180s), mariadb (30s→120s), clickhouse (20s→120s),
    kafka (30s→120s), mongodb (60s→120s)
  • Add diagnostic output (HelmRelease yaml, pods, events) on failure in
    each affected bats file, following the pattern already used in harbor.bats
  • Add hack/e2e-cleanup-app.sh and cleanup-test-app-% Makefile target to
    remove leftover CRs and HelmReleases from tenant-test before each retry
  • Wire cleanup into the E2E workflow retry loop so retries run on a clean state

Release note

[tests] Fix flaky E2E tests: increase HelmRelease timeouts, add failure diagnostics, clean up namespace before retrying

Increase HelmRelease readiness timeouts that were too short for a
freshly-started QEMU/Talos sandbox:
- qdrant: 60s -> 180s
- redis: 20s -> 120s
- vminstance vm-disk: 5s -> 120s, vm-instance: 5s -> 180s
- mariadb: 30s -> 120s
- clickhouse: 20s -> 120s
- kafka: 30s -> 120s
- mongodb: 60s -> 120s

Add diagnostic output (HelmRelease yaml, pods, events) on failure in
all affected bats files, following the pattern already used in
harbor.bats.

Add cleanup before retry in the E2E workflow: introduce
hack/e2e-cleanup-app.sh and a cleanup-test-app-% Makefile target that
removes leftover CRs and HelmReleases from tenant-test namespace before
each subsequent attempt, so retries do not run against a dirty cluster.
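Based on this description, a minimal sketch of what hack/e2e-cleanup-app.sh might look like follows; the HelmRelease name-matching convention and the exact flags are assumptions, not the actual script:

```shell
#!/usr/bin/env sh
# Hypothetical sketch of hack/e2e-cleanup-app.sh based on the PR
# description; the name-matching convention here is an assumption.

NS="tenant-test"

cleanup_app() {
  app="$1"

  # Remove leftover custom resources from the apps.cozystack.io API group.
  kubectl api-resources --verbs=list --namespaced -o name 2>/dev/null |
    grep '\.apps\.cozystack\.io' |
    while read -r resource; do
      kubectl -n "$NS" delete "$resource" --all --ignore-not-found --wait=false || true
    done

  # Remove leftover HelmReleases whose name contains the app name.
  kubectl -n "$NS" get hr -o name 2>/dev/null |
    grep "/${app}" |
    while read -r hr; do
      kubectl -n "$NS" delete "$hr" --ignore-not-found --wait=false || true
    done
}
```

The corresponding cleanup-test-app-% Makefile target would presumably just invoke this script with the app name derived from the pattern stem.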

Co-Authored-By: Claude <[email protected]>
Signed-off-by: Andrei Kvapil <[email protected]>
@coderabbitai
Contributor

coderabbitai bot commented Mar 20, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 63b645fd-24f8-4c28-8fc4-86bbe26efc47


@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses the issue of flaky End-to-End (E2E) tests in the CI pipeline, which frequently fail due to insufficient HelmRelease readiness timeouts and dirty cluster states during test retries. The changes aim to stabilize the E2E suite by providing more generous wait times for application deployments, offering better diagnostic information on failures, and ensuring that each test retry starts with a clean environment.

Highlights

  • Increased HelmRelease Timeouts: Adjusted wait timeouts for HelmRelease readiness across multiple E2E application tests (qdrant, redis, vminstance, mariadb, clickhouse, kafka, mongodb) to prevent premature failures on slow environments.
  • Enhanced Failure Diagnostics: Implemented detailed diagnostic output (HelmRelease YAML, pods, events) to be displayed upon HelmRelease readiness timeout failures in various bats test files, aiding in quicker debugging.
  • Introduced E2E Cleanup Script: Added a new shell script (hack/e2e-cleanup-app.sh) designed to remove leftover custom resources and HelmReleases from the tenant-test namespace.
  • Integrated Cleanup into E2E Workflow: Created a Makefile target (cleanup-test-app-%) and wired it into the E2E retry loop to ensure that subsequent test retries run against a clean cluster state.


Ignored Files
  • Ignored by pattern: .github/workflows/** (1)
    • .github/workflows/pull-requests.yaml

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to fix flaky E2E tests by increasing timeouts, adding diagnostics, and implementing a cleanup mechanism for retries. The changes are a good step towards improving test stability. My review includes suggestions to reduce code duplication in the diagnostic reporting, improve the correctness and robustness of the new cleanup script by properly scoping resource deletion, and replace a fixed sleep with a more reliable waiting mechanism.

Comment on lines +20 to +24

    kubectl api-resources --verbs=list --namespaced -o name 2>/dev/null |
      grep '\.apps\.cozystack\.io' |
      while read -r resource; do
        kubectl -n "$NS" delete "$resource" --all --ignore-not-found --wait=false 2>/dev/null || true
      done

high

This part of the script deletes all custom resources within the apps.cozystack.io API group in the tenant-test namespace, regardless of the $APP argument passed to the script. This is inconsistent with the HelmRelease cleanup logic below, which is correctly scoped to the app.

This global cleanup is risky because:

  1. It's misleading for a script named e2e-cleanup-app.sh.
  2. It could lead to unexpected behavior or race conditions if tests were ever parallelized in the same namespace.
  3. It makes assumptions about the state of the namespace that might not always hold true.

The custom resource deletion should be scoped to the specific $APP, similar to how HelmReleases are handled. You might need to handle pluralization (e.g., qdrant -> qdrants) and composite apps (like vminstance which also has vmdisk) explicitly.
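The reviewer's scoped alternative might look like the following sketch; the pluralization map (qdrant → qdrants, plus vminstance's extra vmdisk CR) is a hypothetical mapping, not the repository's actual resource list:

```shell
#!/usr/bin/env sh
# Sketch of app-scoped CR deletion; the kind names below are assumptions.

NS="tenant-test"

crs_for_app() {
  # Map an app name to the CR kinds it owns, handling pluralization and
  # composite apps explicitly (hypothetical mapping).
  case "$1" in
    vminstance) echo "vmdisks.apps.cozystack.io vminstances.apps.cozystack.io" ;;
    *)          echo "${1}s.apps.cozystack.io" ;;
  esac
}

cleanup_app_crs() {
  for resource in $(crs_for_app "$1"); do
    kubectl -n "$NS" delete "$resource" --all --ignore-not-found --wait=false || true
  done
}
```

An explicit map like this also documents which CRs each test owns, at the cost of needing an update whenever a new composite app is added.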

Comment on lines +38 to +46

    kubectl -n tenant-test wait hr clickhouse-$name --timeout=120s --for=condition=ready || {
      echo "=== HelmRelease status ===" >&2
      kubectl -n tenant-test get hr clickhouse-$name -o yaml 2>&1 || true
      echo "=== Pods ===" >&2
      kubectl -n tenant-test get pods 2>&1 || true
      echo "=== Events ===" >&2
      kubectl -n tenant-test get events --sort-by='.lastTimestamp' 2>&1 | tail -30 || true
      false
    }

medium

This diagnostic block is duplicated across multiple *.bats files in this PR (e.g., kafka.bats, mariadb.bats, etc.). To improve maintainability and reduce code duplication, consider extracting this logic into a shared helper script or function.

For example, you could create a hack/e2e-diagnostics.sh script that takes the HelmRelease name as an argument and is called on failure. This would centralize the diagnostic logic, making it easier to update in the future.

This change would look something like this in the test file:

    kubectl -n tenant-test wait hr clickhouse-$name --timeout=120s --for=condition=ready || hack/e2e-diagnostics.sh clickhouse-$name

This comment applies to kafka.bats, mariadb.bats, mongodb.bats, qdrant.bats, redis.bats, and vminstance.bats as well.
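Made concrete, the proposed helper might look like this sketch; hack/e2e-diagnostics.sh does not exist in the PR as submitted, so everything here is the reviewer's suggestion, not the actual code:

```shell
#!/usr/bin/env sh
# Sketch of the suggested hack/e2e-diagnostics.sh helper; this is the
# reviewer's proposal, not part of the PR as submitted.

NS="tenant-test"

diagnose_hr() {
  hr="$1"
  echo "=== HelmRelease status ===" >&2
  kubectl -n "$NS" get hr "$hr" -o yaml 2>&1 || true
  echo "=== Pods ===" >&2
  kubectl -n "$NS" get pods 2>&1 || true
  echo "=== Events ===" >&2
  kubectl -n "$NS" get events --sort-by='.lastTimestamp' 2>&1 | tail -30 || true
  # Propagate failure so the calling bats test still fails.
  return 1
}
```

Returning non-zero preserves the current behaviour of the inline `false` in each bats file, so the `||` call site keeps failing the test after printing diagnostics.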

    done

    # Give controllers a moment to process deletions
    sleep 10

medium

Using a fixed sleep to wait for controllers to process deletions can be a source of flakiness. If controllers take longer than 10 seconds, the cleanup might be incomplete, causing the subsequent test retry to fail. If they are much faster, the sleep introduces unnecessary delays in the test suite.

A more robust approach would be to wait for the resources to be fully deleted. You can achieve this by changing --wait=false to --wait=true in your kubectl delete commands. This will block until the resource is gone.

Example:

    kubectl -n "$NS" delete hr "$hr" --ignore-not-found --wait=true

This will make the cleanup script slower, but more reliable. Given that this script runs on test retries, reliability is likely more important than a few extra seconds of cleanup time.
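A sketch of that approach, with a bounded fallback poll for controllers that finalize asynchronously — the 120s timeout and the poll interval are assumptions:

```shell
#!/usr/bin/env sh
# Sketch of blocking deletion replacing the fixed sleep; the timeout and
# polling values are assumptions.

NS="tenant-test"

delete_hr_and_wait() {
  # --wait=true makes kubectl block until the object is actually gone.
  kubectl -n "$NS" delete hr "$1" --ignore-not-found --wait=true --timeout=120s || true
}

wait_gone() {
  # Fallback poll: return as soon as the object can no longer be fetched.
  for _ in $(seq 1 60); do
    kubectl -n "$NS" get hr "$1" >/dev/null 2>&1 || return 0
    sleep 2
  done
  echo "timed out waiting for hr/$1 to disappear" >&2
  return 1
}
```

The bounded poll caps worst-case cleanup time while still removing the unconditional 10-second delay from the fast path.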
