[tests] Fix flaky E2E timeouts, add diagnostics and retry cleanup #2255
Conversation
Increase HelmRelease readiness timeouts that were too short for a freshly started QEMU/Talos sandbox:

- qdrant: 60s -> 180s
- redis: 20s -> 120s
- vminstance vm-disk: 5s -> 120s, vm-instance: 5s -> 180s
- mariadb: 30s -> 120s
- clickhouse: 20s -> 120s
- kafka: 30s -> 120s
- mongodb: 60s -> 120s

Add diagnostic output (HelmRelease yaml, pods, events) on failure in all affected bats files, following the pattern already used in harbor.bats.

Add cleanup before retry in the E2E workflow: introduce hack/e2e-cleanup-app.sh and a cleanup-test-app-% Makefile target that removes leftover CRs and HelmReleases from the tenant-test namespace before each subsequent attempt, so retries do not run against a dirty cluster.

Co-Authored-By: Claude <[email protected]>
Signed-off-by: Andrei Kvapil <[email protected]>
Summary of Changes

This pull request addresses the issue of flaky End-to-End (E2E) tests in the CI pipeline, which frequently fail due to insufficient HelmRelease readiness timeouts and dirty cluster states during test retries. The changes aim to stabilize the E2E suite by providing more generous wait times for application deployments, offering better diagnostic information on failures, and ensuring that each test retry starts with a clean environment.
Code Review
This pull request aims to fix flaky E2E tests by increasing timeouts, adding diagnostics, and implementing a cleanup mechanism for retries. The changes are a good step towards improving test stability. My review includes suggestions to reduce code duplication in the diagnostic reporting, improve the correctness and robustness of the new cleanup script by properly scoping resource deletion, and replace a fixed sleep with a more reliable waiting mechanism.
```shell
kubectl api-resources --verbs=list --namespaced -o name 2>/dev/null |
  grep '\.apps\.cozystack\.io' |
  while read -r resource; do
    kubectl -n "$NS" delete "$resource" --all --ignore-not-found --wait=false 2>/dev/null || true
  done
```
This part of the script deletes all custom resources within the `apps.cozystack.io` API group in the `tenant-test` namespace, regardless of the `$APP` argument passed to the script. This is inconsistent with the HelmRelease cleanup logic below, which is correctly scoped to the app.

This global cleanup is risky because:

- It's misleading for a script named `e2e-cleanup-app.sh`.
- It could lead to unexpected behavior or race conditions if tests were ever parallelized in the same namespace.
- It makes assumptions about the state of the namespace that might not always hold true.

The custom resource deletion should be scoped to the specific `$APP`, similar to how HelmReleases are handled. You might need to handle pluralization (e.g., `qdrant` -> `qdrants`) and composite apps (like `vminstance`, which also has `vmdisk`) explicitly.
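A scoped version of that cleanup could look roughly like the sketch below. This is only an illustration of the reviewer's suggestion, not code from the PR: the `resources_for_app` helper, its naive pluralization, and the composite-app mapping for `vminstance` are all assumptions.

```shell
#!/bin/sh
# Hypothetical sketch: scope CR deletion to the app under test instead of
# deleting every apps.cozystack.io resource in the namespace.
# resources_for_app and its mappings are illustrative, not from the PR.

NS="${NS:-tenant-test}"

# Map an app name to the CR resource plural(s) it owns.
# Composite apps such as vminstance also own vmdisk; pluralization is naive.
resources_for_app() {
  case "$1" in
    vminstance) echo "vminstances vmdisks" ;;
    *s)         echo "${1}es" ;;
    *)          echo "${1}s" ;;
  esac
}

cleanup_app() {
  app="$1"
  for resource in $(resources_for_app "$app"); do
    kubectl -n "$NS" delete "${resource}.apps.cozystack.io" --all \
      --ignore-not-found --wait=false 2>/dev/null || true
  done
}

# Only run the cleanup when an app name is passed on the command line.
if [ "$#" -ge 1 ]; then
  cleanup_app "$1"
fi
```

Keeping the app-to-resources mapping in one function makes the composite-app special cases explicit and easy to extend.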
```shell
kubectl -n tenant-test wait hr clickhouse-$name --timeout=120s --for=condition=ready || {
  echo "=== HelmRelease status ===" >&2
  kubectl -n tenant-test get hr clickhouse-$name -o yaml 2>&1 || true
  echo "=== Pods ===" >&2
  kubectl -n tenant-test get pods 2>&1 || true
  echo "=== Events ===" >&2
  kubectl -n tenant-test get events --sort-by='.lastTimestamp' 2>&1 | tail -30 || true
  false
}
```
This diagnostic block is duplicated across multiple *.bats files in this PR (e.g., kafka.bats, mariadb.bats, etc.). To improve maintainability and reduce code duplication, consider extracting this logic into a shared helper script or function.

For example, you could create a hack/e2e-diagnostics.sh script that takes the HelmRelease name as an argument and is called on failure. This would centralize the diagnostic logic, making it easier to update in the future.

This change would look something like this in the test file:

```shell
kubectl -n tenant-test wait hr clickhouse-$name --timeout=120s --for=condition=ready || hack/e2e-diagnostics.sh clickhouse-$name
```

This comment applies to kafka.bats, mariadb.bats, mongodb.bats, qdrant.bats, redis.bats, and vminstance.bats as well.
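One possible shape for such a helper, sketched from the duplicated block above: it dumps the same three sections (HelmRelease yaml, pods, events) and then returns non-zero so the `|| hack/e2e-diagnostics.sh …` call still fails the test. The script name and argument convention follow the reviewer's suggestion; the namespace default is an assumption.

```shell
#!/bin/sh
# Hypothetical hack/e2e-diagnostics.sh: dump diagnostics for a failed
# HelmRelease, then fail so the calling bats assertion still fails.

NS="${NS:-tenant-test}"

section() {
  # Print a diagnostics section header to stderr, matching the bats pattern.
  echo "=== $1 ===" >&2
}

dump_diagnostics() {
  hr="$1"
  section "HelmRelease status"
  kubectl -n "$NS" get hr "$hr" -o yaml 2>&1 || true
  section "Pods"
  kubectl -n "$NS" get pods 2>&1 || true
  section "Events"
  kubectl -n "$NS" get events --sort-by='.lastTimestamp' 2>&1 | tail -30 || true
  # Propagate failure to the caller.
  return 1
}

# Only run when a HelmRelease name is passed on the command line.
if [ "$#" -ge 1 ]; then
  dump_diagnostics "$1"
fi
```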
```shell
done

# Give controllers a moment to process deletions
sleep 10
```
Using a fixed sleep to wait for controllers to process deletions can be a source of flakiness. If controllers take longer than 10 seconds, the cleanup might be incomplete, causing the subsequent test retry to fail. If they are much faster, the sleep introduces unnecessary delays in the test suite.

A more robust approach would be to wait for the resources to be fully deleted. You can achieve this by changing `--wait=false` to `--wait=true` in your kubectl delete commands. This will block until the resource is gone.

Example:

```shell
kubectl -n "$NS" delete hr "$hr" --ignore-not-found --wait=true
```

This will make the cleanup script slower but more reliable. Given that this script runs on test retries, reliability is likely more important than a few extra seconds of cleanup time.
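Another option, if the deletes are intentionally issued with `--wait=false` and reaped afterwards, is to replace the fixed sleep with a bounded polling loop. The sketch below is an assumption about how that could look, not code from the PR; the 120s budget and 5s poll interval are illustrative.

```shell
#!/bin/sh
# Hypothetical replacement for `sleep 10`: poll until the namespace has no
# resources of the given type left, up to a timeout.

NS="${NS:-tenant-test}"

wait_for_empty() {
  # $1: resource type (e.g. hr), $2: timeout in seconds
  elapsed=0
  while [ "$elapsed" -lt "$2" ]; do
    count="$(kubectl -n "$NS" get "$1" --no-headers 2>/dev/null | wc -l)"
    # Done as soon as nothing of this type remains.
    [ "$count" -eq 0 ] && return 0
    sleep 5
    elapsed=$((elapsed + 5))
  done
  return 1
}

# Only run when a resource type is passed on the command line.
if [ "$#" -ge 1 ]; then
  wait_for_empty "$1" "${2:-120}"
fi
```

This keeps the fast path fast (it returns as soon as the namespace is clean) while still bounding how long a stuck finalizer can stall a retry.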
What this PR does
E2E tests in PR CI regularly fail due to HelmRelease readiness timeouts
that are too short for a freshly started QEMU/Talos sandbox, and retries
running against a dirty cluster state left by the previous attempt.
Changes:

- Increase HelmRelease readiness timeouts: qdrant (60s→180s), redis (20s→120s), vminstance disk (5s→120s), vminstance vm (5s→180s), mariadb (30s→120s), clickhouse (20s→120s), kafka (30s→120s), mongodb (60s→120s)
- Add diagnostic output (HelmRelease yaml, pods, events) on failure in each affected bats file, following the pattern already used in harbor.bats
- Add cleanup before retry: hack/e2e-cleanup-app.sh and a cleanup-test-app-% Makefile target remove leftover CRs and HelmReleases from tenant-test before each retry
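The `cleanup-test-app-%` pattern target described above might look roughly like this. This is a sketch only: the rule body and the script's argument convention are assumptions, not taken from the PR.

```makefile
# Hypothetical cleanup-test-app-% pattern rule: the stem (%) is the app
# name, passed through to the cleanup script via $*.
cleanup-test-app-%:
	hack/e2e-cleanup-app.sh $*
```

With a rule like this, a retry step can run e.g. `make cleanup-test-app-redis` before re-invoking the corresponding test target.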
Release note