Access to external services (e.g. quay) randomly fail during cluster-up steps #1330
Comments
/cc @akalenyu |
Btw, it would help if we had a way to compare the expected hash against the one that exists in the CI cache, if any, but that is of course more complicated. (ATM, even if we have the exact requested hash, we still contact quay as far as I remember, even if just for headers.) |
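A rough sketch of that idea: compare a digest taken from the CI cache against the expected one, and only fall back to a headers-only (HEAD) request to the registry on mismatch. This is not kubevirtci's actual code; the repository name, tag, and hard-coded digests are hypothetical, and quay.io may additionally require a bearer token, which is omitted here.

```go
// Illustrative only: skip the registry when the cached digest already
// matches the expected one, otherwise ask quay for manifest headers only.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// headManifestDigest asks the registry for the manifest headers of ref and
// returns the Docker-Content-Digest header. quay.io may require a bearer
// token even for anonymous pulls; auth is omitted in this sketch.
func headManifestDigest(ctx context.Context, repo, ref string) (string, error) {
	url := fmt.Sprintf("https://quay.io/v2/%s/manifests/%s", repo, ref)
	req, err := http.NewRequestWithContext(ctx, http.MethodHead, url, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Accept", "application/vnd.docker.distribution.manifest.v2+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status %s", resp.Status)
	}
	return resp.Header.Get("Docker-Content-Digest"), nil
}

func main() {
	// Placeholder digests; in practice these would come from the CI cache
	// and from the requested image reference.
	cachedDigest := "sha256:aaaa"
	expectedDigest := "sha256:aaaa"

	if cachedDigest == expectedDigest {
		fmt.Println("cache hit, no need to contact quay")
		return
	}

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	digest, err := headManifestDigest(ctx, "kubevirtci/gocli", "latest")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("registry digest:", digest)
}
```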
This can also be fixed, but it is not relevant to this ticket; it seems to happen on every cluster-up failure since the refactor. |
According to @brianmcarey, the gocli image is local to the test pod now. Thanks! |
Yes, the gocli image should be available to the test pods, so we shouldn't see this issue anymore. |
What happened:
On kubevirt/kubevirt job runs, access failures to quay are seen from time to time [1].
These occurrences fail the e2e jobs at start.
Here is an example:
[1] https://prow.ci.kubevirt.io/view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/13184/pull-kubevirt-e2e-k8s-1.29-sig-network-1.2/1856305515764649984#1:build-log.txt%3A330
What you expected to happen:
The expectation is to assume that internet connectivity and the service (quay) may not be 100% reliable, and that some flakes may occur due to many factors.
Therefore, such attempts should retry with a backoff and a defined timeout, as in the sketch below.
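A minimal sketch of retry-with-backoff against an overall deadline, assuming a plain HTTP reachability check against quay; the URL, timeouts, and backoff values are illustrative, not kubevirtci's actual implementation.

```go
// Illustrative only: retry a registry request with exponential backoff,
// bounded by an overall context deadline.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// fetchWithRetry keeps issuing GET requests to url until the registry
// responds (any non-5xx status) or the context deadline expires, doubling
// the wait between attempts.
func fetchWithRetry(ctx context.Context, url string) (*http.Response, error) {
	backoff := 2 * time.Second
	for {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return nil, err
		}
		resp, err := http.DefaultClient.Do(req)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil // 401/404 still mean quay answered
		}
		if resp != nil {
			resp.Body.Close()
		}
		select {
		case <-ctx.Done():
			return nil, fmt.Errorf("giving up on %s: %w", url, ctx.Err())
		case <-time.After(backoff):
			backoff *= 2
		}
	}
}

func main() {
	// The overall timeout bounds the total time spent retrying.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	resp, err := fetchWithRetry(ctx, "https://quay.io/v2/")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

The key design point is that the deadline caps total wait time, so a genuinely unreachable registry still fails the job promptly instead of hanging, while transient hiccups are absorbed by the growing backoff.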
How to reproduce it (as minimally and precisely as possible):
Random.
Additional context:
Environment: