One side effect of my "load test classes with the classloader used to run them" changes is that application augmentation and application start may now be separated in time. At the moment, we start dev services as part of augmentation. When augmentation and application start were butted up against each other, that seemed fine, but now that they're separated, it doesn't feel quite right. We do all the discovery up front, so we now do all the augmentation up front, which means we start dev services before they're needed.
This has several unfortunate consequences:
- If the tests in a test class are disabled for whatever reason, we still start the dev services (annoying, but an edge-case irritation).
- If two distinct `@QuarkusTestResource`s share the same dev service, one of them will fail (bad!). We shut down the dev service when we stop the application, but we start it during augmentation, not during startup, so there's an asymmetry. The second test will think it can share a dev service with the first one, but it can't, because the service will be shut down at the end of the first test. (New problem: distinct resources sharing a dev service will cause test failures #45786)
- A similar issue: if one test in a suite uses a profile and the other doesn't, and the test hardcodes a port for the dev service, the two services will try to start on the same port, and one will fail. integration-tests/opentelemetry-redis-instrumentation uses this pattern.
- If there is a catastrophic test failure, such as JUnit failing to discover tests (caused by an exception in the test discovery phase), containers will not be cleaned up.
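The sharing asymmetry in the second point can be sketched in plain Java (this is not Quarkus code; all names here are invented for illustration): the service is started during the augmentation step but torn down with the application, so a second consumer that expects to reuse it finds it already gone.

```java
import java.util.HashMap;
import java.util.Map;

public class DevServiceLifecycle {
    // Shared registry of running dev services, keyed by a sharing label
    static final Map<String, Boolean> running = new HashMap<>();

    static void augment(String label) {
        // Dev service is started during augmentation (the current behaviour)
        running.putIfAbsent(label, true);
    }

    static void stopApplication(String label) {
        // ...but it is stopped when the application stops: the asymmetry
        running.remove(label);
    }

    static boolean canReuse(String label) {
        return running.getOrDefault(label, false);
    }

    public static void main(String[] args) {
        augment("redis");          // first test's augmentation starts the service
        stopApplication("redis");  // first test finishes, service is torn down
        // a second test that believed it could share now finds nothing running
        System.out.println("reusable after first test: " + canReuse("redis"));
    }
}
```

If start and stop both happened at application scope (or both at augmentation scope), the registry would stay consistent across sharers.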
My first thought for a fix was simply to make sure the `DockerStatusBuildItem` wasn't produced until application start. That's pleasingly clean, but sadly doesn't work, because application start takes a `BuildResult`, so we can't be producing build items after that without totally changing the whole model.
Another option is to replicate the model. As well as `BuildItem`s, we could have `StartupItem`s, which get chained in the same way `BuildItem`s do, but which are for things that shouldn't happen during augmentation. A less radical option is to do some sort of partial, hacky re-augmentation, where something goes through `BuildItem`s that should semantically be `StartupItem`s and starts them. In implementation that's maybe not actually that different, but it keeps the API the same, so it's less invasive. So `DockerStatusBuildItem` could have an extra flag that says "I actually am a runtime consideration", and anything that consumes it then also gets deferred to runtime.
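The flag-and-defer idea might look something like this minimal sketch (all names and the two-record model are invented here, not the real Quarkus build API): steps that consume a runtime-flagged item are skipped during augmentation and queued to run at application start.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class DeferredBuildSketch {
    // Hypothetical model: an item carrying an "I am a runtime concern" flag,
    // and a step that consumes one item.
    record Item(String name, boolean runtimeConcern) {}
    record Step(String name, Item consumes) {}

    static final Queue<Step> deferred = new ArrayDeque<>();

    // Augmentation: run steps now, except those consuming a runtime-flagged
    // item, which are postponed until application start.
    static List<String> augment(List<Step> steps) {
        List<String> ranNow = new ArrayList<>();
        for (Step s : steps) {
            if (s.consumes().runtimeConcern()) {
                deferred.add(s);
            } else {
                ranNow.add(s.name());
            }
        }
        return ranNow;
    }

    // Application start: drain and run the deferred steps.
    static List<String> applicationStart() {
        List<String> ranLate = new ArrayList<>();
        while (!deferred.isEmpty()) {
            ranLate.add(deferred.poll().name());
        }
        return ranLate;
    }

    public static void main(String[] args) {
        Item docker = new Item("DockerStatusBuildItem", true);
        Item classes = new Item("GeneratedClassBuildItem", false);
        // The dev-service step is deferred; the class-writing step runs now
        System.out.println("at augmentation: " + augment(List.of(
                new Step("startDevServices", docker),
                new Step("writeClasses", classes))));
        System.out.println("at app start: " + applicationStart());
    }
}
```

The appeal is that producers and consumers keep the existing `BuildItem` API shape; only the scheduler needs to learn about the flag.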
Lots of Quarkiverse extensions start dev services, so we can't just change stuff without doing a lot of checks.
@cescoffier points out that some of the config generated by the dev services may be build time properties. @radcortez says:

> I agree that DevServices should be runtime. We have a lot of issues caused by trying to configure dev services with build time configuration, when what we need is the runtime configuration.
>
> Build time properties can always be moved to build time runtime fixed. But not the other way around.
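To make the distinction concrete, here is an illustrative datasource configuration (the property values are made up; the split between phases is the point): the database kind is fixed at build time, while the connection details are exactly the kind of values a dev service injects, which fit the runtime phase.

```properties
# Build-time property: fixed during augmentation
quarkus.datasource.db-kind=postgresql

# Runtime properties: the sort of config a dev service generates,
# so it naturally belongs to the runtime phase
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/test
quarkus.datasource.username=quarkus
```

If a dev service ever needed to influence a build-time property like `db-kind`, deferring its start to runtime would be too late, which is why the "move build time properties to build time runtime fixed" direction matters.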