Deleting a KnativeServing CR with changes, and then reapplying blank KS CR should use default serving yaml #138
/assign garron
I'm not sure about this one. Is the expectation that all the k-s ConfigMaps will be kept in sync with the CR's
I'm not sure what the behavior should be. This behavior is the same for the registry information and the ingress gateway, though, which I think should be reverted.
It feels like if we're going to support overrides (Gateways, ConfigMaps) in the CR spec, then that should be the primary way in which configuration for serving is updated. I'm inclined to think that the operator should be responsible for ensuring that the system is operating per its configuration (the combination of the original yaml content and the overrides in the CR), even if that means overwriting changes made through manual edits. However, the design discussion regarding platform-specific overrides (#122) makes me wonder whether the operator will be alone in managing these resources or not. Either way, it certainly seems wrong to me that updating the gateways and then reverting that change leaves the gateways configured in a way that isn't expressed by the operator's configuration. It feels like it breaks the declarative nature. I'm curious if others have thoughts on this.
It's worth mentioning that when the gateway is overridden and then removed, in order to restore what would have been the default functionality, we need to make use of the original configuration to get the default value for the selector. This isn't currently plumbed through to the transform functions (e.g. the one customizing the gateway).
I did some more poking around, and the example provided in the description also happens across deleting and recreating the KnativeServing resource. If, instead of just applying the second (clean/empty) spec in the example, you first `kubectl delete` the KnativeServing resource and the knative-serving namespace, then when applying the empty spec to create a fresh KnativeServing, the config-logging ConfigMap gets created with the overrides from before. This happens until I delete the operator's pod and let it start back up, which results in the proper desired Manifest getting re-initialized from the yaml+CR.

My initial thoughts on how to resolve this would be to stop holding onto the derived state coming from the yaml + CR-based transforms, and instead combine the pristine yaml-defined resources and the transforms at reconciliation time (without storing the result long-term). However, this doesn't seem to be how manifestival was intended to be used, because it depends on the internal state of the Manifest reflecting the desired state. CC: @jcrossley3

Unless I'm misunderstanding how this works, it also appears that all instances of KnativeServing (should we eventually support multiple) share the same desired Manifest in memory. This additional issue seems like it would also be resolved by applying the transforms to clean yaml-based resources on reconciliation, because the transforms take into account the configuration of the specific KnativeServing resource being reconciled and the yaml doesn't vary per KnativeServing resource.
One way to accomplish what I'm describing, without too many changes to how manifestival works, would be to add support for `my_manifest.WithTransforms(...)` returning a new Manifest, which could then be used for the subsequent `.ApplyAll()`, `checkDeployments`, etc.
Thanks for the legwork on this, @garron. I consider this a bug. I'd prefer to fix the way
I wrote up #163 to cover approaches to enforcing the CR's configuration, in order to focus this issue on maintaining appropriate state across deletion/creation of the KnativeServing CR.
After creating the KnativeServing instance, we then update it twice, once with an entry for the config-logging ConfigMap and then again without the key, verifying that the operator correctly syncs the ConfigMap each time.
This relies on the Transform function now being immutable, returning a new Manifest instead of mutating its source resources. This is related to knative#163 as well.
…to be canonical (#165)

* Add an e2e test for #138

  After creating the KnativeServing instance, we then update it twice, once with an entry for the config-logging ConfigMap and then again without the key, verifying that the operator correctly syncs the ConfigMap each time.

* Uses a pre-release of manifestival to fix #138

  This relies on the Transform function now being immutable, returning a new Manifest instead of mutating its source resources. This is related to #163 as well.

* Address golint warnings

* Transform manifest once and pass to each reconcile stage

* Let's confine the arbitrariness of the configmap to the fixture

  This also makes the WaitForConfigMap helper more helpful
Applying a KnativeServing with, say, a spec of:
And then applying the spec of:
The config is not reverted. This is the same for the other config options.
I believe this is because the transforms mutate the original manifest: https://github.com/knative/serving-operator/tree/e0413caae7af488613c242dda73dfd93ee4a2b97/pkg/reconciler/knativeserving/common. They should not mutate the original manifest, and should instead return a new manifest.
Example: `serving-operator/pkg/reconciler/knativeserving/common/config_maps.go`, line 48 at e0413ca