This repository has been archived by the owner on Jun 24, 2020. It is now read-only.

Deleting a KnativeServing CR with changes, and then reapplying blank KS CR should use default serving yaml #138

Closed
trshafer opened this issue Aug 9, 2019 · 9 comments · Fixed by #165

@trshafer
Contributor

trshafer commented Aug 9, 2019

Applying a KnativeServing with say a spec of

apiVersion: serving.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    logging:
      loglevel.controller: "debug"

And then applying the spec of:

apiVersion: serving.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving

The config is not reverted. This is the same for the other config options.

I believe this is because of the way the mutations happen: they mutate the original manifest in place: https://github.com/knative/serving-operator/tree/e0413caae7af488613c242dda73dfd93ee4a2b97/pkg/reconciler/knativeserving/common. Rather than mutating the original manifest, they should return a new one.

Example:

unstructured.SetNestedField(cm.Object, v, "data", k)
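
A minimal sketch of why this loses the defaults, using plain maps in place of k8s `unstructured` objects (`setNestedField` here is a simplified stand-in for `unstructured.SetNestedField`, not the real API):

```go
package main

import "fmt"

// setNestedField is a simplified stand-in for unstructured.SetNestedField:
// it writes v at the given path inside obj, mutating obj in place.
func setNestedField(obj map[string]interface{}, v interface{}, fields ...string) {
	m := obj
	for _, f := range fields[:len(fields)-1] {
		child, ok := m[f].(map[string]interface{})
		if !ok {
			child = map[string]interface{}{}
			m[f] = child
		}
		m = child
	}
	m[fields[len(fields)-1]] = v
}

func main() {
	// The operator's long-lived copy of the yaml-defined ConfigMap.
	original := map[string]interface{}{"data": map[string]interface{}{}}

	// Applying the CR's override mutates the long-lived copy...
	setNestedField(original, "debug", "data", "loglevel.controller")

	// ...so when the override is later removed from the CR, the
	// pristine default is no longer available to revert to.
	fmt.Println(original) // map[data:map[loglevel.controller:debug]]
}
```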

@trshafer
Contributor Author

/assign garron

@jcrossley3
Contributor

I'm not sure about this one. Is the expectation that all the k-s ConfigMaps will be kept in sync with the CR's spec.config? Because as long as it's possible for any of those ConfigMaps to be edited manually -- irrespective of the CR -- that may lead to some confusion.

@trshafer
Contributor Author

I'm not sure what the behavior should be. This behavior is the same for the registry information and the ingress gateway, though, which I think should be reverted.

@garron
Contributor

garron commented Sep 4, 2019

It feels like if we're going to support overrides (Gateways, ConfigMaps) in the CR spec, then that should be the primary way in which configuration for serving is updated. I'm inclined to think that the operator should be responsible for ensuring that the system is operating per its configuration (the combination of the original yaml content and the overrides in the CR), even if that means overwriting changes made through manual edits.

However, the design discussion regarding platform-specific overrides (#122) makes me wonder if the operator will be alone in managing these resources or not.

Either way, it certainly seems wrong to me that updating the gateways and then reverting that change leaves the gateways configured in a way that isn't expressed by the operator's configuration. It feels like it breaks the declarative nature.

I'm curious if others have thoughts on this.

@garron
Contributor

garron commented Sep 4, 2019

It's worth mentioning that when the gateway is overridden and then removed, in order to restore what would have been the default functionality, we need to make use of the original configuration to get the default value for selector. This isn't currently plumbed through to the transform functions (e.g. the one customizing the gateway).
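
A sketch of the gap, with hypothetical names (`updateIngressGateway`, the `pristineSelector` parameter) standing in for the real transform: reverting a removed override only works if the yaml-defined default is plumbed through to the transform, which today it isn't:

```go
package main

import "fmt"

// updateIngressGateway sketches the gateway transform. The names are
// illustrative: the point is that the revert branch needs the pristine
// (yaml-defined) selector passed in, which the operator's transform
// functions do not currently receive.
func updateIngressGateway(gateway map[string]interface{}, override, pristineSelector map[string]interface{}) {
	if len(override) > 0 {
		gateway["selector"] = override
		return
	}
	// No override in the CR: fall back to the yaml default. Without
	// pristineSelector plumbed through, this branch has nothing to restore.
	gateway["selector"] = pristineSelector
}

func main() {
	pristine := map[string]interface{}{"istio": "ingressgateway"}
	gw := map[string]interface{}{"selector": pristine}

	updateIngressGateway(gw, map[string]interface{}{"custom": "gateway"}, pristine)
	fmt.Println(gw["selector"]) // map[custom:gateway]

	updateIngressGateway(gw, nil, pristine)
	fmt.Println(gw["selector"]) // map[istio:ingressgateway]
}
```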

@garron
Contributor

garron commented Sep 6, 2019

I did some more poking around, and the example provided in the description also reproduces across deleting and recreating the KnativeServing resource. If, instead of just applying the second (clean/empty) spec in the example, you first "kubectl delete" the KnativeServing resource and the knative-serving namespace, then applying the empty spec to create a fresh KnativeServing still creates the config-logging ConfigMap with the overrides from before. This persists until I delete the operator's pod and let it start back up, which re-initializes the proper desired Manifest from the yaml+CR.

My initial thought on how to resolve this is to stop holding onto the derived state coming from the yaml + CR-based transforms, and instead combine the pristine yaml-defined resources and the transforms at reconciliation time (without storing the result long-term). However, this doesn't seem to be how manifestival was intended to be used, because it depends on the internal state of the Manifest reflecting the desired state. CC: @jcrossley3

Unless I'm misunderstanding how this works, it also appears that all instances of KnativeServing (should we eventually support multiple) share the same desired Manifest in memory. This additional issue seems like it would also be resolved by applying the transforms to clean yaml-based resources on reconciliation because the transforms take into account the configuration of the specific KnativeServing resource being reconciled and the yaml doesn't vary per KnativeServing resource.

@garron
Contributor

garron commented Sep 6, 2019

One way to accomplish what I'm describing which doesn't involve too much in the way of changes to how manifestival works would be to add support for my_manifest.WithTransforms(...) which returns a new Manifest, which could then be used for the subsequent .ApplyAll(), checkDeployments, etc.
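
A sketch of that proposed API, again with plain maps standing in for real manifestival resources (`Manifest`, `Transformer`, and `WithTransforms` here are illustrative, not the library's actual types): the method deep-copies the resources, applies the transforms to the copies, and returns a new Manifest, leaving the receiver untouched.

```go
package main

import "fmt"

// Manifest is a stand-in for manifestival's manifest type.
type Manifest struct {
	Resources []map[string]interface{}
}

// Transformer mutates a single (copied) resource.
type Transformer func(map[string]interface{})

// WithTransforms returns a new Manifest with the transforms applied to
// deep copies of the resources; the receiver's resources stay pristine.
func (m Manifest) WithTransforms(fns ...Transformer) Manifest {
	out := Manifest{Resources: make([]map[string]interface{}, 0, len(m.Resources))}
	for _, r := range m.Resources {
		c := deepCopy(r)
		for _, fn := range fns {
			fn(c)
		}
		out.Resources = append(out.Resources, c)
	}
	return out
}

func deepCopy(in map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(in))
	for k, v := range in {
		if child, ok := v.(map[string]interface{}); ok {
			out[k] = deepCopy(child)
		} else {
			out[k] = v
		}
	}
	return out
}

func main() {
	pristine := Manifest{Resources: []map[string]interface{}{
		{"data": map[string]interface{}{}},
	}}

	// Apply the CR's overrides at reconcile time without storing the result.
	desired := pristine.WithTransforms(func(r map[string]interface{}) {
		r["data"].(map[string]interface{})["loglevel.controller"] = "debug"
	})

	fmt.Println(len(pristine.Resources[0]["data"].(map[string]interface{}))) // 0: pristine untouched
	fmt.Println(desired.Resources[0]["data"].(map[string]interface{})["loglevel.controller"]) // debug
}
```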

@jcrossley3
Contributor

Thanks for the legwork on this, @garron. I consider this a bug. I'd prefer to fix the way Transform works in manifestival (return a new manifest rather than munging state) than add a new "non-broken" function. In the meantime, one (non-ideal) workaround that wouldn't require manifestival changes is to deep-copy the manifest.Resources prior to running Transform and reassign the copy after. Hacky, but should at least prove your theory.
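
The workaround sketched with plain maps (the mutating step below stands in for manifestival's current `Transform`): snapshot the resources before transforming, use the transformed result, then restore the snapshot so the in-memory manifest stays pristine.

```go
package main

import "fmt"

// deepCopyAll snapshots a slice of resources.
func deepCopyAll(in []map[string]interface{}) []map[string]interface{} {
	out := make([]map[string]interface{}, len(in))
	for i, r := range in {
		out[i] = deepCopy(r)
	}
	return out
}

func deepCopy(in map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(in))
	for k, v := range in {
		if child, ok := v.(map[string]interface{}); ok {
			out[k] = deepCopy(child)
		} else {
			out[k] = v
		}
	}
	return out
}

func main() {
	resources := []map[string]interface{}{{"data": map[string]interface{}{}}}

	// 1. Deep-copy the pristine resources before Transform mutates them.
	saved := deepCopyAll(resources)

	// 2. Run the mutating transform (stands in for manifest.Transform).
	resources[0]["data"].(map[string]interface{})["loglevel.controller"] = "debug"

	// 3. After applying the transformed resources, reassign the snapshot
	//    so later reconciles start from the pristine state again.
	resources = saved

	fmt.Println(len(resources[0]["data"].(map[string]interface{}))) // 0: pristine again
}
```

Hacky, as noted, but it isolates the mutation without touching manifestival.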

@garron
Contributor

garron commented Sep 10, 2019

I wrote up #163 to cover approaches to enforcing the CR's configuration, so that this issue can focus on maintaining appropriate state across deletion/creation of the KnativeServing CR.

@trshafer trshafer changed the title KnativeServing options are not removed when KnativeServing CR changes Deleting a KnativeServing CR with changes, and then reapplying blank KS CR should use default serving yaml Sep 10, 2019
jcrossley3 added a commit to jcrossley3/serving-operator that referenced this issue Sep 11, 2019
After creating the KnativeServing instance, we then update it twice,
once with an entry for the config-logging ConfigMap and then again
without the key, verifying that the operator correctly syncs the
ConfigMap each time.
jcrossley3 added a commit to jcrossley3/serving-operator that referenced this issue Sep 11, 2019
This relies on the Transform function now being immutable, returning a
new Manifest instead of mutating its source resources.

This is related to knative#163 as well.
knative-prow-robot pushed a commit that referenced this issue Sep 12, 2019
…to be canonical (#165)

* Add an e2e test for #138

After creating the KnativeServing instance, we then update it twice,
once with an entry for the config-logging ConfigMap and then again
without the key, verifying that the operator correctly syncs the
ConfigMap each time.

* Uses a pre-release of manifestival to fix #138

This relies on the Transform function now being immutable, returning a
new Manifest instead of mutating its source resources.

This is related to #163 as well.

* Address golint warnings

* Transform manifest once and pass to each reconcile stage

* Let's confine the arbitrariness of the configmap to the fixture

This also makes the WaitForConfigMap helper more helpful