
Wrong final value type after 0.5.0 upgrade #230

Open

Lexmark-peachj opened this issue Jun 11, 2021 · 4 comments
Labels
bug Something isn't working

Comments

@Lexmark-peachj

Terraform, Provider, Kubernetes versions

Terraform version: v0.15.5
Provider version: 0.5.0
Kubernetes version: v1.20.7

Affected Resource(s)

  • kubernetes_manifest

Terraform Configuration Files

variable "kubeconfig" {
  type        = string
  description = "Path to the temporary kubeconfig file"
}

provider "kubernetes-alpha" {
  config_path = var.kubeconfig
}

resource "kubernetes_manifest" "lcs-sli" {
  provider = kubernetes-alpha
  manifest = {
    apiVersion = "monitoring.coreos.com/v1"
    kind       = "PrometheusRule"
    metadata = {
      labels = {
        app     = "prometheus-operator"
        release = "prometheus-operator"
      }
      name      = "prometheus-operator-k8s-sli-lcs.rules"
      namespace = "monitoring"
    }
    spec = {
      groups = [
        {
          name = "k8s-sli-lcs.rules"
          rules = [
            {
              expr   = "istio_requests_total{reporter=\"source\",response_code!~\"5..\",destination_workload!=\"unknown\",source_workload!=\"unknown\"}"
              record = "reporter:istio_requests_total:not500_all_dw"
            },
            {
              expr   = "istio_requests_total{reporter=\"source\",destination_workload!=\"unknown\",source_workload!=\"unknown\"}"
              record = "reporter:istio_requests_total:all_dw"
            },
            {
              expr   = "istio_request_duration_milliseconds_sum{reporter=\"source\",destination_workload!=\"unknown\",source_workload!=\"unknown\"}"
              record = "reporter:istio_request_duration_milliseconds_sum:all_dw"
            },
            {
              expr   = "istio_request_duration_milliseconds_count{reporter=\"source\",destination_workload!=\"unknown\",source_workload!=\"unknown\"}"
              record = "reporter:istio_request_duration_milliseconds_count:all_dw"
            },
            {
              expr   = "rate(reporter:istio_requests_total:not500_all_dw[1d])"
              record = "reporter:istio_requests_total:not500_all_dw_1d:mean"
            },
            {
              expr   = "rate(reporter:istio_requests_total:all_dw[1d])"
              record = "reporter:istio_requests_total:all_dw_1d:mean"
            },
            {
              expr   = "rate(reporter:istio_requests_total:not500_all_dw[7d])"
              record = "reporter:istio_requests_total:not500_all_dw_7d:mean"
            },
            {
              expr   = "rate(reporter:istio_requests_total:all_dw[7d])"
              record = "reporter:istio_requests_total:all_dw_7d:mean"
            },
            {
              expr   = "rate(reporter:istio_requests_total:all_dw[2m])"
              record = "reporter:istio_requests_total:all_dw:mean"
            },
            {
              expr   = "rate(reporter:istio_request_duration_milliseconds_sum:all_dw[2m])"
              record = "reporter:istio_request_duration_milliseconds_sum:all_dw:mean"
            },
            {
              expr   = "rate(reporter:istio_request_duration_milliseconds_count:all_dw[2m])"
              record = "reporter:istio_request_duration_milliseconds_count:all_dw:mean"
            },
            {
              expr   = "sum(reporter:istio_requests_total:not500_all_dw_1d:mean)"
              record = "reporter:istio_requests_total:not500_all_dw_1d"
            },
            {
              expr   = "sum(reporter:istio_requests_total:all_dw_1d:mean)"
              record = "reporter:istio_requests_total:all_dw_1d"
            },
            {
              expr   = "sum(reporter:istio_requests_total:not500_all_dw_7d:mean)"
              record = "reporter:istio_requests_total:not500_all_dw_7d"
            },
            {
              expr   = "sum(reporter:istio_requests_total:all_dw_7d:mean)"
              record = "reporter:istio_requests_total:all_dw_7d"
            },
            {
              expr   = "sum(reporter:istio_requests_total:all_dw:mean)"
              record = "reporter:istio_requests_total:all_dw:sum"
            },
            {
              expr   = "sum(reporter:istio_request_duration_milliseconds_sum:all_dw:mean) by (destination_workload)"
              record = "reporter:istio_request_duration_milliseconds_sum:all_dw:sum"
            },
            {
              expr   = "sum(reporter:istio_request_duration_milliseconds_count:all_dw:mean) by (destination_workload)"
              record = "reporter:istio_request_duration_milliseconds_count:all_dw:sum"
            },
          ]
        },
      ]
    }
  }
}

Steps to Reproduce

  1. Create and save a plan using terraform plan -out tfplan
  2. Run terraform apply

Expected Behavior

We're trying to apply a PrometheusRule custom resource for Prometheus. After this resource is applied, the Prometheus operator adds an annotation called "prometheus-operator-validated" to it. We are not setting any annotations in our config. Version 0.4.1 does not attempt to manage this annotation and the apply works successfully. With 0.5.0, either the apply should succeed, or a lifecycle rule to ignore changes to the annotations should work.

Actual Behavior

 Error: Provider produced inconsistent result after apply
│ 
│ When applying changes to kubernetes_manifest.lcs-sli, provider "provider[\"registry.terraform.io/hashicorp/kubernetes-alpha\"]" produced an unexpected new value: .object:
│ wrong final value type: incorrect object attributes.
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

We also tried adding a lifecycle block as follows:

  lifecycle {
    ignore_changes = [
      manifest.metadata.annotations
    ]
  }

This resulted in the same problem: the plan shows that it wants to remove the annotation, and the apply fails because Prometheus puts the annotation back before the apply finishes.

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
Lexmark-peachj added the bug label Jun 11, 2021
@dniel commented Jun 11, 2021

Got the same result when trying to deploy a HelmRelease with version 0.5.0

@GustavJaner

Have the same error when applying Prometheus rules with the new release. Everything works fine when applying with v0.4.1, but applying the same config with v0.5.0 results in the same error as posted in this issue: Error: Provider produced inconsistent result after apply ... produced an unexpected new value: .object: wrong final value type: incorrect object attributes. ...

@pschiffe

I think I had this issue with v0.5 and

manifest = {
    apiVersion = "elbv2.k8s.aws/v1beta1"
    kind       = "TargetGroupBinding"

I was able to work around it by adding finalizers = ["elbv2.k8s.aws/resources"] to the metadata of the manifest.
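
For reference, a minimal sketch of where that finalizer would go (the resource name, metadata values, and omitted spec below are placeholders, not taken from the original report):

resource "kubernetes_manifest" "tgb" {
  provider = kubernetes-alpha
  manifest = {
    apiVersion = "elbv2.k8s.aws/v1beta1"
    kind       = "TargetGroupBinding"
    metadata = {
      name      = "example"   # placeholder
      namespace = "default"   # placeholder
      # Declaring the finalizer that the controller would otherwise add on
      # its own keeps the planned and applied objects structurally consistent.
      finalizers = ["elbv2.k8s.aws/resources"]
    }
    # spec = { ... }          # remainder of the TargetGroupBinding spec
  }
}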

@alexsomesan
Member

This is a known issue. It happens when the user and some cluster components both add values to "annotations". We're currently looking at ways to address this.
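
Using the manifest from this report as an illustration of that overlap (a sketch, not provider output; the annotation value is assumed): the configuration declares metadata without annotations, while the object read back from the cluster carries the extra annotation, so the applied object no longer matches the planned one.

# metadata as declared in the configuration: no annotations
metadata = {
  labels    = { app = "prometheus-operator", release = "prometheus-operator" }
  name      = "prometheus-operator-k8s-sli-lcs.rules"
  namespace = "monitoring"
}

# metadata as returned by the cluster after apply: the operator has added
# an annotation the plan never contained (value assumed for illustration)
metadata = {
  annotations = { "prometheus-operator-validated" = "true" }
  labels      = { app = "prometheus-operator", release = "prometheus-operator" }
  name        = "prometheus-operator-k8s-sli-lcs.rules"
  namespace   = "monitoring"
}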
