failed to create all resources on apparently successful run #237
Comments
I have a similar issue with one manifest containing multiple resources separated by ---.
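For reference, the pattern that seems to trigger this is feeding a whole multi-document YAML to a single kubectl_manifest. As far as I can tell, yaml_body is treated as one document, so anything after the first --- is silently dropped even though the apply reports success. A minimal sketch of that anti-pattern (the file name and resource name here are assumptions, not taken from this issue):

# Anti-pattern (hypothetical names): one kubectl_manifest fed a multi-document YAML.
# yaml_body appears to be treated as a single document, so resources after the
# first --- separator are not created even though the run looks successful.
resource "kubectl_manifest" "all_in_one" {
  yaml_body = file("${path.module}/manifests.yaml") # file contains several documents separated by ---
}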
I was having a similar issue. What worked for me was to fetch the manifest, split it with kubectl_file_documents, and create one kubectl_manifest per document:

# ingress_nginx_controller.tf
# Get the deploy.yaml config file
data "http" "ingress_nginx_controller" {
  url = "https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/aws/deploy.yaml"
}

# Pass the content to kubectl_file_documents: https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs/data-sources/kubectl_file_documents
data "kubectl_file_documents" "ingress_nginx_config" {
  content = data.http.ingress_nginx_controller.response_body
}

# Split the multi-document YAML into individual kubectl_manifest resources
resource "kubectl_manifest" "ingress-nginx-controller" {
  for_each  = data.kubectl_file_documents.ingress_nginx_config.manifests
  yaml_body = each.value
}

The plan output is rather large for me to paste here, but here's a snippet:

...
  # module.helm.module.minikube[0].kubectl_manifest.ingress-nginx-controller["/apis/rbac.authorization.k8s.io/v1/namespaces/ingress-nginx/roles/ingress-nginx-admission"] will be created
  + resource "kubectl_manifest" "ingress-nginx-controller" {
      + api_version             = "rbac.authorization.k8s.io/v1"
      + apply_only              = false
      + force_conflicts         = false
      + force_new               = false
      + id                      = (known after apply)
      + kind                    = "Role"
      + live_manifest_incluster = (sensitive value)
      + live_uid                = (known after apply)
      + name                    = "ingress-nginx-admission"
      + namespace               = "ingress-nginx"
      + server_side_apply       = false
      + uid                     = (known after apply)
      + validate_schema         = true
      + wait_for_rollout        = true
      + yaml_body               = (sensitive value)
      + yaml_body_parsed        = <<-EOT
            apiVersion: rbac.authorization.k8s.io/v1
            kind: Role
            metadata:
              labels:
                app.kubernetes.io/component: admission-webhook
                app.kubernetes.io/instance: ingress-nginx
                app.kubernetes.io/name: ingress-nginx
                app.kubernetes.io/part-of: ingress-nginx
                app.kubernetes.io/version: 1.2.0
              name: ingress-nginx-admission
              namespace: ingress-nginx
            rules:
              - apiGroups:
                  - ""
                resources:
                  - secrets
                verbs:
                  - get
                  - create
        EOT
      + yaml_incluster          = (sensitive value)
    }

Plan: 19 to add, 0 to change, 0 to destroy.
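For anyone whose multi-document manifest lives in a local file rather than being fetched over HTTP, a variant along the same lines should work (the file and resource names here are assumptions):

# Hypothetical local-file variant: split a multi-document manifest read from disk
data "kubectl_file_documents" "local" {
  content = file("${path.module}/manifests.yaml")
}

# One kubectl_manifest per document. The for_each keys are derived from each
# document's API path (as in the plan snippet above), so they stay stable across runs.
resource "kubectl_manifest" "local" {
  for_each  = data.kubectl_file_documents.local.manifests
  yaml_body = each.value
}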
I am using the kubectl provider to install the NGINX Ingress Controller on an EKS cluster from a GitLab pipeline. The run was apparently successful, however not all resources were created.
ingress_nginx_controller.tf
CI job output
Checking with kubectl gives the following, so I know it succeeded in running.
But applying the same manifest again with kubectl creates a lot more resources.
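If you wire the manifest through kubectl_file_documents as in the comment above, one quick sanity check is to expose how many documents Terraform actually parsed and compare that with the number of resources kubectl reports. This is just a sketch; the output name is an example, and it assumes the ingress_nginx_config data source from that comment:

output "ingress_nginx_document_count" {
  # Number of YAML documents Terraform parsed out of deploy.yaml. If this is lower
  # than the number of resources kubectl applies, the documents are being lost at
  # the split step rather than during the apply.
  value = length(data.kubectl_file_documents.ingress_nginx_config.documents)
}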