kf-tools--pipelines ended up degraded in my first local quickstart #113
-
I am truly a newbie to Kubeflow and deployKF. I want to deploy my first Kubeflow with deployKF to get started. I followed https://www.deploykf.org/guides/local-quickstart to set up an instance of deployKF v0.1.4 (29ac97a) on my Fedora 38 machine with moby-engine 24.0.5 and k3d 5.6.0, but it ended up with four Pods failing to run: …
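(For reference, a minimal sketch of how such failing Pods can be listed with plain kubectl; the Pod and namespace names below are placeholders, not values from this report:)

```bash
# List Pods in every namespace and filter out healthy ones,
# to find the failing Pods mentioned above
kubectl get pods --all-namespaces | grep -Ev "Running|Completed"

# Describe a failing Pod to see its events and the reason it cannot start
# (the namespace and Pod name here are placeholders)
kubectl describe pod <pod-name> -n <namespace>
```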
-
@cheese can you check the state AND logs of the Pods in the …? After you save the …, restart the kyverno background-controller by deleting its Pods:

kubectl delete pods -n kyverno -l "app.kubernetes.io/part-of=kyverno,app.kubernetes.io/component=background-controller"

After it comes back up, check that the following secrets were created: …
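Roughly, the commands for this step would look like the sketch below; the deploykf-minio namespace and the secret name prefix are assumptions based on the rest of this thread:

```bash
# Check the state of the kyverno background-controller Pods
kubectl get pods -n kyverno \
  -l "app.kubernetes.io/part-of=kyverno,app.kubernetes.io/component=background-controller"

# Inspect their logs (the label selector avoids hard-coding a Pod name)
kubectl logs -n kyverno --tail=100 \
  -l "app.kubernetes.io/part-of=kyverno,app.kubernetes.io/component=background-controller"

# Restart them by deleting the Pods
kubectl delete pods -n kyverno \
  -l "app.kubernetes.io/part-of=kyverno,app.kubernetes.io/component=background-controller"

# Then verify the expected secrets exist (namespace and name prefix are
# assumptions taken from the secrets discussed later in this thread)
kubectl get secrets -n deploykf-minio | grep generated--kubeflow-pipelines
```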
-
Logs of …

I deleted the pod, but the secrets were not created after it came back.
@cheese it really looks like something deleted the minio secrets which are the source of these clones:

- Secret/generated--kubeflow-pipelines--backend-object-store-auth (in the deploykf-minio namespace)
- Secret/generated--kubeflow-pipelines--profile-object-store-auth--team-1 (in the deploykf-minio namespace)
- Secret/generated--kubeflow-pipelines--profile-object-store-auth--team-1-prod (in the deploykf-minio namespace)

Did you delete these manually?

(NOTE: they show as "need pruning" in the ArgoCD UI, which might be confusing because they should NOT be deleted; they aren't created by ArgoCD, but instead by a pre-install Job Pod.)

Either way, you can trigger the job to recreate them by re-sync…
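For completeness, a rough sketch of triggering that re-sync from the CLI and then checking the source secrets; the ArgoCD application name deploykf-minio is an assumption, so adjust it to whichever app owns the minio pre-install Job in your deployment:

```bash
# Re-sync the app whose pre-install Job generates the minio secrets
# (the app name "deploykf-minio" is an assumption; check your ArgoCD app list)
argocd app sync deploykf-minio

# Alternatively, sync from the ArgoCD UI, then confirm the source secrets exist
kubectl get secrets -n deploykf-minio \
  generated--kubeflow-pipelines--backend-object-store-auth \
  generated--kubeflow-pipelines--profile-object-store-auth--team-1 \
  generated--kubeflow-pipelines--profile-object-store-auth--team-1-prod
```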