NiFiKop Dataflow and Parameters context CRDs stuck on waiting for referenced cluster to be ready #497
I believe the main suspect is the `gracefulActionState` section, whose `actionState` value may not satisfy the operator's `podIsReady` check. However, I am unable to work out what is causing this state or how to tackle it, given that this is a single node whose canvas service is available. I need some direction on what is going on and how I should investigate this scenario (see the sketch after the manifest below for what I would expect a ready status to look like).

```yaml
apiVersion: nifi.konpyutaika.com/v1
kind: NifiCluster
metadata:
  name: nifi
  namespace: mwe
status:
  nodesState:
    '1':
      configurationState: ConfigInSync
      creationTime: '2024-12-11T08:07:58Z'
      gracefulActionState:
        TaskStarted: Wed, 11 Dec 2024 08:07:58 GMT
        actionState: GracefulUpscaleRunning
        actionStep: CONNECTING
        errorMessage: ''
      initClusterNode: true
      podIsReady: true
  prometheusReportingTask:
    id: b4c5e6f2-0193-1000-ffff-ffffa1a79750
    version: 2
  rollingUpgradeStatus:
    errorCount: 0
    lastSuccess: ''
  rootProcessGroupId: b4c486c6-0193-1000-1245-03922ff5d924
  state: ClusterRunning
spec:
  clusterImage: apache/nifi:1.25.0
  disruptionBudget: {}
  initContainerImage: bash:5.2.2
  ldapConfiguration: {}
  listenersConfig:
    internalListeners:
      - containerPort: 8443
        name: https
        type: https
      - containerPort: 6007
        name: cluster
        type: cluster
      - containerPort: 10000
        name: s2s
        type: s2s
      - containerPort: 6342
        name: load-balance
        type: load-balance
      - containerPort: 18080
        name: registry
      - containerPort: 9092
        name: prometheus
        type: prometheus
    sslSecrets:
      create: true
      tlsSecretName: nifi-tls
  nifiClusterTaskSpec:
    retryDurationMinutes: 10
  nodeConfigGroups:
    default_group:
      imagePullPolicy: IfNotPresent
      isNode: true
      podMetadata: {}
      resourcesRequirements:
        limits:
          cpu: '1'
          memory: 2G
        requests:
          cpu: 500m
          memory: 1G
      serviceAccountName: default
      storageConfigs:
        - metadata: {}
          mountPath: /opt/nifi/nifi-current/logs
          name: logs
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
            storageClassName: default
        - metadata: {}
          mountPath: /opt/nifi/data
          name: data
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
            storageClassName: default
        - metadata: {}
          mountPath: /opt/nifi/flowfile_repository
          name: flowfile-repository
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
            storageClassName: default
        - metadata: {}
          mountPath: /opt/nifi/nifi-current/conf
          name: conf
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
            storageClassName: default
        - metadata: {}
          mountPath: /opt/nifi/content_repository
          name: content-repository
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
            storageClassName: default
        - metadata: {}
          mountPath: /opt/nifi/provenance_repository
          name: provenance-repository
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
            storageClassName: default
  nodes:
    - id: 1
      nodeConfigGroup: default_group
      readOnlyConfig:
        authorizerConfig: {}
        bootstrapNotificationServicesConfig: {}
        bootstrapProperties: {}
        logbackConfig: {}
        nifiProperties:
          webProxyHosts:
            - localhost
            - localhost:443
            - localhost:8443
            - 127.0.0.1
            - 127.0.0.1:80
            - 127.0.0.1:443
            - 127.0.0.1:8443
        zookeeperProperties: {}
  pod:
    labels:
      cluster-name: nifi
  propagateLabels: true
  readOnlyConfig:
    authorizerConfig: {}
    bootstrapNotificationServicesConfig: {}
    bootstrapProperties: {}
    logbackConfig: {}
    nifiProperties:
      overrideConfigs: >
        nifi.sensitive.props.key=9qT79q3MVyyv
        nifi.flow.configuration.archive.enabled=false
        # bind to loopback network interface
        nifi.web.https.network.interface.eth0=eth0
        nifi.web.https.network.interface.lo=lo
        # S2S properties
        nifi.remote.route.http.local.when=true
        nifi.remote.route.http.local.port=443
        nifi.remote.route.http.local.secure=true
        nifi.remote.route.http.local.hostname=${s2s.target.hostname:substringBefore('.'):substringBeforeLast('-')}.localhost
        # Cluster protocol properties
        nifi.cluster.protocol.heartbeat.interval=25 sec
        nifi.cluster.protocol.heartbeat.missable.max=8
    zookeeperProperties: {}
  secretRef:
    name: ''
  service:
    headlessEnabled: false
    labels:
      cluster-name: nifi
  singleUserConfiguration:
    authorizerEnabled: true
    enabled: true
    secretKeys:
      password: password
      username: username
    secretRef:
      name: nifi-single-user-auth
  zkAddress: zookeeper:2181
  zkPath: /nifi
```

MWE files available at
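If I understand the operator's reconciliation correctly, dependent resources such as NifiDataflow and NifiParameterContext are only processed once every node's graceful action has reached a terminal state and the cluster state is ClusterRunning. The snippet below is my guess at what a healthy single-node status should look like, for comparison with the stuck status above; the terminal state name `GracefulUpscaleSucceeded` is an assumption on my part, not output from this cluster:

```yaml
# Assumed healthy counterpart of the stuck status above (state names are my guess):
status:
  nodesState:
    '1':
      configurationState: ConfigInSync
      gracefulActionState:
        actionState: GracefulUpscaleSucceeded  # instead of GracefulUpscaleRunning / CONNECTING
        errorMessage: ''
      podIsReady: true
  state: ClusterRunning
```

In my cluster, node 1 never leaves GracefulUpscaleRunning at the CONNECTING step, even though `podIsReady` is true and the overall state is ClusterRunning.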
Type of question

Implementation Assistance / Support question
What steps will reproduce the bug?
`headless: false` (see the MWE above)
What is the expected behavior?
NiFiParameterContext should not report 'cluster not ready' events, and the parameter values defined in the context should be available within the canvas.
What do you see instead?
NiFiParameterContext reports events of type
'The referenced cluster is not ready yet : nifi in nifi'.
The values defined by the parameter context are not available in the canvas.
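For reference, the parameter context in the MWE references the cluster roughly as follows. This is a trimmed sketch with placeholder names and parameter values, not the exact MWE file; the `clusterRef` is the relevant part, since the event quotes the resolved reference as 'nifi in nifi':

```yaml
apiVersion: nifi.konpyutaika.com/v1
kind: NifiParameterContext
metadata:
  name: example-context            # placeholder name
  namespace: mwe
spec:
  description: MWE parameter context
  clusterRef:
    name: nifi                     # the NifiCluster above
    namespace: mwe                 # namespace the NifiCluster lives in
  parameters:
    - name: example.parameter      # placeholder parameter
      value: example-value
      sensitive: false
```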
Possible solution
No response
NiFiKop version
v1.3.1
Golang version
go version go1.18.1 linux/amd64
Kubernetes version
Client Version: v1.28.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.4+k0s
NiFi version
1.25.0
Additional context
NA
Second environment:

NiFiKop version
v1.3.1-release
Golang version
go version go1.18.1 linux/amd64
Kubernetes version
Client Version: v1.28.4
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.27.3
NiFi version
1.25.0