From 2940bbcf34bcf99c05a20d5d523c9d23d94e2369 Mon Sep 17 00:00:00 2001 From: delldubey Date: Tue, 25 Jun 2024 18:37:01 +0530 Subject: [PATCH] Updates the msg for clarity (#1152) Co-authored-by: boyamurthy <92081029+boyamurthy@users.noreply.github.com> --- content/docs/csidriver/release/powerstore.md | 24 ++++++++++---------- content/v1/csidriver/release/powerstore.md | 2 +- content/v2/csidriver/release/powerstore.md | 2 +- content/v3/csidriver/release/powerstore.md | 4 ++-- 4 files changed, 16 insertions(+), 16 deletions(-) diff --git a/content/docs/csidriver/release/powerstore.md b/content/docs/csidriver/release/powerstore.md index afd58eddce..a6faeca3db 100644 --- a/content/docs/csidriver/release/powerstore.md +++ b/content/docs/csidriver/release/powerstore.md @@ -29,18 +29,18 @@ description: Release notes for PowerStore CSI driver ### Known Issues -| Issue | Resolution or workaround, if known | -|--------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation | Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100
| -| fsGroupPolicy may not work as expected without root privileges for NFS only
https://github.com/kubernetes/examples/issues/260 | To get the desired behavior set "allowRoot: "true" in the storage class parameter | -| If the NVMeFC pod is not getting created and the host looses the ssh connection, causing the driver pods to go to error state | remove the nvme_tcp module from the host incase of NVMeFC connection | -| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround:
1. Force delete the pod running on the node that went down
2. Delete the volumeattachment to the node that went down.
Now the volume can be attached to the new node. | -| When driver node pods enter CrashLoopBackOff and PVC remains in pending state with one of the following events:
1. failed to provision volume with StorageClass ``: error generating accessibility requirements: no available topology found
2. waiting for a volume to be created, either by external provisioner "csi-powerstore.dellemc.com" or manually created by system administrator. | Check whether all array details present in the secret file are valid and remove any invalid entries if present.
Redeploy the driver. | -| If an ephemeral pod is not being created in OpenShift 4.13 and is failing with the error "error when creating pod: the pod uses an inline volume provided by CSIDriver csi-powerstore.dellemc.com, and the namespace has a pod security enforcement level that is lower than privileged." | This issue occurs because OpenShift 4.13 introduced the CSI Volume Admission plugin to restrict the use of a CSI driver capable of provisioning CSI ephemeral volumes during pod admission https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/ephemeral-storage-csi-inline.html . Therefore, an additional label "security.openshift.io/csi-ephemeral-volume-profile" needs to be added to the CSIDriver object to support inline ephemeral volumes. | -| In OpenShift 4.13, the root user is not allowed to perform write operations on NFS shares, when root squashing is enabled. | The workaround for this issue is to disable root squashing by setting allowRoot: "true" in the NFS storage class. | -| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs, and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with Kubenetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. | -| If two separate networks are configured for ISCSI and NVMeTCP, the driver may encounter difficulty identifying the second network (e.g., NVMeTCP). | This is a known issue, and the workaround involves creating a single network on the array to serve both ISCSI and NVMeTCP purposes. | -| Creation of clone PVC using snapshot will fail when the source PVC is deleted | This is a known issue, and the workaround is do not delete the source PVC if you want to create clone volumes using snapshot. | +| Issue | Resolution or workaround, if known | +|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation | Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100
| +| fsGroupPolicy may not work as expected without root privileges for NFS only
https://github.com/kubernetes/examples/issues/260 | To get the desired behavior, set allowRoot: "true" in the storage class parameters | +| If the NVMeFC pod is not getting created and the host loses the SSH connection, causing the driver pods to go into an error state | Remove the nvme_tcp module from the host in case of an NVMeFC connection | +| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround:<br />
1. Force delete the pod running on the node that went down
2. Delete the volumeattachment to the node that went down.
Now the volume can be attached to the new node. | +| When driver node pods enter CrashLoopBackOff and PVC remains in pending state with one of the following events:
1. failed to provision volume with StorageClass ``: error generating accessibility requirements: no available topology found
2. waiting for a volume to be created, either by external provisioner "csi-powerstore.dellemc.com" or manually created by system administrator. | Check whether all array details present in the secret file are valid and remove any invalid entries if present.
Redeploy the driver. | +| If an ephemeral pod is not being created in OpenShift 4.13 and is failing with the error "error when creating pod: the pod uses an inline volume provided by CSIDriver csi-powerstore.dellemc.com, and the namespace has a pod security enforcement level that is lower than privileged." | This issue occurs because OpenShift 4.13 introduced the CSI Volume Admission plugin to restrict the use of a CSI driver capable of provisioning CSI ephemeral volumes during pod admission https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/ephemeral-storage-csi-inline.html . Therefore, an additional label "security.openshift.io/csi-ephemeral-volume-profile" needs to be added to the CSIDriver object to support inline ephemeral volumes. | +| In OpenShift 4.13, the root user is not allowed to perform write operations on NFS shares, when root squashing is enabled. | The workaround for this issue is to disable root squashing by setting allowRoot: "true" in the NFS storage class. | +| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs, and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with Kubernetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. | +| If two separate networks are configured for ISCSI and NVMeTCP, the driver may encounter difficulty identifying the second network (e.g., NVMeTCP). | This is a known issue, and the workaround involves creating a single network on the array to serve both ISCSI and NVMeTCP purposes. | +| When a PV/PVC is deleted in Kubernetes, it triggers the deletion of the underlying volume and snapshot on the array by default. This can result in a situation where the VolumeSnapshot and VolumeSnapshotContent will still show "readyToUse: true", but they are unusable because they are no longer backed by an underlying storage snapshot. This prevents the creation of a PVC from the snapshot and could also lead to data loss. | This is a known issue, and the workaround is to use the **Retain** policy on the PV, VolumeSnapshot, and VolumeSnapshotContent that you wish to use for cloning. | ### Note: diff --git a/content/v1/csidriver/release/powerstore.md index 96b9075faa..fff98bb37c 100644 --- a/content/v1/csidriver/release/powerstore.md +++ b/content/v1/csidriver/release/powerstore.md @@ -36,7 +36,7 @@ description: Release notes for PowerStore CSI driver | In OpenShift 4.13, the root user is not allowed to perform write operations on NFS shares, when root squashing is enabled. | The workaround for this issue is to disable root squashing by setting allowRoot: "true" in the NFS storage class. | | If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs, and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with Kubenetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. &nbsp;<br />
| | If two separate networks are configured for ISCSI and NVMeTCP, the driver may encounter difficulty identifying the second network (e.g., NVMeTCP). | This is a known issue, and the workaround involves creating a single network on the array to serve both ISCSI and NVMeTCP purposes. | -| Creation of clone PVC using snapshot will fail when the source PVC is deleted | This is a known issue, and the workaround is do not delete the source PVC if you want to create clone volumes using snapshot. | +| When a PV/PVC is deleted in Kubernetes, it triggers the deletion of the underlying volume and snapshot on the array by default. This can result in a situation where the VolumeSnapshot and VolumeSnapshotContent will still show "readyToUse: true", but they are unusable because they are no longer backed by an underlying storage snapshot. This prevents the creation of a PVC from the snapshot and could also lead to data loss. | This is a known issue, and the workaround is to use the **Retain** policy on the PV, VolumeSnapshot, and VolumeSnapshotContent that you wish to use for cloning. | ### Note: diff --git a/content/v2/csidriver/release/powerstore.md index 8143eeaaa7..895385d80a 100644 --- a/content/v2/csidriver/release/powerstore.md +++ b/content/v2/csidriver/release/powerstore.md @@ -32,7 +32,7 @@ description: Release notes for PowerStore CSI driver | If an ephemeral pod is not being created in OpenShift 4.13 and is failing with the error "error when creating pod: the pod uses an inline volume provided by CSIDriver csi-powerstore.dellemc.com, and the namespace has a pod security enforcement level that is lower than privileged." | This issue occurs because OpenShift 4.13 introduced the CSI Volume Admission plugin to restrict the use of a CSI driver capable of provisioning CSI ephemeral volumes during pod admission https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/ephemeral-storage-csi-inline.html . Therefore, an additional label "security.openshift.io/csi-ephemeral-volume-profile" needs to be added to the CSIDriver object to support inline ephemeral volumes. | | In OpenShift 4.13, the root user is not allowed to perform write operations on NFS shares, when root squashing is enabled. | The workaround for this issue is to disable root squashing by setting allowRoot: "true" in the NFS storage class. | | If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs, and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with Kubenetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. | -| Creation of clone PVC using snapshot will fail when the source PVC is deleted | This is a known issue, and the workaround is do not delete the source PVC if you want to create clone volumes using snapshot. | +| When a PV/PVC is deleted in Kubernetes, it triggers the deletion of the underlying volume and snapshot on the array by default. This can result in a situation where the VolumeSnapshot and VolumeSnapshotContent will still show "readyToUse: true", but they are unusable because they are no longer backed by an underlying storage snapshot. &nbsp;<br />
This prevents the creation of a PVC from the snapshot and could also lead to data loss. | This is a known issue, and the workaround is to use the **Retain** policy on the PV, VolumeSnapshot, and VolumeSnapshotContent that you wish to use for cloning. | ### Note: diff --git a/content/v3/csidriver/release/powerstore.md index 09a0daef20..13a63b2dc6 100644 --- a/content/v3/csidriver/release/powerstore.md +++ b/content/v3/csidriver/release/powerstore.md @@ -28,8 +28,8 @@ description: Release notes for PowerStore CSI driver | fsGroupPolicy may not work as expected without root privileges for NFS only<br />
https://github.com/kubernetes/examples/issues/260 | To get the desired behavior set "allowRoot: "true" in the storage class parameter | | If the NVMeFC pod is not getting created and the host looses the ssh connection, causing the driver pods to go to error state | remove the nvme_tcp module from the host incase of NVMeFC connection | | When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround:
1. Force delete the pod running on the node that went down
2. Delete the volumeattachment to the node that went down.
Now the volume can be attached to the new node. | -| Creation of clone PVC using snapshot will fail when the source PVC is deleted | This is a known issue, and the workaround is do not delete the source PVC if you want to create clone volumes using snapshot. | -| When driver node pods enter CrashLoopBackOff and PVC remains in pending state with one of the following events:
1. failed to provision volume with StorageClass ``: error generating accessibility requirements: no available topology found
2. waiting for a volume to be created, either by external provisioner "csi-powerstore.dellemc.com" or manually created by system administrator. | Check whether all array details present in the secret file are valid and remove any invalid entries if present.
Redeploy the driver. +| When driver node pods enter CrashLoopBackOff and PVC remains in pending state with one of the following events:
1. failed to provision volume with StorageClass ``: error generating accessibility requirements: no available topology found
2. waiting for a volume to be created, either by external provisioner "csi-powerstore.dellemc.com" or manually created by system administrator. | Check whether all array details present in the secret file are valid and remove any invalid entries if present.
Redeploy the driver. +| When a PV/PVC is deleted in Kubernetes, it triggers the deletion of the underlying volume and snapshot on the array by default. This can result in a situation where the VolumeSnapshot and VolumeSnapshotContent will still show "readyToUse: true", but they are unusable because they are no longer backed by an underlying storage snapshot. This prevents the creation of a PVC from the snapshot and could also lead to data loss. | This is a known issue, and the workaround is to use the **Retain** policy on the PV, VolumeSnapshot, and VolumeSnapshotContent that you wish to use for cloning. | ### Note:
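The **Retain** workaround in the tables above can be sketched with standard Kubernetes objects. This is a minimal illustrative example, not part of the patch: the class names are placeholders and the PowerStore-specific StorageClass `parameters` are omitted.

```yaml
# Illustrative sketch of the Retain workaround; names are placeholders.
# New PVs provisioned from this class keep the backing array volume when the PVC/PV is deleted.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerstore-retain            # placeholder name
provisioner: csi-powerstore.dellemc.com
reclaimPolicy: Retain
# Array-specific parameters are omitted in this sketch.
---
# VolumeSnapshotContent created through this class keeps the array snapshot
# when the corresponding VolumeSnapshot (or its namespace) is deleted.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: powerstore-snapclass-retain  # placeholder name
driver: csi-powerstore.dellemc.com
deletionPolicy: Retain
```

For a PV that is already bound, the reclaim policy can be switched in place, for example: `kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'`.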