diff --git a/content/docs/concepts/csidriver/release/powermax.md b/content/docs/concepts/csidriver/release/powermax.md
index e1788c0bb8..8a12830af2 100644
--- a/content/docs/concepts/csidriver/release/powermax.md
+++ b/content/docs/concepts/csidriver/release/powermax.md
@@ -56,7 +56,7 @@ Starting from CSI v2.4.0, only Unisphere 10.0 REST endpoints are supported. It i
| Automatic SRDF group creation is failing with "Unable to get Remote Port on SAN for Auto SRDF" for PowerMaxOS 10.1 arrays | Create the SRDF Group and add it to the storage class |
| [Node stage is failing with error "wwn for FC device not found"](https://github.com/dell/csm/issues/1070)| This is an intermittent issue, rebooting the node will resolve this issue |
| When the driver is installed using CSM Operator, pods created using block volumes occasionally get stuck in ContainerCreating/Terminating state, or devices are not available inside the pod. | Update the daemonset with the parameter `mountPropagation: "Bidirectional"` for volumedevices-path under the volumeMounts section, as shown in the sketch after this table.|
-| When running CSI-PowerMax with Replication in a multi-cluster configuration, the driver on the target cluster fails and the following error is seen in logs: `error="CSI reverseproxy service host or port not found, CSI reverseproxy not installed properly"` | The reverseproxy service needs to be created manually on the target cluster. Follow [the instructions here](../../../deployment/csmoperator/modules/replication#configuration-steps) to create it.|
+| When running CSI-PowerMax with Replication in a multi-cluster configuration, the driver on the target cluster fails and the following error is seen in logs: `error="CSI reverseproxy service host or port not found, CSI reverseproxy not installed properly"` | The reverseproxy service needs to be created manually on the target cluster. Follow [the instructions here](docs/getting-started/installation/kubernetes/powermax/csmoperator/csm-modules/replication/#configuration-steps) to create it.|
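
For the block-volume workaround above, a minimal sketch of the relevant daemonset fragment is shown below. The container name and mount path are assumptions and must be matched against the actual node daemonset in your installation; only `volumedevices-path` and `mountPropagation` come from the workaround itself.

```yaml
# Illustrative daemonset excerpt — verify the container name and mountPath against
# your deployed node daemonset before editing it.
spec:
  template:
    spec:
      containers:
        - name: driver                                   # assumed container name
          volumeMounts:
            - name: volumedevices-path
              mountPath: /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices   # assumed path
              mountPropagation: "Bidirectional"          # parameter added by the workaround
```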
### Note:
- Support for Kubernetes alpha features like Volume Health Monitoring will not be available in an OpenShift environment, as OpenShift doesn't support enabling alpha features for production-grade clusters.
diff --git a/content/docs/concepts/csidriver/troubleshooting/powerflex.md b/content/docs/concepts/csidriver/troubleshooting/powerflex.md
index 10695b75ec..c24cbc7f51 100644
--- a/content/docs/concepts/csidriver/troubleshooting/powerflex.md
+++ b/content/docs/concepts/csidriver/troubleshooting/powerflex.md
@@ -20,7 +20,7 @@ description: Troubleshooting PowerFlex Driver
| When you run the command `kubectl apply -f snapclass-v1.yaml`, you get the error `error: unable to recognize "snapclass-v1.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"` | Check to make sure that the v1 snapshotter CRDs are installed, and not the v1beta1 CRDs, which are no longer supported. A minimal v1 example appears after this table. |
| The controller pod is stuck and producing errors such as: `Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)` | Make sure that the v1 snapshotter CRDs and v1 snapclass are installed, and not v1beta1, which is no longer supported. |
| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 <= 1.28.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, see the helm Chart at `helm/csi-vxflexos/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. Note: this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
-| Volume metrics are missing | Enable [Volume Health Monitoring](../../features/powerflex#volume-health-monitoring) |
+| Volume metrics are missing | Enable [Volume Health Monitoring](docs/concepts/csidriver/features/powerflex#volume-health-monitoring) |
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround:
1. Force delete the pod running on the node that went down
2. Delete the volumeattachment to the node that went down.
Now the volume can be attached to the new node. |
| CSI-PowerFlex volumes cannot mount; are being recognized as multipath devices | CSI-PowerFlex does not support multipath; to fix:
1. Remove any multipath mapping involving a powerflex volume with `multipath -f `
2. Blacklist CSI-PowerFlex volumes in multipath config file |
| When attempting a driver upgrade, you see: ```spec.fsGroupPolicy: Invalid value: "xxx": field is immutable``` | You cannot upgrade between drivers with different fsGroupPolicies. See [upgrade documentation](docs/getting-started/upgrade/kubernetes/powerflex/helm) for more details |
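
For the v1 snapshot CRD errors in the table above, a minimal `snapclass-v1.yaml` sketch using the supported v1 API group is shown below. The class name is illustrative, and `csi-vxflexos.dellemc.com` is assumed to be the PowerFlex driver name; verify it against your installation.

```yaml
# snapclass-v1.yaml — minimal VolumeSnapshotClass using the v1 API group
apiVersion: snapshot.storage.k8s.io/v1    # v1, not the removed v1beta1
kind: VolumeSnapshotClass
metadata:
  name: vxflexos-snapclass                # illustrative name
driver: csi-vxflexos.dellemc.com          # assumed PowerFlex CSI driver name
deletionPolicy: Delete
```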
diff --git a/content/docs/concepts/observability/troubleshooting/_index.md b/content/docs/concepts/observability/troubleshooting/_index.md
index 7cc46b71ab..ae29fb765a 100644
--- a/content/docs/concepts/observability/troubleshooting/_index.md
+++ b/content/docs/concepts/observability/troubleshooting/_index.md
@@ -112,7 +112,7 @@ A workaround on most browsers is to accept the `karavi-topology` certificate by
Deploy certificate with new Grafana instance
- Please follow the steps in Sample Grafana Deployment but attach the certificate to your `grafana-values.yaml` before deploying. The file should look like:
+ Please follow the steps in Sample Grafana Deployment but attach the certificate to your `grafana-values.yaml` before deploying. The file should look like:
```yaml
# grafana-values.yaml
diff --git a/content/docs/concepts/replication/release/_index.md b/content/docs/concepts/replication/release/_index.md
index 581f01587e..49b33d2f3f 100644
--- a/content/docs/concepts/replication/release/_index.md
+++ b/content/docs/concepts/replication/release/_index.md
@@ -27,4 +27,4 @@ Description: >
### Known Issues
| Symptoms | Prevention, Resolution or Workaround |
| --- | --- |
-| When running CSI-PowerMax with Replication in a multi-cluster configuration, the driver on the target cluster fails and the following error is seen in logs: `error="CSI reverseproxy service host or port not found, CSI reverseproxy not installed properly"` | The reverseproxy service needs to be created manually on the target cluster. Follow [the instructions here](../../deployment/csmoperator/modules/replication#configuration-steps) to create it.|
+| When running CSI-PowerMax with Replication in a multi-cluster configuration, the driver on the target cluster fails and the following error is seen in logs: `error="CSI reverseproxy service host or port not found, CSI reverseproxy not installed properly"` | The reverseproxy service needs to be created manually on the target cluster. Follow [the instructions here](docs/getting-started/installation/kubernetes/powermax/csmoperator/csm-modules/replication/#configuration-steps) to create it.|
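
As a rough illustration of the manual workaround above, a Service sketch is shown below. The name, namespace, selector, and port are assumptions about a typical reverseproxy setup, not authoritative values; use the manifest from the linked configuration steps for the real deployment.

```yaml
# Illustrative only — confirm name, namespace, selector, and port against the
# linked configuration steps before applying on the target cluster.
apiVersion: v1
kind: Service
metadata:
  name: csipowermax-reverseproxy        # assumed service name
  namespace: powermax                   # assumed driver namespace
spec:
  selector:
    app: powermax-controller            # assumed controller pod label
  ports:
    - name: reverseproxy
      port: 2222                        # assumed default reverseproxy port
      targetPort: 2222
```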
diff --git a/content/docs/getting-started/installation/kubernetes/powerflex/helm/_index.md b/content/docs/getting-started/installation/kubernetes/powerflex/helm/_index.md
index 063b797dbf..dc1f46a54a 100644
--- a/content/docs/getting-started/installation/kubernetes/powerflex/helm/_index.md
+++ b/content/docs/getting-started/installation/kubernetes/powerflex/helm/_index.md
@@ -227,9 +227,9 @@ Use the below command to replace or update the secret:
| **vgsnapshotter** | This section allows the configuration of the volume group snapshotter (vgsnapshotter) pod. | - | - |
| enabled | A boolean that enables/disables the vg snapshotter feature. | No | false |
| image | Image for vg snapshotter. | No | " " |
-| **podmon** | [Podmon](../../../../../deployment/helm/modules/installation/resiliency/) is an optional feature to enable application pods to be resilient to node failure. | - | - |
+| **podmon** | [Podmon](./csm-modules/resiliency/) is an optional feature to enable application pods to be resilient to node failure. | - | - |
| enabled | A boolean that enables/disables podmon feature. | No | false |
-| **authorization** | [Authorization](../../../../../deployment/helm/modules/installation/authorization-v2.0/) is an optional feature to apply credential shielding of the backend PowerFlex. | - | - |
+| **authorization** | [Authorization](./csm-modules/authorizationv2.0/) is an optional feature to apply credential shielding of the backend PowerFlex. | - | - |
| enabled | A boolean that enables/disables authorization feature. | No | false |
| proxyHost | Hostname of the csm-authorization server. | No | Empty |
| skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization proxy server. | No | true |
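
As a rough illustration of how the optional-feature parameters above map onto the Helm values file, a minimal excerpt is sketched below. The proxy hostname is a placeholder, and the commented defaults are the ones listed in the table.

```yaml
# values.yaml excerpt (illustrative) — optional features from the table above
vgsnapshotter:
  enabled: false                              # default: false
  image: ""                                   # vg snapshotter image
podmon:
  enabled: true                               # default: false
authorization:
  enabled: true                               # default: false
  proxyHost: csm-authorization.example.com    # placeholder hostname
  skipCertificateValidation: false            # default: true
```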
diff --git a/content/docs/getting-started/installation/kubernetes/powermax/helm/_index.md b/content/docs/getting-started/installation/kubernetes/powermax/helm/_index.md
index a62256b115..715fcdb6b9 100644
--- a/content/docs/getting-started/installation/kubernetes/powermax/helm/_index.md
+++ b/content/docs/getting-started/installation/kubernetes/powermax/helm/_index.md
@@ -113,16 +113,16 @@ Install Helm 3 on the master node before you install CSI Driver for PowerMax.
| selfSignedCert | Set selfSignedCert to use a self-signed certificate | No | true |
| certificateFile | certificateFile has tls.crt content in encoded format | No | tls.crt.encoded64 |
| privateKeyFile | privateKeyFile has tls.key content in encoded format | No | tls.key.encoded64 |
-| **authorization** | [Authorization](../../../../../deployment/helm/modules/installation/authorization-v2.0/) is an optional feature to apply credential shielding of the backend PowerMax. | - | - |
+| **authorization** | [Authorization](./csm-modules/authorizationv2.0/) is an optional feature to apply credential shielding of the backend PowerMax. | - | - |
| enabled | A boolean that enables/disables authorization feature. | No | false |
| proxyHost | Hostname of the csm-authorization server. | No | Empty |
| skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization proxy server. | No | true |
-| **migration** | [Migration](../../../../../replication/migration/migrating-volumes-same-array) is an optional feature to enable migration between storage classes | - | - |
+| **migration** | [Migration](../../../../../concepts/replication/migration/migrating-volumes-same-array) is an optional feature to enable migration between storage classes | - | - |
| enabled | A boolean that enables/disables migration feature. | No | false |
| image | Image for dell-csi-migrator sidecar. | No | " " |
| nodeRescanSidecarImage | Image for node rescan sidecar which rescans nodes for identifying new paths. | No | " " |
| migrationPrefix | Enables the migration sidecar to read required information from the storage class fields | No | migration.storage.dell.com |
-| **replication** | [Replication](../../../../../deployment/helm/modules/installation/replication/) is an optional feature to enable replication & disaster recovery capabilities of PowerMax to Kubernetes clusters. | - | - |
+| **replication** | [Replication](./csm-modules/replication/) is an optional feature to enable replication & disaster recovery capabilities of PowerMax to Kubernetes clusters. | - | - |
| enabled | A boolean that enables/disables replication feature. | No | false |
| replicationContextPrefix | Enables sidecars to read required information from the volume context | No | powermax |
| replicationPrefix | Determines if replication is enabled | No | replication.storage.dell.com |
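
A minimal values.yaml excerpt illustrating how the optional authorization, migration, and replication parameters above fit together is sketched below. Image references and the proxy hostname are placeholders; the commented defaults are the ones listed in the table.

```yaml
# values.yaml excerpt (illustrative) — optional features from the table above
authorization:
  enabled: false                                   # default: false
  proxyHost: ""                                    # csm-authorization server hostname
  skipCertificateValidation: true                  # default: true
migration:
  enabled: true                                    # default: false
  image: ""                                        # dell-csi-migrator sidecar image
  nodeRescanSidecarImage: ""                       # node rescan sidecar image
  migrationPrefix: migration.storage.dell.com      # default
replication:
  enabled: true                                    # default: false
  replicationContextPrefix: powermax               # default
  replicationPrefix: replication.storage.dell.com  # default
```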
diff --git a/content/docs/getting-started/installation/kubernetes/powerscale/helm/_index.md b/content/docs/getting-started/installation/kubernetes/powerscale/helm/_index.md
index 2d33f025ab..10aa7846fc 100644
--- a/content/docs/getting-started/installation/kubernetes/powerscale/helm/_index.md
+++ b/content/docs/getting-started/installation/kubernetes/powerscale/helm/_index.md
@@ -32,9 +32,9 @@ The following are requirements to be met before installing the CSI Driver for Po
- Mount propagation is enabled on the container runtime that is being used
- `nfs-utils` package must be installed on nodes that will mount volumes
- If using Snapshot feature, satisfy all Volume Snapshot requirements
-- If enabling CSM for Authorization, please refer to the [Authorization deployment steps](../../../../../deployment/helm/modules/installation/authorization-v2.0/) first
-- If enabling CSM for Replication, please refer to the [Replication deployment steps](../../../../../deployment/helm/modules/installation/replication/) first
-- If enabling CSM for Resiliency, please refer to the [Resiliency deployment steps](../../../../../deployment/helm/modules/installation/resiliency/) first
+- If enabling CSM for Authorization, please refer to the [Authorization deployment steps](../helm/csm-modules/authorizationv2.0/) first
+- If enabling CSM for Replication, please refer to the [Replication deployment steps](../helm/csm-modules/replication/) first
+- If enabling CSM for Resiliency, please refer to the [Resiliency deployment steps](../helm/csm-modules/resiliency/) first
### (Optional) Volume Snapshot Requirements
@@ -156,11 +156,11 @@ CRDs should be configured during replication prepare stage with repctl as descri
| ignoreUnresolvableHosts | Allows a new host to be added to the existing export list even if some of the existing hosts in the same export are unresolvable or no longer exist. | No | false |
| noProbeOnStart | Defines whether the controller/node plugin should probe all the PowerScale clusters during driver initialization | No | false |
| autoProbe | Specifies whether to automatically probe the PowerScale cluster during CSI calls if not done already | No | true |
- | **authorization** | [Authorization](../../../../../deployment/helm/modules/installation/authorization-v2.0/) is an optional feature to apply credential shielding of the backend PowerScale. | - | - |
+ | **authorization** | [Authorization](../helm/csm-modules/authorizationv2.0/) is an optional feature to apply credential shielding of the backend PowerScale. | - | - |
| enabled | A boolean that enables/disables authorization feature. If enabled, isiAuthType must be set to 1. | No | false |
| proxyHost | Hostname of the csm-authorization server. | No | Empty |
| skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization proxy server. | No | true |
- | **podmon** | [Podmon](../../../../../deployment/helm/modules/installation/resiliency/) is an optional feature to enable application pods to be resilient to node failure. | - | - |
+ | **podmon** | [Podmon](../helm/csm-modules/resiliency/) is an optional feature to enable application pods to be resilient to node failure. | - | - |
| enabled | A boolean that enables/disables podmon feature. | No | false |
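
A minimal excerpt sketching how the parameters above typically appear in the Helm values file is shown below. The nesting and the proxy hostname are illustrative and should be checked against the values file shipped with your driver version.

```yaml
# my-isilon-settings.yaml excerpt (illustrative) — parameters from the table above
ignoreUnresolvableHosts: false              # default: false
noProbeOnStart: false                       # default: false
autoProbe: true                             # default: true
authorization:
  enabled: true                             # requires isiAuthType to be set to 1
  proxyHost: csm-authorization.example.com  # placeholder hostname
  skipCertificateValidation: false          # default: true
podmon:
  enabled: true                             # default: false
```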
*NOTE:*
@@ -215,7 +215,7 @@ Create isilon-creds secret using the following command:
- If any key/value is present in all *my-isilon-settings.yaml*, *secret*, and storageClass, then the values provided in storageClass parameters take precedence.
- The user has to validate the yaml syntax and array-related key/values while replacing or appending the isilon-creds secret. The driver will continue to use the previous values if an error is found in the yaml file.
- For the key isiIP/endpoint, the user can give either an IP address or an FQDN, and can optionally prefix the value with 'https' (for example, https://192.168.1.1); see the excerpt after this list.
- - The *isilon-creds* secret has a *mountEndpoint* parameter which should only be updated and used when [Authorization](../../../../../authorization) is enabled.
+ - The *isilon-creds* secret has a *mountEndpoint* parameter which should only be updated and used when [Authorization](../../../../../concepts/authorization) is enabled.
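
A hedged excerpt of one cluster entry in the isilon-creds secret is sketched below to show where the endpoint and mountEndpoint keys sit. All values are placeholders, and the field names and nesting should be checked against the sample secret shipped with your driver version.

```yaml
# isilon-creds secret data excerpt (illustrative) — one cluster entry
isilonClusters:
  - clusterName: "cluster1"               # placeholder cluster name
    username: "csiuser"                   # placeholder credentials
    password: "changeme"
    endpoint: "https://192.168.1.1"       # IP or FQDN; the https prefix is optional
    mountEndpoint: "https://192.168.1.1"  # set only when Authorization is enabled
```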
7. Install OneFS CA certificates by following the instructions in the next section if you want to validate the OneFS API server's certificates. If not, create an empty secret using the following command; an empty secret is required for a successful installation of the CSI Driver for PowerScale.