Add note covering non unique host NQNs on OCP.
donatwork committed Jan 6, 2025
1 parent 9c44beb commit 173ecdd
Showing 4 changed files with 125 additions and 2 deletions.
33 changes: 32 additions & 1 deletion content/docs/deployment/csmoperator/drivers/powermax.md
**Cluster requirements**

- All OpenShift or Kubernetes nodes connecting to Dell storage arrays must use unique host NQNs.

> The OpenShift deployment process for RHCOS sets the same host NQN on every node. The host NQN is stored in `/etc/nvme/hostnqn`. One way to ensure unique host NQNs is to apply the following MachineConfig to your OCP cluster:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-custom-nvme-hostnqn
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Custom CoreOS Generate NVMe Hostnqn
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/sh -c '/usr/sbin/nvme gen-hostnqn > /etc/nvme/hostnqn'
            RemainAfterExit=yes
            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: custom-coreos-generate-nvme-hostnqn.service
```
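Once the config has rolled out, the fix can be spot-checked by collecting `/etc/nvme/hostnqn` from each node (for example with `oc debug node/<name> -- chroot /host cat /etc/nvme/hostnqn`) and piping the results through a duplicate filter. A minimal sketch, using stand-in values in place of NQNs collected from the cluster:

```bash
# Any line this prints is an NQN shared by more than one node.
# The three sample values below stand in for NQNs gathered from the nodes.
printf '%s\n' \
  'nqn.2014-08.org.nvmexpress:uuid:aaaaaaaa-0000-0000-0000-000000000001' \
  'nqn.2014-08.org.nvmexpress:uuid:aaaaaaaa-0000-0000-0000-000000000001' \
  'nqn.2014-08.org.nvmexpress:uuid:aaaaaaaa-0000-0000-0000-000000000002' \
  | sort | uniq -d
```

An empty result means every collected NQN is unique; here the duplicated first value is printed once.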

- The driver requires the NVMe command-line interface (nvme-cli) to manage NVMe clients and targets. On RPM-based Linux distributions, install it with the following command:

```bash
sudo dnf install nvme-cli
```

```yaml
# Choose which transport protocol to use (ISCSI, FC, NVMETCP, auto); defaults to auto if nothing is specified
X_CSI_TRANSPORT_PROTOCOL: ""
# IP address of the Unisphere for PowerMax (Required); defaults to https://0.0.0.0:8443
X_CSI_POWERMAX_ENDPOINT: "https://10.0.0.0:8443"
# List of comma-separated array ID(s) which will be managed by the driver (Required)
X_CSI_MANAGED_ARRAYS: "000000000000,000000000000,"
```
31 changes: 31 additions & 0 deletions content/docs/deployment/csmoperator/drivers/powerstore.md
The following requirements must be met to use the NVMe protocols with the CSI PowerStore driver:

- All OpenShift or Kubernetes nodes connecting to Dell storage arrays must use unique host NQNs.

> The OpenShift deployment process for RHCOS sets the same host NQN on every node. The host NQN is stored in `/etc/nvme/hostnqn`. One way to ensure unique host NQNs is to apply the following MachineConfig to your OCP cluster:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-custom-nvme-hostnqn
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Custom CoreOS Generate NVMe Hostnqn
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/sh -c '/usr/sbin/nvme gen-hostnqn > /etc/nvme/hostnqn'
            RemainAfterExit=yes
            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: custom-coreos-generate-nvme-hostnqn.service
```
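`nvme gen-hostnqn` emits a UUID-based NQN of the form `nqn.2014-08.org.nvmexpress:uuid:<uuid>`. A quick way to sanity-check a regenerated value is to match it against that shape; the NQN below is illustrative, not a real node's value:

```bash
# Validate the uuid-based NQN format produced by `nvme gen-hostnqn`.
hostnqn='nqn.2014-08.org.nvmexpress:uuid:11111111-2222-3333-4444-555555555555'  # illustrative value
if printf '%s\n' "$hostnqn" | grep -Eq '^nqn\.2014-08\.org\.nvmexpress:uuid:[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'; then
  echo valid
else
  echo invalid
fi
```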

- The driver requires the NVMe command-line interface (nvme-cli) to manage NVMe clients and targets. On RPM-based Linux distributions, install it with the following command:

```bash
sudo dnf install nvme-cli
```
32 changes: 31 additions & 1 deletion content/docs/deployment/helm/drivers/installation/powermax.md

> Starting with OCP 4.14, NVMe/TCP is enabled by default on RHCOS nodes.


**Cluster requirements**

- All OpenShift or Kubernetes nodes connecting to Dell storage arrays must use unique host NQNs.

> The OpenShift deployment process for RHCOS sets the same host NQN on every node. The host NQN is stored in `/etc/nvme/hostnqn`. One way to ensure unique host NQNs is to apply the following MachineConfig to your OCP cluster:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-custom-nvme-hostnqn
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Custom CoreOS Generate NVMe Hostnqn
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/sh -c '/usr/sbin/nvme gen-hostnqn > /etc/nvme/hostnqn'
            RemainAfterExit=yes
            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: custom-coreos-generate-nvme-hostnqn.service
```
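The MachineConfig is applied with `oc apply -f <file>` and rolled out by the worker MachineConfigPool (`oc get mcp worker` shows progress; workers reboot in turn). Before applying, a saved copy of the manifest can be checked for the generator command. A self-contained sketch, using a stand-in file so it runs anywhere:

```bash
# Sanity-check a saved manifest, then apply it:
#   oc apply -f 99-worker-custom-nvme-hostnqn.yaml
#   oc get mcp worker        # wait for UPDATED=True
# A stand-in file is created here so the check itself is self-contained.
cfg=$(mktemp)
printf '%s\n' "ExecStart=/usr/bin/sh -c '/usr/sbin/nvme gen-hostnqn > /etc/nvme/hostnqn'" > "$cfg"
grep -q 'nvme gen-hostnqn' "$cfg" && echo "generator unit present"
rm -f "$cfg"
```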

- The driver requires the NVMe command-line interface (nvme-cli) to manage NVMe clients and targets. On RPM-based Linux distributions, install it with the following command:

```bash
sudo dnf install nvme-cli
```
31 changes: 31 additions & 0 deletions content/docs/deployment/helm/drivers/installation/powerstore.md
The following requirements must be met to use the NVMe protocols with the CSI PowerStore driver:

- All OpenShift or Kubernetes nodes connecting to Dell storage arrays must use unique host NQNs.

> The OpenShift deployment process for RHCOS sets the same host NQN on every node. The host NQN is stored in `/etc/nvme/hostnqn`. One way to ensure unique host NQNs is to apply the following MachineConfig to your OCP cluster:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-custom-nvme-hostnqn
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Custom CoreOS Generate NVMe Hostnqn
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/sh -c '/usr/sbin/nvme gen-hostnqn > /etc/nvme/hostnqn'
            RemainAfterExit=yes
            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: custom-coreos-generate-nvme-hostnqn.service
```

- The driver requires the NVMe command-line interface (nvme-cli) to manage NVMe clients and targets. On RPM-based Linux distributions, install it with the following command:

```bash
sudo dnf install nvme-cli
```
