HPE GreenLake for File Storage beta docs (#216)
* GL4F CSI beta

Signed-off-by: Michael Mattsson <[email protected]>
datamattsson authored Sep 24, 2024
1 parent f7a7c8e commit eebb67d
Showing 6 changed files with 464 additions and 0 deletions.
101 changes: 101 additions & 0 deletions docs/filex_csi_driver/deployment.md
# Overview

The HPE GreenLake for File Storage CSI Driver is deployed using industry-standard means: either a Helm chart or an Operator.

[TOC]

## Helm

[Helm](https://helm.sh) is the package manager for Kubernetes. Software is delivered in a packaging format called a "chart". Helm is a [standalone CLI](https://helm.sh/docs/intro/install/) that interacts with the Kubernetes API server using your `KUBECONFIG` file.

The official Helm chart for the HPE GreenLake for File Storage CSI Driver is hosted on [Artifact Hub](https://artifacthub.io/packages/helm/hpe-storage/hpe-greenlake-file-csi-driver). To avoid duplicating documentation, see the chart for instructions on how to deploy the CSI driver using Helm.

- Go to the chart on [Artifact Hub](https://artifacthub.io/packages/helm/hpe-storage/hpe-greenlake-file-csi-driver).

!!! note
    It's possible to use the HPE CSI Driver for Kubernetes steps for v2.4.2 or later to mirror the required images to an internal registry for installing into an [air-gapped environment](../csi_driver/deployment.md#helm_for_air-gapped_environments).
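For orientation only, a chart installation follows the usual Helm workflow. The repository URL, release name and namespace below are assumptions; the chart page on Artifact Hub remains the authoritative source:

```text
helm repo add hpe-storage https://hpe-storage.github.io/co-deployments
helm repo update
helm install my-filex-csi-driver hpe-storage/hpe-greenlake-file-csi-driver \
  --namespace hpe-storage --create-namespace
```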

## Operator

The [Operator pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) is based on the idea that software should be instantiated and run with a set of custom controllers in Kubernetes. It creates a native experience for any software running on Kubernetes.

### Red Hat OpenShift Container Platform

<!--
The HPE GreenLake for File Storage CSI Operator is a fully certified Operator for OpenShift. There are a few tweaks needed and there's a separate section for OpenShift.
- See [Red Hat OpenShift](../partners/redhat_openshift/index.md) in the partner ecosystem section
-->
During the beta, it's only possible to sideload the HPE GreenLake for File Storage CSI Operator using the Operator SDK.

The installation procedure assumes the "hpe-storage" `Namespace` exists:

```text
oc create ns hpe-storage
```

<div id="scc" />First, deploy or [download]({{ config.site_url }}partners/redhat_openshift/examples/scc/hpe-filex-csi-scc.yaml) the SCC:

```text
oc apply -f {{ config.site_url }}partners/redhat_openshift/examples/scc/hpe-filex-csi-scc.yaml
```

Install the Operator:

```text
operator-sdk run bundle --timeout 5m -n hpe-storage quay.io/hpestorage/filex-csi-driver-operator-bundle-ocp:v1.0.0-beta
```

The next step is to create an `HPEGreenLakeFileCSIDriver` resource. This can also be done in the OpenShift cluster console.

```yaml fct_label="HPE GreenLake for File Storage CSI Operator v1.0.0-beta"
# oc apply -f {{ config.site_url }}filex_csi_driver/examples/deployment/hpegreenlakefilecsidriver-v1.0.0-beta-sample.yaml
{% include "examples/deployment/hpegreenlakefilecsidriver-v1.0.0-beta-sample.yaml" %}
```

For reference, this is how the Operator is uninstalled:

```text
operator-sdk cleanup hpe-filex-csi-operator -n hpe-storage
```

## Add a Storage Backend

Once the CSI driver is deployed, two additional resources need to be created to get started with dynamic provisioning of persistent storage: a `Secret` and a `StorageClass`.

!!! tip
    Naming the `Secret` and `StorageClass` is entirely up to the user. However, to stay aligned with the examples on SCOD, it's highly recommended to use the names illustrated here.

### Secret Parameters

All parameters are mandatory and described below.

| Parameter | Description |
| ----------- | ----------- |
| endpoint | This is the management hostname or IP address of the actual backend storage system. |
| username | Backend storage system username with the correct privileges to perform storage management. |
| password | Backend storage system password. |

Example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: hpe-file-backend
  namespace: hpe-storage
stringData:
  endpoint: 192.168.1.1
  username: my-csi-user
  password: my-secret-password
```

Create the `Secret` using `kubectl`:

```text
kubectl create -f secret.yaml
```

!!! tip
    In a real-world scenario it's more practical to name the `Secret` something that makes sense for the organization. It could be the hostname of the backend or the role it carries, e.g. "hpe-greenlake-file-sanjose-prod".
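As an alternative to a manifest file, the same `Secret` can be created directly from literals with `kubectl`; the values below mirror the example above:

```text
kubectl create secret generic hpe-file-backend --namespace hpe-storage \
  --from-literal=endpoint=192.168.1.1 \
  --from-literal=username=my-csi-user \
  --from-literal=password=my-secret-password
```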

The next step involves [creating a default StorageClass](using.md#base_storageclass_parameters).
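For a rough idea of what such a `StorageClass` can look like, here is a minimal sketch. The provisioner name `filex.csi.hpe.com` is an assumption (verify with `kubectl get csidrivers` on your cluster), while the `csi.storage.k8s.io/*` secret references are standard CSI sidecar parameters; see the linked page for the authoritative parameter list:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-file-standard
provisioner: filex.csi.hpe.com  # assumed driver name, verify on your cluster
parameters:
  # Standard CSI references to the backend Secret created earlier
  csi.storage.k8s.io/provisioner-secret-name: hpe-file-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
allowVolumeExpansion: true
```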
`docs/filex_csi_driver/examples/deployment/hpegreenlakefilecsidriver-v1.0.0-beta-sample.yaml`:

```yaml
apiVersion: storage.hpe.com/v1
kind: HPEGreenLakeFileCSIDriver
metadata:
  name: hpegreenlakefilecsidriver-sample
spec:
  # Default values copied from <project_dir>/helm-charts/hpe-greenlake-file-csi-driver/values.yaml
  controller:
    affinity: {}
    labels: {}
    nodeSelector: {}
    resources:
      limits:
        cpu: 2000m
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 128Mi
    tolerations: []
  disableNodeConformance: false
  imagePullPolicy: IfNotPresent
  images:
    csiAttacher: registry.k8s.io/sig-storage/csi-attacher:v4.6.1
    csiControllerDriver: quay.io/hpestorage/filex-csi-driver:v1.0.0-beta
    csiNodeDriver: quay.io/hpestorage/filex-csi-driver:v1.0.0-beta
    csiNodeDriverRegistrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1
    csiNodeInit: quay.io/hpestorage/filex-csi-init:v1.0.0-beta
    csiProvisioner: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1
    csiResizer: registry.k8s.io/sig-storage/csi-resizer:v1.11.1
    csiSnapshotter: registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1
  kubeletRootDir: /var/lib/kubelet
  node:
    affinity: {}
    labels: {}
    nodeSelector: {}
    resources:
      limits:
        cpu: 2000m
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 128Mi
    tolerations: []
```
98 changes: 98 additions & 0 deletions docs/filex_csi_driver/index.md
# Introduction

A Container Storage Interface ([CSI](https://github.com/container-storage-interface/spec)) driver for Kubernetes. The HPE GreenLake for File Storage CSI Driver performs data management operations on storage resources.

## Table of Contents

[TOC]

## Features and Capabilities

Below is the official table of CSI features we track and deem readily available for use after we've officially tested and validated them in the [platform matrix](#compatibility_and_support).

| Feature | K8s maturity | Since K8s version | HPE GreenLake for File Storage CSI Driver |
|---------------------------|-------------------|-------------------|-------------------------------------------|
| Dynamic Provisioning | GA | 1.13 | 1.0.0 |
| Volume Expansion | GA | 1.24 | 1.0.0 |
| Volume Snapshots | GA | 1.20 | 1.0.0 |
| PVC Data Source | GA | 1.18 | 1.0.0 |
| Generic Ephemeral Volumes | GA | 1.23 | 1.0.0 |
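As an illustration of the "PVC Data Source" capability, a clone is requested by referencing an existing claim in `dataSource`; the claim names and size below are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-clone
spec:
  dataSource:
    kind: PersistentVolumeClaim
    name: my-pvc  # existing claim to clone, must use the same StorageClass
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi  # must be at least the size of the source claim
```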

!!! tip
    Familiarize yourself with the basic requirements below for running the CSI driver on your Kubernetes cluster. It's then highly recommended to continue installing the CSI driver with either a [Helm chart](deployment.md#helm) or an [Operator](deployment.md#operator).

## Compatibility and Support

For each CSI driver release, these are the combinations HPE has tested and can provide official support services for.

!!! caution "Disclaimer"
    The HPE GreenLake for File Storage CSI Driver is currently **NOT** supported by HPE and is considered beta software.

<a name="latest_release"></a>
#### HPE GreenLake for File Storage CSI Driver v1.0.0-beta

Release highlights:

* Initial beta release

<table>
<tr>
<th>Kubernetes</th>
<td>1.28-1.31<sup>1</sup></td>
</tr>
<tr>
<th>Helm Chart</th>
<td><a href="https://artifacthub.io/packages/helm/hpe-storage/hpe-greenlake-for-file-csi-driver/1.0.0-beta">v1.0.0-beta</a> on ArtifactHub</td>
</tr>
<!--tr>
<th>Operators</th>
<td>
<a href="https://operatorhub.io/operator/hpe-csi-operator/stable/hpe-csi-operator.v2.5.1">v2.5.1</a> on OperatorHub<br />
<a href="https://catalog.redhat.com/software/container-stacks/detail/5e9874643f398525a0ceb004">v2.5.1</a> via OpenShift console
</td>
</tr-->
<tr>
<th>Worker&nbsp;OS</th>
<td>
Red Hat Enterprise Linux<sup>2</sup> 7.x, 8.x, 9.x, Red Hat CoreOS 4.14-4.16<br />
Ubuntu 16.04, 18.04, 20.04, 22.04, 24.04<br />
SUSE Linux Enterprise Server 15 SP4, SP5, SP6 and SLE Micro<sup>4</sup> equivalents
</td>
</tr>
<tr>
<th>Platforms<sup>3</sup></th>
<td>
HPE GreenLake for File Storage MP OS 1.2 or later
</td>
</tr>
<tr>
<th>Data&nbsp;Protocols</th>
<td>NFSv3 and NFSv4.1</td>
</tr>
<!--tr>
<th>Blogs</th>
<td>
<a href="https://community.hpe.com/t5/around-the-storage-block/hpe-csi-driver-for-kubernetes-2-5-0-improved-stateful-workload/ba-p/7220864">HPE CSI Driver for Kubernetes 2.5.0: Improved stateful workload resilience and robustness</a>
</td>
</tr-->
</table>

<small>
<sup>1</sup> = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others, Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See [partner ecosystems](../partners) for other variations. The lowest tested and known working version is Kubernetes 1.21.<br />
<sup>2</sup> = The HPE CSI Driver recognizes CentOS, AlmaLinux and Rocky Linux as RHEL derivatives and they are supported by HPE. While RHEL 7 and its derivatives will work, the host OS has been EOL'd and support is limited.<br/>
<sup>3</sup> = Learn about each data platform's team [support commitment](../legal/support/index.md).<br/>
<sup>4</sup> = SLE Micro nodes may need to be conformed manually, run `transactional-update -n pkg install nfs-client` and reboot if the CSI node driver doesn't start.<br/>
</small>
<!--
#### Release Archive
HPE currently supports up to three minor releases of the HPE CSI Driver for Kubernetes.
* [Unsupported releases](archive.md)
-->

## Known Limitations

* Always check with your Kubernetes distribution vendor which CSI features are available for use and supported by the vendor.
* Inline Ephemeral Volumes are currently not supported. Use Generic Ephemeral Volumes instead as a workaround.
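Since Inline Ephemeral Volumes aren't supported, scratch space can instead be declared as a Generic Ephemeral Volume directly in the `Pod` spec. A minimal sketch, assuming a `StorageClass` named `hpe-file-standard` exists:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: scratch
          mountPath: /data
  volumes:
    - name: scratch
      ephemeral:
        # The claim is created with the Pod and deleted when the Pod goes away
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteMany"]
            storageClassName: hpe-file-standard  # assumed name
            resources:
              requests:
                storage: 10Gi
```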
