From ecdeb7aa9f39299c8389ed41d2a238c59cc01b31 Mon Sep 17 00:00:00 2001
From: travisn
Date: Wed, 26 Apr 2023 17:20:11 -0600
Subject: [PATCH] docs: clarify docs for wording or obsolete info

Update the getting started guide, prereqs, architecture, and other basic
docs with more concise wording and remove some obsolete information.
Some links are removed so they don't distract the user from the critical
information conveyed in the Rook docs.

Signed-off-by: travisn
---
 .../Prerequisites/prerequisites.md            | 42 ++++-----
 .../Getting-Started/example-configurations.md | 32 +++----
 Documentation/Getting-Started/quickstart.md   | 92 +++++++++++--------
 .../Getting-Started/release-cycle.md          |  2 +-
 .../Getting-Started/storage-architecture.md   |  4 +-
 Documentation/README.md                       |  2 +-
 GOVERNANCE.md                                 |  8 +-
 INSTALL.md                                    |  3 -
 8 files changed, 99 insertions(+), 86 deletions(-)

diff --git a/Documentation/Getting-Started/Prerequisites/prerequisites.md b/Documentation/Getting-Started/Prerequisites/prerequisites.md
index 5edda955849e..a81edf61efaf 100644
--- a/Documentation/Getting-Started/Prerequisites/prerequisites.md
+++ b/Documentation/Getting-Started/Prerequisites/prerequisites.md
@@ -7,7 +7,7 @@ and Rook is granted the required privileges (see below for more information).
 
 ## Minimum Version
 
-Kubernetes **v1.21** or higher is supported for the Ceph operator.
+Kubernetes **v1.21** or higher is supported.
 
 ## CPU Architecture
 
@@ -15,14 +15,14 @@ Architectures supported are `amd64 / x86_64` and `arm64`.
 
 ## Ceph Prerequisites
 
-In order to configure the Ceph storage cluster, at least one of these local storage types is required:
+To configure the Ceph storage cluster, at least one of these local storage types is required:
 
 * Raw devices (no partitions or formatted filesystems)
 * Raw partitions (no formatted filesystem)
 * LVM Logical Volumes (no formatted filesystem)
 * Persistent Volumes available from a storage class in `block` mode
 
-You can confirm whether your partitions or devices are formatted with filesystems with the following command:
+Confirm whether the partitions or devices are formatted with filesystems with the following command:
 
 ```console
 $ lsblk -f
@@ -34,7 +34,7 @@ vda
 vdb
 ```
 
-If the `FSTYPE` field is not empty, there is a filesystem on top of the corresponding device. In this example, you can use `vdb` for Ceph and can't use `vda` or its partitions.
+If the `FSTYPE` field is not empty, there is a filesystem on top of the corresponding device. In this example, `vdb` is available to Rook, while `vda` and its partitions have a filesystem and are not available.
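+
+If a device is intended for Rook but still shows a leftover `FSTYPE` from previous use, its old
+signatures can be cleared with `wipefs`. This is only a sketch for a disposable test device
+(`/dev/sdX` is a placeholder, not a real device name); wiping destroys any data on the device:
+
+```console
+# verify the target device first; wipefs removes all filesystem and LVM signatures it finds
+lsblk -f /dev/sdX
+wipefs --all /dev/sdX
+```
+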
## Admission Controller @@ -43,23 +43,23 @@ Enabling the Rook admission controller is recommended to provide an additional l To deploy the Rook admission controllers, install the cert manager before Rook is installed: ```console -kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.7.1/cert-manager.yaml +kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.11.1/cert-manager.yaml ``` ## LVM package Ceph OSDs have a dependency on LVM in the following scenarios: -* OSDs are created on raw devices or partitions * If encryption is enabled (`encryptedDevice: "true"` in the cluster CR) * A `metadata` device is specified LVM is not required for OSDs in these scenarios: -* Creating OSDs on PVCs using the `storageClassDeviceSets` +* OSDs are created on raw devices or partitions +* OSDs are created on PVCs using the `storageClassDeviceSets` -If LVM is required for your scenario, LVM needs to be available on the hosts where OSDs will be running. -Some Linux distributions do not ship with the `lvm2` package. This package is required on all storage nodes in your k8s cluster to run Ceph OSDs. +If LVM is required, LVM needs to be available on the hosts where OSDs will be running. +Some Linux distributions do not ship with the `lvm2` package. This package is required on all storage nodes in the k8s cluster to run Ceph OSDs. Without this package even though Rook will be able to successfully create the Ceph OSDs, when a node is rebooted the OSD pods running on the restarted node will **fail to start**. Please install LVM using your Linux distribution's package manager. For example: @@ -93,14 +93,14 @@ Ceph requires a Linux kernel built with the RBD module. Many Linux distributions have this module, but not all. For example, the GKE Container-Optimised OS (COS) does not have RBD. -You can test your Kubernetes nodes by running `modprobe rbd`. -If it says 'not found', you may have to rebuild your kernel and include at least -the `rbd` module, install a newer kernel, or choose a different Linux distribution. +Test your Kubernetes nodes by running `modprobe rbd`. +If the rbd module is 'not found', rebuild the kernel to include the `rbd` module, +install a newer kernel, or choose a different Linux distribution. Rook's default RBD configuration specifies only the `layering` feature, for broad compatibility with older kernels. If your Kubernetes nodes run a 5.4 -or later kernel you may wish to enable additional feature flags. The `fast-diff` -and `object-map` features are especially useful. +or later kernel, additional feature flags can be enabled in the +storage class. The `fast-diff` and `object-map` features are especially useful. ```yaml imageFeatures: layering,fast-diff,object-map,deep-flatten,exclusive-lock @@ -108,8 +108,8 @@ imageFeatures: layering,fast-diff,object-map,deep-flatten,exclusive-lock ### CephFS -If you will be creating volumes from a Ceph shared file system (CephFS), the recommended minimum kernel version is **4.17**. -If you have a kernel version less than 4.17, the requested PVC sizes will not be enforced. Storage quotas will only be +If creating RWX volumes from a Ceph shared file system (CephFS), the recommended minimum kernel version is **4.17**. +If the kernel version is less than 4.17, the requested PVC sizes will not be enforced. Storage quotas will only be enforced on newer kernels. ## Distro Notes @@ -118,21 +118,21 @@ Specific configurations for some distributions. 
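+
+Before the distribution-specific notes below, the kernel requirements above can be sanity checked
+directly on a node (a quick sketch; `modprobe` typically requires root, and the exact kernel version
+needed depends on the features in use):
+
+```console
+# kernel version: 4.17+ is recommended when CephFS quotas should be enforced
+uname -r
+# exits silently when the rbd kernel module is available
+sudo modprobe rbd
+```
+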
### NixOS
 
-When you use NixOS, the kernel modules will be found in the non-standard path `/run/current-system/kernel-modules/lib/modules/`,
+For NixOS, the kernel modules will be found in the non-standard path `/run/current-system/kernel-modules/lib/modules/`,
 and they'll be symlinked inside the also non-standard path `/nix`.
 
-For Rook Ceph containers to be able to load the required modules, they need read access to those locations.
+Rook containers require read access to those locations to be able to load the required modules.
 They have to be bind-mounted as volumes in the CephFS and RBD plugin pods.
 
-If you install Rook with Helm, uncomment these example settings in `values.yaml`:
+If installing Rook with Helm, uncomment these example settings in `values.yaml`:
 
 * `csi.csiCephFSPluginVolume`
 * `csi.csiCephFSPluginVolumeMount`
 * `csi.csiRBDPluginVolume`
 * `csi.csiRBDPluginVolumeMount`
 
-If you deploy without Helm, add those same values to the corresponding environment variables in the operator pod,
-or the corresponding keys in its `ConfigMap`:
+If deploying without Helm, add those same values to the settings in the `rook-ceph-operator-config`
+ConfigMap found in operator.yaml:
 
 * `CSI_CEPHFS_PLUGIN_VOLUME`
 * `CSI_CEPHFS_PLUGIN_VOLUME_MOUNT`
diff --git a/Documentation/Getting-Started/example-configurations.md b/Documentation/Getting-Started/example-configurations.md
index e5cf46729f40..25ac95b6f4e7 100644
--- a/Documentation/Getting-Started/example-configurations.md
+++ b/Documentation/Getting-Started/example-configurations.md
@@ -2,7 +2,7 @@
 title: Example Configurations
 ---
 
-Configuration for Rook and Ceph can be configured in multiple ways to provide block devices, shared filesystem volumes or object storage in a kubernetes namespace. We have provided several examples to simplify storage setup, but remember there are many tunables and you will need to decide what settings work for your use case and environment.
+Rook and Ceph can be configured in multiple ways to provide block devices, shared filesystem volumes, or object storage in a Kubernetes namespace. While several examples are provided to simplify storage setup, settings are available to optimize various production environments.
 
 See the **[example yaml files](https://github.com/rook/rook/blob/master/deploy/examples)** folder for all the rook/ceph setup example spec files.
 
@@ -16,7 +16,7 @@ The [crds.yaml](https://github.com/rook/rook/blob/master/deploy/examples/crds.ya
 kubectl create -f crds.yaml -f common.yaml
 ```
 
-The examples all assume the operator and all Ceph daemons will be started in the same namespace. If you want to deploy the operator in a separate namespace, see the comments throughout `common.yaml`.
+The examples all assume the operator and all Ceph daemons will be started in the same namespace. If deploying the operator in a separate namespace, see the comments throughout `common.yaml`.
 
 ## Operator
 
@@ -31,13 +31,13 @@ Settings for the operator are configured through environment variables on the op
 
 ## Cluster CRD
 
-Now that your operator is running, let's create your Ceph storage cluster. This CR contains the most critical settings
+Now that the operator is running, create the Ceph storage cluster with the CephCluster CR. This CR contains the most critical settings
 that will influence how the operator configures the storage. It is important to understand the various ways to configure
-the cluster. These examples represent a very small set of the different ways to configure the storage.
+the cluster. These examples represent several different ways to configure the storage.
 
-* [`cluster.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/cluster.yaml): This file contains common settings for a production storage cluster. Requires at least three worker nodes.
+* [`cluster.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/cluster.yaml): Common settings for a production storage cluster. Requires at least three worker nodes.
 * [`cluster-test.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/cluster-test.yaml): Settings for a test cluster where redundancy is not configured. Requires only a single node.
-* [`cluster-on-pvc.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/cluster-on-pvc.yaml): This file contains common settings for backing the Ceph Mons and OSDs by PVs. Useful when running in cloud environments or where local PVs have been created for Ceph to consume.
+* [`cluster-on-pvc.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/cluster-on-pvc.yaml): Common settings for backing the Ceph Mons and OSDs by PVs. Useful when running in cloud environments or where local PVs have been created for Ceph to consume.
 * [`cluster-external.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/cluster-external.yaml): Connect to an [external Ceph cluster](../CRDs/Cluster/ceph-cluster-crd.md#external-cluster) with minimal access to monitor the health of the cluster and connect to the storage.
 * [`cluster-external-management.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/cluster-external-management.yaml): Connect to an [external Ceph cluster](../CRDs/Cluster/ceph-cluster-crd.md#external-cluster) with the admin key of the external cluster to enable remote creation of pools and configure services such as an [Object Store](../Storage-Configuration/Object-Storage-RGW/object-storage.md) or a [Shared Filesystem](../Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage.md).
 
@@ -47,27 +47,27 @@ See the [Cluster CRD](../CRDs/Cluster/ceph-cluster-crd.md) topic for more detail
 
 ## Setting up consumable storage
 
-Now we are ready to setup [block](https://ceph.com/ceph-storage/block-storage/), [shared filesystem](https://ceph.com/ceph-storage/file-system/) or [object storage](https://ceph.com/ceph-storage/object-storage/) in the Rook Ceph cluster. These kinds of storage are respectively referred to as CephBlockPool, CephFilesystem and CephObjectStore in the spec files.
+Now we are ready to set up Block, Shared Filesystem, or Object storage in the Rook cluster. These storage types are respectively created with the CephBlockPool, CephFilesystem and CephObjectStore CRs.
 
 ### Block Devices
 
-Ceph can provide raw block device volumes to pods. Each example below sets up a storage class which can then be used to provision a block device in kubernetes pods. The storage class is defined with [a pool](http://docs.ceph.com/docs/master/rados/operations/pools/) which defines the level of data redundancy in Ceph:
+Ceph provides raw block device volumes to pods. Each example below sets up a storage class which can then be used to provision a block device in application pods. The storage class is defined with a Ceph pool, which determines the level of data redundancy in Ceph:
 
-* [`storageclass.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/csi/rbd/storageclass.yaml): This example illustrates replication of 3 for production scenarios and requires at least three worker nodes. Your data is replicated on three different kubernetes worker nodes and intermittent or long-lasting single node failures will not result in data unavailability or loss.
-* [`storageclass-ec.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/csi/rbd/storageclass-ec.yaml): Configures erasure coding for data durability rather than replication. [Ceph's erasure coding](http://docs.ceph.com/docs/master/rados/operations/erasure-code/) is more efficient than replication so you can get high reliability without the 3x replication cost of the preceding example (but at the cost of higher computational encoding and decoding costs on the worker nodes). Erasure coding requires at least three worker nodes. See the [Erasure coding](../CRDs/Block-Storage/ceph-block-pool-crd.md#erasure-coded) documentation for more details.
-* [`storageclass-test.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/csi/rbd/storageclass-test.yaml): Replication of 1 for test scenarios and it requires only a single node. Do not use this for applications that store valuable data or have high-availability storage requirements, since a single node failure can result in data loss.
+* [`storageclass.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/csi/rbd/storageclass.yaml): This example illustrates replication of 3 for production scenarios and requires at least three worker nodes. Data is replicated on three different Kubernetes worker nodes. Intermittent or long-lasting single node failures will not result in data unavailability or loss.
+* [`storageclass-ec.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/csi/rbd/storageclass-ec.yaml): Configures erasure coding for data durability rather than replication. Ceph's erasure coding is more efficient than replication, so you can get high reliability without the 3x replication cost of the preceding example (though with higher CPU cost for encoding and decoding on the worker nodes). Erasure coding requires at least three worker nodes. See the [Erasure coding](../CRDs/Block-Storage/ceph-block-pool-crd.md#erasure-coded) documentation.
+* [`storageclass-test.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/csi/rbd/storageclass-test.yaml): Replication of 1 for test scenarios. Requires only a single node. Do not use this for production applications. A single node failure can result in full data loss.
 
-The storage classes are found in different sub-directories depending on the driver:
+The block storage classes are found in the examples directory:
 
-* `csi/rbd`: The CSI driver for block devices. This is the preferred driver going forward.
+* `csi/rbd`: The CSI driver examples for block devices
 
-See the [Ceph Pool CRD](../CRDs/Block-Storage/ceph-block-pool-crd.md) topic for more details on the settings.
+See the [CephBlockPool CRD](../CRDs/Block-Storage/ceph-block-pool-crd.md) topic for more block storage settings.
 
 ### Shared Filesystem
 
-Ceph filesystem (CephFS) allows the user to 'mount' a shared posix-compliant folder into one or more hosts (pods in the container world). This storage is similar to NFS shared storage or CIFS shared folders, as explained [here](https://ceph.com/ceph-storage/file-system/).
+Ceph filesystem (CephFS) allows the user to mount a shared POSIX-compliant folder into one or more application pods. This storage is similar to NFS shared storage or CIFS shared folders, as explained [here](https://ceph.com/ceph-storage/file-system/).
 
-File storage contains multiple pools that can be configured for different scenarios:
+Shared Filesystem storage contains configurable pools for different scenarios:
 
 * [`filesystem.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/filesystem.yaml): Replication of 3 for production scenarios. Requires at least three worker nodes.
 * [`filesystem-ec.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/filesystem-ec.yaml): Erasure coding for production scenarios. Requires at least three worker nodes.
diff --git a/Documentation/Getting-Started/quickstart.md b/Documentation/Getting-Started/quickstart.md
index 1d113477609a..ed75797c0fa6 100644
--- a/Documentation/Getting-Started/quickstart.md
+++ b/Documentation/Getting-Started/quickstart.md
@@ -2,14 +2,13 @@
 title: Quickstart
 ---
 
-Welcome to Rook! We hope you have a great experience installing the Rook **cloud-native storage orchestrator** platform to enable highly available, durable Ceph storage in your Kubernetes cluster.
+Welcome to Rook! We hope you have a great experience installing the Rook **cloud-native storage orchestrator** platform to enable highly available, durable Ceph storage in Kubernetes clusters.
 
-If you have any questions along the way, please don't hesitate to ask us in our [Slack channel](https://rook-io.slack.com). You can sign up for our Slack [here](https://slack.rook.io).
+Don't hesitate to ask questions in our [Slack channel](https://rook-io.slack.com). Sign up for the Rook Slack [here](https://slack.rook.io).
 
-This guide will walk you through the basic setup of a Ceph cluster and enable you to consume block, object, and file storage
-from other pods running in your cluster.
+This guide will walk through the basic setup of a Ceph cluster, enabling K8s applications to consume block, object, and file storage.
 
-**Always use a virtual machine when testing Rook. Never use your host system where local devices may mistakenly be consumed.**
+**Always use a virtual machine when testing Rook. Never use a host system where local devices may mistakenly be consumed.**
 
 ## Minimum Version
 
@@ -21,9 +20,9 @@ Architectures released are `amd64 / x86_64` and `arm64`.
 
 ## Prerequisites
 
-To make sure you have a Kubernetes cluster that is ready for `Rook`, you can [follow these instructions](Prerequisites/prerequisites.md).
+To check if a Kubernetes cluster is ready for `Rook`, see the [prerequisites](Prerequisites/prerequisites.md).
 
-In order to configure the Ceph storage cluster, at least one of these local storage options are required:
+To configure the Ceph storage cluster, at least one of these local storage options is required:
 
 * Raw devices (no partitions or formatted filesystems)
 * Raw partitions (no formatted filesystem)
@@ -32,7 +31,7 @@ In order to configure the Ceph storage cluster, at least one of these local stor
 
 ## TL;DR
 
-A simple Rook cluster can be created with the following kubectl commands and [example manifests](https://github.com/rook/rook/blob/master/deploy/examples).
+A simple Rook cluster can be created for Kubernetes with the following `kubectl` commands and [example manifests](https://github.com/rook/rook/blob/master/deploy/examples).
 
 ```console
 $ git clone --single-branch --branch master https://github.com/rook/rook.git
@@ -41,11 +40,20 @@ kubectl create -f crds.yaml -f common.yaml -f operator.yaml
 kubectl create -f cluster.yaml
 ```
 
-After the cluster is running, you can create [block, object, or file](#storage) storage to be consumed by other applications in your cluster.
+After the cluster is running, applications can consume [block, object, or file](#storage) storage.
 
 ## Deploy the Rook Operator
 
-The first step is to deploy the Rook operator. Check that you are using the [example yaml files](https://github.com/rook/rook/blob/master/deploy/examples) that correspond to your release of Rook. For more options, see the [example configurations documentation](example-configurations.md).
+The first step is to deploy the Rook operator.
+
+!!! important
+    The [Rook Helm Chart](../Helm-Charts/operator-chart.md) is available to deploy the operator instead of creating the manifests below.
+
+!!! note
+    Check that the [example yaml files](https://github.com/rook/rook/blob/master/deploy/examples) are from a tagged release of Rook.
+
+!!! note
+    These steps are for a standard production Rook deployment in Kubernetes. For OpenShift, testing, or more options, see the [example configurations documentation](example-configurations.md).
 
 ```console
 cd deploy/examples
@@ -55,21 +63,17 @@ kubectl create -f crds.yaml -f common.yaml -f operator.yaml
 kubectl -n rook-ceph get pod
 ```
 
-You can also deploy the operator with the [Rook Helm Chart](../Helm-Charts/operator-chart.md).
+Before starting the operator in production, consider these settings:
 
-Before you start the operator in production, there are some settings that you may want to consider:
-1. Consider if you want to enable certain Rook features that are disabled by default. See the [operator.yaml](https://github.com/rook/rook/blob/master/deploy/examples/operator.yaml) for these and other advanced settings.
-   1. Device discovery: Rook will watch for new devices to configure if the `ROOK_ENABLE_DISCOVERY_DAEMON` setting is enabled, commonly used in bare metal clusters.
-   2. Node affinity and tolerations: The CSI driver by default will run on any node in the cluster. To configure the CSI driver affinity, several settings are available.
-If you wish to deploy into a namespace other than the default `rook-ceph`, see the
-[Ceph advanced configuration section](../Storage-Configuration/Advanced/ceph-configuration.md#using-alternate-namespaces) on the topic.
+1. Some Rook features are disabled by default. See the [operator.yaml](https://github.com/rook/rook/blob/master/deploy/examples/operator.yaml) for these and other advanced settings.
+    1. Device discovery: Rook will watch for new devices to configure if the `ROOK_ENABLE_DISCOVERY_DAEMON` setting is enabled, commonly used in bare metal clusters.
+    2. Node affinity and tolerations: The CSI driver by default will run on any node in the cluster. To restrict the CSI driver affinity, several settings are available.
+2. If deploying Rook into a namespace other than the default `rook-ceph`, see the topic on
+[using an alternative namespace](../Storage-Configuration/Advanced/ceph-configuration.md#using-alternate-namespaces).
 
 ## Cluster Environments
 
-The Rook documentation is focused around starting Rook in a production environment. Examples are also
-provided to relax some settings for test environments. When creating the cluster later in this guide, consider these example cluster manifests:
+The Rook documentation is focused on starting Rook in a variety of environments. When creating the cluster later in this guide, consider these example cluster manifests:
 
 * [cluster.yaml](https://github.com/rook/rook/blob/master/deploy/examples/cluster.yaml): Cluster settings for a production cluster running on bare metal. Requires at least three worker nodes.
 * [cluster-on-pvc.yaml](https://github.com/rook/rook/blob/master/deploy/examples/cluster-on-pvc.yaml): Cluster settings for a production cluster running in a dynamic cloud environment.
@@ -79,8 +83,13 @@ See the [Ceph example configurations](example-configurations.md) for more detail
 
 ## Create a Ceph Cluster
 
-Now that the Rook operator is running we can create the Ceph cluster. For the cluster to survive reboots,
-make sure you set the `dataDirHostPath` property that is valid for your hosts. For more settings, see the documentation on [configuring the cluster](../CRDs/Cluster/ceph-cluster-crd.md).
+Now that the Rook operator is running, we can create the Ceph cluster.
+
+!!! important
+    The [Rook Cluster Helm Chart](../Helm-Charts/ceph-cluster-chart.md) is available to deploy the Ceph cluster instead of creating the manifests below.
+
+!!! important
+    For the cluster to survive reboots, set the `dataDirHostPath` property to a path that is valid for the hosts. For more settings, see the documentation on [configuring the cluster](../CRDs/Cluster/ceph-cluster-crd.md).
 
 Create the cluster:
 
@@ -88,9 +97,10 @@ kubectl create -f cluster.yaml
 ```
 
-Use `kubectl` to list pods in the `rook-ceph` namespace. You should be able to see the following pods once they are all running.
+Verify the cluster is running by viewing the pods in the `rook-ceph` namespace.
+
 The number of osd pods will depend on the number of nodes in the cluster and the number of devices configured.
-If you did not modify the `cluster.yaml` above, it is expected that one OSD will be created per node.
+For the default `cluster.yaml` above, one OSD will be created for each available device found on each node.
 
 !!! hint
     If the `rook-ceph-mon`, `rook-ceph-mgr`, or `rook-ceph-osd` pods are not created, please refer to the
@@ -106,7 +116,8 @@ csi-rbdplugin-hbsm7                                 3/3     Running     0
 csi-rbdplugin-provisioner-5b5cd64fd-nvk6c           6/6     Running     0          140s
 csi-rbdplugin-provisioner-5b5cd64fd-q7bxl           6/6     Running     0          140s
 rook-ceph-crashcollector-minikube-5b57b7c5d4-hfldl  1/1     Running     0          105s
-rook-ceph-mgr-a-64cd7cdf54-j8b5p                    1/1     Running     0          77s
+rook-ceph-mgr-a-64cd7cdf54-j8b5p                    2/2     Running     0          77s
+rook-ceph-mgr-b-657d54fc89-2xxw7                    2/2     Running     0          56s
 rook-ceph-mon-a-694bb7987d-fp9w7                    1/1     Running     0          105s
 rook-ceph-mon-b-856fdd5cb9-5h2qk                    1/1     Running     0          94s
 rook-ceph-mon-c-57545897fc-j576h                    1/1     Running     0          85s
@@ -124,7 +135,7 @@ To verify that the cluster is in a healthy state, connect to the [Rook toolbox](
 
 * All mons should be in quorum
 * A mgr should be active
-* At least one OSD should be active
+* At least three OSDs should be `up` and `in`
 * If the health is not `HEALTH_OK`, the warnings or errors should be investigated
 
 ```console
 $ ceph status
   services:
     mon: 3 daemons, quorum a,b,c (age 3m)
-    mgr: a(active, since 2m)
+    mgr: a(active, since 2m), standbys: b
     osd: 3 osds: 3 up (since 1m), 3 in (since 1m)
 [...]
 ```
 
-If the cluster is not healthy, please refer to the [Ceph common issues](../Troubleshooting/ceph-common-issues.md) for more details and potential solutions.
+!!! hint
+    If the cluster is not healthy, please refer to the [Ceph common issues](../Troubleshooting/ceph-common-issues.md) for potential solutions.
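+
+The same checks can also be run without keeping a shell open in the toolbox. This is a sketch that
+assumes the toolbox was created from the example `toolbox.yaml` (which creates a deployment named
+`rook-ceph-tools`) and that the default `rook-ceph` namespace is used:
+
+```console
+# high-level view of the CephCluster resource, including its phase and health
+kubectl -n rook-ceph get cephcluster
+
+# run ceph status through the toolbox deployment
+kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
+```
+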
## Storage @@ -148,34 +160,38 @@ For a walkthrough of the three types of storage exposed by Rook, see the guides * **[Block](../Storage-Configuration/Block-Storage-RBD/block-storage.md)**: Create block storage to be consumed by a pod (RWO) * **[Shared Filesystem](../Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage.md)**: Create a filesystem to be shared across multiple pods (RWX) -* **[Object](../Storage-Configuration/Object-Storage-RGW/object-storage.md)**: Create an object store that is accessible inside or outside the Kubernetes cluster +* **[Object](../Storage-Configuration/Object-Storage-RGW/object-storage.md)**: Create an object store that is accessible with an S3 endpoint inside or outside the Kubernetes cluster ## Ceph Dashboard -Ceph has a dashboard in which you can view the status of your cluster. Please see the [dashboard guide](../Storage-Configuration/Monitoring/ceph-dashboard.md) for more details. +Ceph has a dashboard to view the status of the cluster. See the [dashboard guide](../Storage-Configuration/Monitoring/ceph-dashboard.md). ## Tools -Create a toolbox pod for full access to a ceph admin client for debugging and troubleshooting your Rook cluster. Please see the [toolbox documentation](../Troubleshooting/ceph-toolbox.md) for setup and usage information. Also see our [advanced configuration](../Storage-Configuration/Advanced/ceph-configuration.md) document for helpful maintenance and tuning examples. +Create a toolbox pod for full access to a ceph admin client for debugging and troubleshooting the Rook cluster. See the [toolbox documentation](../Troubleshooting/ceph-toolbox.md) for setup and usage information. + +The [Rook Krew plugin](https://github.com/rook/kubectl-rook-ceph) provides commands to view status and troubleshoot issues. + +See the [advanced configuration](../Storage-Configuration/Advanced/ceph-configuration.md) document for helpful maintenance and tuning examples. ## Monitoring -Each Rook cluster has some built in metrics collectors/exporters for monitoring with [Prometheus](https://prometheus.io/). -To learn how to set up monitoring for your Rook cluster, you can follow the steps in the [monitoring guide](../Storage-Configuration/Monitoring/ceph-monitoring.md). +Each Rook cluster has built-in metrics collectors/exporters for monitoring with Prometheus. +To configure monitoring, see the [monitoring guide](../Storage-Configuration/Monitoring/ceph-monitoring.md). ## Telemetry -To allow us to understand usage, the maintainers for Rook and Ceph would like to receive telemetry reports for Rook clusters. +The Rook maintainers would like to receive telemetry reports for Rook clusters. The data is anonymous and does not include any identifying information. -We invite you to enable the telemetry reporting feature with the following command in the toolbox: +Enable the telemetry reporting feature with the following command in the toolbox: ``` ceph telemetry on ``` -The telemetry is disabled by default. For more details on what is reported and how your privacy is protected, +For more details on what is reported and how your privacy is protected, see the [Ceph Telemetry Documentation](https://docs.ceph.com/en/latest/mgr/telemetry/). ## Teardown -When you are done with the test cluster, see [these instructions](../Storage-Configuration/ceph-teardown.md) to clean up the cluster. +When finished with the test cluster, see [the cleanup guide](../Storage-Configuration/ceph-teardown.md). 
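+
+Before following the cleanup guide, it can help to list what this guide created so nothing is left
+behind afterwards (a read-only sketch, assuming the default `rook-ceph` namespace):
+
+```console
+# the cluster plus any block, file, or object storage created from the examples
+kubectl -n rook-ceph get cephcluster,cephblockpool,cephfilesystem,cephobjectstore
+```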
diff --git a/Documentation/Getting-Started/release-cycle.md b/Documentation/Getting-Started/release-cycle.md index ccda57f0ee8e..d19f5fb8ed0f 100644 --- a/Documentation/Getting-Started/release-cycle.md +++ b/Documentation/Getting-Started/release-cycle.md @@ -27,4 +27,4 @@ The minimum version supported by a Rook release is specified in the Rook expects to support the most recent six versions of Kubernetes. While these K8s versions may not all be supported by the K8s release cycle, we understand that -you may have clusters that take time to update. +clusters may take time to update. diff --git a/Documentation/Getting-Started/storage-architecture.md b/Documentation/Getting-Started/storage-architecture.md index 05873f6f5178..0599b4e9ff55 100644 --- a/Documentation/Getting-Started/storage-architecture.md +++ b/Documentation/Getting-Started/storage-architecture.md @@ -25,8 +25,8 @@ Rook automatically configures the Ceph-CSI driver to mount the storage to your p The `rook/ceph` image includes all necessary tools to manage the cluster. Rook is not in the Ceph data path. Many of the Ceph concepts like placement groups and crush maps -are hidden so you don't have to worry about them. Instead Rook creates a simplified user experience for admins that is in terms -of physical resources, pools, volumes, filesystems, and buckets. At the same time, advanced configuration can be applied when needed with the Ceph tools. +are hidden so you don't have to worry about them. Instead, Rook creates a simplified user experience for admins that is in terms +of physical resources, pools, volumes, filesystems, and buckets. Advanced configuration can be applied when needed with the Ceph tools. Rook is implemented in golang. Ceph is implemented in C++ where the data path is highly optimized. We believe this combination offers the best of both worlds. diff --git a/Documentation/README.md b/Documentation/README.md index d49595108ab1..ede5c852315c 100644 --- a/Documentation/README.md +++ b/Documentation/README.md @@ -28,4 +28,4 @@ For detailed design documentation, see also the [design docs](https://github.com ## Need help? Be sure to join the Rook Slack -If you have any questions along the way, please don't hesitate to ask us in our [Slack channel](https://rook-io.slack.com). You can sign up for our Slack [here](https://slack.rook.io). +If you have any questions along the way, don't hesitate to ask in our [Slack channel](https://rook-io.slack.com). Sign up for the Rook Slack [here](https://slack.rook.io). diff --git a/GOVERNANCE.md b/GOVERNANCE.md index 91e7cd108c4e..446841a73191 100644 --- a/GOVERNANCE.md +++ b/GOVERNANCE.md @@ -90,13 +90,13 @@ Beyond your contributions to the project, consider: If you are meeting these requirements, express interest to the [steering committee](OWNERS.md#steering-committee) directly that your organization is interested in adding a maintainer. -* We may ask you to do some PRs from our backlog. -* As you gain experience with the code base and our standards, we will ask you to do code reviews +* We may ask you to resolve some issues from our backlog. +* As you gain experience with the code base and our standards, we will ask you to perform code reviews for incoming PRs (i.e., all maintainers are expected to shoulder a proportional share of community reviews). 
-* After a period of approximately 2-3 months of working together and making sure we see eye to eye,
+* After a period of several months of working together and making sure we see eye to eye,
   the steering committee will confer and decide whether to grant maintainer status or not.
-  We make no guarantees on the length of time this will take, but 2-3 months is the approximate
+  We make no guarantees on the length of time this will take, but several months is the approximate
   goal.
 
 ### Removing a maintainer
diff --git a/INSTALL.md b/INSTALL.md
index 0ae15ffdb3d0..5b589490a81f 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -38,9 +38,6 @@ On every commit to PR and master the CI will build, run unit tests, and run inte
 
 If the build is for master or a release, the build will also be published to [dockerhub.com](https://cloud.docker.com/u/rook/repository/list).
 
-!!! Note
-    If the pull request title follows Rook's [contribution guidelines](https://rook.io/docs/rook/latest/Contributing/development-flow/#commit-structure), the CI will automatically run the appropriate test scenario. For example if a pull request title is "ceph: add a feature", then the tests for the Ceph storage provider will run. Similarly, tests will only run for a single provider with the "cassandra:" and "nfs:" prefixes.
-
 ## Building for other platforms
 
 You can also run the build for all supported platforms: