📖 Document the CAPIProvider resource for management of CAPI Operator resources #57

Merged · 1 commit · Jan 11, 2024
1 change: 0 additions & 1 deletion docs/getting-started/install_capi_operator.md
@@ -49,7 +49,6 @@ helm install capi-operator capi-operator/cluster-api-operator
--set cert-manager.enabled=true
--timeout 90s
--secret-name <secret_name>
--secret-namespace <secret_namespace>
--wait
```

17 changes: 8 additions & 9 deletions docs/getting-started/install_turtles_operator.md
@@ -26,10 +26,10 @@ helm repo update
To install `Cluster API Operator` as a dependency of `Rancher Turtles`, a minimum set of additional helm flags should be specified:

```bash
helm install rancher-turtles turtles/rancher-turtles --version v0.2.0
-n rancher-turtles-system
--dependency-update
--create-namespace --wait
helm install rancher-turtles turtles/rancher-turtles --version v0.4.0 \
-n rancher-turtles-system \
--dependency-update \
--create-namespace --wait \
--timeout 180s
```

@@ -41,23 +41,22 @@ helm install rancher-turtles turtles/rancher-turtles --version v0.2.0

This is the basic, recommended configuration, which manages the creation of a secret containing the required feature flags (`CLUSTER_TOPOLOGY`, `EXP_CLUSTER_RESOURCE_SET` and `EXP_MACHINE_POOL` enabled) in the core provider namespace.

If you need to override the default behavior and use an existing secret (or add custom environment variables), you can pass the secret name and namespace helm flags. In this case, as a user, you are in charge of managing the secret creation and its content, including the minimum required features: `CLUSTER_TOPOLOGY`, `EXP_CLUSTER_RESOURCE_SET` and `EXP_MACHINE_POOL` enabled.
If you need to override the default behavior and use an existing secret (or add custom environment variables), you can pass the secret name helm flag. In this case, as a user, you are in charge of managing the secret creation and its content, including the minimum required features: `CLUSTER_TOPOLOGY`, `EXP_CLUSTER_RESOURCE_SET` and `EXP_MACHINE_POOL` enabled.

```bash
helm install ...
# Passing the secret name for additional environment variables
--set cluster-api-operator.cluster-api.configSecret.name=<secret_name>
--set cluster-api-operator.cluster-api.configSecret.namespace=<secret_namespace>
```

The following is an example of a user-managed secret `cluster-api-operator.cluster-api.configSecret.name=variables`, `cluster-api-operator.cluster-api.configSecret.namespace=default` with `CLUSTER_TOPOLOGY`, `EXP_CLUSTER_RESOURCE_SET` and `EXP_MACHINE_POOL` feature flags set and an extra custom variable:
The following is an example of a user-managed secret `cluster-api-operator.cluster-api.configSecret.name=variables` with `CLUSTER_TOPOLOGY`, `EXP_CLUSTER_RESOURCE_SET` and `EXP_MACHINE_POOL` feature flags set and an extra custom variable:

```yaml title="secret.yaml"
apiVersion: v1
kind: Secret
metadata:
  name: variables
  namespace: default
  namespace: rancher-turtles-system
type: Opaque
stringData:
  CLUSTER_TOPOLOGY: "true"
@@ -89,7 +88,7 @@ helm repo update
and then it can be installed into the `rancher-turtles-system` namespace with:

```bash
helm install rancher-turtles turtles/rancher-turtles --version v0.2.0
helm install rancher-turtles turtles/rancher-turtles --version v0.4.0
-n rancher-turtles-system
--set cluster-api-operator.enabled=false
--set cluster-api-operator.cluster-api.enabled=false
1 change: 0 additions & 1 deletion docs/reference-guides/rancher-turtles-chart/values.md
@@ -46,7 +46,6 @@ cluster-api-operator:
    version: v1.4.6 # version of CAPI to install (default: v1.4.6)
    configSecret:
      name: "" # (provide only if using a user-managed secret) name of the config secret to use for core CAPI controllers, used by the CAPI operator. See https://github.com/kubernetes-sigs/cluster-api-operator/tree/main/docs#installing-azure-infrastructure-provider docs for more details.
      namespace: "" # (provide only if using a user-managed secret) namespace of the config secret to use for core CAPI controllers, used by the CAPI operator.
      defaultName: "capi-env-variables" # default name for the automatically created secret.
    core:
      namespace: capi-system
43 changes: 41 additions & 2 deletions docs/tasks/capi-operator/add_infrastructure_provider.md
@@ -1,5 +1,5 @@
---
sidebar_position: 2
sidebar_position: 3
---

# Add Infrastructure Provider
@@ -12,6 +12,45 @@ Next, install [Azure Infrastructure Provider](https://capz.sigs.k8s.io/). Before

Since the provider requires variables to be set, create a secret containing them in the same namespace as the provider. It is also recommended to include a `github-token` in the secret. This token is used to fetch the provider repository, and it is required for the provider to be installed. The operator may exceed the rate limit of the GitHub API without the token. Like [clusterctl](https://cluster-api.sigs.k8s.io/clusterctl/overview.html?highlight=github_token#avoiding-github-rate-limiting), the token needs only the `repo` scope.

## Option 1: CAPIProvider resource

This section describes how to install the Azure `InfrastructureProvider` via `CAPIProvider`, which is responsible for managing Cluster API Azure CRDs and the Cluster API Azure controller.

*Example:*

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: azure-variables
  namespace: capz-system
type: Opaque
stringData:
  AZURE_CLIENT_ID_B64: Zm9vCg==
  AZURE_CLIENT_SECRET_B64: Zm9vCg==
  AZURE_SUBSCRIPTION_ID_B64: Zm9vCg==
  AZURE_TENANT_ID_B64: Zm9vCg==
  github-token: ghp_fff
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: azure
  namespace: capz-system
spec:
  version: v1.9.3
  type: infrastructure # required
  configSecret:
    name: azure-variables # This will additionally populate the default set of feature gates for the provider
```
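
As a quick usage sketch (assuming the two manifests above are saved to `capz-provider.yaml`; the resource plural `capiproviders` and the readiness reporting in `status` are assumptions), the provider can be applied and watched with:

```bash
# Create the target namespace, then apply the secret and CAPIProvider manifests
kubectl create namespace capz-system
kubectl apply -f capz-provider.yaml

# Watch the CAPIProvider object until the operator reports it as installed
kubectl get capiproviders.turtles-capi.cattle.io -n capz-system -w
```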

## Option 2: Raw InfrastructureProvider resource

This section describes how to install the Azure `InfrastructureProvider`, which is responsible for managing the Cluster API Azure CRDs and the Cluster API Azure controller.

*Example:*

```yaml
---
apiVersion: v1
@@ -27,7 +66,7 @@ stringData:
  AZURE_TENANT_ID_B64: Zm9vCg==
  github-token: ghp_fff
---
apiVersion: operator.cluster.x-k8s.io/v1alpha1
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: InfrastructureProvider
metadata:
  name: azure
@@ -11,5 +11,6 @@ This section describes the basic process of installing `CAPI` providers using th
- `ControlPlaneProvider`
- `InfrastructureProvider`
- `AddonProvider`
- `IPAMProvider`

Please note that this example provides a basic configuration of the Azure Infrastructure provider for getting started. More detailed examples and CRD descriptions are provided in the `Cluster API Operator` [documentation](https://github.com/kubernetes-sigs/cluster-api-operator/tree/main/docs#readme).
62 changes: 62 additions & 0 deletions docs/tasks/capi-operator/capiprovider_resource.md
@@ -0,0 +1,62 @@
---
sidebar_position: 2
---

# CAPIProvider Resource

The `CAPIProvider` resource allows managing Cluster API Operator manifests in a declarative way. It is used to provision and configure Cluster API providers such as AWS, vSphere, etc.

`CAPIProvider` follows a GitOps model: the spec fields are declarative user inputs, and the controller only updates the status.

Design details are documented in the [ADR](https://github.com/rancher-sandbox/rancher-turtles/blob/main/docs/adr/0007-rancher-turtles-public-api.md).

## Usage

To use the `CAPIProvider` resource:

1. Create a `CAPIProvider` resource with the desired provider name, type, credentials, configuration, and features.
1. The `CAPIProvider` controller will handle templating the required Cluster API Operator manifests based on the `CAPIProvider` spec.
1. The status field on the `CAPIProvider` resource will reflect the state of the generated manifests.
1. Manage the `CAPIProvider` object declaratively to apply changes to the generated provider manifests.

Here is an example `CAPIProvider` manifest:

```yaml
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: aws-infra
  namespace: default
spec:
  name: aws
  type: infrastructure
  credentials:
    rancherCloudCredential: aws-creds # Rancher credentials secret for AWS
  configSecret:
    name: aws-config
  features:
    clusterResourceSet: true
```

This will generate an AWS infrastructure provider with the supplied Rancher credential secret mapping and the custom set of enabled features.

The `CAPIProvider` controller will own all the generated provider resources, so they can be garbage collected by deleting the `CAPIProvider` object.

## Specification

The key fields in the `CAPIProvider` spec are:

- `name` - Name of the provider (aws, vsphere, etc.). Inherited from `metadata.name` if not specified.
- `type` - Kind of provider resource (infrastructure, controlplane, etc.)
- `credentials` - Source of credentials for the provider specification
- `configSecret` - Name of the provider config secret, where the variables and synced credentials will be stored. If not specified, it defaults to the name of the `CAPIProvider` resource.
- `features` - Enabled provider features
- `variables` - A map of environment variables to add to the content of the `configSecret`

Full documentation on the `CAPIProvider` resource is available [here](https://doc.crds.dev/github.com/rancher-sandbox/rancher-turtles/turtles-capi.cattle.io/CAPIProvider/[email protected]).
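
As an illustrative sketch of the `features` and `variables` fields (the provider name `docker` and the individual feature keys other than `clusterResourceSet` are assumptions, mirroring the common Cluster API feature gates), a spec combining both might look like:

```yaml
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: docker
  namespace: default
spec:
  type: infrastructure
  features:
    clusterResourceSet: true
    clusterTopology: true # assumed feature key, mirroring the CLUSTER_TOPOLOGY gate
  variables:
    # Added to the generated configSecret as environment variables
    CLUSTER_TOPOLOGY: "true"
    EXP_MACHINE_POOL: "true"
```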

## Deletion

When a `CAPIProvider` resource is deleted, the Kubernetes garbage collector will clean up all the generated provider resources that it owns. This includes:
- Cluster API Operator resource instance
- Secret referenced by the `configSecret`
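
For example (reusing the `aws-infra` object from the usage example above; the resource plural `capiproviders` is an assumption), cleanup is a single delete:

```bash
# Removing the CAPIProvider lets Kubernetes garbage-collect the owned
# operator resources and the generated config secret
kubectl delete capiproviders.turtles-capi.cattle.io aws-infra -n default
```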
33 changes: 26 additions & 7 deletions docs/tasks/capi-operator/installing_core_provider.md
@@ -1,13 +1,36 @@
---
sidebar_position: 3
sidebar_position: 4
---

# Installing the CoreProvider

This section describes how to install the CoreProvider, which is responsible for managing the Cluster API CRDs and the Cluster API controller.

Any existing namespace could be utilized for providers in the Kubernetes cluster. However, before creating a provider object, make sure the specified namespace has been created. In the example below, we use the `capi-system` namespace. This namespace can be created either through the Command Line Interface (CLI) by running `kubectl create namespace capi-system`, or using the declarative approach described in the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/#create-new-namespaces).
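
As a minimal sketch of the declarative approach (the file name `namespace.yaml` is illustrative), the namespace can be defined in a manifest and applied with `kubectl apply -f namespace.yaml`:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: capi-system
```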

:::note
Only one CoreProvider can be installed at the same time on a single cluster.
:::

## Option 1: CAPIProvider resource

This section describes how to install the `CoreProvider` via `CAPIProvider`, which is responsible for managing the Cluster API CRDs and the Cluster API controller.

*Example:*

```yaml
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: cluster-api
  namespace: capi-system
spec:
  version: v1.4.6
  type: core # required
```

## Option 2: CoreProvider resource

This section describes how to install the `CoreProvider`, which is responsible for managing the Cluster API CRDs and the Cluster API controller.

*Example:*

```yaml
@@ -19,7 +42,3 @@ metadata:
spec:
  version: v1.4.6
```

:::note
Only one CoreProvider can be installed at the same time on a single cluster.
:::