From 5fde08a32bdaead44c1c756f977d15abb97dbe72 Mon Sep 17 00:00:00 2001 From: David Martin Date: Fri, 10 Jan 2025 14:41:26 +0000 Subject: [PATCH] Link to docs site for any user content Signed-off-by: David Martin --- README.md | 155 +------- config/install/README.md | 637 +------------------------------ doc/install/install-openshift.md | 428 --------------------- 3 files changed, 6 insertions(+), 1214 deletions(-) delete mode 100644 doc/install/install-openshift.md diff --git a/README.md b/README.md index b21b2eafe..fcdaf4108 100644 --- a/README.md +++ b/README.md @@ -7,160 +7,15 @@ [![OpenSSF Best Practices](https://www.bestpractices.dev/projects/9242/badge)](https://www.bestpractices.dev/projects/9242) [![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2FKuadrant%2Fkuadrant-operator.svg?type=shield)](https://app.fossa.com/projects/git%2Bgithub.com%2FKuadrant%2Fkuadrant-operator?ref=badge_shield) -The Operator to install and manage the lifecycle of the [Kuadrant](https://github.com/Kuadrant/) components deployments. - ## Overview -Kuadrant is a re-architecture of API Management using Cloud Native concepts and separating the components to be less coupled, -more reusable and leverage the underlying kubernetes platform. It aims to deliver a smooth experience to providers and consumers -of applications & services when it comes to rate limiting, authentication, authorization, discoverability, change management, usage contracts, insights, etc. - -Kuadrant aims to produce a set of loosely coupled functionalities built directly on top of Kubernetes. -Furthermore, it only strives to provide what Kubernetes doesn’t offer out of the box, i.e. Kuadrant won’t be designing a new Gateway/proxy, -instead it will opt to connect with what’s there and what’s being developed (think Envoy, Istio, GatewayAPI). - -Kuadrant is a system of cloud-native k8s components that grows as users’ needs grow. 
- -- From simple protection of a Service (via **AuthN**) that is used by teammates working on the same cluster, or “sibling” services, up to **AuthZ** of users using OIDC plus custom policies. -- From no rate-limiting to rate-limiting for global service protection on to rate-limiting by users/plans - -## Architecture - -Kuadrant relies on the [Gateway API](https://gateway-api.sigs.k8s.io/) and one Gateway API provider -being installed on the cluster. Currently only [Istio](https://istio.io/) and -[EnvoyGateway](https://gateway.envoyproxy.io/) are supported -to operate the cluster ingress gateway to provide API management with **authentication** (authN), -**authorization** (authZ) and **rate limiting** capabilities. - -### Kuadrant components - -| CRD | Description | -| -------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Control Plane | The control plane takes the customer desired configuration (declaratively as kubernetes custom resources) as input and ensures all components are configured to obey customer's desired behavior.
This repository contains the source code of the kuadrant control plane | -| [Kuadrant Operator](https://github.com/Kuadrant/kuadrant-operator) | A Kubernetes Operator to manage the lifecycle of the kuadrant deployment | -| [Authorino](https://github.com/Kuadrant/authorino) | The AuthN/AuthZ enforcer. As the [external istio authorizer](https://istio.io/latest/docs/tasks/security/authorization/authz-custom/) ([envoy external authorization](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/ext_authz_filter) serving gRPC service) | -| [Limitador](https://github.com/Kuadrant/limitador) | The external rate limiting service. It exposes a gRPC service implementing the [Envoy Rate Limit protocol (v3)](https://www.envoyproxy.io/docs/envoy/latest/api-v3/service/ratelimit/v3/rls.proto) | -| [Authorino Operator](https://github.com/Kuadrant/authorino-operator) | A Kubernetes Operator to manage Authorino instances | -| [Limitador Operator](https://github.com/Kuadrant/limitador-operator) | A Kubernetes Operator to manage Limitador instances | -| [DNS Operator](https://github.com/Kuadrant/dns-operator) | A Kubernetes Operator to manage DNS records in external providers | - -### Provided APIs - -The kuadrant control plane owns the following [Custom Resource Definitions, CRDs](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/): - -| CRD | Description | Example | -| ------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------- | -| AuthPolicy CRD [\[doc\]](doc/overviews/auth.md) [[reference]](doc/reference/authpolicy.md) | Enable AuthN and AuthZ based access control on workloads | [AuthPolicy 
CR](https://github.com/Kuadrant/kuadrant-operator/blob/main/examples/toystore/authpolicy.yaml) | -| RateLimitPolicy CRD [\[doc\]](doc/overviews/rate-limiting.md) [[reference]](doc/reference/ratelimitpolicy.md) | Enable access control on workloads based on HTTP rate limiting | [RateLimitPolicy CR](https://raw.githubusercontent.com/Kuadrant/kuadrant-operator/main/examples/toystore/ratelimitpolicy_httproute.yaml) | -| DNSPolicy CRD [\[doc\]](doc/overviews/dns.md) [[reference]](doc/reference/dnspolicy.md) | Enable DNS management | [DNSPolicy CR](config/samples/kuadrant_v1_dnspolicy.yaml) | -| TLSPolicy CRD [\[doc\]](doc/overviews/tls.md) [[reference]](doc/reference/tlspolicy.md) | Enable TLS management | [TLSPolicy CR](config/samples/kuadrant_v1_tlspolicy.yaml) | - -Additionally, Kuadrant provides the following CRDs - -| CRD | Owner | Description | Example | -| ----------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------- | ----------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- | -| [Kuadrant CRD](https://github.com/Kuadrant/kuadrant-operator/blob/main/api/v1beta1/kuadrant_types.go) | [Kuadrant Operator](https://github.com/Kuadrant/kuadrant-operator) | Represents an instance of kuadrant | [Kuadrant CR](https://github.com/Kuadrant/kuadrant-operator/blob/main/config/samples/kuadrant_v1beta1_kuadrant.yaml) | -| [Limitador CRD](https://github.com/Kuadrant/limitador-operator/blob/main/api/v1alpha1/limitador_types.go) | [Limitador Operator](https://github.com/Kuadrant/limitador-operator) | Represents an instance of Limitador | [Limitador CR](https://github.com/Kuadrant/limitador-operator/blob/main/config/samples/limitador_v1alpha1_limitador.yaml) | -| [Authorino 
CRD](https://docs.kuadrant.io/latest/authorino-operator/#the-authorino-custom-resource-definition-crd) | [Authorino Operator](https://github.com/Kuadrant/authorino-operator) | Represents an instance of Authorino | [Authorino CR](https://github.com/Kuadrant/authorino-operator/blob/main/config/samples/authorino-operator_v1beta1_authorino.yaml) | - -Kuadrant Architecture - -## Getting started - -### Pre-requisites - -- Istio or Envoy Gateway is installed in the cluster. Otherwise, refer to the - [Istio getting started guide](https://istio.io/latest/docs/setup/getting-started/) - or [EnvoyGateway getting started guide](https://gateway.envoyproxy.io/docs/). -- Kubernetes Gateway API is installed in the cluster. -- cert-manager is installed in the cluster. Otherwise, refer to the - [cert-manager installation guide](https://cert-manager.io/docs/installation/). - -### Installing Kuadrant - -Installing Kuadrant is a two-step procedure. Firstly, install the Kuadrant Operator and secondly, -request a Kuadrant instance by creating a _Kuadrant_ custom resource. - -#### 1. Install the Kuadrant Operator - -The Kuadrant Operator is available in public community operator catalogs, such as the Kubernetes [OperatorHub.io](https://operatorhub.io/operator/kuadrant-operator) and the [Openshift Container Platform and OKD OperatorHub](https://redhat-openshift-ecosystem.github.io/community-operators-prod). - -**Kubernetes** - -The operator is available from [OperatorHub.io](https://operatorhub.io/operator/kuadrant-operator). -Just go to the linked page and follow installation steps (or just run these two commands): - -```sh -# Install Operator Lifecycle Manager (OLM), a tool to help manage the operators running on your cluster. 
- -curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.23.1/install.sh | bash -s v0.23.1 - -# Install the operator by running the following command: - -kubectl create -f https://operatorhub.io/install/kuadrant-operator.yaml -``` - -**Openshift** - -The operator is available from the [Openshift Console OperatorHub](https://docs.openshift.com/container-platform/4.11/operators/user/olm-installing-operators-in-namespace.html#olm-installing-from-operatorhub-using-web-console_olm-installing-operators-in-namespace). -Just follow installation steps choosing the "Kuadrant Operator" from the catalog: - -![Kuadrant Operator in OperatorHub](https://content.cloud.redhat.com/hs-fs/hubfs/ogFyppY.png?width=449&height=380&name=ogFyppY.png) - -#### 2. Request a Kuadrant instance - -Create the namespace: - -```sh -kubectl create namespace kuadrant -``` - -Apply the `Kuadrant` custom resource: - -```sh -kubectl -n kuadrant apply -f - < Note: for multiple clusters, it would make sense to do the installation via a tool like [argocd](https://argo-cd.readthedocs.io/en/stable/). For other methods of addressing multiple clusters take a look at the [kubectl docs](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) - -> Note: this document focuses on AWS integration for DNS. If you want to use a different provider, there are examples under the [configure directory](https://github.com/Kuadrant/kuadrant-operator/tree/main/config/install/configure) - -## Basic Installation - -This first step will install just Kuadrant at a given released version (post v1.x) in the `kuadrant-system` namespace and the Sail Operator. There will be no credentials/dns providers configured (This is the most basic setup but means TLSPolicy and DNSPolicy will not be able to be used). - -Start by creating the following `kustomization.yaml` in a directory locally. For the purpose of this doc, we will use: `~/kuadrant/` directory. 
- -```bash -export KUADRANT_DIR=~/kuadrant -mkdir -p $KUADRANT_DIR/install -touch $KUADRANT_DIR/install/kustomization.yaml - -``` - -> Setting the version to install: You can set the version of Kuadrant to install by adding or changing the `?ref=v1.0.1` suffix in the resource links. - -```yaml -# add this to the kustomization.yaml -apiVersion: kustomize.config.k8s.io/v1beta1 -kind: Kustomization -resources: - - https://github.com/Kuadrant/kuadrant-operator//config/install/standard?ref=v1.0.1 #set the version by adding ?ref=v1.0.1; change this version as needed (see https://github.com/Kuadrant/kuadrant-operator/releases) - #- https://github.com/Kuadrant/kuadrant-operator//config/install/openshift?ref=v1.0.1 #use if targeting an OCP cluster. Change this version as needed (see https://github.com/Kuadrant/kuadrant-operator/releases). - -patches: # remove this subscription patch if you are installing a development version. It will then use the "preview" channel - - patch: |- - apiVersion: operators.coreos.com/v1alpha1 - kind: Subscription - metadata: - name: kuadrant - spec: - source: kuadrant-operator-catalog - sourceNamespace: kuadrant-system - name: kuadrant-operator - channel: 'stable' #set to preview if not using a release (for example if using main) - -``` - -And execute the following to apply it to a cluster: - -```bash -# change the location depending on where you created the kustomization.yaml -kubectl apply -k $KUADRANT_DIR/install - -``` - -#### Verify the operators are installed: - -OLM should begin installing the dependencies for Kuadrant. To wait for them to be ready, run: - -```bash -kubectl -n kuadrant-system wait --timeout=160s --for=condition=Available deployments --all -``` - -> Note: you may see ` no matching resources found ` if the deployments are not yet present. - -OLM can take several minutes to finish installing the operators. 
You should see the following in the kuadrant-system namespace: - -```bash -kubectl get deployments -n kuadrant-system - -## Output (kuadrant-console-plugin deployment only installed on OpenShift) -# NAME READY UP-TO-DATE AVAILABLE AGE -# authorino-operator 1/1 1 1 83m -# dns-operator-controller-manager 1/1 1 1 83m -# kuadrant-console-plugin 1/1 1 1 83m -# kuadrant-operator-controller-manager 1/1 1 1 83m -# limitador-operator-controller-manager 1/1 1 1 83m - -``` - -You can also view the subscription for information about the install: - -```bash -kubectl get subscription -n kuadrant-system -o=yaml - -``` - -### Install the operand components - -Kuadrant has 2 additional operand components that it manages: `Authorino` that provides data plane auth integration and `Limitador` that provides data plane rate limiting. To set these up lets add a new `kustomization.yaml` in a new sub directory. We will re-use this later for further configuration. We do this as a separate step as we want to have the operators installed first. - -Add the following to your local directory. For the purpose of this doc, we will use: `$KUADRANT_DIR/configure/kustomization.yaml`. - -```bash -mkdir -p $KUADRANT_DIR/configure -touch $KUADRANT_DIR/configure/kustomization.yaml - -``` - -Add the following to the new kustomization.yaml: - - -```yaml -apiVersion: kustomize.config.k8s.io/v1beta1 -kind: Kustomization -resources: - - https://github.com/Kuadrant/kuadrant-operator//config/install/configure/standard?ref=v1.0.1 #change this version as needed (see https://github.com/Kuadrant/kuadrant-operator/releases) - -``` - -Lets apply this to your cluster: - -```bash - -kubectl apply -k $KUADRANT_DIR/configure - -``` - -### Verify kuadrant is installed and ready: - -```bash -kubectl get kuadrant kuadrant -n kuadrant-system -o=wide - -# NAME STATUS AGE -# kuadrant Ready 109s - -``` - -You should see the condition with type `Ready` with a message of `kuadrant is ready`. 
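For reference, that condition appears in the Kuadrant CR status roughly as follows (a sketch only; the fields follow the standard Kubernetes `metav1.Condition` shape, and the `reason` value shown is illustrative):

```yaml
# Illustrative fragment of the Kuadrant CR status once reconciliation
# has completed. The Ready condition backs the STATUS column above.
status:
  conditions:
    - type: Ready
      status: "True"
      reason: Ready              # illustrative; the operator sets its own reason
      message: kuadrant is ready
```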
- - -### Verify Istio is configured and ready: - -```bash -kubectl get istio -n gateway-system - -#sample output -# NAME      REVISIONS   READY   IN USE   ACTIVE REVISION   VERSION   AGE -# default   1           1       1        Healthy           v1.23.0   3d22h -``` - - - -At this point Kuadrant is installed and ready to be used, with Istio as the gateway provider. This means AuthPolicy and RateLimitPolicy can now be configured and used to protect any Gateways you create. - - -## Configure DNS and TLS integration - -In this section we will build on the previous steps and expand the `kustomization.yaml` we created in `$KUADRANT_DIR/configure`. - -In order for cert-manager and the Kuadrant DNS operator to be able to access and manage DNS records, set up TLS certificates, and provide external connectivity for your endpoints, you need to set up credentials for these components. To do this, we will use a Kubernetes secret via a kustomize secret generator. You can find other example overlays for each supported cloud provider under the [configure directory](https://github.com/Kuadrant/kuadrant-operator/tree/main/config/install/configure). - -An example Let's Encrypt certificate issuer is provided, but for more information on certificate issuers take a look at the [cert-manager documentation](https://cert-manager.io/docs/configuration/acme/). - - -Let's modify our existing local kustomize overlay to set up these secrets and the cluster certificate issuer: - -First you will need to set up the required `.env` file specified in the kustomization.yaml file in the same directory as your existing configure kustomization. Below is an example for AWS: - -```bash -touch $KUADRANT_DIR/configure/aws-credentials.env - -``` -Add the following to your new file: - -``` -AWS_ACCESS_KEY_ID=xxx -AWS_SECRET_ACCESS_KEY=xxx -AWS_REGION=eu-west-1 - -``` - -With this in place, let's update our configure kustomization to generate the needed secrets. We will also define a TLS ClusterIssuer (see below). 
The full `kustomization.yaml` file should look like: - -```yaml -apiVersion: kustomize.config.k8s.io/v1beta1 -kind: Kustomization -resources: - - https://github.com/Kuadrant/kuadrant-operator//config/install/configure/standard?ref=v1.0.1 #change this version as needed (see https://github.com/Kuadrant/kuadrant-operator/releases) - - cluster-issuer.yaml #(comment if you dont want to use it. The issuer yaml is defined below). Ensure you name the file correctly. - - -generatorOptions: - disableNameSuffixHash: true - labels: - app.kubernetes.io/part-of: kuadrant - app.kubernetes.io/managed-by: kustomize - -secretGenerator: - - name: aws-provider-credentials - namespace: cert-manager # assumes cert-manager namespace exists. - envs: - - aws-credentials.env # notice this matches the .env file above. You will need to setup this file locally - type: 'kuadrant.io/aws' - - name: aws-provider-credentials - namespace: gateway-system # this is the namespace where your gateway will be provisioned - envs: - - aws-credentials.env #notice this matches the .env file above. you need to set up this file locally first. - type: 'kuadrant.io/aws' - - -``` - -Below is an example Lets-Encrypt Cluster Issuer that uses the aws credential we setup above. Create this in the same directory as the configure kustomization.yaml: - -```bash -touch $KUADRANT_DIR/configure/cluster-issuer.yaml -``` - -Add the following to this new file: - -```yaml -# example lets-encrypt cluster issuer that will work with the credentials we will add -apiVersion: cert-manager.io/v1 -kind: ClusterIssuer -metadata: - name: lets-encrypt-aws -spec: - acme: - privateKeySecretRef: - name: le-secret - server: https://acme-v02.api.letsencrypt.org/directory - solvers: - - dns01: - route53: - accessKeyIDSecretRef: - key: AWS_ACCESS_KEY_ID - name: aws-provider-credentials #notice this matches the name of the secret we created. 
- region: us-east-1 #override if needed - secretAccessKeySecretRef: - key: AWS_SECRET_ACCESS_KEY - name: aws-provider-credentials - -``` - -To apply our changes (note this doesn't need to be done in different steps, but is done so here to illustrate how you can build up your configuration of Kuadrant), execute: - -```bash -kubectl apply -k $KUADRANT_DIR/configure -``` - -The cluster issuer should become ready: - -```bash -kubectl get clusterissuer -o=wide - -# NAME               READY   STATUS                                                 AGE -# lets-encrypt-aws   True    The ACME account was registered with the ACME server   14s - -``` - -We create two credentials: one for use with `DNSPolicy` in the `gateway-system` namespace, and one for use by cert-manager in the `cert-manager` namespace. With these credentials in place and the cluster issuer configured, you are now ready to start using DNSPolicy and TLSPolicy to secure and connect your Gateways. - - -## Use an External Redis - -To connect `Limitador` (the component responsible for rate limiting requests) to Redis so that its counters are stored and can be shared with other Limitador instances, follow these steps: - -Again we will build on the kustomization we started. In the same way we did for the cloud provider credentials, we need to set up a `redis-credentials.env` file in the same directory as the kustomization. - - -```bash -touch $KUADRANT_DIR/configure/redis-credentials.env - -``` - -Add the redis connection string to this file in the following format: - -``` -URL=redis://xxxx -``` - -Next we need to add a new secret generator to our existing configure file at `$KUADRANT_DIR/configure/kustomization.yaml`, adding it below the other entries under `secretGenerator`: - -```yaml - - name: redis-credentials - namespace: kuadrant-system - envs: - - redis-credentials.env - type: 'kuadrant.io/redis' -``` - -We also need to patch the existing `Limitador` resource. 
Add the following to the `$KUADRANT_DIR/configure/kustomization.yaml` - - -```yaml - -patches: - - patch: |- - apiVersion: limitador.kuadrant.io/v1alpha1 - kind: Limitador - metadata: - name: limitador - namespace: kuadrant-system - spec: - storage: - redis: - configSecretRef: - name: redis-credentials - -``` - -Your full `kustomize.yaml` will now be: - -```yaml -apiVersion: kustomize.config.k8s.io/v1beta1 -kind: Kustomization -resources: - - https://github.com/Kuadrant/kuadrant-operator//config/install/configure/standard?ref=v1.0.1 #change this version as needed (see https://github.com/Kuadrant/kuadrant-operator/releases) - - cluster-issuer.yaml #(comment if you dont want to use it. The issuer yaml is defined below). Ensure you name the file correctly. - - -generatorOptions: - disableNameSuffixHash: true - labels: - app.kubernetes.io/part-of: kuadrant - app.kubernetes.io/managed-by: kustomize - -secretGenerator: - - name: aws-provider-credentials - namespace: cert-manager # assumes cert-manager namespace exists. - envs: - - aws-credentials.env # notice this matches the .env file above. You will need to setup this file locally - type: 'kuadrant.io/aws' - - name: aws-provider-credentials - namespace: gateway-system # this is the namespace where your gateway will be provisioned - envs: - - aws-credentials.env #notice this matches the .env file above. you need to set up this file locally first. 
- type: 'kuadrant.io/aws' - - name: redis-credentials - namespace: kuadrant-system - envs: - - redis-credentials.env - type: 'kuadrant.io/redis' - -patches: - - patch: |- - apiVersion: limitador.kuadrant.io/v1alpha1 - kind: Limitador - metadata: - name: limitador - namespace: kuadrant-system - spec: - storage: - redis: - configSecretRef: - name: redis-credentials - -``` - - -Re-Apply the configuration to setup the new secret and configuration: - -```bash -kubectl apply -k $KUADRANT_DIR/configure/ -``` - -Limitador is now configured to use the provided redis connection URL as a data store for rate limit counters. Limitador will become temporarily unavailable as it restarts. - -### Validate - -Validate Kuadrant is in a ready state as before: - -```bash -kubectl get kuadrant kuadrant -n kuadrant-system -o=wide - -# NAME STATUS AGE -# kuadrant Ready 61m - -``` - - -## Resilient Deployment of data plane components - -### Limitador: TopologyConstraints, PodDisruptionBudget and Resource Limits - -To set limits, replicas and a `PodDisruptionBudget` for limitador you can add the following to the existing limitador patch in your local `limitador` in the `$KUADRANT_DIR/configure/kustomize.yaml` spec: - -```yaml -pdb: - maxUnavailable: 1 -replicas: 2 -resourceRequirements: - requests: - cpu: 10m - memory: 10Mi # set these based on your own needs. -``` - -re-apply the configuration. 
This will result in two instances of limitador becoming available and a `podDisruptionBudget` being setup: - -```bash -kubectl apply -k $KUADRANT_DIR/configure/ - -``` - -For topology constraints, you will need to patch the limitador deployment directly: - -add the below `yaml` to a `limitador-topoloy-patch.yaml` file under a `$KUADRANT_DIR/configure/patches` directory: - -```bash -mkdir -p $KUADRANT_DIR/configure/patches -touch $KUADRANT_DIR/configure/patches/limitador-topoloy-patch.yaml -``` - -```yaml -spec: - template: - spec: - topologySpreadConstraints: - - maxSkew: 1 - topologyKey: kubernetes.io/hostname - whenUnsatisfiable: ScheduleAnyway - labelSelector: - matchLabels: - limitador-resource: limitador - - maxSkew: 1 - topologyKey: kubernetes.io/zone - whenUnsatisfiable: ScheduleAnyway - labelSelector: - matchLabels: - limitador-resource: limitador - -``` - -Apply this to the existing limitador deployment - -```bash -kubectl patch deployment limitador-limitador -n kuadrant-system --patch-file $KUADRANT_DIR/configure/patches/limitador-topoloy-patch.yaml -``` - -### Authorino: TopologyConstraints, PodDisruptionBudget and Resource Limits - -To increase the number of replicas for Authorino add a new patch to the `$KUADRANT_DIR/configure/kustomization.yaml` - -```yaml - - patch: |- - apiVersion: operator.authorino.kuadrant.io/v1beta1 - kind: Authorino - metadata: - name: authorino - namespace: kuadrant-system - spec: - replicas: 2 - -``` - -and re-apply the configuration: - -```bash -kubectl apply -k $KUADRANT_DIR/configure/ -``` - -To add resource limits and or topology constraints to Authorino. 
You will need to patch the Authorino deployment directly: -Add the below `yaml` to a `authorino-topoloy-patch.yaml` under the `$KUADRANT_DIR/configure/patches` directory: - -```bash -touch $KUADRANT_DIR/configure/patches/authorino-topoloy-patch.yaml -``` - -```yaml -spec: - template: - spec: - containers: - - name: authorino - resources: - requests: - cpu: 10m # set your own needed limits here - memory: 10Mi # set your own needed limits here - topologySpreadConstraints: - - maxSkew: 1 - topologyKey: kubernetes.io/hostname - whenUnsatisfiable: ScheduleAnyway - labelSelector: - matchLabels: - authorino-resource: authorino - - maxSkew: 1 - topologyKey: kubernetes.io/zone - whenUnsatisfiable: ScheduleAnyway - labelSelector: - matchLabels: - authorino-resource: authorino - -``` - -Apply the patch: - -```bash -kubectl patch deployment authorino -n kuadrant-system --patch-file $KUADRANT_DIR/configure/patches/authorino-topoloy-patch.yaml -``` - -Kuadrant is now installed and ready to use and the data plane components are configured to be distributed and resilient. - -For reference the full configure kustomization should look like: -```yaml -kind: Kustomization -resources: - - https://github.com/Kuadrant/kuadrant-operator//config/install/configure/standard?ref=v1.0.1 #change this version as needed (see https://github.com/Kuadrant/kuadrant-operator/releases) - - cluster-issuer.yaml -generatorOptions: - disableNameSuffixHash: true - labels: - app.kubernetes.io/part-of: kuadrant - app.kubernetes.io/managed-by: kustomize - -secretGenerator: - - name: aws-provider-credentials - namespace: cert-manager # assumes cert-manager namespace exists. - envs: - - aws-credentials.env # notice this matches the .env file above. You will need to setup this file locally - type: 'kuadrant.io/aws' - - name: aws-provider-credentials - namespace: gateway-system # this is the namespace where your gateway will be provisioned - envs: - - aws-credentials.env #notice this matches the .env file above. 
you need to set up this file locally first. - type: 'kuadrant.io/aws' - - name: redis-credentials - namespace: kuadrant-system - envs: - - redis-credentials.env - type: 'kuadrant.io/redis' - -patches: - - patch: |- - apiVersion: limitador.kuadrant.io/v1alpha1 - kind: Limitador - metadata: - name: limitador - namespace: kuadrant-system - spec: - pdb: - maxUnavailable: 1 - replicas: 2 - resourceRequirements: - requests: - cpu: 10m - memory: 10Mi # set these based on your own needs. - storage: - redis: - configSecretRef: - name: redis-credentials - - patch: |- - apiVersion: operator.authorino.kuadrant.io/v1beta1 - kind: Authorino - metadata: - name: authorino - namespace: kuadrant-system - spec: - replicas: 2 - -``` -The configure directory should contain the following: - -``` -configure/ -├── aws-credentials.env -├── cluster-issuer.yaml -├── kustomization.yaml -├── patches -│   ├── authorino-topoloy-patch.yaml -│   └── limitador-topoloy-patch.yaml -└── redis-credentials.env -``` - -## Set up observability (OpenShift Only) - -Verify that user workload monitoring is enabled in your Openshift cluster. -If it not enabled, check the [Openshift documentation](https://docs.openshift.com/container-platform/4.17/observability/monitoring/enabling-monitoring-for-user-defined-projects.html) for how to do this. - - -```bash -kubectl get configmap cluster-monitoring-config -n openshift-monitoring -o jsonpath='{.data.config\.yaml}'|grep enableUserWorkload -# (expected output) -# enableUserWorkload: true -``` - -Install the gateway & Kuadrant metrics components and configuration, including Grafana. - -```bash -# change the version as needed -kubectl apply -k https://github.com/Kuadrant/kuadrant-operator//config/install/configure/observability?ref=v1.0.1 -``` - -Configure the Openshift thanos-query instance as a data source in Grafana. 
- -```bash -TOKEN="Bearer $(oc whoami -t)" -HOST="$(kubectl -n openshift-monitoring get route thanos-querier -o jsonpath='https://{.status.ingress[].host}')" -echo "TOKEN=$TOKEN" > config/observability/openshift/grafana/datasource.env -echo "HOST=$HOST" >> config/observability/openshift/grafana/datasource.env -kubectl apply -k config/observability/openshift/grafana -``` - -Create the example dashboards in Grafana - -```bash -kubectl apply -k https://github.com/Kuadrant/kuadrant-operator//examples/dashboards?ref=v1.0.1 -``` - -Access the Grafana UI, using the default user/pass of root/secret. -You should see the example dashboards in the 'monitoring' folder. -For more information on the example dashboards, check out the [documentation](https://docs.kuadrant.io/latest/kuadrant-operator/doc/observability/examples/). - -```bash -kubectl -n monitoring get routes grafana-route -o jsonpath="https://{.status.ingress[].host}" -``` - - -### Next Steps - -- Try out one of our user-guides [secure, connect protect](https://docs.kuadrant.io/latest/kuadrant-operator/doc/user-guides/full-walkthrough/secure-protect-connect-k8s/#overview) +See https://docs.kuadrant.io/dev/install-olm/ for how to use these files for installation of Kuadrant. \ No newline at end of file diff --git a/doc/install/install-openshift.md b/doc/install/install-openshift.md deleted file mode 100644 index c702f4da8..000000000 --- a/doc/install/install-openshift.md +++ /dev/null @@ -1,428 +0,0 @@ -# Install Kuadrant on an OpenShift cluster - -!!! note - - You must perform these steps on each OpenShift cluster that you want to use Kuadrant on. - - In this document we use AWS route 53 as the example setup. - -!!! warning - - Kuadrant uses a number of labels to search and filter resources on the cluster. - All required labels are formatted as `kuadrant.io/*`. - Removal of any labels with the prefix may cause unexpected behaviour and degradation of the product. 
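For illustration, such labels appear on managed resources in this form (the resource and label key below are hypothetical; only the `kuadrant.io/` prefix is significant):

```yaml
# Hypothetical example of a Kuadrant-managed Secret. Labels under the
# kuadrant.io/ prefix are used by the operators to find and filter
# resources — leave them in place.
apiVersion: v1
kind: Secret
metadata:
  name: example-credentials
  namespace: kuadrant-system
  labels:
    kuadrant.io/managed: "true"   # hypothetical label key
```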
- -## Prerequisites - -- OpenShift Container Platform 4.16.x or later with the community Operator catalog available. -- AWS, Azure, or GCP with DNS capabilities. -- An accessible Redis instance. - -## Procedure - -### Step 1 - Set up your environment - -We use env vars here for convenience only. If you know these values, you can set up the required yaml files in any way that suits your needs. - -```bash -export AWS_ACCESS_KEY_ID=xxxxxxx # Key ID from AWS with Route 53 access -export AWS_SECRET_ACCESS_KEY=xxxxxxx # Access key from AWS with Route 53 access -export REDIS_URL=redis://user:xxxxxx@some-redis.com:10340 # A Redis cluster URL -``` - -Set the version of Kuadrant to the latest released version: https://github.com/Kuadrant/kuadrant-operator/releases/ - -``` -export KUADRANT_VERSION='vX.Y.Z' -``` - -### Step 2 - Install Gateway API v1 - -Before you can use Kuadrant, you must install Gateway API v1 as follows: - -```bash -kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/standard-install.yaml -``` - -### Step 3 - Install cert-manager - -Before you can use Kuadrant, you must install cert-manager. cert-manager is used by Kuadrant to manage TLS certificates for your gateways. - -> The minimum supported version of cert-manager is v1.14.0. - -Install one of the available flavours of cert-manager. - -#### Install the community version of cert-manager - -Consider [installing cert-manager via OperatorHub](https://cert-manager.io/docs/installation/operator-lifecycle-manager/), -which you can do from the OpenShift web console. - -More installation options are available at [cert-manager.io](https://cert-manager.io/docs/installation/). - -#### Install cert-manager Operator for Red Hat OpenShift - -You can install the [cert-manager Operator for Red Hat OpenShift](https://docs.openshift.com/container-platform/4.16/security/cert_manager_operator/cert-manager-operator-install.html) -by using the web console. 
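Before `TLSPolicy` can be used, cert-manager also needs a certificate issuer. A minimal sketch of a Let's Encrypt `ClusterIssuer` using a Route 53 DNS-01 solver (the secret name, key names, and region are placeholders you must adapt; the field layout follows cert-manager's ACME DNS-01 configuration):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: lets-encrypt-aws
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: le-secret                      # where the ACME account key is stored
    solvers:
      - dns01:
          route53:
            region: us-east-1              # placeholder region
            accessKeyIDSecretRef:
              name: aws-provider-credentials   # placeholder secret name
              key: AWS_ACCESS_KEY_ID
            secretAccessKeySecretRef:
              name: aws-provider-credentials
              key: AWS_SECRET_ACCESS_KEY
```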
-
-> **Note:** Before using Kuadrant's `TLSPolicy` you will need to set up a certificate issuer. Refer to the [cert-manager docs for more details](https://cert-manager.io/docs/configuration/acme/dns01/route53/#creating-an-issuer-or-clusterissuer).
-
-### Step 4 - (Optional) Install and configure Istio with the Sail Operator
-
-!!! note

-    Skip this step if you plan to use [Envoy Gateway](https://gateway.envoyproxy.io/) as your Gateway API provider.
-
-Kuadrant integrates with Istio as a Gateway API provider. You can set up an Istio-based Gateway API provider by using the Sail Operator.
-
-#### Install Istio
-
-To install the Istio Gateway provider, run the following commands:
-
-```bash
-kubectl create ns gateway-system
-```
-
-```bash
-kubectl apply -f - < ${TMP}/envoy-gateway.yaml
-yq e '.extensionApis.enableEnvoyPatchPolicy = true' -i ${TMP}/envoy-gateway.yaml
-kubectl create configmap -n envoy-gateway-system envoy-gateway-config --from-file=envoy-gateway.yaml=${TMP}/envoy-gateway.yaml -o yaml --dry-run=client | kubectl replace -f -
-kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system
-```
-
-Wait for Envoy Gateway to become available:
-
-```bash
-kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available
-```
-
-### Step 6 - Optional: Configure observability and metrics (Istio only)
-
-Kuadrant provides a set of example dashboards that use known metrics exported by Kuadrant and Gateway components to provide insight into different components of your APIs and Gateways. While not essential, it is recommended to set these up.
-First, enable [monitoring for user-defined projects](https://docs.openshift.com/container-platform/4.17/observability/monitoring/enabling-monitoring-for-user-defined-projects.html#enabling-monitoring-for-user-defined-projects_enabling-monitoring-for-user-defined-projects).
-This will allow the scraping of metrics from the gateway and Kuadrant components.
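For reference, enabling user-defined project monitoring boils down to setting `enableUserWorkload: true` in the cluster monitoring ConfigMap, roughly as follows; see the linked OpenShift documentation for the authoritative procedure:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
```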
-The [example dashboards and alerts](https://docs.kuadrant.io/latest/kuadrant-operator/doc/observability/examples/) for observing Kuadrant functionality use low-level CPU metrics and network metrics available from the user monitoring stack in OpenShift. They also use resource state metrics from Gateway API and Kuadrant resources.
-
-To scrape these additional metrics, you can install a `kube-state-metrics` instance with a custom resource configuration as follows:
-
-```bash
-kubectl apply -f https://raw.githubusercontent.com/Kuadrant/kuadrant-operator/main/config/observability/openshift/kube-state-metrics.yaml
-kubectl apply -k https://github.com/Kuadrant/gateway-api-state-metrics/config/kuadrant?ref=0.7.0
-```
-
-To enable request metrics in Istio and scrape them, create the following resource:
-
-```bash
-kubectl apply -f https://raw.githubusercontent.com/Kuadrant/kuadrant-operator/refs/heads/main/config/observability/prometheus/monitors/istio/service-monitor-istiod.yaml
-```
-
-Some example dashboards show aggregations based on the path of requests.
-By default, Istio metrics don't include labels for request paths.
-However, you can enable them with the Telemetry resource below.
-Note that this may lead to [high-cardinality](https://www.robustperception.io/cardinality-is-key/) labels, where many time series are generated,
-which can impact performance and resource usage.
-
-```bash
-kubectl apply -f https://raw.githubusercontent.com/Kuadrant/kuadrant-operator/main/config/observability/openshift/telemetry.yaml
-```
-
-You can configure scraping of metrics from the various Kuadrant operators with the following resources:
-
-```bash
-kubectl create ns kuadrant-system
-kubectl apply -f https://raw.githubusercontent.com/Kuadrant/kuadrant-operator/refs/heads/main/config/observability/prometheus/monitors/operators.yaml
-```
-
-!!! note
-
-    One more metrics configuration needs to be applied so that all relevant metrics are scraped.
-    That configuration depends on where you deploy your Gateway later.
-    The steps to configure it are detailed in the follow-on [Secure, protect, and connect](../user-guides/full-walkthrough/secure-protect-connect.md) guide.
-
-For Grafana installation details, see [installing Grafana on OpenShift](https://cloud.redhat.com/experts/o11y/ocp-grafana/). That guide also explains how to set up a data source for the Thanos Query instance in OpenShift. For more detailed information about accessing the Thanos Query endpoint, see the [OpenShift documentation](https://docs.openshift.com/container-platform/4.17/observability/monitoring/accessing-third-party-monitoring-apis.html#accessing-metrics-from-outside-cluster_accessing-monitoring-apis-by-using-the-cli).
-
-!!! note
-
-    For some dashboard panels to work correctly, HTTPRoutes must include "service" and "deployment" labels with values that match the names of the service and deployment being routed to, e.g. "service=myapp, deployment=myapp".
-    This allows low-level Istio and Envoy metrics to be joined with Gateway API state metrics.
-
-### Step 7 - Set up the CatalogSource
-
-Before installing the Kuadrant Operator, you must enter the following commands to set up secrets that you will use later.
-If you haven't already created the `kuadrant-system` namespace during the optional observability setup, do that first:
-
-```bash
-kubectl create ns kuadrant-system
-```
-
-Set up a `CatalogSource` as follows:
-
-```bash
-kubectl apply -f - < **Overview**.
-4. In the **Dynamic Plugins** section of the status box, click **View all**.
-5. 
In the **Console plugins** area, find the `kuadrant-console-plugin` plugin. It should be listed but disabled. -6. Click the **Disabled** button next to the `kuadrant-console-plugin` plugin. -7. Select the **Enabled** radio button, and then click **Save**. -8. Wait for the plugin status to change to **Loaded**. - -Once the plugin is loaded, refresh the console. You should see a new **Kuadrant** section in the navigation sidebar. - -## Next steps - -- [Secure, protect, and connect APIs with Kuadrant on OpenShift](../user-guides/full-walkthrough/secure-protect-connect.md)
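For completeness, enabling the plugin through the web console as described above amounts to adding its name to the Console operator configuration, conceptually like this (field layout per the `operator.openshift.io/v1` Console API; shown for understanding only — the UI steps are equivalent):

```yaml
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  plugins:
    - kuadrant-console-plugin
```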