
Commit

add aurora in the tests
leiicamundi committed Sep 12, 2024
1 parent 921c4e7 commit 8746379
Showing 7 changed files with 220 additions and 13 deletions.
76 changes: 76 additions & 0 deletions .github/actions/aurora-manage-cluster/README.md
@@ -0,0 +1,76 @@
# Deploy RDS Aurora Cluster GitHub Action

This GitHub Action automates the deployment of an Amazon RDS Aurora cluster using Terraform. It installs Terraform and the AWS CLI, and outputs the Aurora cluster endpoint along with other relevant details.

## Description

The **Deploy RDS Aurora Cluster** action enables you to:

- Automate the deployment of an RDS Aurora cluster on AWS.
- Use Terraform for infrastructure as code.
- Install specific versions of Terraform and AWS CLI.
- Output the Aurora cluster endpoint, Terraform state URL, and all other Terraform outputs dynamically.

## Inputs

The action accepts the following inputs:

| Input | Description | Required | Default |
|-------|-------------|----------|---------|
| `aws-region` | AWS region where the RDS Aurora cluster will be deployed. | Yes | - |
| `cluster-name` | Name of the RDS Aurora cluster to deploy. | Yes | - |
| `engine-version` | Version of the Aurora engine to use. | Yes | see `action.yml` |
| `instance-class` | Instance class for the Aurora cluster. | Yes | `db.t3.medium` |
| `num-instances` | Number of instances in the Aurora cluster. | Yes | `1` |
| `username` | Username for the PostgreSQL admin user. | Yes | - |
| `password` | Password for the PostgreSQL admin user. | Yes | - |
| `vpc-id` | VPC ID to create the cluster in. | No | - |
| `subnet-ids` | List of subnet IDs to create the cluster in. | No | - |
| `cidr-blocks` | CIDR blocks to allow access from and to. | No | - |
| `tags` | Tags to add to the resources. | No | `{}` |
| `s3-backend-bucket` | Name of the S3 bucket to store Terraform state. | Yes | - |
| `s3-bucket-region` | Region of the bucket containing the resources states. Falls back to `aws-region` if not set. | No | - |
| `tf-modules-revision` | Git revision of the Terraform modules to use. | Yes | `main` |
| `tf-modules-path` | Path where the Terraform Aurora modules will be cloned. | Yes | `./.action-tf-modules/aurora/` |
| `tf-cli-config-credentials-hostname` | The hostname of an HCP Terraform/Terraform Enterprise instance for the CLI configuration file. | No | `app.terraform.io` |
| `tf-cli-config-credentials-token` | The API token for a HCP Terraform/Terraform Enterprise instance. | No | - |
| `tf-terraform-version` | The version of Terraform CLI to install. | No | `latest` |
| `tf-terraform-wrapper` | Whether to install a wrapper for the Terraform binary. | No | `true` |
| `awscli-version` | Version of the AWS CLI to use. | Yes | see `action.yml` |

## Outputs

The action provides the following outputs:

| Output | Description |
|--------|-------------|
| `aurora-endpoint` | The endpoint of the deployed Aurora cluster. |
| `terraform-state-url` | URL of the Terraform state file in the S3 bucket. |
| `all-terraform-outputs` | All outputs from Terraform. |

## Usage

To use this GitHub Action, include it in your workflow file:

```yaml
jobs:
  deploy_aurora:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy Aurora Cluster
        uses: camunda/camunda-tf-eks-module/aurora-manage-cluster@main
        with:
          aws-region: 'us-west-2'
          cluster-name: 'my-aurora-cluster'
          engine-version: '15.4'
          instance-class: 'db.t3.medium'
          num-instances: '2'
          username: 'admin'
          password: ${{ secrets.DB_PASSWORD }}
          vpc-id: 'vpc-12345678'
          subnet-ids: 'subnet-12345,subnet-67890'
          cidr-blocks: '10.0.0.0/16'
          tags: '{"env": "prod", "team": "devops"}'
          s3-backend-bucket: 'my-terraform-state-bucket'
          s3-bucket-region: 'us-west-2'
```
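
The action's outputs can be consumed by later steps once the deploy step is given an `id`. A minimal sketch (the step id `aurora` is illustrative and not part of the example above):

```yaml
      - name: Use Aurora outputs
        run: |
          echo "Aurora endpoint: ${{ steps.aurora.outputs.aurora-endpoint }}"
          echo "Terraform state: ${{ steps.aurora.outputs.terraform-state-url }}"
```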
3 changes: 0 additions & 3 deletions .github/actions/aurora-manage-cluster/action.yml
@@ -32,13 +32,10 @@ inputs:
     required: true
   vpc-id:
     description: 'VPC ID to create the cluster in'
-    required: true
   subnet-ids:
     description: 'List of subnet IDs to create the cluster in'
-    required: true
   cidr-blocks:
     description: 'CIDR blocks to allow access from and to'
-    required: true
   tags:
     description: 'Tags to add to the resources'
     default: '{}'
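With `vpc-id`, `subnet-ids`, and `cidr-blocks` now optional, a minimal invocation can omit the networking inputs altogether, mirroring what the test workflow below passes; whether the underlying module then creates or looks up suitable networking is an assumption to verify against the module's variables:

```yaml
      - name: Deploy Aurora Cluster (minimal)
        uses: camunda/camunda-tf-eks-module/aurora-manage-cluster@main
        with:
          aws-region: 'eu-west-2'
          cluster-name: 'my-aurora-cluster'
          username: 'secret_user'
          password: ${{ secrets.DB_PASSWORD }}
          s3-backend-bucket: 'my-terraform-state-bucket'
```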
1 change: 0 additions & 1 deletion .github/actions/eks-manage-cluster/README.md
@@ -72,6 +72,5 @@ jobs:
 | Output Name | Description |
 |----------------------------|------------------------------------------------------------------|
 | `eks-cluster-endpoint` | The API endpoint of the deployed EKS cluster. |
-| `eks-cluster-id` | The ID of the deployed EKS cluster. |
 | `terraform-state-url` | URL of the Terraform state file in the S3 bucket. |
 | `all-terraform-outputs` | All outputs from Terraform. |
6 changes: 0 additions & 6 deletions .github/actions/eks-manage-cluster/action.yml
@@ -97,10 +97,6 @@ outputs:
     description: 'The API endpoint of the deployed EKS cluster'
     value: ${{ steps.apply.outputs.cluster_endpoint }}

-  eks-cluster-id:
-    description: 'The ID of the deployed EKS cluster'
-    value: ${{ steps.apply.outputs.cluster_id }}
-
   terraform-state-url:
     description: 'URL of the Terraform state file in the S3 bucket'
     value: ${{ steps.utility.outputs.terraform-state-url }}
@@ -181,8 +177,6 @@ runs:
         terraform apply -no-color eks.plan
         export cluster_endpoint="$(terraform output -raw cluster_endpoint)"
         echo "cluster_endpoint=$cluster_endpoint" >> "$GITHUB_OUTPUT"
-        export cluster_id="$(terraform output -raw cluster_id)"
-        echo "cluster_id=$cluster_id" >> "$GITHUB_OUTPUT"
     - name: Configure kubectl
       shell: bash
139 changes: 139 additions & 0 deletions .github/workflows/test-gha-aurora-manage-cluster.yml
@@ -0,0 +1,139 @@
name: Aurora Cluster creation and destruction test

on:
  schedule:
    - cron: '0 2 * * 1' # At 02:00 on Monday.

  workflow_dispatch:
    inputs:
      cluster_name:
        description: "Aurora Cluster name."
        required: false
        type: string
      delete_cluster:
        description: "Whether to delete the Aurora cluster."
        required: false
        type: boolean
        default: true
      db_username:
        description: "Database username."
        required: false
        type: string
      db_password:
        description: "Database password."
        required: false
        type: string

  pull_request:
    paths:
      - modules/fixtures/backend.tf
      - modules/fixtures/fixtures.default.aurora.tfvars
      - modules/aurora/**.tf
      - .tool-versions
      - .github/workflows/test-gha-aurora-manage-cluster.yml
      - .github/actions/aurora-manage-cluster/*.yml
      - justfile

concurrency:
  group: "${{ github.workflow }}-${{ github.ref }}"
  cancel-in-progress: true

env:
  AWS_PROFILE: "infex"
  AWS_REGION: "eu-west-2"

  # please keep those synced with tests.yml
  TF_STATE_BUCKET: "tests-eks-tf-state-eu-central-1"
  TF_STATE_BUCKET_REGION: "eu-central-1"

jobs:
  action-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          ref: ${{ github.head_ref }}
          fetch-depth: 0

      - name: Install tooling using asdf
        uses: asdf-vm/actions/install@05e0d2ed97b598bfce82fd30daf324ae0c4570e6 # v3

      - name: Get Cluster Info
        id: commit_info
        run: |
          if [[ -n "${{ github.event.inputs.cluster_name }}" ]]; then
            cluster_name="${{ github.event.inputs.cluster_name }}"
          else
            cluster_name="aurora-$(git rev-parse --short HEAD)"
          fi
          if [[ -n "${{ github.event.inputs.db_username }}" ]]; then
            db_username="${{ github.event.inputs.db_username }}"
          else
            db_username="user$(openssl rand -hex 4)"
          fi
          if [[ -n "${{ github.event.inputs.db_password }}" ]]; then
            db_password="${{ github.event.inputs.db_password }}"
          else
            db_password="$(openssl rand -base64 12)"
          fi
          echo "cluster_name=$cluster_name" | tee -a "$GITHUB_OUTPUT"
          echo "db_username=$db_username" | tee -a "$GITHUB_OUTPUT"
          echo "db_password=$db_password" | tee -a "$GITHUB_OUTPUT"
          tf_modules_revision=$(git rev-parse HEAD)
          echo "tf_modules_revision=$tf_modules_revision" | tee -a "$GITHUB_OUTPUT"
      - name: Import Secrets
        id: secrets
        uses: hashicorp/vault-action@v3
        with:
          url: ${{ secrets.VAULT_ADDR }}
          method: approle
          roleId: ${{ secrets.VAULT_ROLE_ID }}
          secretId: ${{ secrets.VAULT_SECRET_ID }}
          exportEnv: false
          secrets: |
            secret/data/products/infrastructure-experience/ci/common AWS_ACCESS_KEY;
            secret/data/products/infrastructure-experience/ci/common AWS_SECRET_KEY;
      - name: Add profile credentials to ~/.aws/credentials
        run: |
          aws configure set aws_access_key_id ${{ steps.secrets.outputs.AWS_ACCESS_KEY }} --profile ${{ env.AWS_PROFILE }}
          aws configure set aws_secret_access_key ${{ steps.secrets.outputs.AWS_SECRET_KEY }} --profile ${{ env.AWS_PROFILE }}
          aws configure set region ${{ env.AWS_REGION }} --profile ${{ env.AWS_PROFILE }}
      - name: Create Aurora Cluster
        timeout-minutes: 125
        uses: ./.github/actions/aurora-manage-cluster
        id: create_cluster
        with:
          cluster-name: ${{ steps.commit_info.outputs.cluster_name }}
          username: ${{ steps.commit_info.outputs.db_username }}
          password: ${{ steps.commit_info.outputs.db_password }}
          aws-region: ${{ env.AWS_REGION }}
          s3-backend-bucket: ${{ env.TF_STATE_BUCKET }}
          s3-bucket-region: ${{ env.TF_STATE_BUCKET_REGION }}
          tf-modules-revision: ${{ steps.commit_info.outputs.tf_modules_revision }}

      - name: Delete Aurora Cluster
        timeout-minutes: 125
        if: always() && !(github.event_name == 'workflow_dispatch' && github.event.inputs.delete_cluster == 'false')
        uses: ./.github/actions/eks-cleanup-resources
        with:
          tf-bucket: ${{ env.TF_STATE_BUCKET }}
          tf-bucket-region: ${{ env.TF_STATE_BUCKET_REGION }}
          max-age-hours: 0
          target: ${{ steps.commit_info.outputs.cluster_name }}

      - name: Notify in Slack in case of failure
        id: slack-notification
        if: failure() && github.event_name == 'schedule'
        uses: camunda/infraex-common-config/.github/actions/report-failure-on-slack@main
        with:
          vault_addr: ${{ secrets.VAULT_ADDR }}
          vault_role_id: ${{ secrets.VAULT_ROLE_ID }}
          vault_secret_id: ${{ secrets.VAULT_SECRET_ID }}
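
A connectivity check between creation and deletion can make failures easier to diagnose. A hedged sketch of such an extra step (not part of the workflow above; it assumes `psql` is available on the runner, the default port 5432, the `postgres` database, and that the cluster endpoint is reachable from the runner):

```yaml
      - name: Check Aurora connectivity
        if: steps.create_cluster.outcome == 'success'
        env:
          PGPASSWORD: ${{ steps.commit_info.outputs.db_password }}
        run: |
          # psql exits non-zero if the endpoint does not accept connections
          psql "host=${{ steps.create_cluster.outputs.aurora-endpoint }} port=5432 dbname=postgres user=${{ steps.commit_info.outputs.db_username }} sslmode=require" -c 'SELECT version();'
```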
7 changes: 4 additions & 3 deletions .github/workflows/test-gha-eks-manage-cluster.yml
@@ -20,9 +20,9 @@ on:
   pull_request:
     # the paths should be synced with ../labeler.yml
     paths:
-      - modules/fixtures/**/*.tf
-      - modules/fixtures/**/*.tfvars
-      - modules/**.tf
+      - modules/fixtures/backend.tf
+      - modules/fixtures/fixtures.default.eks.tfvars
+      - modules/eks-cluster/**.tf
       - .tool-versions
       - .github/workflows/test-gha-eks-manage-cluster.yml
       - .github/actions/eks-manage-cluster/*.yml
@@ -38,6 +38,7 @@ env:
   AWS_PROFILE: "infex"
   AWS_REGION: "eu-west-2" # /!\ always use one of the available test region https://github.com/camunda/infraex-common-config

+  # please keep those synced with tests.yml
   TF_STATE_BUCKET: "tests-eks-tf-state-eu-central-1"
   TF_STATE_BUCKET_REGION: "eu-central-1"

1 change: 1 addition & 0 deletions .github/workflows/tests.yml
@@ -27,6 +27,7 @@ env:
   AWS_REGION: "eu-west-2" # /!\ always use one of the available test region https://github.com/camunda/infraex-common-config
   TESTS_TF_BINARY_NAME: "terraform"

+  # please keep test-gha*.yml synced
   TF_STATE_BUCKET: "tests-eks-tf-state-eu-central-1"
   TF_STATE_BUCKET_REGION: "eu-central-1"

