Terraform module which creates AWS EKS (Kubernetes) resources with an opinionated configuration targeting Camunda 8.
Below is a simple example configuration; adjust it as required. See the inputs for further configuration options and how they affect cluster creation.
```hcl
module "eks_cluster" {
  source = "github.com/camunda/camunda-tf-eks-module/modules/eks-cluster"

  region = "eu-central-1"
  name   = "cluster-name"

  # CIDR ranges for Kubernetes service IPs and for the node/load-balancer subnets
  cluster_service_ipv4_cidr = "10.190.0.0/16"
  cluster_node_ipv4_cidr    = "10.192.0.0/16"
}
```
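The default node pool can also be tuned through the `np_*` inputs listed below. The following sketch uses only inputs documented in this module; the specific instance types and node counts are illustrative assumptions, not recommendations:

```hcl
module "eks_cluster" {
  source = "github.com/camunda/camunda-tf-eks-module/modules/eks-cluster"

  region = "eu-central-1"
  name   = "cluster-name"

  # Use SPOT capacity with a wider instance-type pool for the default node
  # pool; min/max bound the autoscaler around the desired count.
  np_capacity_type      = "SPOT"
  np_instance_types     = ["m6i.xlarge", "m5.xlarge"]
  np_min_node_count     = 2
  np_desired_node_count = 4
  np_max_node_count     = 12
}
```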
## Modules

| Name | Source | Version |
|------|--------|---------|
| cert_manager_role | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | 5.52.2 |
| ebs_cs_role | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | 5.52.2 |
| eks | terraform-aws-modules/eks/aws | 20.33.0 |
| external_dns_role | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | 5.52.2 |
| vpc | terraform-aws-modules/vpc/aws | 5.17.0 |
## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| access_entries | Map of access entries to add to the cluster. | `any` | `{}` | no |
| authentication_mode | The authentication mode for the cluster. | `string` | `"API"` | no |
| availability_zones | A list of availability zone names in the region. By default this is `null` and the number of availability zones is managed by `availability_zones_count` instead. This value should not be updated directly; to make changes, create a new resource. | `list(string)` | `null` | no |
| availability_zones_count | The number of availability zones to use within the specified AWS region; a pair of public and private subnets is generated per zone (minimum is 2). Only valid when `availability_zones` is not provided. | `number` | `3` | no |
| cluster_node_ipv4_cidr | The CIDR block for the public and private subnets of load balancers and nodes. Between /28 and /16. | `string` | `"10.192.0.0/16"` | no |
| cluster_service_ipv4_cidr | The CIDR block to assign Kubernetes service IP addresses from. Between /24 and /12. | `string` | `"10.190.0.0/16"` | no |
| cluster_tags | A map of additional tags to add to the cluster. | `map(string)` | `{}` | no |
| create_ebs_gp3_default_storage_class | Whether to create a `kubernetes_storage_class` using EBS-CSI, set to gp3 by default. Set to `false` to skip creating the storage class, which is useful for avoiding dependency issues during EKS cluster deletion. | `bool` | `true` | no |
| enable_cluster_creator_admin_permissions | Whether to add the cluster creator (the identity used by Terraform) as an administrator via an access entry. | `bool` | `true` | no |
| kubernetes_version | Kubernetes version to be used by EKS. | `string` | `"1.31"` | no |
| name | Name used for relevant resources, including the EKS cluster name. | `string` | n/a | yes |
| np_ami_type | Amazon Machine Image (AMI) type for the default node pool. | `string` | `"AL2_x86_64"` | no |
| np_capacity_type | Capacity type of the default node pool: `ON_DEMAND` for stable nodes or `SPOT` for spot instances. | `string` | `"ON_DEMAND"` | no |
| np_desired_node_count | Desired number of nodes in the default node pool; the min/max values are used for autoscaling. | `number` | `4` | no |
| np_disk_size | Disk size of the nodes in the default node pool. | `number` | `20` | no |
| np_instance_types | List of instance types the autoscaler can select from when scaling the default node pool. | `list(string)` | `["m6i.xlarge"]` | no |
| np_labels | A map of labels to add to the default node pool nodes. | `map(string)` | `{}` | no |
| np_max_node_count | Maximum number of nodes in the default node pool. | `number` | `10` | no |
| np_min_node_count | Minimum number of nodes in the default node pool. | `number` | `1` | no |
| region | The region in which the cluster and relevant resources are deployed. | `string` | n/a | yes |