# Terraform for creating an AWS EKS private container platform
| Name | Description |
|---|---|
| vpc | VPC that secures the infrastructure at the networking level |
| vpc-endpoints | Gives the VPC access to the required AWS services |
| S3-bucket | S3 bucket that stores the logs for the CloudTrail trail |
| kms-keys | KMS keys that encrypt the CloudTrail logging S3 bucket, the CloudWatch log groups for the CloudTrail trail and VPC flow logs, and the EKS cluster |
| cloudtrail-trail | Audit logging for the infrastructure |
| iam-roles | Grants services the relevant permissions and creates an admin role for administration |
| eks-cluster | EKS cluster where workloads run |
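As a rough illustration of how these pieces fit together, the sketch below wires the modules in a root configuration. The module paths, input names, and outputs are assumptions for illustration, not this repository's actual interface.

```hcl
# Illustrative root module wiring; paths and variable names are assumptions.
module "vpc" {
  source = "./modules/vpc"
}

module "kms_keys" {
  source = "./modules/kms-keys"
}

module "eks_cluster" {
  source = "./modules/eks-cluster"

  # Place the cluster in the private subnets and encrypt it with the KMS key
  # (the output names here are hypothetical).
  subnet_ids  = module.vpc.private_subnet_ids
  kms_key_arn = module.kms_keys.eks_key_arn
}
```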
This example demonstrates how to deploy an Amazon EKS cluster into private subnets on the AWS Cloud. Because the cluster has no internet access, it must pull images from a container registry reachable inside your VPC, and it must have private endpoint access enabled so that nodes can register with the cluster endpoint.
Please see the [AWS documentation on private clusters](https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html) for more details on configuring fully private EKS clusters.
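For reference, the settings that make the cluster endpoint private live in the `vpc_config` block of the `aws_eks_cluster` resource. A minimal sketch, assuming the IAM role and subnet variables are defined elsewhere (the names here are hypothetical):

```hcl
resource "aws_eks_cluster" "this" {
  name     = "private-platform"           # hypothetical name
  role_arn = aws_iam_role.eks_cluster.arn # assumed to be defined elsewhere

  vpc_config {
    subnet_ids              = var.private_subnet_ids
    endpoint_private_access = true  # nodes register via the in-VPC endpoint
    endpoint_public_access  = false # no internet-facing cluster endpoint
  }
}
```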
Fully private EKS clusters require the following VPC endpoints in order to communicate with AWS services. This example provisions these endpoints if you choose to create the VPC; if you are using an existing VPC, you may need to create them yourself (a Terraform sketch follows the list).
- `com.amazonaws.region.ssm` - AWS Systems Manager
- `com.amazonaws.region.ssmmessages` - Session Manager messages
- `com.amazonaws.region.ec2` - EC2 API
- `com.amazonaws.region.ec2messages` - Systems Manager messages for EC2 instances
- `com.amazonaws.region.kms` - AWS KMS
- `com.amazonaws.region.ecr.api` - ECR API calls
- `com.amazonaws.region.ecr.dkr` - ECR Docker image pulls
- `com.amazonaws.region.logs` - CloudWatch Logs
- `com.amazonaws.region.sts` - If using AWS Fargate or IAM roles for service accounts
- `com.amazonaws.region.elasticloadbalancing` - If using Application Load Balancers
- `com.amazonaws.region.autoscaling` - If using Cluster Autoscaler
- `com.amazonaws.region.s3` - S3 gateway endpoint, used to pull ECR image layers
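If you do need to add these endpoints to an existing VPC, one way to express them in Terraform is a single `aws_vpc_endpoint` resource with `for_each` for the interface endpoints, plus a separate gateway endpoint for S3. This is a minimal sketch; `var.vpc_id`, `var.private_subnet_ids`, `var.endpoint_security_group_id`, and `var.private_route_table_ids` are assumed inputs.

```hcl
locals {
  # Interface endpoint services from the list above.
  interface_services = [
    "ssm", "ssmmessages", "ec2", "ec2messages", "kms",
    "ecr.api", "ecr.dkr", "logs", "sts",
    "elasticloadbalancing", "autoscaling",
  ]
}

data "aws_region" "current" {}

resource "aws_vpc_endpoint" "interface" {
  for_each = toset(local.interface_services)

  vpc_id              = var.vpc_id
  service_name        = "com.amazonaws.${data.aws_region.current.name}.${each.value}"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = var.private_subnet_ids
  security_group_ids  = [var.endpoint_security_group_id]
  private_dns_enabled = true
}

# S3 uses a gateway endpoint attached to the private route tables.
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = var.vpc_id
  service_name      = "com.amazonaws.${data.aws_region.current.name}.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = var.private_route_table_ids
}
```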
- Obtain AWS credentials (access key ID and secret access key) to apply Terraform locally, or add them to the relevant pipeline variables
- Create an S3 bucket and configure it as the Terraform remote backend to store the Terraform state file
- Once the bucket is created, add the state-file values to the backend block in the version.tf file (see the example after this list)
- Build and push a container image to Amazon ECR so it can be pulled to spin up containers in the pods deployed on the EKS cluster
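A backend block for the bucket created above might look like the following; every value is a placeholder to be replaced with your own.

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"      # placeholder
    key            = "eks-platform/terraform.tfstate" # placeholder
    region         = "eu-west-1"                      # placeholder
    encrypt        = true
    dynamodb_table = "terraform-state-lock" # optional, enables state locking
  }
}
```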
```sh
# Initialise, format, and validate the configuration
terraform init
terraform fmt
terraform validate

# Plan and apply ($PLAN is the path of the saved plan file)
terraform plan -out=$PLAN
terraform apply -input=false -auto-approve $PLAN

# Plan and apply a destroy of the infrastructure
terraform plan -destroy -out=$DESTROY
terraform apply -input=false -auto-approve $DESTROY
```