On this page you will find guidance on how to create a K3s cluster on AWS using one of the Cluster.dev prepared samples – the AWS-K3s stack template. Running the example code will create the following resources:
- K3s cluster with addons
- AWS Key Pair to access the cluster's running instances
- AWS IAM policy for managing your DNS zone by external-dns
- (optional, if you use the cluster.dev domain) Route 53 zone
- (optional, if vpc_id is not set) VPC for the K3s cluster
Prerequisites ¶

- Terraform version 0.13+.
- AWS CLI installed.
Cluster.dev requires cloud credentials to manage and provision resources. You can configure access to AWS in two ways:
Please note that you have to use an IAM user with administrative permissions granted.
Environment variables: provide your credentials via the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables, which represent your AWS Access Key and AWS Secret Key. You can also use the `AWS_DEFAULT_REGION` environment variable to set the region, if needed. Example usage:

```shell
export AWS_ACCESS_KEY_ID="MYACCESSKEY"
export AWS_SECRET_ACCESS_KEY="MYSECRETKEY"
export AWS_DEFAULT_REGION="eu-central-1"
```
Shared Credentials File (recommended): set up an AWS configuration file to specify your credentials.

Credentials file `~/.aws/credentials`:

```ini
[cluster-dev]
aws_access_key_id = MYACCESSKEY
aws_secret_access_key = MYSECRETKEY
```

Config file `~/.aws/config`:

```ini
[profile cluster-dev]
region = eu-central-1
```
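To make the AWS CLI (and tools that use its credential chain) pick up the `cluster-dev` profile defined above, you can export the standard `AWS_PROFILE` environment variable:

```shell
# Select the named profile for subsequent AWS CLI invocations
export AWS_PROFILE=cluster-dev
```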
Install AWS client ¶
If you don't have the AWS CLI installed, refer to the official AWS CLI installation guide, or use the commands from the example:

```shell
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws s3 ls
```
Create S3 bucket ¶
Cluster.dev uses an S3 bucket for storing states. Create the bucket with the command:

```shell
aws s3 mb s3://cdev-states
```
DNS Zone ¶
In the AWS-K3s stack template example you need to define a Route 53 hosted zone. Options:

- You already have a Route 53 hosted zone.
- Create a new hosted zone using a Route 53 documentation example.
- Use the "cluster.dev" domain for zone delegation.
Create project ¶
Configure access to AWS and export the required variables.

Create a project directory locally and cd into it. Then execute the command below, which will create a new project from the sample.
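Creating and entering the directory can be sketched as follows (the directory name here is just an example):

```shell
# Create an empty project directory and enter it
mkdir my-k3s-project
cd my-k3s-project
```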
```shell
cdev project create https://github.com/shalb/cdev-aws-k3s
```
Edit variables in the example's files, if necessary:
- `project.yaml` - main project config. Sets common global variables for the current project, such as organization, region, state bucket name, etc. See the project configuration docs.
- `backend.yaml` - configures the backend for Cluster.dev states (including Terraform states). Uses variables from `project.yaml`. See the backend docs.
- `stack.yaml` - describes the stack configuration. See the stack docs.
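As an illustration, a minimal `backend.yaml` pointing at the state bucket created earlier could look roughly like the sketch below; the backend name and region are example values, so check the backend docs for the authoritative schema:

```yaml
# Sketch of a backend definition – bucket and region are examples
name: aws-backend
kind: backend
provider: s3
spec:
  bucket: cdev-states
  region: eu-central-1
```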
Run `cdev plan` to build the project. In the output you will see the infrastructure that is going to be created after running `cdev apply`.

Prior to running `cdev apply`, make sure to look through the `stack.yaml` file and replace the commented fields with real values. In case you would like to use an existing VPC and subnets, uncomment the preset options and set the correct VPC ID and subnet IDs. If you leave them as is, Cluster.dev will create a VPC and subnets for you.
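For illustration only, the existing-VPC case might look roughly like the snippet below; the actual variable names are defined by the template's `stack.yaml`, and the IDs here are placeholders:

```yaml
# Hypothetical values – replace with the IDs of your own VPC and subnets
vpc_id: "vpc-0123456789abcdef0"
public_subnets:
  - "subnet-0a1b2c3d4e5f60718"
  - "subnet-1a2b3c4d5e6f70819"
```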
We highly recommend running `cdev apply` in debug mode so that you can see the Cluster.dev logging in the output:

```shell
cdev apply -l debug
```
After `cdev apply` is successfully executed, in the output you will see the ArgoCD URL of your cluster. Sign in to the console to check whether ArgoCD is up and running and the stack template has been deployed correctly. To sign in, use the "admin" login and the bcrypted password that you have generated for the `stack.yaml` file.
The output will also display a command to get the kubeconfig and connect to your Kubernetes cluster.
Destroy the cluster and all created resources with the `cdev destroy` command.