The cdev utility uses stack templates to generate users' projects in a desired cloud. AWS-K3s is a stack template that creates and provisions Kubernetes clusters in AWS by means of the k3s utility.

On this page you will find guidance on how to create a K3s cluster on AWS using one of the prepared samples – the AWS-K3s stack template. Running the example code will create the following resources:

  • K3s cluster with addons:

    • cert-manager

    • ingress-nginx

    • external-dns

    • argocd

  • AWS Key Pair to access the cluster running instances

  • AWS IAM Policy for managing your DNS zone by external-dns

  • (optional, if you use a domain) Route53 zone

  • (optional, if vpc_id is not set) VPC for the cluster


Prerequisites

  1. Terraform version 1.4+

  2. AWS account

  3. AWS CLI installed

  4. kubectl installed

  5. cdev client installed

Authentication requires cloud credentials to manage and provision resources. You can configure access to AWS in two ways:


Please note that you have to use an IAM user with granted administrative permissions.

  • Environment variables: provide your credentials via the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, which represent your AWS Access Key and AWS Secret Key. You can also use the AWS_DEFAULT_REGION or AWS_REGION environment variable to set a region, if needed. Example usage:

    export AWS_ACCESS_KEY_ID="MYACCESSKEY"
    export AWS_SECRET_ACCESS_KEY="MYSECRETKEY"
    export AWS_DEFAULT_REGION="eu-central-1"
  • Shared Credentials File (recommended): set up an AWS configuration file to specify your credentials.

    Credentials file ~/.aws/credentials example:

    [cluster-dev]
    aws_access_key_id = MYACCESSKEY
    aws_secret_access_key = MYSECRETKEY

    Config: ~/.aws/config example:

    [profile cluster-dev]
    region = eu-central-1

    Then export the AWS_PROFILE environment variable:

    export AWS_PROFILE=cluster-dev
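The same profile setup can be scripted. The sketch below is an illustrative alternative to editing ~/.aws by hand: AWS_SHARED_CREDENTIALS_FILE is a standard AWS environment variable that points the CLI/SDK at a custom credentials file, and the key values are the same placeholders as above, not real keys.

```shell
# Sketch: create a credentials file for the cluster-dev profile and
# point the AWS CLI/SDK at it. MYACCESSKEY / MYSECRETKEY are placeholders.
export AWS_SHARED_CREDENTIALS_FILE="$PWD/aws-credentials"
cat > "$AWS_SHARED_CREDENTIALS_FILE" <<'EOF'
[cluster-dev]
aws_access_key_id = MYACCESSKEY
aws_secret_access_key = MYSECRETKEY
EOF
# Select the profile for subsequent aws/cdev commands.
export AWS_PROFILE=cluster-dev
```

Any tool that honors the standard AWS environment variables (including Terraform under cdev) will pick up this profile.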

Install AWS client

If you don't have the AWS CLI installed, refer to the official AWS CLI installation guide, or use the commands from the example below:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws s3 ls

Create S3 bucket

cdev uses an S3 bucket for storing states. Create the bucket with the command:

aws s3 mb s3://cdev-states
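Note that S3 bucket names are globally unique, so aws s3 mb will fail if cdev-states is already taken; pick your own name in that case. A quick local pre-check of a candidate name can be sketched as below (the regex is a simplified approximation of the full S3 naming rules, not the complete AWS validation):

```shell
# Simplified S3 bucket-name check: 3-63 characters; lowercase letters,
# digits, dots and hyphens; must start and end with a letter or digit.
valid_bucket_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$'
}

valid_bucket_name "cdev-states" && echo "name looks valid"
```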

DNS Zone

For the AWS-K3s stack template example you need to define a Route 53 hosted zone. Use one of the following options:

  1. You already have a Route 53 hosted zone.

  2. Create a new hosted zone using a Route 53 documentation example.

  3. Use "" domain for zone delegation.

Create project

  1. Configure access to AWS and export required variables.

  2. Create locally a project directory, cd into it and execute the command:

      cdev project create
    This will create a new empty project.

  3. Edit variables in the example's files, if necessary:

    • project.yaml - the main project config. Sets common global variables for the current project, such as organization, region, state bucket name, etc. See project configuration docs.

    • backend.yaml - configures backend for states (including Terraform states). Uses variables from project.yaml. See backend docs.

    • stack.yaml - describes stack configuration. See stack docs.

  4. Run cdev plan to build the project. In the output you will see the infrastructure that is going to be created after running cdev apply.


    Prior to running cdev apply, make sure to look through the stack.yaml file and replace the commented fields with real values. If you would like to use an existing VPC and subnets, uncomment the preset options and set the correct VPC ID and subnet IDs. If you leave them as is, a VPC and subnets will be created for you.

  5. Run cdev apply


    We highly recommend running cdev apply in debug mode so that you can see the logging output: cdev apply -l debug

  6. After cdev apply is successfully executed, the output will contain the ArgoCD URL of your cluster. Sign in to the console to check whether ArgoCD is up and running and the stack template has been deployed correctly. To sign in, use the "admin" login and the password whose bcrypt hash you generated for stack.yaml.

  7. The output will also display a command for retrieving the kubeconfig that lets you connect to your Kubernetes cluster.

  8. Destroy the cluster and all created resources with the command cdev destroy.
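The create-project steps above can be condensed into a single session. This is a sketch, not a definitive script: my-k3s-project is an arbitrary example directory name, and the block is guarded so it degrades gracefully when the cdev client is not installed.

```shell
# End-to-end sketch of the workflow described above.
mkdir -p my-k3s-project && cd my-k3s-project
if command -v cdev >/dev/null 2>&1; then
  cdev project create       # step 2: generate a new empty project
  # edit project.yaml, backend.yaml, stack.yaml (step 3)
  cdev plan                 # step 4: preview the infrastructure
  cdev apply -l debug       # step 5: create the cluster, with debug logging
  # cdev destroy            # step 8: tear everything down when done
else
  echo "cdev client not found; install it first"
fi
```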