DO-K8s

This section describes how to create a new project on DigitalOcean with the DO-k8s stack template.

Prerequisites

  1. Terraform version 0.13 or higher.

  2. DigitalOcean account.

  3. doctl installed.

  4. Cluster.dev client installed.

Authentication

Create an access token for a user.

Info

Make sure to grant the user administrative permissions.

For details on using a DO Spaces bucket as a backend, see here.
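As an illustration only (the field names follow the backend docs referenced above, but the bucket name and region here are placeholder assumptions), a Spaces-backed configuration could look roughly like:

```yaml
# backend.yaml - hypothetical sketch; bucket and region are placeholders
name: do-backend
kind: backend
provider: do
spec:
  bucket: cdev-data
  region: ams3
```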

DO access configuration

  1. Install doctl. For more information, see the official documentation.

    cd ~
    wget https://github.com/digitalocean/doctl/releases/download/v1.57.0/doctl-1.57.0-linux-amd64.tar.gz
    tar xf ~/doctl-1.57.0-linux-amd64.tar.gz
    sudo mv ~/doctl /usr/local/bin
    
  2. Export your DIGITALOCEAN_TOKEN; for details, see here.

    export DIGITALOCEAN_TOKEN="MyDIGITALOCEANToken"
    
  3. Export the SPACES_ACCESS_KEY_ID and SPACES_SECRET_ACCESS_KEY environment variables; for details, see here.

    export SPACES_ACCESS_KEY_ID="dSUGdbJqa6xwJ6Fo8qV2DSksdjh..."
    export SPACES_SECRET_ACCESS_KEY="TEaKjdj8DSaJl7EnOdsa..."
    
  4. Create a Spaces bucket for Terraform states in the chosen region (the example uses the 'cdev-data' bucket name).
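    Spaces is S3-compatible, so the bucket can be created with any S3 tooling. A sketch using the AWS CLI, assuming it is installed and reusing the Spaces keys exported above (the 'ams3' endpoint is a placeholder for your chosen region):

    ```shell
    # Hypothetical example: create the 'cdev-data' bucket via the S3-compatible API.
    # The AWS CLI and the region endpoint are assumptions; adjust to your region.
    export AWS_ACCESS_KEY_ID="${SPACES_ACCESS_KEY_ID}"
    export AWS_SECRET_ACCESS_KEY="${SPACES_SECRET_ACCESS_KEY}"
    aws s3 mb s3://cdev-data --endpoint-url https://ams3.digitaloceanspaces.com
    ```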

  5. Create a domain in DigitalOcean domains service.

Info

The project generated by default uses the 'k8s.cluster.dev' zone as an example. Make sure to change it to your own domain.
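If you prefer the CLI, the domain can also be created with doctl (the domain name here is a placeholder):

```shell
# Create a DNS zone in the DigitalOcean Domains service; replace with your own domain
doctl compute domain create k8s.example.com
```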

Create project

  1. Configure access to DigitalOcean and export required variables.

  2. Create a project directory locally, cd into it, and execute the command:

      cdev project create https://github.com/shalb/cdev-do-k8s
    
    This will create a new empty project.

  3. Edit the variables in the example files, if necessary:

    • project.yaml - the main project config. Sets common global variables for the current project, such as organization, region, and state bucket name. See the project configuration docs.

    • backend.yaml - configures the backend for Cluster.dev states (including Terraform states). Uses variables from project.yaml. See the backend docs.

    • stack.yaml - describes the stack configuration. See the stack docs.
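    As an illustration only (the field names and values below are assumptions; the generated files and the project configuration docs are authoritative), the project-level variables typically look like:

    ```yaml
    # project.yaml - hypothetical sketch of common global variables
    name: my-project
    kind: project
    variables:
      organization: my-org
      region: ams3
      state_bucket_name: cdev-data
    ```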

  4. Run cdev plan to build the project. In the output you will see the infrastructure that is going to be created after running cdev apply.

    Note

    Prior to running cdev apply, make sure to look through the stack.yaml file and replace the commented fields with real values. If you would like to use an existing VPC and subnets, uncomment the preset options and set the correct VPC ID and subnet IDs. If you leave them as is, Cluster.dev will create a VPC and subnets for you.

  5. Run cdev apply.

    Tip

    We highly recommend running cdev apply in debug mode so that you can see the Cluster.dev logs in the output: cdev apply -l debug

  6. After cdev apply is successfully executed, you will see the ArgoCD URL of your cluster in the output. Sign in to the console to check whether ArgoCD is up and running and the stack template has been deployed correctly. To sign in, use the "admin" login and the bcrypt-hashed password that you generated for stack.yaml.
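    The bcrypt hash for the admin password can be generated as in the ArgoCD documentation, assuming the htpasswd utility (apache2-utils) is available ('mypassword' is a placeholder):

    ```shell
    # Generate a bcrypt hash of the password for stack.yaml (htpasswd is assumed installed);
    # the sed rewrites the $2y prefix to $2a for compatibility with ArgoCD
    htpasswd -nbBC 10 "" "mypassword" | tr -d ':\n' | sed 's/$2y/$2a/'
    ```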

  7. The output will also contain a command for retrieving the kubeconfig and connecting to your Kubernetes cluster.
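    For a DigitalOcean-managed cluster this typically looks like the following (the cluster name is a placeholder; the exact command is printed in the cdev output):

    ```shell
    # Fetch the kubeconfig for the cluster and merge it into ~/.kube/config,
    # then verify connectivity by listing the nodes
    doctl kubernetes cluster kubeconfig save my-cluster
    kubectl get nodes
    ```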

  8. Destroy the cluster and all created resources with the command cdev destroy.

Resources

Resources to be created within the project:

  • (optional, if vpc_id is not set) VPC for Kubernetes cluster
  • DO Kubernetes cluster with addons:
    • cert-manager
    • argocd