A Terraform module to provision a minimal k3s Kubernetes cluster tailored to LEAP's VPN stack on Hetzner Cloud.
- Deploys a minimal k3s cluster
- Uses Hetzner Cloud resources (servers, networks)
- Automated provisioning via Terraform
- Private networking between nodes
- Easily extensible for more nodes or features
We propose the following setup of services across worker nodes:
k3s controller node (reverse-proxy):
- Ingress: Traefik
- cert-manager (https://cert-manager.io/) and other kube-master components

k3s worker node 1 (backend):
- menshen
- invitectl, to add invite codes to the database menshen depends on

k3s worker node 2 (monitoring, logs):
- kube-prometheus-stack, including:
  - prometheus
  - grafana
Follow these steps to set up your project and generate the required API token.
- Create a Hetzner account and log in.
- Create a new project: on the Dashboard, click New Project, enter a name and click Add project.
- Generate a Read-Write token: enter the new project and navigate to the Security settings (left sidebar, bottom). Go to the API tokens tab and click Generate API token. Store the generated token securely; it is only shown once in the web interface.
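Rather than writing the token into a file, you can hand it to Terraform through the environment: Terraform reads any `TF_VAR_<name>` environment variable as the input variable `<name>`. A minimal sketch (the token value below is a placeholder, not a real token):

```shell
# Export the Hetzner token so Terraform picks it up as var.hcloud_token.
# TF_VAR_<name> is Terraform's standard mechanism for supplying input variables.
export TF_VAR_hcloud_token="placeholder-token-value"   # replace with your real token

# Sanity check: the variable is visible to child processes such as terraform.
echo "${TF_VAR_hcloud_token}"
```

This keeps the secret out of your .tf files and out of version control.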
It's easiest to start with the template directory located under hetzner/examples. This directory contains:
- The necessary code to import this repository as a Terraform module.
- All required variables for a basic setup.
- A helper script for accessing your cluster from your local machine.
Just copy the whole hetzner/examples directory to a directory of your choice (your working directory) and adapt the Terraform examples.
Make sure you have access to the git repo and can git clone it, otherwise terraform initialization will fail.
Alternatively, you can use SSH to clone the repo during the init process by replacing the source = ... line with source = "git::ssh://[email protected]/leap/container-platform/terraform-k3s.git"
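Put together, the module import in your working directory's main.tf looks roughly like the following sketch. Here `<git-host>` is a placeholder for your Git server, and the set of inputs passed to the module mirrors the variables described in this guide; check the example code under hetzner/examples for the authoritative version:

```hcl
# Sketch of a main.tf in your working directory, assuming the layout
# from hetzner/examples; exact inputs may differ in your copy.
module "k3s" {
  # SSH variant of the module source, useful when HTTPS cloning fails.
  # <git-host> is a placeholder for your Git server.
  source = "git::ssh://git@<git-host>/leap/container-platform/terraform-k3s.git"

  hcloud_token     = var.hcloud_token
  admins           = var.admins
  k3s_worker_nodes = var.k3s_worker_nodes
}
```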
- Provide the required variables
Below is a list of the variables you must provide in your config:
| Variable | Type | Description |
|---|---|---|
| hcloud_token | string | The Hetzner Cloud API token created in Step 1 |
| admins | list(object) | A list of admin objects, each containing a name and the corresponding public SSH key. See vars.tf for details |
| k3s_worker_nodes | list(object) | A list of worker nodes. Defaults to one backend and one gateway node. Search for k3s_worker_nodes in vars.tf for the object's properties |
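As a sketch, a terraform.tfvars file providing these variables might look like the following. The attribute names inside the admins and k3s_worker_nodes objects are illustrative assumptions; consult vars.tf for the real object schemas:

```hcl
# Sketch of a terraform.tfvars; the field names inside the objects are
# assumptions for illustration -- vars.tf defines the authoritative shape.
hcloud_token = "REPLACE-WITH-YOUR-TOKEN"

admins = [
  {
    name           = "alice"
    ssh_public_key = "ssh-ed25519 AAAA... alice@laptop"
  },
]

k3s_worker_nodes = [
  { name = "backend-1" },
  { name = "gateway-1" },
]
```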
In the same shell and in the folder with your terraform project file, run
terraform init
If that succeeds, run
terraform plan
Read the plan and make sure things are getting created as expected.
Finally, run
terraform apply
Your k3s cluster is now being provisioned. 🎊
You can check on the Hetzner Cloud Console dashboard whether all of your resources were created as expected.
This method allows you to administer the remote cluster from your local machine.
If you haven't done so before, copy the directory terraform-k3s/hetzner/examples/scripts to your working directory, then run from within the working directory:
eval $(./scripts/access_cluster.sh --start)
Calling the script as described above pulls the k3s.yml from your controller node, adapts it for use on localhost, creates an SSH tunnel with port forwarding on port 6443 to your controller node in a background process, and exports the KUBECONFIG environment variable into your current shell.
You can also use the script to access a cluster that you have ssh-access to but that you haven't provisioned yourself. If there is no terraform.tfstate file in the parent directory you will be prompted for the public IP address of your controller node.
If you have provisioned your cluster with a different SSH key than your default one, you can append --ssh-key <path/to/your/ssh-key> to the command:
eval $(./scripts/access_cluster.sh --start --ssh-key <path/to/your/ssh-key>)
Test kubectl using the following command (if you haven't yet installed kubectl on your local machine, follow this guide: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/). It should return a table with your deployed nodes.
kubectl get nodes -o wide
If you want to run kubectl in a different shell than the one you used to start port forwarding, just export the KUBECONFIG there:
export KUBECONFIG=<path-to-your-work-dir>/k3s-local.yaml
Once you're done with your work, you should close your SSH session to the controller node and disable port forwarding again. This can be done by running
./scripts/portforwarding.sh --stop
Other useful commands for checking on your cluster:
kubectl get nodes -o wide
kubectl get pods -A -o wide