Mohamed Belgaied Hassine
Creator of this blog.
Jul 19, 2024 7 min read

Using the new Rancher Turtles (Cluster API for Rancher) with Harvester

In this article, we will focus on Cluster API as a way to deploy Kubernetes clusters on VMs running inside SUSE Harvester. In this scenario, Harvester acts as a cloud, providing compute, storage and networking, including Load Balancing capabilities, and Cluster API is used to communicate with Harvester’s API to provision VMs on which Kubernetes is installed. For simplicity’s sake, we will focus on a graphical way to use Cluster API, namely Rancher Turtles.

Pre-requisites

To follow this tutorial, you need to have the following ready:

  • A functioning Harvester Cluster
  • A functioning Rancher Manager v2.8.2 minimum (this can be deployed on Harvester itself using the rancher_vcluster feature)
  • Communication possible from Rancher Manager to the Harvester API (VIP on ports 6443 and 443), and also with the VM network on port 6443.
  • A compatible VM image already uploaded to Harvester (it will be used as the base image for Kubernetes nodes; you will need its namespace/name reference)
  • An SSH Public Key already declared in Harvester (you need its reference namespace/name)
  • A DHCP Server for your VMs and Load Balancers.
  • A pre-defined and working VM Network on Harvester
  • Access to a machine where the clusterctl tool is available. clusterctl is a CLI tool that helps manage Cluster API and cluster templates. It is possible to download it here (a short download sketch follows this list).
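
If clusterctl is not installed yet, here is a minimal sketch for a Linux amd64 machine; the version below is an example that matches the CAPI core version installed later by Rancher Turtles v0.7.0, adapt it to your environment:

# Download a clusterctl release binary and put it on the PATH
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.4.6/clusterctl-linux-amd64 -o clusterctl
chmod +x clusterctl
sudo mv clusterctl /usr/local/bin/
clusterctl version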

Steps

Installing Rancher Turtles

  • Check if Harvester is Active in the Virtualization Management view. harvester_active.png
  • Add Rancher Turtles Helm Repo:

    • Click on the icon for the local cluster
    • Go to the Apps menu, then the Repositories sub-menu
    • Click on the Create button repo_view.png
    • Enter the following: Name turtles, and Index URL https://rancher.github.io/turtles
    • Click on the Create button repo_create.png

This can also be done using the following command:

cat <<EOF | kubectl apply -f -
apiVersion: catalog.cattle.io/v1
kind: ClusterRepo
metadata:
  name: turtles
spec:
  url: https://rancher.github.io/turtles
EOF
  • Wait for the turtles repository to have a status of Active. repo_check_turtles_ready.png
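
This status check can also be done from the CLI against the local cluster (assuming your kubeconfig points at it):

# The ClusterRepo is a cluster-scoped Rancher resource
kubectl get clusterrepos.catalog.cattle.io turtles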

NOTE: At the time of writing, Rancher Turtles has released a new version, v0.8.0, which is already available in Rancher. However, this version comes with various changes, including the use of the v1beta1 API, which causes issues with the way this lab has been built. Therefore, please only install version v0.7.0, otherwise you might run into unpredictable issues.

NOTE 2: There is a known issue with Rancher Turtles when installing it in a project. Please make sure you keep the default (None) in the second step when installing.

  • Install the Turtles App, in version v0.7.0:
    • Go to Apps -> Charts.
    • Filter for turtles.
    • Click on the tile Rancher Turtles - the Cluster API Extension

turtles_chart_view.png

  • Make sure to select version v0.7.0, as explained in the note above.
  • Click Install: turtles_install_1.png
  • Click Next. Now, make sure NOT to change the project and keep it set to (None).
  • Finally, click Install: turtles_install_2.png

After that, Rancher will show a log frame with the progress of the Helm installation. Make sure to scroll down in the logs window and wait until you get the SUCCESS line as shown below: turtles_install_success.png

NOTE: Sometimes this screen will not show on its own. That is because Rancher Turtles’ installation deactivates some Rancher components, which might break the current connections to the Rancher WebSockets. Please make sure to manually refresh your browser and do the CAPI Provider checks in the next step.
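
If you prefer the command line over the Rancher UI, the installation can also be sketched with Helm. This assumes the chart in the repository added earlier is named rancher-turtles and that it is installed into the rancher-turtles-system namespace; double-check both before running it:

helm repo add turtles https://rancher.github.io/turtles
helm repo update
# Pin the chart version to v0.7.0, as explained in the notes above
helm install rancher-turtles turtles/rancher-turtles \
  --version v0.7.0 \
  --namespace rancher-turtles-system \
  --create-namespace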

Checking Base Providers

Rancher Turtles will automatically install the base requirements for CAPI:

  • CAPI Core Controller v1.4.6
  • CAPI ControlPlane and Bootstrap providers for RKE2

You can check that by going to the local cluster -> Magnifier icon (Resource Search (Ctrl + K)), typing CAPI in the Search field and selecting CAPIProviders in the list: capi_provider_search.png

This should show the list of Active CAPIProviders, which should include the requirements listed above. capi_provider_view.png

Additionally, you can check the Deployments under the Workloads menu and filter on cluster; you should find 4 items as follows: turtles_components_check.png
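
The same checks can be done with kubectl; the following is a sketch, assuming the capiproviders resource name resolves to the CRD installed by Rancher Turtles:

# List the CAPIProvider resources managed by Rancher Turtles
kubectl get capiproviders -A
# List the provider controller Deployments
kubectl get deployments -A | grep -iE 'capi|rke2'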

Deploying the Harvester CAPI Infrastructure Provider (CAPHV)

In order to add the Harvester Provider, we will use an existing YAML file on GitHub, which we will deploy using Fleet. To do that, click on the Continuous Delivery icon, then Git Repos, then the Add Repository button: gitrepos_view.png

  • Give the necessary information for the Git Repo:
    • Name: capiprovider-harvester
    • Repository URL: https://github.com/belgaied2/susecon2024-capi-demo
    • Branch Name: main
    • After scrolling down, click the Add Path button and enter /capiproviders
    • Also make sure to select the right namespace at the top right of the window; it must be fleet-local. gitrepo_add_1.png

Alternatively, you can copy-paste the following:

cat <<EOF | kubectl apply -f -
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: capiprovider-harvester
  namespace: fleet-local
spec:
  branch: main
  paths:
    - /capiproviders
  repo: https://github.com/belgaied2/susecon2024-capi-demo
  targets:
    - clusterSelector:
        matchExpressions:
          - key: provider.cattle.io
            operator: NotIn
            values:
              - harvester
EOF

Both of the above approaches should show the same result in the Git Repos view (make sure to select the fleet-local namespace): gitrepo_ready_view.png The CAPIProviders view from the previous step should now show 4 providers, the new one being harvester-infrastructure in the caphv-system namespace. capi_provider_caphv_ready.png

Note: you might see the harvester-infrastructure provider in the Unavailable state for a while; that’s because the installation process takes some time. Wait a couple of minutes and it should change to the Ready state.
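
From the CLI, you can watch both the Fleet GitRepo and the provider controller come up (namespaces as referenced above):

# Fleet GitRepo status in the local cluster
kubectl get gitrepos.fleet.cattle.io -n fleet-local
# Harvester infrastructure provider controller pods
kubectl get pods -n caphv-system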

Generating the cluster manifest

For this part of the lab, you will need a CLI tool called clusterctl provided by the Cluster API project. If you do not have a machine available with clusterctl, you can connect using SSH to the Router VM (IP and credentials available in the table at the top of this page).

In order to create a CAPI cluster using Rancher Turtles, we need to create a YAML manifest containing all the necessary resources that are needed. This corresponds to the configuration of our cluster. In order to simplify deployment of RKE2 clusters on Harvester, the CAPHV (Cluster API Provider for Harvester) project offers the following template, which contains placeholders in the form of environment variables.

We need the following steps:

  • clone a GitHub repository (optional, only needed for GitOps using Fleet)
  • declare these environment variables
  • use clusterctl to generate the final manifest from the above template.
  • git add, git commit and git push the final manifest (optional, only needed for GitOps using Fleet)
  • declare the GitRepo in Fleet (optional, only needed for GitOps using Fleet)

List of environment variables

You can get the list of needed environment variables for the latest available template using the clusterctl command as follows:

clusterctl generate yaml --from https://github.com/rancher-sandbox/cluster-api-provider-harvester/blob/main/templates/cluster-template-rke2-dhcp.yaml  --list-variables

The result should look like the following:

Variables:
  - CLOUD_CONFIG_KUBECONFIG_B64
  - CLUSTER_NAME
  - CONTROL_PLANE_MACHINE_COUNT
  - HARVESTER_ENDPOINT
  - HARVESTER_KUBECONFIG_B64
  - KUBERNETES_VERSION
  - NAMESPACE
  - RANCHER_TURTLES_LABEL
  - SSH_KEYPAIR
  - VM_DISK_SIZE
  - VM_IMAGE_NAME
  - VM_NETWORK
  - WORKER_MACHINE_COUNT

You need to export values for each of these variables in order to generate a valid cluster YAML manifest.
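
As an illustration, the exports below use placeholder values; the cluster name test-rk and namespace example-rke2 match the names used later in this article, everything else must be adapted to your own Harvester environment:

export CLUSTER_NAME=test-rk
export NAMESPACE=example-rke2
export CONTROL_PLANE_MACHINE_COUNT=3
export WORKER_MACHINE_COUNT=2
export KUBERNETES_VERSION=v1.26.11+rke2r1    # example RKE2 version, pick one supported by your setup
export HARVESTER_ENDPOINT=10.10.0.10         # placeholder: your Harvester VIP / API endpoint
export HARVESTER_KUBECONFIG_B64=$(base64 -w0 harvester.kubeconfig)                    # kubeconfig downloaded from Harvester
export CLOUD_CONFIG_KUBECONFIG_B64=$(base64 -w0 harvester-cloud-provider.kubeconfig)  # kubeconfig for the Harvester cloud provider
export VM_IMAGE_NAME=default/ubuntu-22.04    # namespace/name of the VM image in Harvester
export VM_DISK_SIZE=40Gi
export VM_NETWORK=default/vm-network         # namespace/name of the VM network in Harvester
export SSH_KEYPAIR=default/my-ssh-key        # namespace/name of the SSH key in Harvester
export RANCHER_TURTLES_LABEL=""              # label used by the template, check the template for expected values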

Generating the YAML manifest

Once the environment variables are all set, we need to generate the YAML manifest by using the clusterctl command again, but this time without the --list-variables argument.

clusterctl generate yaml --from https://github.com/rancher-sandbox/cluster-api-provider-harvester/blob/main/templates/cluster-template-rke2-dhcp.yaml > test-rk-cluster.yaml

Note that clusterctl needs to be available on the Linux machine you use to push to your GitHub repository (the manifest will then be automatically deployed to the Rancher cluster using Fleet). If you are not using GitOps, you will need to copy-paste the resulting YAML file into Rancher’s Import YAML button in the local cluster explorer, or apply it directly as shown below. rancher_import_yaml_button.png
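
If you are not using GitOps and have kubectl access to the Rancher local cluster, applying the generated manifest directly is the CLI equivalent of the Import YAML button:

kubectl apply -f test-rk-cluster.yaml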

The following steps are only valid for the GitOps approach:

Now, we push the changes to the Git repo:

git add test-rk-cluster.yaml
git commit -m "My First CAPI cluster in GitOps" 
git push

Now, you need to add a GitRepo to Fleet to make it deploy the manifests on Rancher:

cat <<EOF | kubectl apply -f -
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: test-rk-cluster
  namespace: fleet-local
spec:
  branch: main
  paths:
    - /templates
  repo: https://github.com/belgaied2/susecon2024-capi-demo
  targets:
    - clusterSelector:
        matchExpressions:
          - key: provider.cattle.io
            operator: NotIn
            values:
              - harvester
EOF

Monitor the evolution of the cluster creation process

Now, Cluster API will begin creating the cluster in a multi-step process that can take 10 minutes or more.

During this process, you can check the progress by looking at the resource instances being created in Rancher, for instance:

  • More Resources -> Cluster Provisioning -> CAPI Clusters
  • More Resources -> Cluster Provisioning -> Machines
  • More Resources -> Cluster Provisioning -> HarvesterMachines
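
The same objects can also be inspected with kubectl, or summarized with clusterctl; the commands below assume the cluster name test-rk and namespace example-rke2 from the generated manifest:

# CAPI cluster, machines and Harvester machines
kubectl get clusters.cluster.x-k8s.io,machines.cluster.x-k8s.io -n example-rke2
kubectl get harvestermachines -n example-rke2
# Tree view of the provisioning status
clusterctl describe cluster test-rk -n example-rke2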

You can also check out the logs of the different provider controllers:

  • Logs of the Harvester Infrastructure provider: the pod logs of caphv-controller-manager in the caphv-system namespace
  • Logs of the CAPI Core Controller: the pod logs of capi-controller-manager in the capi-system namespace
  • Logs of the RKE2 ControlPlane Provider: the pod logs of rke2-control-plane-controller-manager in the rke2-control-plane-system namespace
  • Logs of the RKE2 Bootstrap Provider: the pod logs of rke2-bootstrap-controller-manager in the rke2-bootstrap-system namespace
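
For example, assuming each controller runs as a Deployment named like the pods listed above, their logs can be followed with commands such as:

kubectl logs -n caphv-system deployment/caphv-controller-manager -f
kubectl logs -n capi-system deployment/capi-controller-manager -f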

At the end of the process, you should see the CAPI Cluster test-rk in the example-rke2 namespace reach the Provisioned state, and the cluster should also appear as a Downstream Cluster in Rancher.

– END –