Design¶
This project might seem like a lot of bash code, but it's actually pretty simple. There are thirty-odd primary shell scripts, plus a number of helper scripts. This README focuses on the primary scripts.
Desktop Kubernetes is 99.9% bash shell with a tiny smidgen of Python.
Directory structure¶
The script directory structure is organized around related areas of functionality. The scripts generate numerous files as part of provisioning a cluster. These generated files are all placed into the generated directory in the project root. Most of these are purely transitory, except:
| File | Purpose |
|---|---|
| `generated/kickstart/id_ed25519` | This is the private key corresponding to the public key that Desktop Kubernetes adds to the template VM `authorized_keys` file. As long as the template VM is used to provision new Desktop Kubernetes clusters, this private key must be retained for ssh'ing into the cluster VMs. Desktop Kubernetes only generates the SSH keys when a template is created - and only if the keypair does not already exist. |
| `generated/kubeconfig/admin.kubeconfig` | This is the cluster admin kubeconfig. It is regenerated for each new cluster you provision. You need this kubeconfig to run kubectl commands against the cluster. |
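For example, once a cluster is up, the two retained files are used like this (the VM IP shown is hypothetical - use the address of one of your cluster VMs):

```shell
# ssh into a cluster VM using the retained template private key
ssh -i generated/kickstart/id_ed25519 root@192.168.56.2

# run kubectl against the cluster using the generated admin kubeconfig
kubectl --kubeconfig generated/kubeconfig/admin.kubeconfig get nodes
```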
Call structure¶
All of the scripts in the call tree except for dtk and artifacts are in the scripts directory. All of the scripts are invoked by dtk. The structure below shows the call tree as the scripts are invoked to create a template VM, and then provision a cluster using the template. See the Narrative section that follows for a description of each numeric annotation:
dtk
├─ source artifacts (1)
└─ create-cluster
├─ scripts/helpers/download-objects (2)
│ └─ scripts/helpers/download-obj
│
├─ scripts/kvm/provision-vms (3) (if kvm)
│ ├─ scripts/kvm/create-template-vm
│ │ └─ scripts/vm/gen-ssh-keyfiles
│ ├─ scripts/kvm/clone-vm
│ └─ scripts/kvm/configure-etc-hosts
│
├─ scripts/virtualbox/provision-vms (3) (if vbox)
│ ├─ scripts/virtualbox/create-template-vm
│ │ ├─ scripts/vm/gen-ssh-keyfiles
│ │ ├─ scripts/os/gen-kickstart-iso
│ │ ├─ scripts/virtualbox/create-vm
│ │ └─ scripts/virtualbox/install-guest-additions
│ ├─ scripts/virtualbox/clone-vm
│ │ └─ scripts/virtualbox/configure-hostonly-networking
│ │ └─ scripts/virtualbox/gen-hostonly-ifcfg-iso
│ └─ scripts/virtualbox/configure-etc-hosts
│
├─ scripts/cluster/gen-root-ca (4)
│
├─ scripts/cluster/gen-core-k8s (5)
│ ├─ scripts/worker/configure-worker (5a)
│ │ ├─ scripts/os/configure-firewall
│ │ ├─ scripts/worker/misc/install-misc-bins
│ │ ├─ scripts/worker/containerd/install-containerd
│ │ ├─ scripts/worker/kubelet/install-kubelet
│ │ └─ scripts/worker/kube-proxy/install-kube-proxy
│ └─ scripts/control-plane/configure-controller (5b)
│ ├─ scripts/os/configure-firewall
│ ├─ scripts/control-plane/etcd/install-etcd
│ ├─ scripts/control-plane/kube-apiserver/install-kube-apiserver
│ ├─ scripts/control-plane/kube-controller-manager/install-kube-controller-manager
│ └─ scripts/control-plane/kube-scheduler/install-kube-scheduler
│
└─ scripts/addons/install-addons (6)
Narrative¶
The dtk script is the top-level script. To provision a new cluster, the work is all done in the create-cluster script:
1. The `artifacts` file is sourced, which defines all the upstream URLs and local filesystem locations for the core objects needed to provision the cluster. Then the `create-cluster` script is called to provision the cluster.
2. All the binaries, ISOs, manifests, and tarballs needed to provision the core cluster are downloaded into the `binaries` directory based on configuration options. E.g. if config specifies `linux: rocky`, then `Rocky-X.X-x86_64-dvd.iso` is downloaded (X.X based on whatever is hard-coded in the `artifacts` file).
3. All the VMs are created:
   - If config specifies `create-template: true`, then ssh keys are generated, and a template VM is created using Kickstart and a CentOS / Alma / Rocky ISO depending on the `linux` selection. The ssh public key is copied into the VM's `authorized_keys` file.
   - The template VM (the one created in the prior step, or one that was already there, identified by the `template-vmname` config) is cloned to create the VM(s) that comprise the Kubernetes cluster, so each VM has an identical configuration.
4. A root CA is generated for the cluster if one does not already exist. This CA is used to sign cluster certs throughout the remainder of the cluster provisioning process.
5. The core Kubernetes cluster is created by installing the canonical Kubernetes components on each VM:
   - 5a: Each worker gets a unique TLS cert/key for its `kubelet`, a few binaries (`crictl`, `runc`, and the `cni` plugins), and of course the `kubelet` and `containerd`.
   - 5b: The controller is provisioned with cluster TLS, `etcd`, the `api server`, `controller manager`, and `scheduler`. This project runs with a single controller to minimize the desktop footprint.
6. The `install-addons` script is called. It walks its own directory and, for each subdirectory that matches an entry in the `addons` section of `config.yaml`, looks for and invokes an `install` script in that directory to install the add-on.
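The add-on walk in step 6 reduces to a simple directory convention. The sketch below is assumed logic, not the actual `install-addons` script, and the add-on names are hard-coded for illustration (in the real project they come from the `addons` section of `config.yaml`):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Run the 'install' script for one add-on, if its directory provides one.
install_addon() {
  local addons_dir="$1" addon="$2"
  local installer="$addons_dir/$addon/install"
  if [[ -x "$installer" ]]; then
    "$installer"
  else
    echo "skipping $addon: no install script"
  fi
}

# Illustrative names only; the real list comes from config.yaml.
for addon in cilium coredns; do
  install_addon "scripts/addons" "$addon"
done
```

The convention keeps `install-addons` itself generic: adding a new add-on means adding a subdirectory with an `install` script, not editing the walker.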
On completion, you have a functional Kubernetes cluster consisting of one or more guest VMs, the first of which is always a controller, and the remainder of which are workers.
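The root CA and per-component certs in steps 4 and 5 follow the standard pattern of signing component CSRs with a cluster CA. The project uses its own scripts for this; the openssl commands below are only an illustrative equivalent (names and subjects are hypothetical):

```shell
#!/usr/bin/env bash
set -euo pipefail
workdir="$(mktemp -d)"

# 1. Create a self-signed root CA (key + certificate).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=desktop-kubernetes-ca" \
  -keyout "$workdir/ca-key.pem" -out "$workdir/ca-cert.pem"

# 2. Create a key and CSR for one component (e.g. a worker's kubelet).
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=system:node:worker1" \
  -keyout "$workdir/kubelet-key.pem" -out "$workdir/kubelet.csr"

# 3. Sign the CSR with the root CA.
openssl x509 -req -in "$workdir/kubelet.csr" \
  -CA "$workdir/ca-cert.pem" -CAkey "$workdir/ca-key.pem" \
  -CAcreateserial -days 365 -out "$workdir/kubelet-cert.pem"

# 4. Verify the signed cert chains back to the CA.
openssl verify -CAfile "$workdir/ca-cert.pem" "$workdir/kubelet-cert.pem"
```

Because every cert chains to the one root CA, each component can trust every other component's TLS identity given only the CA certificate.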