# Virtualization - Talos Linux

## Intro

This guide covers running Talos Linux in KubeVirt.

## Deploying a Talos cluster using a LoadBalancer service

### Deploying the control plane

Deploy the LoadBalancer service for the control plane:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: talos-cp
spec:
  ports:
  - name: https
    port: 6443
    protocol: TCP
    targetPort: 6443
  - name: api
    port: 50000
    protocol: TCP
    targetPort: 50000
  selector:
    app: talos
  sessionAffinity: None
  type: LoadBalancer
```

Get the IP of the created LoadBalancer service:
```shell
export LOADBALANCER_IP=`kubectl get svc talos-cp -ojson | jq -r '.status.loadBalancer.ingress[0].ip'`
echo $LOADBALANCER_IP
```

Deploy the VM for the control plane:
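The ingress list is empty until the cloud controller assigns an address, so the command above can return `null` right after the Service is created. A small poll loop avoids that; this sketch (the helper names `lb_ip` and `wait_for_lb` are ours) uses kubectl's built-in jsonpath output, so `jq` is not required:

```shell
# Hypothetical helpers, not part of the guide's tooling.
# lb_ip prints the first LoadBalancer ingress IP, or nothing if unassigned.
lb_ip() {
  kubectl get svc talos-cp -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
}

# wait_for_lb polls until lb_ip returns a non-empty value, then prints it.
wait_for_lb() {
  until ip=$(lb_ip) && [ -n "$ip" ]; do
    echo "waiting for LoadBalancer IP..." >&2
    sleep 5
  done
  echo "$ip"
}

# Usage: export LOADBALANCER_IP=$(wait_for_lb)
```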
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: talos-cp01
  labels:
    app: talos
    role: controlplane
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        app: talos
        role: controlplane
    spec:
      dnsPolicy: None
      dnsConfig:
        nameservers:
        - 8.8.8.8
        - 8.8.4.4
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - ucsd
                - ucsd-sdsc
                - ucsd-nrp
      terminationGracePeriodSeconds: 0
      domain:
        clock:
          timer: {}
          utc: {}
        cpu:
          model: host-passthrough
          threads: 4
        machine:
          type: q35
        resources:
          limits:
            devices.kubevirt.io/kvm: "1"
            memory: 4G
            cpu: 4
          requests:
            devices.kubevirt.io/kvm: "1"
            memory: 2G
            cpu: 2
        devices:
          rng: {}
          autoattachSerialConsole: true
          autoattachGraphicsDevice: true
          disks:
          - name: talos-cp01-disk-vda-root
            bootOrder: 1
            disk:
              bus: virtio
      volumes:
      - name: talos-cp01-disk-vda-root
        dataVolume:
          name: talos-cp01-root
  dataVolumeTemplates:
  - metadata:
      name: talos-cp01-root
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        storageClassName: linstor-igrok
        resources:
          requests:
            storage: 40G
      source:
        http:
          url: https://github.com/siderolabs/talos/releases/download/v1.12.2/metal-amd64.iso
```

### Generate the Talos configs
With the patch.yaml file in the current folder:
```yaml
# patch.yaml
cluster:
  network:
    podSubnets:
    - 10.224.0.0/16
    serviceSubnets:
    - 10.98.0.0/12
machine:
  features:
    hostDNS:
      enabled: true
```

Run:
```shell
talosctl gen config talos-dev https://$LOADBALANCER_IP:6443 --output-dir talos-config --config-patch @patch.yaml --additional-sans talos-cp,$LOADBALANCER_IP --install-disk /dev/vda
talosctl --talosconfig=talos-config/talosconfig config endpoints $LOADBALANCER_IP
```

When the VM is running (it might take some time to pull the images), open a VNC client to see the control plane console:
```shell
virtctl vnc talos-cp01
```

Wait for the head node to go into maintenance mode, then run `talosctl apply-config`:

```shell
talosctl apply-config -n $LOADBALANCER_IP --insecure -f talos-config/controlplane.yaml
```

The VM will reboot. Once it is back up and healthy (check in VNC), run the bootstrap; the VM will reboot again.
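If a VNC client is not handy, the serial console works as well, since Talos logs boot progress there too:

```shell
# Attach to the VM's serial console instead of VNC (Ctrl+] to detach).
virtctl console talos-cp01
```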
```shell
talosctl bootstrap -n $LOADBALANCER_IP --talosconfig talos-config/talosconfig
```

The Talos cluster will initialize. Get the kubeconfig for the cluster:
```shell
talosctl kubeconfig kubeconfig -n $LOADBALANCER_IP --talosconfig=talos-config/talosconfig
```

Check that you can `get nodes` for the control plane:

```shell
KUBECONFIG=./kubeconfig kubectl get nodes
```
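Optionally, `talosctl` ships a built-in health check that waits for etcd, the API server, and the nodes to come up, which is a more thorough test than a single `get nodes`:

```shell
# Runs Talos' built-in cluster health checks against the control plane node.
talosctl health -n $LOADBALANCER_IP --talosconfig talos-config/talosconfig
```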
### Deploying the workers

Deploy the workers VMPool:
```yaml
apiVersion: pool.kubevirt.io/v1beta1
kind: VirtualMachinePool
metadata:
  name: talos-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      kubevirt.io/vmpool: talos-worker
  virtualMachineTemplate:
    metadata:
      labels:
        kubevirt.io/vmpool: talos-worker
    spec:
      runStrategy: Always
      template:
        metadata:
          labels:
            kubevirt.io/vmpool: talos-worker
        spec:
          dnsPolicy: None
          dnsConfig:
            nameservers:
            - 8.8.8.8
            - 8.8.4.4
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                    - ucsd
                    - ucsd-sdsc
                    - ucsd-nrp
          terminationGracePeriodSeconds: 0
          domain:
            clock:
              timer: {}
              utc: {}
            cpu:
              model: host-passthrough
              threads: 4
            machine:
              type: q35
            resources:
              limits:
                devices.kubevirt.io/kvm: "1"
                memory: 8G
                cpu: 4
              requests:
                devices.kubevirt.io/kvm: "1"
                memory: 4G
                cpu: 2
            devices:
              rng: {}
              autoattachSerialConsole: true
              autoattachGraphicsDevice: true
              disks:
              - name: talos-worker01-disk-vda-root
                bootOrder: 1
                disk:
                  bus: virtio
          volumes:
          - name: talos-worker01-disk-vda-root
            dataVolume:
              name: talos-worker01-root
      dataVolumeTemplates:
      - metadata:
          name: talos-worker01-root
        spec:
          pvc:
            accessModes:
            - ReadWriteOnce
            storageClassName: linstor-igrok
            resources:
              requests:
                storage: 10G
          source:
            http:
              url: https://github.com/siderolabs/talos/releases/download/v1.12.2/metal-amd64.iso
```

Deploy the Headless service for workers:
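To grow or shrink the worker pool later, change `spec.replicas` in place; `kubectl scale` should also work, assuming your KubeVirt version exposes the scale subresource on VirtualMachinePool (newly added workers still need the `apply-config` step below):

```shell
# Assumes the VirtualMachinePool scale subresource is available.
kubectl scale virtualmachinepool talos-worker --replicas=5
```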
```yaml
apiVersion: v1
kind: Service
metadata:
  name: talos-workers
spec:
  clusterIP: None
  ports:
  - name: https
    port: 6443
    protocol: TCP
    targetPort: 6443
  - name: api
    port: 50000
    protocol: TCP
    targetPort: 50000
  selector:
    kubevirt.io/vmpool: talos-worker
  sessionAffinity: None
  type: ClusterIP
```

Upload the generated configs to the NRP cluster:
```shell
kubectl create secret generic talos-secrets --from-file=talos-config
```

Run the talosctl deployment to get access to the workers:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: talosctl
  labels:
    app: talosctl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: talosctl
  template:
    metadata:
      labels:
        app: talosctl
    spec:
      containers:
      - name: deploy
        image: alpine
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 1
            memory: 1Gi
        command:
        - sh
        - -c
        - |
          apk add curl openssl bind-tools;
          curl -sL https://talos.dev/install | sh;
          sleep infinity;
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/talos-config"
          name: config
      volumes:
      - name: config
        secret:
          secretName: talos-secrets
```

Once all worker VMs are running, exec into the `talosctl` deployment and bootstrap the workers from it:

```shell
kubectl exec -it deploy/talosctl -- sh
dig talos-workers.<your_namespace>.svc.cluster.local +short | xargs -I{} talosctl apply-config -n {}:50000 --insecure -f /talos-config/worker.yaml
```

You should see the worker nodes appear in the cluster.
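It can help to see how the `dig | xargs` pipeline fans out before running it for real. This dry run needs no cluster: the sample IPs are made up, and `echo` stands in for `talosctl` so the per-worker expansion is visible:

```shell
# Dry run of the fan-out: dig +short prints one pod IP per line, and
# xargs -I{} substitutes each line into a separate apply-config call.
printf '10.244.1.5\n10.244.2.7\n' \
  | xargs -I{} echo talosctl apply-config -n {}:50000 --insecure -f /talos-config/worker.yaml
```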
## Deploying a Talos cluster using Ingress (WIP)

Coming soon…
