Getting Started with MicroShift on the Mac (Ubuntu VM) and Fedora

I ran across the MicroShift project, and what I liked most about it is the modest system requirements. To run MicroShift, you need a machine with at least:

  • a supported 64-bit CPU architecture (amd64/x86_64, arm64, or riscv64)
  • 2 CPU cores
  • 2GB of RAM
  • 1GB of free storage space for MicroShift

With this post, I just want to keep notes on how I got it up and running.

I'll be trying it out on a Mac, so I'll be using Multipass to create an Ubuntu VM and run MicroShift there. So far Multipass has been good (it doesn't make this Mac sound like an airplane taking off), it supports the Apple M1, and the name reminds me of the movie The Fifth Element 😀.

Install Multipass

Check this link on how to install Multipass: https://multipass.run/docs/installing-on-macos

Create a VM

multipass launch --name microshift --cpus 4 --mem 8G --disk 50G 20.04 --cloud-init - <<EOF
package_update: true
package_upgrade: true
packages:
  - avahi-daemon
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg
  - lsb-release
  - jq
  - tar
EOF

This will create a VM with 4 CPUs, 8GB of memory, and 50GB of disk space using the Ubuntu 20.04 image, then update and upgrade the existing packages and install a few packages we'll need.

Deploying MicroShift on Ubuntu

Next, I'm going to install CRI-O, Podman, and the oc & kubectl CLIs, and set a default StorageClass.

# exec into the vm
multipass exec microshift -- bash

# install cri-o
curl --silent https://raw.githubusercontent.com/cri-o/cri-o/v1.25.1/scripts/get | sudo bash -s -- -t v1.25.1

# enable it
sudo systemctl enable crio --now

# install podman
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
curl -L "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/Release.key" | sudo apt-key add -
sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get -y install podman

# install the microshift.service
sudo curl -o /etc/systemd/system/microshift.service https://raw.githubusercontent.com/redhat-et/microshift/main/packaging/systemd/microshift-containerized.service

# enable it
sudo systemctl daemon-reload
sudo systemctl enable microshift --now

# install oc & kubectl
curl -O https://mirror.openshift.com/pub/openshift-v4/$(uname -m)/clients/ocp/stable/openshift-client-linux.tar.gz
sudo tar -xf openshift-client-linux.tar.gz -C /usr/local/bin oc kubectl

# copy kubeconfig to access our new cluster
mkdir -p ~/.kube
sudo podman cp microshift:/var/lib/microshift/resources/kubeadmin/kubeconfig ~/.kube/config
sudo chown "$(whoami)": ~/.kube/config

# ( optional ) set kubevirt-hostpath-provisioner as the default StorageClass
kubectl patch storageclass kubevirt-hostpath-provisioner -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
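That patch simply adds the is-default-class annotation to the StorageClass object's metadata. A rough sketch of the same merge using jq on a trimmed, made-up sample object (not a real cluster object, just for illustration):

```shell
# Trimmed, made-up StorageClass object for illustration
sc='{"metadata":{"name":"kubevirt-hostpath-provisioner","annotations":{}}}'

# Merge in the annotation, the same way the patch above does
patched=$(echo "$sc" | jq '.metadata.annotations["storageclass.kubernetes.io/is-default-class"] = "true"')

# The annotation is now set on the object
val=$(echo "$patched" | jq -r '.metadata.annotations["storageclass.kubernetes.io/is-default-class"]')
echo "$val"
```

With that annotation in place, PersistentVolumeClaims that don't name a storageClassName get this class by default.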

Deploying MicroShift on Fedora

# install cri-o
sudo dnf module enable -y cri-o:1.21
sudo dnf install -y cri-o cri-tools jq tar gnupg curl openssl unzip wget
sudo systemctl enable crio --now

# install microshift
sudo dnf install -y 'dnf-command(copr)'
sudo dnf copr enable -y @redhat-et/microshift
sudo dnf install -y microshift

# Get the Fedora version number
version=$(awk '{print $3}' /etc/fedora-release)

if [[ "$version" -ge 36 ]]; then
  sudo rm -rf /var/lib/microshift
  wget https://github.com/openshift/microshift/releases/download/nightly/microshift-linux-amd64 -O microshift
  sudo install -v -o root -g root -m 775 ./microshift /usr/bin/
  rm -fv microshift
fi
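The version check above keys off the third whitespace-separated field of /etc/fedora-release (which normally reads something like "Fedora release 36 (Thirty Six)"). A small sketch of that parsing logic, with the release strings made up for illustration:

```shell
# Third whitespace-separated field of the release string is the version number
parse_fedora_version() {
  echo "$1" | awk '{print $3}'
}

v=$(parse_fedora_version "Fedora release 36 (Thirty Six)")
if [ "$v" -ge 36 ]; then
  echo "use the nightly binary"   # the branch taken above on Fedora 36+
else
  echo "use the copr package"
fi
```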

# open some firewalls
sudo firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --zone=public --add-port=443/tcp --permanent
sudo firewall-cmd --zone=public --add-port=5353/udp --permanent
sudo firewall-cmd --reload

# enable it
sudo systemctl enable microshift --now

# install oc & kubectl
curl -O https://mirror.openshift.com/pub/openshift-v4/$(uname -m)/clients/ocp/stable/openshift-client-linux.tar.gz
sudo tar -xf openshift-client-linux.tar.gz -C /usr/local/bin oc kubectl
rm -fv openshift-client-linux.tar.gz

# copy kubeconfig to access our new cluster
mkdir -p ~/.kube
# Workaround for https://github.com/openshift/microshift/issues/630
until sudo [ -f /var/lib/microshift/resources/kubeadmin/kubeconfig ] ; do
  sleep 3
done
sudo cp /var/lib/microshift/resources/kubeadmin/kubeconfig ~/.kube/config
sudo chown "$(whoami)": ~/.kube/config

# ( optional ) set kubevirt-hostpath-provisioner as the default StorageClass
kubectl patch storageclass kubevirt-hostpath-provisioner -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Check on the status

Wait a few minutes, or use watch kubectl get pod -A to check on the progress of the pods as they get deployed. After a few minutes, you should have a MicroShift cluster that you can start playing with:

$ kubectl get pod -A
NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE
kube-system                     kube-flannel-ds-7c24f                 1/1     Running   0          73m
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-zx6bm   1/1     Running   0          73m
openshift-dns                   dns-default-v6vgq                     2/2     Running   0          73m
openshift-dns                   node-resolver-g9dqg                   1/1     Running   0          73m
openshift-ingress               router-default-6c96f6bc66-kt9tk       1/1     Running   0          73m
openshift-service-ca            service-ca-7bffb6f6bf-fbktd           1/1     Running   0          73m

Install the web console

To install the web console, run:

# Create a serviceAccount
oc create serviceaccount console -n kube-system

# Create a clusterRoleBinding
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system

I've been playing around with kapp lately and so far it's been great, so I'll be using it here...

Now we can deploy the Deployment and Service resources with kapp (or with oc apply -f - <<EOF if you prefer), like:

# Switch to kube-system project/namespace
oc project kube-system

# Deploy the app
kapp deploy --diff-changes --app-changes-max-to-keep 3 --app web-console-deployment --yes -f - <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-console-deployment
  namespace: kube-system
  labels:
    app: console
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console
  template:
    metadata:
      labels:
        app: console
    spec:
      containers:
        - name: console-app
          image: quay.io/openshift/origin-console:latest
          env:
            - name: BRIDGE_USER_AUTH
              value: disabled # no authentication required
            - name: BRIDGE_K8S_MODE
              value: off-cluster
            - name: BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT
              value: https://kubernetes.default #master api
            - name: BRIDGE_K8S_MODE_OFF_CLUSTER_SKIP_VERIFY_TLS
              value: "true" # no tls enabled
            - name: BRIDGE_K8S_AUTH
              value: bearer-token
            - name: BRIDGE_K8S_AUTH_BEARER_TOKEN
              valueFrom:
                secretKeyRef:
                  name: $(kubectl get serviceaccount console --namespace=kube-system -o yaml | awk '/name: console-token/ { print $3}') # console serviceaccount token
                  key: token
---
kind: Service
apiVersion: v1
metadata:
  name: web-console-service
  namespace: kube-system
spec:
  selector:
    app: console
  type: NodePort # nodePort configuration
  ports:
    - name: http
      port: 9000
      targetPort: 9000
      nodePort: 30003
      protocol: TCP
EOF
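A note on the secretKeyRef name above: since the heredoc is unquoted, the shell runs that kubectl command substitution before kapp ever sees the manifest. A sketch of what the awk pipeline extracts, using a trimmed sample of the serviceaccount YAML (the token suffix "abcde" is made up for illustration):

```shell
# Trimmed sample of `kubectl get serviceaccount console -o yaml` output;
# the "abcde" suffix is made up for illustration
sample='apiVersion: v1
kind: ServiceAccount
metadata:
  name: console
  namespace: kube-system
secrets:
- name: console-token-abcde'

# Same awk as in the manifest: match the token secret line and print its
# third field ("-" is $1, "name:" is $2, the secret name is $3)
token_secret=$(echo "$sample" | awk '/name: console-token/ { print $3 }')
echo "$token_secret"
```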

View a list of the live objects that we just deployed:

$ kapp inspect --app web-console-deployment --tree
Target cluster 'https://127.0.0.1:6443' (nodes: microshift)

Resources in app 'web-console-deployment'

Namespace    Name                                          Kind           Owner    Conds.  Rs  Ri  Age  
kube-system  web-console-deployment                        Deployment     kapp     2/2 t   ok  -   1m  
kube-system   L web-console-deployment-68b97ff495          ReplicaSet     cluster  -       ok  -   1m  
kube-system   L.. web-console-deployment-68b97ff495-6qxlg  Pod            cluster  4/4 t   ok  -   1m  
kube-system  web-console-service                           Service        kapp     -       ok  -   1m  
kube-system   L web-console-service                        Endpoints      cluster  -       ok  -   1m  
kube-system   L web-console-service-97jw2                  EndpointSlice  cluster  -       ok  -   1m  

Rs: Reconcile state
Ri: Reconcile information

6 resources

Succeeded

or use oc/kubectl

$ oc get deployment,service,pod
NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/web-console-deployment   1/1     1            1           1m

NAME                          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/web-console-service   NodePort   10.3.23.37      <none>        9000:30003/TCP   1m

NAME                                          READY   STATUS    RESTARTS   AGE
pod/kube-flannel-ds-7c24f                     1/1     Running   0          26h
pod/web-console-deployment-68b97ff495-6qxlg   1/1     Running   0          1m

and you can check the logs

$ kapp logs -f --app web-console-deployment # or oc logs -f deployment.apps/web-console-deployment
Target cluster 'https://127.0.0.1:6443' (nodes: microshift)

# starting tailing 'web-console-deployment-68b97ff495-6qxlg > console-app' logs
web-console-deployment-68b97ff495-6qxlg > console-app | W0505 05:37:40.949834       1 main.go:213] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!
web-console-deployment-68b97ff495-6qxlg > console-app | W0505 05:37:40.949928       1 main.go:347] cookies are not secure because base-address is not https!
web-console-deployment-68b97ff495-6qxlg > console-app | W0505 05:37:40.949958       1 main.go:652] running with AUTHENTICATION DISABLED!
web-console-deployment-68b97ff495-6qxlg > console-app | I0505 05:37:40.951200       1 main.go:768] Binding to 0.0.0.0:9000...
web-console-deployment-68b97ff495-6qxlg > console-app | I0505 05:37:40.951236       1 main.go:773] not using TLS
web-console-deployment-68b97ff495-6qxlg > console-app | 2022/05/05 05:37:50 CheckOrigin: Proxy has no configured Origin. Allowing origin [http://localhost:30003] to wss://kubernetes.default/apis/apps/v1/namespaces/default/deployments?watch=true&resourceVersion=40050

Add VM's IP address to /etc/hosts

In order to access our newly deployed web console, we need to add the VM's IP address to our Mac's hosts file, /etc/hosts :

# Get the VM's IP address
VM_IP="$(multipass info microshift | awk '/IPv4:/ {print $2}')"

# Add it to /etc/hosts
echo "${VM_IP}    microshift.local" | sudo tee -a /etc/hosts
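To sanity-check that the entry landed, you can grep the hosts file. A sketch of the same append-and-check against a temp file instead of the real /etc/hosts, with a made-up example address:

```shell
# Temp file stands in for the real /etc/hosts; the address is made up
hosts_file=$(mktemp)
VM_IP="192.168.64.2"

# Append the entry, as above
echo "${VM_IP}    microshift.local" >> "$hosts_file"

# Confirm it's there
entry=$(grep "microshift.local" "$hosts_file")
echo "$entry"
rm -f "$hosts_file"
```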

And if everything went well, we should be able to access the web console at http://microshift.local:30003/.

I hope that helps you get started with MicroShift on your Mac machine.