Installation of idpbuilder

Local installation with KIND Kubernetes

The idpbuilder uses KIND as its Kubernetes cluster. It is recommended to run the installation inside a virtual machine: MMS Linux clients cannot run KIND natively on the local machine because of network problems; pods, for example, cannot connect to the internet.

Windows and Mac users already utilize a virtual machine for the Docker Linux environment.

Prerequisites

  • Docker Engine
  • Go
  • kubectl
  • kind

Build process

For building idpbuilder the source code needs to be downloaded and compiled:

git clone https://github.com/cnoe-io/idpbuilder.git
cd idpbuilder
go build

The idpbuilder binary will be created in the current directory.

Start idpbuilder

To start the idpbuilder binary execute the following command:

./idpbuilder create --use-path-routing --log-level debug --package https://github.com/cnoe-io/stacks//ref-implementation

Logging into ArgoCD

At the end of the idpbuilder run, a link to the installed ArgoCD instance is shown. The credentials for access can be obtained by executing:

./idpbuilder get secrets
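
If the idpbuilder binary is not at hand, the same password can also be read directly from the cluster. This sketch assumes the standard Argo CD secret name, argocd-initial-admin-secret, in the argocd namespace:

```shell
# Decode the initial ArgoCD admin password from its Kubernetes secret
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath='{.data.password}' | base64 -d
```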

Logging into KIND

A Kubernetes config is created in the default location $HOME/.kube/config. Managing this config carefully is recommended so that access to other clusters, such as the OSC, is not unintentionally lost.
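
One simple way to do this is to keep the KIND cluster in its own kubeconfig file and switch contexts explicitly. The file name below is an example, and the context name assumes idpbuilder's default cluster name "localdev":

```shell
# Keep the KIND cluster in a separate kubeconfig file instead of the
# default ~/.kube/config (file name is an example)
export KUBECONFIG="$HOME/.kube/idpbuilder-kind.config"

# List all known contexts and switch explicitly; "kind-localdev" assumes
# idpbuilder's default cluster name "localdev"
kubectl config get-contexts
kubectl config use-context kind-localdev
```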

To show all running KIND nodes execute:

kubectl get nodes -o wide

To see all running pods:

kubectl get pods -o wide

Next steps

Follow this documentation: https://github.com/cnoe-io/stacks/tree/main/ref-implementation

Delete the idpbuilder KIND cluster

The cluster can be deleted by executing:

idpbuilder delete cluster

Remote installation into a bare metal Kubernetes instance

CNOE provides two implementations of an IDP:

  • Amazon AWS implementation
  • KIND implementation

Neither is usable on bare metal or an OSC instance as-is. The Amazon implementation is complex and relies on Terraform, which is currently supported by neither bare metal nor the OSC. Therefore the KIND implementation is used and customized to support the idpbuilder installation. The idpbuilder also performs some network magic that needs to be replicated.

Several prerequisites have to be provided to support the idpbuilder on bare metal or the OSC:

  • Kubernetes dependencies
  • Network dependencies
  • Changes to the idpbuilder

Prerequisites

Talos Linux is chosen as the bare metal Kubernetes instance.

  • talosctl
  • Go
  • Docker Engine
  • kubectl
  • kustomize
  • helm
  • nginx

As soon as the idpbuilder works correctly on bare metal, the next step is to apply it to an OSC instance.

Add *.cnoe.localtest.me to hosts file

Append these lines to /etc/hosts:

127.0.0.1 gitea.cnoe.localtest.me
127.0.0.1 cnoe.localtest.me
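
The entries can be verified with getent, which resolves names the same way normal applications do:

```shell
# Both names should resolve to 127.0.0.1
getent hosts gitea.cnoe.localtest.me
getent hosts cnoe.localtest.me
```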

Install nginx and configure it

Install nginx by executing:

sudo apt install nginx

Replace /etc/nginx/sites-enabled/default with the following content:

server {
        listen 8443 ssl default_server;
        listen [::]:8443 ssl default_server;

        include snippets/snakeoil.conf;

        location / {
                    proxy_pass http://10.5.0.20:80;
                    proxy_http_version                 1.1;
                    proxy_cache_bypass                 $http_upgrade;
                    proxy_set_header Host              $host;
                    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
                    proxy_set_header X-Real-IP         $remote_addr;
                    proxy_set_header X-Forwarded-Host  $host;
                    proxy_set_header X-Forwarded-Proto $scheme;
        }
}

Start nginx by executing:

sudo systemctl enable nginx
sudo systemctl restart nginx
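
A quick check that the proxy is up: validate the configuration syntax, then request the TLS endpoint. The -k flag is needed because the snakeoil certificate is self-signed:

```shell
# Validate the nginx configuration syntax
sudo nginx -t

# Request the proxy; -k skips verification of the self-signed certificate
curl -k https://cnoe.localtest.me:8443/
```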

Building idpbuilder

For building idpbuilder the source code needs to be downloaded and compiled:

git clone https://github.com/cnoe-io/idpbuilder.git
cd idpbuilder
go build

The idpbuilder binary will be created in the current directory.

Configure VS Code launch settings

Open the idpbuilder folder in VS Code:

code .

Create a new launch configuration and add the "args" parameter to it:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch Package",
            "type": "go",
            "request": "launch",
            "mode": "auto",
            "program": "${fileDirname}",
            "args": ["create", "--use-path-routing", "--package", "https://github.com/cnoe-io/stacks//ref-implementation"]
        }
    ]
}

Create the Talos bare metal Kubernetes instance

By default, Talos creates Docker containers, similar to KIND. Create the cluster by executing:

talosctl cluster create

Install local path provisioning (storage)

mkdir -p localpathprovisioning
cd localpathprovisioning
cat > kustomization.yaml <<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- github.com/rancher/local-path-provisioner/deploy?ref=v0.0.26
patches:
- patch: |-
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: local-path-config
      namespace: local-path-storage
    data:
      config.json: |-
        {
                "nodePathMap":[
                {
                        "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                        "paths":["/var/local-path-provisioner"]
                }
                ]
        }
- patch: |-
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-path
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
- patch: |-
    apiVersion: v1
    kind: Namespace
    metadata:
      name: local-path-storage
      labels:
        pod-security.kubernetes.io/enforce: privileged
EOF
kustomize build | kubectl apply -f -
rm kustomization.yaml
cd ..
rmdir localpathprovisioning
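
Whether the provisioner works can be checked with a throwaway claim. Note that the local-path storage class binds on first consumer, so a freshly created PVC staying in Pending is expected until a pod mounts it:

```shell
# Create a small test claim against the default (local-path) storage class
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-test
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Mi
EOF

# With volumeBindingMode WaitForFirstConsumer the claim stays Pending
# until a pod mounts it
kubectl get pvc storage-test

# Clean up
kubectl delete pvc storage-test
```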

Install an external load balancer

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml
# Wait until the MetalLB pods are ready before applying the pools
kubectl wait --namespace metallb-system \
  --for=condition=ready pod --selector=app=metallb --timeout=90s

cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.5.0.20-10.5.0.130
EOF

cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
EOF
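
MetalLB can be verified with a throwaway LoadBalancer service; it should receive an external IP from the 10.5.0.20-10.5.0.130 pool:

```shell
# Create a test deployment and expose it as a LoadBalancer service
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer

# EXTERNAL-IP should show an address from the MetalLB pool
kubectl get svc lb-test

# Clean up
kubectl delete svc lb-test
kubectl delete deployment lb-test
```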

Install an ingress controller which uses the external load balancer

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
# Wait until the ingress controller pod is ready
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller --timeout=120s
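
The nginx reverse proxy configured earlier forwards to 10.5.0.20, which is the first address of the MetalLB pool, so the ingress controller service should have been assigned exactly that IP:

```shell
# EXTERNAL-IP should be 10.5.0.20, matching the proxy_pass target
kubectl -n ingress-nginx get svc ingress-nginx-controller
```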

Execute idpbuilder

Modify the idpbuilder source code

Edit the function Run in pkg/build/build.go and comment out the creation of the KIND cluster:

	/*setupLog.Info("Creating kind cluster")
	if err := b.ReconcileKindCluster(ctx, recreateCluster); err != nil {
		return err
	}*/

Compile the idpbuilder

go build

Start idpbuilder

Then, in VS Code, switch to main.go in the root directory of the idpbuilder and start debugging.

Logging into ArgoCD

At the end of the idpbuilder run, a link to the installed ArgoCD instance is shown. The credentials for access can be obtained by executing:

./idpbuilder get secrets

Logging into Talos cluster

A Kubernetes config is created in the default location $HOME/.kube/config. Managing this config carefully is recommended so that access to other clusters, such as the OSC, is not unintentionally lost.

To show all running Talos nodes execute:

kubectl get nodes -o wide

To see all running pods:

kubectl get pods -o wide

Delete the idpbuilder Talos cluster

The cluster can be deleted by executing:

talosctl cluster destroy

TODOs for running idpbuilder on bare metal or OSC

Required:

  • Add *.cnoe.localtest.me to the Talos cluster DNS, pointing to the IP address of the host device that runs nginx.

  • Create an SSL certificate with cnoe.localtest.me as common name. Edit the nginx config to load this certificate. Configure idpbuilder to distribute this certificate instead of the one it distributes by default.
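
Such a certificate could be generated with openssl, for example as follows (file names are placeholders; the wildcard SAN covers the subdomains). The nginx config would then reference these files via ssl_certificate and ssl_certificate_key instead of including snippets/snakeoil.conf:

```shell
# Self-signed certificate with cnoe.localtest.me as common name and a
# wildcard SAN for all subdomains, valid for one year
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout cnoe.localtest.me.key -out cnoe.localtest.me.crt \
  -subj "/CN=cnoe.localtest.me" \
  -addext "subjectAltName=DNS:cnoe.localtest.me,DNS:*.cnoe.localtest.me"
```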

Optimizations:

  • Implement an idpbuilder uninstall. This is especially important when working on the OSC instance.

  • Remove or configure gitea.cnoe.localtest.me; it does not seem to work even in the local idpbuilder installation with KIND.

  • Improve the idpbuilder to support Kubernetes instances other than KIND. This can be done either by parametrization or by utilizing Terraform/OpenTofu or Crossplane.