Identity Federation through Keycloak/GitHub

Overview


TL;DR

In this article we deploy Keycloak on a CCE cluster in Open Telekom Cloud, backed by a managed PostgreSQL database on RDS, and expose it to the outside world through an Elastic Load Balancer and the Nginx Ingress controller, as the foundation for identity federation scenarios such as using GitHub as an identity provider.

Open Telekom Cloud services used: Virtual Private Cloud (VPC), Relational Database Service (RDS) for PostgreSQL, Domain Name Service (DNS), Cloud Container Engine (CCE), Elastic Load Balancer (ELB) and Elastic IP (EIP).

Create a VPC and a Subnet

We are going to need a Virtual Private Cloud (VPC) and at least one Subnet, where we are going to provision both the RDS instances and the CCE nodes. For enhanced security granularity, we could split those resources across two different Subnets.

image

Warning

RDS and CCE nodes have to be in the same VPC.

Deploy a PostgreSQL with RDS

Keycloak, as a stateful workload, requires persistent storage in order to maintain its data and configuration across pod restarts. We could deploy a PostgreSQL database as a CCE workload, but this would add administrative overhead on your side. The managed Relational Database Service of Open Telekom Cloud is a perfect fit for this scenario: a scalable, turn-key solution that is fully integrated with the rest of the platform's managed services and does not demand additional administrative effort from the consumer.

Create Security Groups

We are going to need two different Security Groups. The first one is for the RDS nodes, so that they can accept client calls on port 5432 (Inbound Rules), but only from clients residing in the same Subnet (in case we went for the single-Subnet solution):

image

And one Security Group for the client nodes that need to access the database (Outbound Rules); in our case these are the CCE nodes on which Keycloak is going to be installed.

image

Provision a Database

Next, we need to provision a PostgreSQL 14 database. Pick the instance class and storage size that fit your needs:

image

and make sure that you:

  • place the RDS nodes in the same VPC as the CCE nodes
  • assign rds-instances as the Security Group for the RDS nodes

image

Create a Private DNS Zone

We are provisioning PostgreSQL in order to support the functionality of Keycloak. Although Open Telekom Cloud exposes this RDS instance via a floating IP address, it is better to connect Keycloak to the RDS instance via a fully qualified domain name and let Open Telekom Cloud's DNS service manage the resolution of that endpoint. In the Domain Name Service management panel, click Private Zones and create a new zone that points to the VPC in which the CCE and RDS nodes are placed:

image

and then click Manage Record Set to add a new A Record to this zone:

image

Note

The domain name will be a fictitious domain representing your solution and not a public one. It can be virtually any domain or subdomain that conforms to FQDN rules (later in this lab we use postgresql.blueprints.arc).

The floating IP of the RDS instance can be found in the Basic Information panel of the database:

image

Provision a CCE Cluster

We are going to need a CCE Cluster. In order to provision one, you can follow the configuration steps of the wizard, paying attention to the following details:

  • We are not going to need an HA cluster; of course, adjust this to your own needs, because it is not something you can change in the future.
  • We need to provision the CCE Cluster in the same VPC as the RDS nodes.
  • If you follow the single-Subnet lab instructions, make sure you place the CCE nodes in the same Subnet in which the RDS nodes reside.

image

Add worker nodes to the CCE cluster using the wizard, and wait for all nodes to become operational. Then add an additional Security Group to each node, namely the rds-client group that we created earlier in this lab.

image

Note

Decide how you are going to access this CCE Cluster afterwards. You can assign an Elastic IP address and access it over the Internet, or provision an additional public-facing bastion host and access it through that machine. We strongly recommend the latter.
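Optionally, before deploying any workloads, you can verify that the A record we created in the Private DNS Zone resolves from inside the cluster. A quick sketch of such a check, assuming the fictitious FQDN postgresql.blueprints.arc that we will reference later in the Keycloak manifest (substitute your own), is to run a throwaway busybox pod:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup postgresql.blueprints.arc

The lookup should return the floating IP of your RDS instance, resolved through the VPC's DNS servers.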

Deploy Keycloak on CCE

We are going to deploy Keycloak using simple Kubernetes manifests. Deploy those YAML manifests in the order described below, using the following command on your bastion host (or on any other machine, if you chose to go for an EIP):

kubectl apply -f <<filename.yaml>>

Deploy Keycloak Secrets

First, we are going to need a Namespace in our CCE Cluster, in which we will deploy all the resources required by Keycloak.
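A minimal manifest for it could look like the following (it assumes the name keycloak, which is the namespace all the resources below reference):

apiVersion: v1
kind: Namespace
metadata:
  # Namespace that will hold all Keycloak-related resources
  name: keycloak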

We are going to need two Secrets: one, postgres-credentials, that will contain the credentials to access the PostgreSQL database instance, and a second one, keycloak-secrets, that will contain the necessary credentials to access the web console of Keycloak:

apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
  namespace: keycloak
type: Opaque
stringData:
  POSTGRES_USER: root
  POSTGRES_PASSWORD: <<POSTGRES_PASSWORD>>
  POSTGRES_DB: postgres
---
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-secrets
  namespace: keycloak
type: Opaque
stringData:
  KEYCLOAK_ADMIN: admin
  KEYCLOAK_ADMIN_PASSWORD: <<KEYCLOAK_ADMIN_PASSWORD>>

Note

POSTGRES_PASSWORD is the password for the root user you provided during the creation of the RDS instance.

KEYCLOAK_ADMIN_PASSWORD, as we mentioned before, is the password for the admin user of the Keycloak web console. You can easily create strong random passwords in a Linux terminal with the following command:

openssl rand -base64 14
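Once you have filled in the placeholders and applied the manifest, you can double-check that both Secrets landed in the keycloak namespace:

kubectl -n keycloak get secrets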

Deploy Keycloak Application

The next step is deploying Keycloak itself:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: keycloak
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: quay.io/keycloak/keycloak:21.0.2
        args: ["start-dev"]
        env:
        - name: KEYCLOAK_ADMIN
          valueFrom:
            secretKeyRef:
              key: KEYCLOAK_ADMIN
              name: keycloak-secrets
        - name: KEYCLOAK_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              key: KEYCLOAK_ADMIN_PASSWORD
              name: keycloak-secrets
        - name: KC_PROXY
          value: "edge"
        - name: KC_HEALTH_ENABLED
          value: "true"
        - name: KC_METRICS_ENABLED
          value: "true"
        - name: KC_HOSTNAME_STRICT_HTTPS
          value: "true"
        - name: KC_LOG_LEVEL
          value: INFO
        - name: KC_DB
          value: postgres
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: POSTGRES_DB
        - name: KC_DB_URL
          value: jdbc:postgresql://postgresql.blueprints.arc:5432/$(POSTGRES_DB)
        - name: KC_DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: POSTGRES_USER
        - name: KC_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: POSTGRES_PASSWORD
        ports:
        - name: http
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /health/ready
            port: 8080
          initialDelaySeconds: 250
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /health/live
            port: 8080
          initialDelaySeconds: 500
          periodSeconds: 30
        resources:
          limits:
            memory: 512Mi
            cpu: "1"
          requests:
            memory: 256Mi
            cpu: "0.2"

As you will notice, we parameterize the credentials portion of this manifest by referencing the variables and values we installed in the previous step with the Secrets. Also important is the KC_DB_URL variable, where we connect Keycloak to the RDS instance using the FQDN we created for it in our Private DNS Zone.
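After applying the manifest, you can follow the rollout and skim the logs to confirm that Keycloak started up and connected to the RDS instance. A couple of standard kubectl commands are enough for that (adjust the namespace if you changed it):

# Wait for the Deployment to become ready
kubectl -n keycloak rollout status deployment/keycloak

# Inspect the last log lines of the Keycloak pod
kubectl -n keycloak logs deployment/keycloak --tail=50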

Deploy Keycloak Service

We deployed the application, but for the time being it is not accessible by any internal or external actor (direct access from Pods does not count in this case). Therefore, we need to deploy a Service that will expose Keycloak's workload:

apiVersion: v1
kind: Service
metadata:
  name: keycloak
  namespace: keycloak
  labels:
    app: keycloak
spec:
  ports:
  - name: https
    port: 443
    targetPort: 8080
  selector:
    app: keycloak
  type: NodePort

Note

Pay attention to the type field, which we set to NodePort. That's because we want to expose this Service externally, in a later step, via an Ingress.
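Once the Service is applied, you can confirm that a node port was allocated for it (the actual port number will differ in your cluster):

kubectl -n keycloak get service keycloak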

Expose Keycloak

image

Create an Elastic Load Balancer

First on our list for this part is to create an Elastic Load Balancer with the following characteristics:

  • An EIP address
  • Support L4 and L7 load balancing
  • Be in the same VPC/Subnet as the nodes of our CCE Cluster
  • Associate backend servers by using their IP addresses (IP as Backend)

image

Note

Note down the ELB ID; we are going to need it to configure the Nginx Ingress that we will deploy next.

Deploy Nginx Ingress on CCE

In this step we are going to deploy the Ingress that will sit between our ELB and the Keycloak Service and expose it at the address of our preference (keycloak.example.com for this lab).

Warning

Do not forget that the FQDN we are going to use to expose the Keycloak Service has to point to a real domain or subdomain that you actually own!

We will use Helm to deploy Nginx Ingress to our CCE Cluster. Helm is the de-facto package manager of Kubernetes, and if you don't already have it installed on your remote machine or your bastion host, you can install it with the following commands:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

We have to provide the Helm chart with a couple of configuration values (overrides.yaml). Among them, the internal ID of the Elastic Load Balancer is the most important, as it binds the future Ingresses created with this ingress class to that specific load balancer.

controller:
  replicaCount: 1
  service:
    externalTrafficPolicy: Cluster
    annotations:
      kubernetes.io/elb.id: "0000000-0000-0000-0000-000000000000"

Note

Special attention is required for the kubernetes.io/elb.id annotation: replace the placeholder value with the ID you copied from the main panel of your newly created Elastic Load Balancer.

We can now install the chart (it will automatically create and deploy everything in a namespace named ingress-nginx):

helm upgrade --install -f overrides.yaml ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
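When the installation finishes, it is worth checking that the controller Pod is running, that its LoadBalancer Service was bound to your ELB (the EXTERNAL-IP column should show its address), and that the nginx IngressClass was registered:

kubectl -n ingress-nginx get pods,svc
kubectl get ingressclass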

Create a Public DNS Endpoint

Create the Endpoint manually

Create the Endpoint with ExternalDNS

Deploy ExternalDNS on CCE
Deploy a Keycloak Endpoint

Deploy Keycloak Ingress
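As a reference point, a minimal sketch of such an Ingress could look like the following. It assumes the nginx IngressClass we installed above and the keycloak.example.com host used throughout this lab, and it omits TLS configuration (in a real deployment you would terminate TLS here with a certificate for your own domain):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  namespace: keycloak
spec:
  ingressClassName: nginx
  rules:
  - host: keycloak.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            # The NodePort Service we created for Keycloak earlier
            name: keycloak
            port:
              number: 443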


Next Steps


Resources


References
