diff --git a/doc/source/best-practices/security/deploy_keycloak.rst b/doc/source/best-practices/security/deploy_keycloak.rst
index 5ce9492..0d607db 100755
--- a/doc/source/best-practices/security/deploy_keycloak.rst
+++ b/doc/source/best-practices/security/deploy_keycloak.rst
@@ -1,3 +1,5 @@
+.. _deploy_keycloak:
+
 .. meta::
    :description: Deploy Keycloak on an Open Telekom Cloud CCE Cluster
    :keywords: keycloak, open telekom cloud, cce, identity federation, cce, kubernetes, rds, postgresql, externaldns
diff --git a/doc/source/best-practices/security/keycloak_github.rst b/doc/source/best-practices/security/keycloak_github.rst
index b495ea7..0657e7e 100755
--- a/doc/source/best-practices/security/keycloak_github.rst
+++ b/doc/source/best-practices/security/keycloak_github.rst
@@ -1,6 +1,6 @@
 .. meta::
-   :description: add a SEO description here
-   :keywords: add SEO keywords here, and list additionally all OTC services used
+   :description: Identity federation with GitHub through Keycloak on Open Telekom Cloud
+   :keywords: keycloak, open telekom cloud, cce, identity federation, kubernetes, github

 ===========================================
 Identity Federation through Keycloak/GitHub
 ===========================================
@@ -11,403 +11,174 @@
 Overview
 ========

-| > *There are no further requirements for an article except to include the following sections at the **end**, and to follow all general Open Telekom Architecture Center content requirements.*
-| > *An Open Telekom Cloud Architecture Center article template, for **external** creators, requires the following sections at the end of the article:*
-
-.. topic:: TL;DR
-
-   | >> Make a brief summary of what is the article about
+Identity Federation in Keycloak refers to the ability to use external identity providers to authenticate users in your
+application. In this context, GitHub can be used as an identity provider, allowing users to log in to your
+Open Telekom Cloud tenant using their GitHub credentials. Users can choose to log in with their GitHub accounts, and
+Keycloak takes care of the authentication process, providing a seamless experience for users while ensuring security
+and centralized identity management for external accounts that are not actively managed in your tenant's IAM.

 .. Main Article

 .. Components

-| > *No header required here*
-| > *(Expected to list all the Open Telekom Cloud components used, but it could be optional if it just an architectural paradigm.*
+Prerequisites
++++++++++++++
+
+For this lab, you are going to need:
+
+#. A **Keycloak** server: a Keycloak instance that is already set up and running.
+#. A **GitHub** account: you need it to register your application and obtain a client ID and secret.

 .. Sections 1..n

-| > *You can name the Section titles as it seems fit to the workflow of the article.*

-Create a VPC and a Subnet
-=========================
-
-We are going to need a Virtual Private Cloud (VPC) and at least one Subnet where we are going
-to provision both RDS instances and CCE nodes. For enhanced security granularity, we could split
-those resources in two different Subnets.
-
-.. image:: /_static/images/SCR-20231208-ezg.png
-
-.. warning:: RDS and CCE nodes have to be on the same VPC.
-
-Deploy a PostgreSQL with RDS
-============================
-
-Keycloak, as a stateful workload, requires the presence of a persistent storage in order to
-maintain its data and configuration during pod restarts.
We could deploy a PostgreSQL database -as a CCE workload, but this would require additional administrative overhead from your side. -The Managed Relational Database Service of Open Telekom Cloud is a perfect fit for this scenario. -A scalable turn-key solution, that fully integrated with the rest of managed services of the platform -without demanding from the consumer additional administrative effort. - -Create Security Groups -++++++++++++++++++++++ - -We are going to need two different Security Groups. One for the RDS nodes, so it can accept client calls -on port ``5432`` (Inbound Rules), which they only reside in the same Subnet (in case we went for a single Subnet solution): - -.. image:: /_static/images/SCR-20231208-fh3.png - -| - -And one Security Group for the client nodes that need to access the database (Outbound Rules), in our case those would -be the CCE nodes where Keycloak is going to be installed on. - -.. image:: /_static/images/SCR-20231208-k2x.png - -Provision a Database -++++++++++++++++++++ - -Now as next, we need to provision a PostgreSQL 14 database. Pick the instance and storage class size that fit your needs: - -.. image:: /_static/images/SCR-20231208-k8t.png - -| - -and make sure that you: - -- you place the RDS nodes in the same VPC with the CCE nodes -- assign ``rds-instances`` as the Security Group for the RDS nodes - -.. image:: /_static/images/SCR-20231208-ka7.png - -Create a Private DNS Zone -+++++++++++++++++++++++++ - -We are provisioning PostgreSQL in order to support the functionality of Keycloak. For that matter, although Open Telekom -Cloud employs this RDS instance with a floating IP address, it would be better that we connect the RDS instance with -Keycloak via a fully qualified domain name and let the Open Telekom Cloud's DNS service to manage the resolution of that -endpoints. In the Domain Name Service management panel click Private Zone and create a new one that points to the VPC -that CCE and RDS nodes are placed: - -.. image:: /_static/images/SCR-20231211-f5u.png - -| - -and then click Manage Record Set to add a new **A Record** to this zone: - -.. image:: /_static/images/SCR-20231211-ffb.png - -| - -.. note:: The domain name, will be a fictitious domain representing your solution and not a public one. It can be - virtually any domain or subdomain that conforms to the a FQDN rules. - -| - -The floating IP of the RDS instance can be found in the Basic Information panel of the database: - -.. image:: /_static/images/SCR-20231211-fj8.png - - -Provision a CCE Cluster -======================= - -We are going to need a CCE Cluster. In order to provision one, you can follow the configuration steps of the wizard -paying attention to the following details: - -- We are not going to need an HA cluster - of course adjust to your needs because this is not something you can - change in the future. -- We need to provision the CCE Cluster in the same VPC as the RDS nodes. -- If you follow the single Subnet lab instructions make sure you place the CCE Nodes in the same Subnet that RDS nodes - reside. - -| - -.. image:: /_static/images/SCR-20231211-fp6.png - -| - -Add worker nodes to the CCE cluster using the wizard, and wait all nodes to become operational. Then add to **each** node -an additional Security Group, in particular the ``rds-client`` that we created earlier in this lab. - -.. image:: /_static/images/SCR-20231211-g7y.png - -.. note:: Make your own decision how you're going to access this CCE Cluster afterwards. 
You can assign an Elastic - IP Address and access it over the Internet or provision and additional public-facing bastion host and access - it through this machine. **We categorically recommend the latter**. - -Deploy Keycloak on CCE -====================== - -We are going to deploy Keycloak using simple Kubernetes manifests. Deploy those YAML manifests in the order described -below using the command on your bastion host (or in any other machine if you chose to go for an EIP): - -.. code-block:: yaml - - kubectl apply -f <> - -Deploy Keycloak Secrets -+++++++++++++++++++++++ - -First we are going to need a Namespace in our CCE Cluster, in order to deploy all the resources required by Keycloak: - -.. code :: shell - - kubectl create namespace keycloak - -We are going to need two Secrets. One, ``postgres-credentials``, that will contain the credentials to access the PostgreSQL -database instance and a second one, ``keycloak-secrets``, that will contain the necessary credential to access the web -console of Keycloak: - -.. code-block:: yaml - :linenos: - :emphasize-lines: 9,20 - - apiVersion: v1 - kind: Secret - metadata: - name: postgres-credentials - namespace: keycloak - type: Opaque - stringData: - POSTGRES_USER: root - POSTGRES_PASSWORD: <> - POSTGRES_DB: postgres - --- - apiVersion: v1 - kind: Secret - metadata: - name: keycloak-secrets - namespace: keycloak - type: Opaque - stringData: - KEYCLOAK_ADMIN: admin - KEYCLOAK_ADMIN_PASSWORD: <> - -.. note:: ``POSTGRES_PASSWORD`` is the password for the ``root`` user your provided during the creation of the RDS instance. - -``KEYCLOAK_ADMIN_PASSWORD``, as we mentioned before, is the password for the ``admin`` user of the Keycloak web console. -You can easily create random strong passwords, in Linux terminal, with the following command: - -.. code-block:: shell - - openssl rand -base64 14 - -Deploy Keycloak Application -+++++++++++++++++++++++++++ - -Next step, is deploying Keycloak itself: - -.. 
code-block:: yaml
-   :linenos:
-   :emphasize-lines: 23,26,27,31,32,48,49,50,51,55,56,60,61
-
-    apiVersion: apps/v1
-    kind: Deployment
-    metadata:
-      name: keycloak
-      namespace: keycloak
-      labels:
-        app: keycloak
-    spec:
-      replicas: 1
-      selector:
-        matchLabels:
-          app: keycloak
-      template:
-        metadata:
-          labels:
-            app: keycloak
-        spec:
-          containers:
-            - name: keycloak
-              image: quay.io/keycloak/keycloak:21.0.2
-              args: ["start-dev"]
-              env:
-                - name: KEYCLOAK_ADMIN
-                  valueFrom:
-                    secretKeyRef:
-                      key: KEYCLOAK_ADMIN
-                      name: keycloak-secrets
-                - name: KEYCLOAK_ADMIN_PASSWORD
-                  valueFrom:
-                    secretKeyRef:
-                      key: KEYCLOAK_ADMIN_PASSWORD
-                      name: keycloak-secrets
-                - name: KC_PROXY
-                  value: "edge"
-                - name: KC_HEALTH_ENABLED
-                  value: "true"
-                - name: KC_METRICS_ENABLED
-                  value: "true"
-                - name: KC_HOSTNAME_STRICT_HTTPS
-                  value: "true"
-                - name: KC_LOG_LEVEL
-                  value: INFO
-                - name: KC_DB
-                  value: postgres
-                - name: POSTGRES_DB
-                  valueFrom:
-                    secretKeyRef:
-                      name: postgres-credentials
-                      key: POSTGRES_DB
-                - name: KC_DB_URL
-                  value: jdbc:postgresql://postgresql.blueprints.arc:5432/$(POSTGRES_DB)
-                - name: KC_DB_USERNAME
-                  valueFrom:
-                    secretKeyRef:
-                      name: postgres-credentials
-                      key: POSTGRES_USER
-                - name: KC_DB_PASSWORD
-                  valueFrom:
-                    secretKeyRef:
-                      name: postgres-credentials
-                      key: POSTGRES_PASSWORD
-          ports:
-            - name: http
-              containerPort: 8080
-          readinessProbe:
-            httpGet:
-              path: /health/ready
-              port: 8080
-            initialDelaySeconds: 250
-            periodSeconds: 10
-          livenessProbe:
-            httpGet:
-              path: /health/live
-              port: 8080
-            initialDelaySeconds: 500
-            periodSeconds: 30
-          resources:
-            limits:
-              memory: 512Mi
-              cpu: "1"
-            requests:
-              memory: 256Mi
-              cpu: "0.2"
-
-As you will notice in the highlighted lines, we parameterize the credentials portion of this manifest by referencing
-the variables and their values we installed in the previous step with the Secrets. Important to mention the significance
-of line 51, where we connect Keycloak with the RDS instance using the FQDN we created in our Private DNS Zone for this
-instance.
-
-Deploy Keycloak Service
-+++++++++++++++++++++++
-
-We deployed the application, but at the time being is not accessible by an internal or external actor (direct access
-from Pods does not count in this case). For that matter, we need to deploy a Service that will expose Keycloak's
-workload:
-
-.. code-block:: yaml
-   :linenos:
-   :emphasize-lines: 15
-
-    apiVersion: v1
-    kind: Service
-    metadata:
-      name: keycloak
-      namespace: keycloak
-      labels:
-        app: keycloak
-    spec:
-      ports:
-        - name: https
-          port: 443
-          targetPort: 8080
-      selector:
-        app: keycloak
-      type: NodePort
-
-.. note:: Pay attention to **line 15**, where we set the ``type`` as ``NodePort``. That's because we want to expose
-   this service externally, in a later step, via an Ingress.
-
-Expose Keycloak
+Deploy Keycloak
 ===============

-.. image:: /_static/images/SCR-20231211-di1.png
+You can follow this blueprint to set up a working instance of Keycloak on CCE:
+:ref:`deploy_keycloak`.
+
+Create a new Realm
+==================
+
+A realm manages users, credentials, roles, and groups. A user belongs to, and logs into, the realm they are assigned
+to. Realms are isolated from one another and can manage and authenticate only the users that belong to them.
+
+Open and log in to your Keycloak instance. Create a new realm (let's call it ``otcac_test_company_1`` for the course of
+this blueprint) and mark it as enabled:
+
+.. image:: /_static/images/SCR-20231212-mfl.png
+
+|
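+
+If you prefer the command line over the Admin Console, the same realm can also be created with the Keycloak Admin CLI.
+The following is only a sketch: it assumes that ``kcadm.sh`` (shipped in the ``bin`` directory of the Keycloak
+distribution) is available on your machine and that the server URL is replaced with the address of your own Keycloak
+instance:
+
+.. code-block:: shell
+
+    # authenticate the CLI against the built-in master realm (you will be prompted for the admin password)
+    kcadm.sh config credentials --server https://keycloak.example.com --realm master --user admin
+
+    # create the new realm and enable it in one call
+    kcadm.sh create realms -s realm=otcac_test_company_1 -s enabled=true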
+
+Create a new Client
+===================
+
+Clients are applications, or services, that can request the authentication of a user. Create a new client (let's call
+it ``otcac_test_company_1_client``) with type ``OpenID Connect`` and, in the *Capability config* step of the wizard,
+activate the following authentication flows:
+
+- Standard flow
+- Implicit flow
+- Direct access grants
+
+.. image:: /_static/images/SCR-20231212-mmx.png
+
+|
+
+Configure Mappers
+=================
+
+Open the management console of the client you just created and navigate to the *Client scopes* tab. Click on the list
+item with the name ``otcac_test_company_1_client-dedicated``:
+
+.. image:: /_static/images/SCR-20231212-mr5.png
+
+|
+
+Now we need to add some mappers. We will first add one of the predefined ones:
+
+.. image:: /_static/images/SCR-20231212-n1w.png
+
+|
+
+and from the list choose ``email``:
+
+.. image:: /_static/images/SCR-20231212-n0d.png
+
+|
+
+Next we need to add a group membership mapper. Click *Add mapper/By Configuration*:
+
+.. image:: /_static/images/SCR-20231212-n0n.png
+
+and from the list choose ``Group Membership``:
+
+.. image:: /_static/images/SCR-20231212-n15.png
+
+|
+
+Open the configuration of the mapper. Enter ``gruppen`` as both the mapper name and the token claim name. The token
+claim name will be used later in the OTC conversion rules. Disable the `Full group path` option:
+
+.. image:: /_static/images/SCR-20231212-n8b.png
+
+|
+
+Get OpenID Endpoint Configuration
+=================================
+
+Open `Realm Settings` and click on `OpenID Endpoint Configuration`:
+
+.. image:: /_static/images/SCR-20231212-nj4.png
+
+|
+
+You will be redirected to a web page rendering, as JSON, all the endpoints and the current configuration of your realm:
+
+.. image:: /_static/images/SCR-20231212-ngd.png
+
+|
+
+.. note:: It is recommended to keep this web page open in a separate tab or window, because we are going to need to
+   grab some values from it in the next steps.
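+
+If you want to extract the two values you will need in the next steps (``authorization_endpoint`` and ``jwks_uri``)
+from the command line instead, a minimal sketch with ``curl`` and ``jq`` could look like the following (hostname and
+realm name are placeholders for your own values):
+
+.. code-block:: shell
+
+    # query the realm's OpenID Connect discovery document and keep only the two keys we need later
+    curl -s https://keycloak.example.com/realms/otcac_test_company_1/.well-known/openid-configuration \
+        | jq '{authorization_endpoint, jwks_uri}'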

-Create an Elastic Load Balancer
-+++++++++++++++++++++++++++++++
-
-First in our list for this part, is to create an Elastic Load Balancer that will be employed with the following:
-
-- An EIP address
-- Support L4 and L7 load balancing
-- Be in the same VPC/Subnet as the nodes of our CCE Cluster
-- Associate backend servers by using their IP addresses (*IP as Backend*)
-
-.. image:: /_static/images/SCR-20231211-i88.png
-
-.. note:: Note down the **ELB ID**, we are going to need it to configure the Nginx Ingress that we will deploy next.
-
-Deploy Nginx Ingress on CCE
-+++++++++++++++++++++++++++
-
-We are going to deploy in this step the Ingress that will sit between our ELB and the Keycloak Service and expose it
-in the address of our preference (keycloak.example.com for this lab)
-
-.. warning:: Do not forget that the FQDN we are going to use to expose the Keycloak Service has to point to a **real** domain or
-   subdomain that you actually **own**!
-
-We will use `Helm `_ to deploy Nginx Ingress to our CCE Cluster. Helm is the de-facto package manager
-of Kubernetes and if you don't have it already installed on your remote machine or your bastion host, you can do it with
-the following commands:
-
-.. code-block:: shell
-
-    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
-    chmod 700 get_helm.sh
-    ./get_helm.sh
-
-We have to provide to the helm chart a couple configuration values (``overrides.yaml``), among them the internal ID
-of the Elastic Load Balancer is the most important - as it will bind the future ingresses that will be created using
-this ingress class with the specific load balancer.
-
-.. code-block:: yaml
-   :linenos:
-   :emphasize-lines: 6
-
-   controller:
-     replicaCount: 1
-     service:
-       externalTrafficPolicy: Cluster
-       annotations:
-         kubernetes.io/elb.id: "0000000-0000-0000-0000-000000000000"
-
-.. note:: Special attention required at **line 6**, replace the placeholder value with the ID you copied from the
-   main panel of your newly created Elastic Load Balancer.
-
-We can now install the chart (it will automatically create and deploy everything in a namespace named ``nginx-system``):
-
-.. code-block:: shell
-
-    helm upgrade --install -f overrides.yaml --install ingress-nginx ingress-nginx \
-    --repo https://kubernetes.github.io/ingress-nginx \
-    --namespace ingress-nginx --create-namespace
-
-Create a Public DNS Endpoint
-++++++++++++++++++++++++++++
-
-Create the Endpoint manually
-----------------------------
-
-Create the Endpoint with ExternalDNS
-------------------------------------
-
-Deploy ExternalDNS on CCE
-`````````````````````````
-
-Deploy a Keycloak Endpoint
-``````````````````````````
-
-Deploy Keycloak Ingress
-+++++++++++++++++++++++
-
-Section n
-=========
+Create a new OTC Identity Provider
+==================================
+
+For this step we will switch to the Open Telekom Cloud Console, and in particular to IAM and Identity Providers. Create
+a new one, and set `Protocol` to ``OpenID Connect``, `SSO Type` to ``Virtual User`` and `Status` to ``Enabled``:
+
+.. image:: /_static/images/SCR-20231212-nq7.png
+
+|
+
+Configure the OTC Identity Provider
+===================================
+
+Find your newly created provider in the Identity Providers list and click `Modify`:
+
+.. image:: /_static/images/SCR-20231212-nw9.png
+
+|
+
+Set the following values:
+
+- `Access Type`: ``Programmatic access and management console access``
+- `Client ID`: the ID of your client as defined in Keycloak (in this example ``otcac_test_company_1_client``)
+- `Authorization Endpoint`: copy the value of the **authorization_endpoint** key from the `OpenID Endpoint Configuration` JSON output
+- `Response Mode`: ``form_post``
+- `Signing Key`: open the URL stored in the **jwks_uri** key of the `OpenID Endpoint Configuration` JSON output in a new tab, copy the whole output of that page and paste it as is into the `Signing Key` textbox
+
+.. image:: /_static/images/SCR-20231212-o7i.png
+
+|
+
+Save the changes, **but before closing this panel copy the value** of the `Identity Provider URL`, because we are going
+to need it in the next step of this blueprint.
+
+Configure Client's Access Settings
+==================================
+
+For this step we will switch back to the Keycloak Administration Console and navigate to the `Access Settings` of our
+client:
+
+.. image:: /_static/images/SCR-20231212-och.png
+
+|
+
+Set the following values:
+
+- `Root URL`: the `Identity Provider URL` you copied in the previous step
+- `Home URL`: ``https://auth.otc.t-systems.com``
+- `Valid redirect URIs`: ``https://auth.otc.t-systems.com/authui/oidc/post``
+
+Create new GitHub OAuth App
+===========================
+
+Add GitHub as Identity Provider to Keycloak
+===========================================

 .. Next steps & Related Resources

diff --git a/doc/source/conf.py b/doc/source/conf.py
index 23a0f06..102cf70 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -9,6 +9,7 @@
 extensions = [
     'sphinx.ext.graphviz',
     'otcdocstheme',
+    # 'sphinx.ext.intersphinx',
 ]

 # openstackdocstheme options