diff --git a/README.rst b/README.rst new file mode 100644 index 0000000..0f140ef --- /dev/null +++ b/README.rst @@ -0,0 +1,50 @@ +========================================= +OpenTelekomCloud SCS System Configuration +========================================= + +This is the machinery that drives the configuration, testing, continuous +integration and deployment of services provided by the OpenTelekomCloud +project. It closely follows the OpenDev configuration approach, with some +extensions and deviations. + +Services are driven by Ansible playbooks and associated roles stored here. If +you are interested in the configuration of a particular service, starting at +``playbooks/service-.yaml`` will show you how it is configured. + +Most services are deployed via containers; many of them are built or customised +in this repository; see ``docker/``. + +Bootstrap +========= + +Bootstrapping a new installation poses the usual +chicken-and-egg problem. A running system is needed to +maintain certain secrets, but providing those +secrets requires the infrastructure to be up and running already. +Addressing this requires a few manual steps. + +TLS Certificates +---------------- + +Most systems require valid TLS certificates, and so does the initial bootstrapping. Systems that require certificates will typically support providing initial ones through inventory variables. + +Vault +----- + +Managing secrets securely is possible in a few different ways. +Ansible Vault is a good tool, but unsealing and rotation (of both +the vault password and the secrets inside the vault) are +complex to manage. +HashiCorp Vault is in that sense a much more flexible system that also supports infrastructure-based authorization. + +Deploying Vault, on the other hand, also requires TLS certificates. 
Since during bootstrapping it is most likely not possible to rely on ``playbooks/acme-certs.yaml`` (it requires a bootstrapped bridge host first), initial valid certificates must be provided through host variables (``vault_tls_cert_content`` and ``vault_tls_key_content``). It makes sense not to commit those variables to git and to provide them only during the bootstrapping phase. + +Bootstrapping Vault therefore requires the following steps: + +1. Log in to a host that has access to all nodes which will host HashiCorp Vault. + +2. Check out this repository and ensure that + ``inventory/service/hosts.yaml`` contains the proper IP addresses and + that those hosts are members of the vault group in ``inventory/service/groups.yaml``. + +3. Execute the ``ansible-playbook playbooks/service-vault.yaml`` playbook. diff --git a/bindep.txt b/bindep.txt new file mode 100644 index 0000000..5bee0c6 --- /dev/null +++ b/bindep.txt @@ -0,0 +1,5 @@ +libffi-dev [platform:dpkg] +libffi-devel [platform:rpm] +libssl-dev [platform:dpkg] +openssl-devel [platform:rpm] +graphviz [doc] diff --git a/doc/requirements.txt b/doc/requirements.txt new file mode 100644 index 0000000..b508b94 --- /dev/null +++ b/doc/requirements.txt @@ -0,0 +1,6 @@ +docutils>=0.11 # OSI-Approved Open Source, Public Domain +beautifulsoup4>=4.6.0 # MIT +reno>=3.1.0 # Apache-2.0 +sphinx>=4.0.0 # BSD +zuul-sphinx>=0.1.1 +graphviz diff --git a/doc/source/_images/ansible.png b/doc/source/_images/ansible.png new file mode 100644 index 0000000..39d77aa Binary files /dev/null and b/doc/source/_images/ansible.png differ diff --git a/doc/source/_images/designate.png b/doc/source/_images/designate.png new file mode 100644 index 0000000..940aee7 Binary files /dev/null and b/doc/source/_images/designate.png differ diff --git a/doc/source/_images/elb-network-load-balancer.png b/doc/source/_images/elb-network-load-balancer.png new file mode 100644 index 0000000..d8d880d Binary files /dev/null and b/doc/source/_images/elb-network-load-balancer.png differ 
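The initial Vault certificates described in ``README.rst`` above are supplied through host variables that stay out of git. A minimal sketch of what such an uncommitted host variables file could look like, assuming the conventional Ansible ``host_vars`` layout and a hypothetical host name ``vault1`` (both the path and the host name are assumptions, not part of this change):

```yaml
# Hypothetical file inventory/service/host_vars/vault1.yaml -- kept out of
# git and present only during the bootstrapping phase.
vault_tls_cert_content: |
  -----BEGIN CERTIFICATE-----
  ... initial, externally issued certificate ...
  -----END CERTIFICATE-----
vault_tls_key_content: |
  -----BEGIN PRIVATE KEY-----
  ... matching private key ...
  -----END PRIVATE KEY-----
```

Once the bridge host is bootstrapped, ``playbooks/acme-certs.yaml`` can presumably take over certificate management and these variables can be dropped.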
diff --git a/doc/source/_images/git.png b/doc/source/_images/git.png new file mode 100644 index 0000000..e4d7180 Binary files /dev/null and b/doc/source/_images/git.png differ diff --git a/doc/source/_images/github.png b/doc/source/_images/github.png new file mode 100644 index 0000000..1916642 Binary files /dev/null and b/doc/source/_images/github.png differ diff --git a/doc/source/_images/gitlab.png b/doc/source/_images/gitlab.png new file mode 100644 index 0000000..89eb25c Binary files /dev/null and b/doc/source/_images/gitlab.png differ diff --git a/doc/source/_images/grafana.png b/doc/source/_images/grafana.png new file mode 100644 index 0000000..6110a96 Binary files /dev/null and b/doc/source/_images/grafana.png differ diff --git a/doc/source/_images/haproxy.png b/doc/source/_images/haproxy.png new file mode 100644 index 0000000..49d87db Binary files /dev/null and b/doc/source/_images/haproxy.png differ diff --git a/doc/source/_images/helm.png b/doc/source/_images/helm.png new file mode 100644 index 0000000..355f40c Binary files /dev/null and b/doc/source/_images/helm.png differ diff --git a/doc/source/_images/internet.png b/doc/source/_images/internet.png new file mode 100644 index 0000000..9c7c20f Binary files /dev/null and b/doc/source/_images/internet.png differ diff --git a/doc/source/_images/k8/cm.png b/doc/source/_images/k8/cm.png new file mode 100644 index 0000000..4f1c049 Binary files /dev/null and b/doc/source/_images/k8/cm.png differ diff --git a/doc/source/_images/k8/pvc.png b/doc/source/_images/k8/pvc.png new file mode 100644 index 0000000..de66402 Binary files /dev/null and b/doc/source/_images/k8/pvc.png differ diff --git a/doc/source/_images/k8/secret.png b/doc/source/_images/k8/secret.png new file mode 100644 index 0000000..e7a8b3e Binary files /dev/null and b/doc/source/_images/k8/secret.png differ diff --git a/doc/source/_images/k8/sts.png b/doc/source/_images/k8/sts.png new file mode 100644 index 0000000..71b46b9 Binary files /dev/null and 
b/doc/source/_images/k8/sts.png differ diff --git a/doc/source/_images/k8/svc.png b/doc/source/_images/k8/svc.png new file mode 100644 index 0000000..8cca480 Binary files /dev/null and b/doc/source/_images/k8/svc.png differ diff --git a/doc/source/_images/keystone.png b/doc/source/_images/keystone.png new file mode 100644 index 0000000..3617cc4 Binary files /dev/null and b/doc/source/_images/keystone.png differ diff --git a/doc/source/_images/loki.png b/doc/source/_images/loki.png new file mode 100644 index 0000000..3029249 Binary files /dev/null and b/doc/source/_images/loki.png differ diff --git a/doc/source/_images/memcached.png b/doc/source/_images/memcached.png new file mode 100644 index 0000000..ffc1571 Binary files /dev/null and b/doc/source/_images/memcached.png differ diff --git a/doc/source/_images/neutron.png b/doc/source/_images/neutron.png new file mode 100644 index 0000000..7d2b1fb Binary files /dev/null and b/doc/source/_images/neutron.png differ diff --git a/doc/source/_images/nginx.png b/doc/source/_images/nginx.png new file mode 100644 index 0000000..8c38768 Binary files /dev/null and b/doc/source/_images/nginx.png differ diff --git a/doc/source/_images/nova.png b/doc/source/_images/nova.png new file mode 100644 index 0000000..e894c11 Binary files /dev/null and b/doc/source/_images/nova.png differ diff --git a/doc/source/_images/octavia.png b/doc/source/_images/octavia.png new file mode 100644 index 0000000..69a8704 Binary files /dev/null and b/doc/source/_images/octavia.png differ diff --git a/doc/source/_images/openstack.png b/doc/source/_images/openstack.png new file mode 100644 index 0000000..75152a7 Binary files /dev/null and b/doc/source/_images/openstack.png differ diff --git a/doc/source/_images/openstackclient.png b/doc/source/_images/openstackclient.png new file mode 100644 index 0000000..f4611b0 Binary files /dev/null and b/doc/source/_images/openstackclient.png differ diff --git a/doc/source/_images/postgresql.png 
b/doc/source/_images/postgresql.png new file mode 100644 index 0000000..0381b34 Binary files /dev/null and b/doc/source/_images/postgresql.png differ diff --git a/doc/source/_images/swift.png b/doc/source/_images/swift.png new file mode 100644 index 0000000..5ac0fd5 Binary files /dev/null and b/doc/source/_images/swift.png differ diff --git a/doc/source/_images/users.png b/doc/source/_images/users.png new file mode 100644 index 0000000..5cb409b Binary files /dev/null and b/doc/source/_images/users.png differ diff --git a/doc/source/_images/vault.png b/doc/source/_images/vault.png new file mode 100644 index 0000000..cd36e58 Binary files /dev/null and b/doc/source/_images/vault.png differ diff --git a/doc/source/_images/zookeeper.png b/doc/source/_images/zookeeper.png new file mode 100644 index 0000000..16e0604 Binary files /dev/null and b/doc/source/_images/zookeeper.png differ diff --git a/doc/source/_images/zuulci.png b/doc/source/_images/zuulci.png new file mode 100644 index 0000000..40c0f2f Binary files /dev/null and b/doc/source/_images/zuulci.png differ diff --git a/doc/source/_svg/docsportal b/doc/source/_svg/docsportal new file mode 100644 index 0000000..306c734 --- /dev/null +++ b/doc/source/_svg/docsportal @@ -0,0 +1,13 @@ +digraph HelpCenter { + graph [bgcolor=transparent compound=true fontcolor="#2D3436" fontname="Sans-Serif" fontsize=10 rankdir=LR] + node [fixedsize=false] + user [label=Clients fixedsize=true fontsize=10 height=1.4 image="../_images/users.png" imagescale=true labelloc=b shape=none width=1] + web [label=WebServer fixedsize=true fontsize=10 height=1.4 image="../_images/nginx.png" imagescale=true labelloc=b shape=none width=1] + github [label="GitHub Projects" fixedsize=true fontsize=10 height=1.4 href="https://github.com/opentelekomcloud-docs" image="../_images/github.png" imagescale=true labelloc=b shape=none width=1] + zuul [label="Zuul CI/CD" fixedsize=true fontsize=10 height=1.4 
href="https://docs.otc-service.com/system-config/zuul.html" image="../_images/zuulci.png" imagescale=true labelloc=b shape=none width=1] + swift [label="Swift Object Store" fixedsize=true fontsize=10 height=1.4 image="../_images/swift.png" imagescale=true labelloc=b shape=none width=1] + user -> web [label=Pull color=black fontsize=8] + web -> swift [label=Pull color=black fontsize=8] + github -> zuul [label=Push color=red fontsize=8] + zuul -> swift [label=Push color=red fontsize=8] +} diff --git a/doc/source/_svg/docsportal.svg b/doc/source/_svg/docsportal.svg new file mode 100644 index 0000000..3de0ea1 --- /dev/null +++ b/doc/source/_svg/docsportal.svg @@ -0,0 +1,76 @@ + + + + + + +HelpCenter + + +user + +Clients + + + +web + +WebServer + + + +user->web + + +Pull + + + +swift + +Swift Object Store + + + +web->swift + + +Pull + + + +github + + +GitHub Projects + + + + + +zuul + + +Zuul CI/CD + + + + + +github->zuul + + +Push + + + +zuul->swift + + +Push + + + diff --git a/doc/source/_svg/docsportal_sec b/doc/source/_svg/docsportal_sec new file mode 100644 index 0000000..57e17c3 --- /dev/null +++ b/doc/source/_svg/docsportal_sec @@ -0,0 +1,35 @@ +digraph "Documentation Portal Security diagram" { + graph [bgcolor=transparent compound=true fontcolor="#2D3436" fontname="Sans-Serif" fontsize=10 rankdir=LR] + node [fixedsize=false] + subgraph cluster_web { + graph [bgcolor="#E5F5FD" shape=box style=rounded] + label="Web Server(s)" + web1 [label="WebServer 1"] + web2 [label="WebServer 2"] + web3 [label="WebServer XX"] + } + subgraph cluster_storage { + graph [bgcolor="#E5F5FD" shape=box style=rounded] + label=Storage + swift [label="Swift Object Store"] + web1 -> swift [label=HTTPS color=black dir=back fontsize=8] + web2 -> swift [label=HTTPS color=black dir=back fontsize=8] + web3 -> swift [label=HTTPS color=black dir=back fontsize=8] + } + subgraph cluster_zuul { + graph [bgcolor="#E5F5FD" shape=box style=rounded] + label="Zuul CI/CD" + zuul [label="Zuul CI/CD" 
href="https://docs.otc-service.com/system-config/zuul.html"] + zuul -> swift [label=HTTPS color=black fontsize=8] + } + subgraph cluster_git { + graph [bgcolor="#E5F5FD" shape=box style=rounded] + label="Git Hosting" + github1 [label="Project 1"] + github2 [label="Project 2"] + github3 [label="Project XX"] + github1 -> zuul [label=HTTPS color=black fontsize=8] + github2 -> zuul [label=HTTPS color=black fontsize=8] + github3 -> zuul [label=HTTPS color=black fontsize=8] + } +} diff --git a/doc/source/_svg/docsportal_sec.svg b/doc/source/_svg/docsportal_sec.svg new file mode 100644 index 0000000..38631a7 --- /dev/null +++ b/doc/source/_svg/docsportal_sec.svg @@ -0,0 +1,132 @@ + + + + + + +Documentation Portal Security diagram + +cluster_web + +Web Server(s) + + +cluster_storage + +Storage + + +cluster_zuul + +Zuul CI/CD + + +cluster_git + +Git Hosting + + + +web1 + +WebServer 1 + + + +swift + +Swift Object Store + + + +web1->swift + + +HTTPS + + + +web2 + +WebServer 2 + + + +web2->swift + + +HTTPS + + + +web3 + +WebServer XX + + + +web3->swift + + +HTTPS + + + +zuul + + +Zuul CI/CD + + + + + +zuul->swift + + +HTTPS + + + +github1 + +Project 1 + + + +github1->zuul + + +HTTPS + + + +github2 + +Project 2 + + + +github2->zuul + + +HTTPS + + + +github3 + +Project XX + + + +github3->zuul + + +HTTPS + + + diff --git a/doc/source/_svg/helpcenter b/doc/source/_svg/helpcenter new file mode 100644 index 0000000..306c734 --- /dev/null +++ b/doc/source/_svg/helpcenter @@ -0,0 +1,13 @@ +digraph HelpCenter { + graph [bgcolor=transparent compound=true fontcolor="#2D3436" fontname="Sans-Serif" fontsize=10 rankdir=LR] + node [fixedsize=false] + user [label=Clients fixedsize=true fontsize=10 height=1.4 image="../_images/users.png" imagescale=true labelloc=b shape=none width=1] + web [label=WebServer fixedsize=true fontsize=10 height=1.4 image="../_images/nginx.png" imagescale=true labelloc=b shape=none width=1] + github [label="GitHub Projects" fixedsize=true fontsize=10 height=1.4 
href="https://github.com/opentelekomcloud-docs" image="../_images/github.png" imagescale=true labelloc=b shape=none width=1] + zuul [label="Zuul CI/CD" fixedsize=true fontsize=10 height=1.4 href="https://docs.otc-service.com/system-config/zuul.html" image="../_images/zuulci.png" imagescale=true labelloc=b shape=none width=1] + swift [label="Swift Object Store" fixedsize=true fontsize=10 height=1.4 image="../_images/swift.png" imagescale=true labelloc=b shape=none width=1] + user -> web [label=Pull color=black fontsize=8] + web -> swift [label=Pull color=black fontsize=8] + github -> zuul [label=Push color=red fontsize=8] + zuul -> swift [label=Push color=red fontsize=8] +} diff --git a/doc/source/_svg/helpcenter.svg b/doc/source/_svg/helpcenter.svg new file mode 100644 index 0000000..3de0ea1 --- /dev/null +++ b/doc/source/_svg/helpcenter.svg @@ -0,0 +1,76 @@ + + + + + + +HelpCenter + + +user + +Clients + + + +web + +WebServer + + + +user->web + + +Pull + + + +swift + +Swift Object Store + + + +web->swift + + +Pull + + + +github + + +GitHub Projects + + + + + +zuul + + +Zuul CI/CD + + + + + +github->zuul + + +Push + + + +zuul->swift + + +Push + + + diff --git a/doc/source/_svg/helpcenter_sec b/doc/source/_svg/helpcenter_sec new file mode 100644 index 0000000..c013fa0 --- /dev/null +++ b/doc/source/_svg/helpcenter_sec @@ -0,0 +1,34 @@ +digraph "HelpCenter Security diagram" { + graph [bgcolor=transparent compound=true fontcolor="#2D3436" fontname="Sans-Serif" fontsize=10 rankdir=LR] + node [fixedsize=false] + subgraph cluster_web { + graph [bgcolor="#E5F5FD" shape=box style=rounded] + label="Web Server(s)" + web1 [label="WebServer 1"] + web2 [label="WebServer 2"] + web3 [label="WebServer XX"] + } + subgraph cluster_storage { + graph [bgcolor="#E5F5FD" shape=box style=rounded] + label=Storage + swift [label="Swift Object Store"] + web1 -> swift [label=HTTPS color=black dir=back fontsize=8] + web2 -> swift [label=HTTPS color=black dir=back fontsize=8] + web3 -> swift 
[label=HTTPS color=black dir=back fontsize=8] + } + subgraph cluster_zuul { + graph [bgcolor="#E5F5FD" shape=box style=rounded] + label="Zuul CI/CD" + zuul [label="Zuul CI/CD"] + zuul -> swift [label=HTTPS color=black fontsize=8] + } + subgraph cluster_git { + graph [bgcolor="#E5F5FD" shape=box style=rounded] + github1 [label="Project 1"] + github2 [label="Project 2"] + github3 [label="Project XX"] + github1 -> zuul [label=HTTPS color=black fontsize=8] + github2 -> zuul [label=HTTPS color=black fontsize=8] + github3 -> zuul [label=HTTPS color=black fontsize=8] + } +} diff --git a/doc/source/_svg/helpcenter_sec.svg b/doc/source/_svg/helpcenter_sec.svg new file mode 100644 index 0000000..21308dc --- /dev/null +++ b/doc/source/_svg/helpcenter_sec.svg @@ -0,0 +1,128 @@ + + + + + + +HelpCenter Security diagram + +cluster_web + +Web Server(s) + + +cluster_storage + +Storage + + +cluster_zuul + +Zuul CI/CD + + +cluster_git + + + + +web1 + +WebServer 1 + + + +swift + +Swift Object Store + + + +web1->swift + + +HTTPS + + + +web2 + +WebServer 2 + + + +web2->swift + + +HTTPS + + + +web3 + +WebServer XX + + + +web3->swift + + +HTTPS + + + +zuul + +Zuul CI/CD + + + +zuul->swift + + +HTTPS + + + +github1 + +Project 1 + + + +github1->zuul + + +HTTPS + + + +github2 + +Project 2 + + + +github2->zuul + + +HTTPS + + + +github3 + +Project XX + + + +github3->zuul + + +HTTPS + + + diff --git a/doc/source/_svg/reverse_proxy b/doc/source/_svg/reverse_proxy new file mode 100644 index 0000000..47a1e49 --- /dev/null +++ b/doc/source/_svg/reverse_proxy @@ -0,0 +1,41 @@ +digraph "Reverse Proxy" { + graph [bgcolor=transparent compound=true fontcolor="#2D3436" fontname="Sans-Serif" fontsize=10 rankdir=LR] + node [fixedsize=false] + user [label=Clients fixedsize=true fontsize=10 height=1.4 image="../_images/users.png" imagescale=true labelloc=b shape=none width=1] + lb [label="Load Balancer" imagescale=true shape=box tooltip="Load Balancer in OTC"] + gw [label="Network Gateway" imagescale=true 
shape=box tooltip="Network Gateway in vCloud"] + user -> lb + user -> gw + lb -> proxy1 + lb -> proxy2 + gw -> web3 + subgraph cluster_proxy { + graph [bgcolor="#E5F5FD" shape=box style=rounded] + label="Reverse Proxy" + proxy1 [label=proxy1 fixedsize=true fontsize=10 height=1.4 image="../_images/haproxy.png" imagescale=true labelloc=b shape=none tooltip="proxy1.eco.tsi-dev.otc-service.com" width=1] + proxy2 [label=proxy2 fixedsize=true fontsize=10 height=1.4 image="../_images/haproxy.png" imagescale=true labelloc=b shape=none tooltip="proxy2.eco.tsi-dev.otc-service.com" width=1] + web3 [label=web3 fixedsize=true fontsize=10 height=1.4 image="../_images/haproxy.png" imagescale=true labelloc=b shape=none tooltip="web3.eco.tsi-dev.otc-service.com" width=1] + } + proxy2 -> alerta [ltail=cluster_proxy] + proxy2 -> dashboard [ltail=cluster_proxy] + proxy2 -> "dashboard-eco" [ltail=cluster_proxy] + proxy2 -> docs [ltail=cluster_proxy] + proxy2 -> "graphite-apimon" [ltail=cluster_proxy] + proxy2 -> "graphite-ca" [ltail=cluster_proxy] + proxy2 -> influx [ltail=cluster_proxy] + proxy2 -> matrix [ltail=cluster_proxy] + proxy2 -> vault [ltail=cluster_proxy] + subgraph cluster_apps { + graph [bgcolor="#E5F5FD" shape=box style=rounded] + label=Applications + alerta + dashboard + "dashboard-eco" + docs + "graphite-apimon" + "graphite-ca" + influx + matrix + vault + } +} diff --git a/doc/source/_svg/reverse_proxy.svg b/doc/source/_svg/reverse_proxy.svg new file mode 100644 index 0000000..376f5ea --- /dev/null +++ b/doc/source/_svg/reverse_proxy.svg @@ -0,0 +1,211 @@ + + + + + + +Reverse Proxy + +cluster_proxy + +Reverse Proxy + + +cluster_apps + +Applications + + + +user + +Clients + + + +lb + + +Load Balancer + + + + + +user->lb + + + + + +gw + + +Network Gateway + + + + + +user->gw + + + + + +proxy1 + + +proxy1 + + + + + +lb->proxy1 + + + + + +proxy2 + + +proxy2 + + + + + +lb->proxy2 + + + + + +web3 + + +web3 + + + + + +gw->web3 + + + + + +alerta + +alerta + + + +proxy2->alerta 
+ + + + + +dashboard + +dashboard + + + +proxy2->dashboard + + + + + +dashboard-eco + +dashboard-eco + + + +proxy2->dashboard-eco + + + + + +docs + +docs + + + +proxy2->docs + + + + + +graphite-apimon + +graphite-apimon + + + +proxy2->graphite-apimon + + + + + +graphite-ca + +graphite-ca + + + +proxy2->graphite-ca + + + + + +influx + +influx + + + +proxy2->influx + + + + + +matrix + +matrix + + + +proxy2->matrix + + + + + +vault + +vault + + + +proxy2->vault + + + + + diff --git a/doc/source/_svg/zuul b/doc/source/_svg/zuul new file mode 100644 index 0000000..40815a2 --- /dev/null +++ b/doc/source/_svg/zuul @@ -0,0 +1,33 @@ +digraph "Zuul CI/CD" { + graph [bgcolor=transparent compound=true fontcolor="#2D3436" fontname="Sans-Serif" fontsize=10 rankdir=LR] + node [fixedsize=false] + user [label=Clients fixedsize=true fontsize=10 height=1.4 image="../_images/users.png" imagescale=true labelloc=b shape=none width=1] + git [label="Git Provider" fixedsize=true fontsize=10 height=1.4 image="../_images/git.png" imagescale=true labelloc=b shape=none width=1] + subgraph cluster_zuul { + graph [bgcolor="#E5F5FD" shape=box style=rounded] + node [fontsize=8] + label="Zuul CI/CD" + "zuul-web" [label="Zuul Web"] + "zuul-merger" [label="Zuul Merger"] + "zuul-executor" [label="Zuul Executor"] + "zuul-scheduler" [label="Zuul Scheduler"] + "nodepool-launcher" [label="Nodepool Launcher"] + "nodepool-builder" [label="Nodepool Builder"] + } + zookeeper [label=Zookeeper fixedsize=true fontsize=10 height=1.4 image="../_images/zookeeper.png" imagescale=true labelloc=b shape=none width=1] + "zuul-web" -> zookeeper + "zuul-merger" -> zookeeper + "zuul-executor" -> zookeeper + "zuul-scheduler" -> zookeeper + "nodepool-launcher" -> zookeeper + "nodepool-builder" -> zookeeper + db [label="SQL Database" fixedsize=true fontsize=10 height=1.4 image="../_images/postgresql.png" imagescale=true labelloc=b shape=none width=1] + cloud [label="Clouds resources" fixedsize=true fontsize=10 height=1.4 
image="../_images/openstack.png" imagescale=true labelloc=b shape=none width=1] + user -> "zuul-web" + "zuul-merger" -> git + "zuul-executor" -> git + "zuul-web" -> db + "nodepool-launcher" -> cloud + "nodepool-builder" -> cloud + "zuul-executor" -> cloud +} diff --git a/doc/source/_svg/zuul.svg b/doc/source/_svg/zuul.svg new file mode 100644 index 0000000..58eac72 --- /dev/null +++ b/doc/source/_svg/zuul.svg @@ -0,0 +1,161 @@ + + + + + + +Zuul CI/CD + +cluster_zuul + +Zuul CI/CD + + + +user + +Clients + + + +zuul-web + +Zuul Web + + + +user->zuul-web + + + + + +git + +Git Provider + + + +zookeeper + +Zookeeper + + + +zuul-web->zookeeper + + + + + +db + +SQL Database + + + +zuul-web->db + + + + + +zuul-merger + +Zuul Merger + + + +zuul-merger->git + + + + + +zuul-merger->zookeeper + + + + + +zuul-executor + +Zuul Executor + + + +zuul-executor->git + + + + + +zuul-executor->zookeeper + + + + + +cloud + +Clouds resources + + + +zuul-executor->cloud + + + + + +zuul-scheduler + +Zuul Scheduler + + + +zuul-scheduler->zookeeper + + + + + +nodepool-launcher + +Nodepool Launcher + + + +nodepool-launcher->zookeeper + + + + + +nodepool-launcher->cloud + + + + + +nodepool-builder + +Nodepool Builder + + + +nodepool-builder->zookeeper + + + + + +nodepool-builder->cloud + + + + + diff --git a/doc/source/_svg/zuul_dpl b/doc/source/_svg/zuul_dpl new file mode 100644 index 0000000..c4f70d1 --- /dev/null +++ b/doc/source/_svg/zuul_dpl @@ -0,0 +1,38 @@ +digraph "Zuul CI/CD Deployment Design" { + graph [bgcolor=transparent compound=true fontcolor="#2D3436" fontname="Sans-Serif" fontsize=10 rankdir=LR] + node [fixedsize=false] + vault [label=Vault fixedsize=true fontsize=10 height=1.4 image="../_images/vault.png" imagescale=true labelloc=b shape=none width=1] + "zuul-web" -> vault [label=TLS color=blue fontsize=8] + "zuul-merger" -> vault [label=TLS color=blue fontsize=8] + "zuul-executor" -> vault [label=TLS color=blue fontsize=8] + "zuul-scheduler" -> vault [label=TLS color=blue 
fontsize=8] + "nodepool-launcher" -> vault [label=TLS color=blue fontsize=8] + "nodepool-builder" -> vault [label=TLS color=blue fontsize=8] + zookeeper -> vault [label=TLS color=blue fontsize=8] + "zuul-web" -> zookeeper [label=TLS color=red fontsize=8] + "zuul-merger" -> zookeeper [label=TLS color=red fontsize=8] + "zuul-executor" -> zookeeper [label=TLS color=red fontsize=8] + "zuul-scheduler" -> zookeeper [label=TLS color=red fontsize=8] + "nodepool-launcher" -> zookeeper [label=TLS color=red fontsize=8] + "nodepool-builder" -> zookeeper [label=TLS color=red fontsize=8] + subgraph cluster_k8 { + graph [bgcolor="#E5F5FD" shape=box style=rounded] + node [fontsize=8] + label="Kubernetes Cluster" + subgraph cluster_zuul { + node [fontsize=8] + label="Zuul Namespace" + "zuul-web" [label="Zuul Web"] + "zuul-merger" [label="Zuul Merger"] + "zuul-executor" [label="Zuul Executor"] + "zuul-scheduler" [label="Zuul Scheduler"] + "nodepool-launcher" [label="Nodepool Launcher"] + "nodepool-builder" [label="Nodepool Builder"] + } + subgraph cluster_zk { + node [fontsize=8] + label="Zuul Namespace" + zookeeper [label=Zookeeper fixedsize=true fontsize=10 height=1.4 image="../_images/zookeeper.png" imagescale=true labelloc=b shape=none width=1] + } + } +} diff --git a/doc/source/_svg/zuul_dpl.svg b/doc/source/_svg/zuul_dpl.svg new file mode 100644 index 0000000..eb5950b --- /dev/null +++ b/doc/source/_svg/zuul_dpl.svg @@ -0,0 +1,166 @@ + + + + + + +Zuul CI/CD Deployment Design + +cluster_k8 + +Kubernetes Cluster + + +cluster_zuul + +Zuul Namespace + + +cluster_zk + +Zuul Namespace + + + +vault + +Vault + + + +zuul-web + +Zuul Web + + + +zuul-web->vault + + +TLS + + + +zookeeper + +Zookeeper + + + +zuul-web->zookeeper + + +TLS + + + +zuul-merger + +Zuul Merger + + + +zuul-merger->vault + + +TLS + + + +zuul-merger->zookeeper + + +TLS + + + +zuul-executor + +Zuul Executor + + + +zuul-executor->vault + + +TLS + + + +zuul-executor->zookeeper + + +TLS + + + +zuul-scheduler + +Zuul 
Scheduler + + + +zuul-scheduler->vault + + +TLS + + + +zuul-scheduler->zookeeper + + +TLS + + + +nodepool-launcher + +Nodepool Launcher + + + +nodepool-launcher->vault + + +TLS + + + +nodepool-launcher->zookeeper + + +TLS + + + +nodepool-builder + +Nodepool Builder + + + +nodepool-builder->vault + + +TLS + + + +nodepool-builder->zookeeper + + +TLS + + + +zookeeper->vault + + +TLS + + + diff --git a/doc/source/_svg/zuul_sec b/doc/source/_svg/zuul_sec new file mode 100644 index 0000000..94bf207 --- /dev/null +++ b/doc/source/_svg/zuul_sec @@ -0,0 +1,39 @@ +digraph "Zuul CI/CD Security Design" { + graph [bgcolor=transparent compound=true fontcolor="#2D3436" fontname="Sans-Serif" fontsize=10 rankdir=LR] + node [fixedsize=false] + git [label="Git Provider" fixedsize=true fontsize=10 height=1.4 image="../_images/git.png" imagescale=true labelloc=b shape=none width=1] + db [label="SQL Database" fixedsize=true fontsize=10 height=1.4 image="../_images/postgresql.png" imagescale=true labelloc=b shape=none width=1] + cloud [label="Clouds resources" fixedsize=true fontsize=10 height=1.4 image="../_images/openstack.png" imagescale=true labelloc=b shape=none width=1] + "zuul-web" -> zookeeper [label=TLS color=red fontsize=8] + "zuul-merger" -> zookeeper [label=TLS color=red fontsize=8] + "zuul-executor" -> zookeeper [label=TLS color=red fontsize=8] + "zuul-scheduler" -> zookeeper [label=TLS color=red fontsize=8] + "nodepool-launcher" -> zookeeper [label=TLS color=red fontsize=8] + "nodepool-builder" -> zookeeper [label=TLS color=red fontsize=8] + subgraph cluster_k8 { + graph [bgcolor="#E5F5FD" shape=box style=rounded] + node [fontsize=8] + label="Kubernetes Cluster" + subgraph cluster_zuul { + node [fontsize=8] + label="Zuul Namespace" + "zuul-web" [label="Zuul Web"] + "zuul-merger" [label="Zuul Merger"] + "zuul-executor" [label="Zuul Executor"] + "zuul-scheduler" [label="Zuul Scheduler"] + "nodepool-launcher" [label="Nodepool Launcher"] + "nodepool-builder" [label="Nodepool 
Builder"] + } + subgraph cluster_zk { + node [fontsize=8] + label="Zuul Namespace" + zookeeper [label=Zookeeper fixedsize=true fontsize=10 height=1.4 image="../_images/zookeeper.png" imagescale=true labelloc=b shape=none width=1] + } + } + "zuul-merger" -> git [label=SSH color=blue fontsize=8] + "zuul-executor" -> git [label=SSH color=blue fontsize=8] + "zuul-web" -> db [label=TLS fontsize=8] + "nodepool-launcher" -> cloud [label=HTTPS color=green fontsize=8] + "nodepool-builder" -> cloud [label=HTTPS color=green fontsize=8] + "zuul-executor" -> cloud [label=SSH color=blue fontsize=8] +} diff --git a/doc/source/_svg/zuul_sec.svg b/doc/source/_svg/zuul_sec.svg new file mode 100644 index 0000000..9916dae --- /dev/null +++ b/doc/source/_svg/zuul_sec.svg @@ -0,0 +1,171 @@ + + + + + + +Zuul CI/CD Security Design + +cluster_k8 + +Kubernetes Cluster + + +cluster_zuul + +Zuul Namespace + + +cluster_zk + +Zuul Namespace + + + +git + +Git Provider + + + +db + +SQL Database + + + +cloud + +Clouds resources + + + +zuul-web + +Zuul Web + + + +zuul-web->db + + +TLS + + + +zookeeper + +Zookeeper + + + +zuul-web->zookeeper + + +TLS + + + +zuul-merger + +Zuul Merger + + + +zuul-merger->git + + +SSH + + + +zuul-merger->zookeeper + + +TLS + + + +zuul-executor + +Zuul Executor + + + +zuul-executor->git + + +SSH + + + +zuul-executor->cloud + + +SSH + + + +zuul-executor->zookeeper + + +TLS + + + +zuul-scheduler + +Zuul Scheduler + + + +zuul-scheduler->zookeeper + + +TLS + + + +nodepool-launcher + +Nodepool Launcher + + + +nodepool-launcher->cloud + + +HTTPS + + + +nodepool-launcher->zookeeper + + +TLS + + + +nodepool-builder + +Nodepool Builder + + + +nodepool-builder->cloud + + +HTTPS + + + +nodepool-builder->zookeeper + + +TLS + + + diff --git a/doc/source/bridge.rst b/doc/source/bridge.rst new file mode 100644 index 0000000..113703d --- /dev/null +++ b/doc/source/bridge.rst @@ -0,0 +1,66 @@ +:title: Bridge + +.. 
_bridge: + +Bridge +###### + +Bridge is a bastion host that is the starting point for operations in +OpenTelekomCloudEco. It is the server from which Ansible is run, and contains +decrypted secure information such as passwords. The bridge server contains all +of the Ansible playbooks as well as the scripts to create new servers. + +Sensitive information like passwords is stored encrypted in a private git +repository and is pulled by the bridge host on a cron schedule. + +At a Glance +=========== + +:Projects: + * https://ansible.com/ +:Bugs: +:Resources: + +Ansible Hosts +------------- +In OTC Eco, all host configuration is done via Ansible playbooks. + +Adding a node +------------- + +In principle, hosts in the inventory (``inventory/base/hosts.yaml``) contain +the variables required for the playbooks to provision the infrastructure. +This is not yet implemented for all hosts/systems. + +.. _running-ansible-on-nodes: + +Running Ansible on Nodes +------------------------ + +Each service that has been fully migrated to Ansible has its own playbook in +:git_file:`playbooks` named ``service-{ service_name }.yaml``. + +Because the playbooks are normally run by Zuul, to run them manually, first run +the utility ``disable-ansible`` as root. That will touch the file +``/home/zuul/DISABLE-ANSIBLE``. We use the utility to avoid mistyping the +lockfile name. Then make sure no jobs are currently executing Ansible. Ensure +that ``/home/zuul/src/github.com/opentelekomcloud-infra/system-config`` is in +the appropriate state, then run: + +.. code-block:: bash + + cd /home/zuul/src/github.com/opentelekomcloud-infra/system-config + ansible-playbook --limit="$HOST:localhost" playbooks/service-$SERVICE.yaml + +as root, where ``$HOST`` is the host you want to run the playbook on. 
+The ``:localhost`` part is important, as some of the plays depend on performing a task +on the localhost before continuing to the host in question; without it in +the limit section, the tasks for the host will have undefined values. + +When done, don't forget to remove ``/home/zuul/DISABLE-ANSIBLE``. + +Disabling Ansible on Nodes +-------------------------- + +To disable the running of Ansible on a node, simply add the node to the +``disabled`` group in the Ansible inventory. diff --git a/doc/source/conf.py b/doc/source/conf.py new file mode 100644 index 0000000..2b0f824 --- /dev/null +++ b/doc/source/conf.py @@ -0,0 +1,69 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or +# implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import sys +import warnings + +# -- General configuration ---------------------------------------------------- +# If extensions (or modules to document with autodoc) are in another directory, +# add these directories to sys.path here. If the directory is relative to the +# documentation root, use os.path.abspath to make it absolute, like shown here. +sys.path.insert(0, os.path.abspath('.')) + +# Add any Sphinx extension module names here, as strings. They can be +# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 
+extensions = [ + 'sphinx.ext.graphviz', + 'custom_roles', + 'zuul_sphinx' +] + +# We have roles split between zuul-suitable roles at top level roles/* +# (automatically detected by zuul-sphinx) and playbook-specific roles +# (might have plugins, etc that make them unsuitable as potential zuul +# roles). Document both. +zuul_role_paths = ['playbooks/roles'] + +# The suffix of source filenames. +source_suffix = '.rst' + +# The master toctree document. +master_doc = 'index' + +# General information about the project. +project = u'Open Telekom Cloud Ecosystem Infra' +copyright = u'2021, Various members of the OpenTelekomCloud' + +# The name of the Pygments (syntax highlighting) style to use. +pygments_style = 'sphinx' + +# Locations to exclude when looking for source files. +exclude_patterns = ['_build'] + +# -- Options for HTML output ---------------------------------------------- + +html_theme = 'alabaster' +html_static_path = ['_svg'] + +graphviz_output_format = 'svg' + +# Grouping the document tree into LaTeX files. List of tuples +# (source start file, target name, title, author, documentclass +# [howto/manual]). +latex_documents = [ + ('index', + '%s.tex' % project, + u'%s Documentation' % project, + u'OpenTelekomCloud', 'manual'), +] diff --git a/doc/source/custom_roles.py b/doc/source/custom_roles.py new file mode 100644 index 0000000..6747f60 --- /dev/null +++ b/doc/source/custom_roles.py @@ -0,0 +1,80 @@ +# Copyright 2013 OpenStack Foundation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+# Most of this code originated in sphinx.domains.python and
+# sphinx.ext.autodoc and has been only slightly adapted for use in
+# subclasses here.
+
+# Thanks to Doug Hellman for:
+# http://doughellmann.com/2010/05/defining-custom-roles-in-sphinx.html
+
+from docutils import nodes
+
+
+def git_file_role(name, rawtext, text, lineno, inliner,
+                  options={}, content=[]):
+    """Link a local path to a git file view.
+
+    Returns 2 part tuple containing list of nodes to insert into the
+    document and a list of system messages. Both are allowed to be
+    empty.
+
+    :param name: The role name used in the document.
+    :param rawtext: The entire markup snippet, with role.
+    :param text: The text marked with the role.
+    :param lineno: The line number where rawtext appears in the input.
+    :param inliner: The inliner instance that called us.
+    :param options: Directive options for customization.
+    :param content: The directive content for customization.
+    """
+
+    ref = ('https://github.com/opentelekomcloud-infra/'
+           'system-config/blob/main/%s' % text)
+    linktext = 'system-config: %s' % text
+    node = nodes.reference(rawtext, linktext, refuri=ref, **options)
+    return [node], []
+
+
+def config_role(name, rawtext, text, lineno, inliner,
+                options={}, content=[]):
+    """Link a local path to a project-config file view.
+
+    Returns 2 part tuple containing list of nodes to insert into the
+    document and a list of system messages. Both are allowed to be
+    empty.
+
+    :param name: The role name used in the document.
+    :param rawtext: The entire markup snippet, with role.
+    :param text: The text marked with the role.
+    :param lineno: The line number where rawtext appears in the input.
+    :param inliner: The inliner instance that called us.
+    :param options: Directive options for customization.
+    :param content: The directive content for customization.
+    """
+
+    ref = ('https://github.com/opentelekomcloud/'
+           'zuul-project-config/src/branch/master/%s' % text)
+    linktext = 'project-config: %s' % text
+    node = nodes.reference(rawtext, linktext, refuri=ref, **options)
+    return [node], []
+
+
+def setup(app):
+    """Install the plugin.
+
+    :param app: Sphinx application context.
+    """
+    app.add_role('git_file', git_file_role)
+    app.add_role('config', config_role)
+    return
diff --git a/doc/source/docsportal.rst b/doc/source/docsportal.rst
new file mode 100644
index 0000000..f2b4e7a
--- /dev/null
+++ b/doc/source/docsportal.rst
@@ -0,0 +1,59 @@
+:title: Documentation Portal
+
+.. _docsportal:
+
+Documentation Portal
+####################
+
+The Documentation Portal is a web server that serves documentation maintained
+by various git projects.
+
+At a Glance
+===========
+
+:Hosts:
+ * https://docs.otc-service.com
+:Projects:
+ * https://github.com/opentelekomcloud/otcdocstheme
+:Configuration:
+ * :git_file:`playbooks/roles/document_hosting_k8s/templates/nginx-site.conf.j2`
+ * :git_file:`inventory/service/group_vars/k8s-controller.yaml`
+:Bugs:
+:Resources:
+
+Overview
+========
+
+Every project managed by the Zuul Eco tenant can use the generic jobs for
+publishing documentation and release notes. Those jobs push rendered HTML
+content into Swift (dedicated containers) and make it world readable.
+
+Integration of projects under the :ref:`Zuul` allows the following:
+
+- CI for the changes in the project (i.e. only tested and approved content is
+  merged into the main branch)
+
+- CD: for merged changes, documents are built and pushed to the Help Center.
+
+
+Software Architecture
+=====================
+
+A web server (nginx) listens for requests in the frontend and, based on the
+URL, decides in which container the data is actually located. It contacts the
+storage server and fetches the original content, which is then cached and
+returned to the requestor.
+
+..
graphviz:: dot/docsportal.dot
+   :caption: Docs portal software architecture
+
+.. include:: docsportal_sec.rst.inc
+
+Deployment
+==========
+
+:git_file:`playbooks/service-docs.yaml` is a playbook for the service
+configuration and deployment. It is automatically executed whenever a pull
+request touching any of the affected files (roles, inventory) is merged.
+Additionally, it is applied periodically.
diff --git a/doc/source/docsportal_sec.rst.inc b/doc/source/docsportal_sec.rst.inc
new file mode 100644
index 0000000..a36cbac
--- /dev/null
+++ b/doc/source/docsportal_sec.rst.inc
@@ -0,0 +1,204 @@
+Security Design
+===============
+
+.. graphviz:: dot/docsportal_sec.dot
+   :caption: Docs portal security architecture
+
+Security Architecture
+---------------------
+
+The Documentation Portal makes documentation publicly available. This means
+that all content it processes becomes public. No attempt is made to filter
+sensitive information out of the git repositories; it is explicitly the
+responsibility of each individual project's maintainers to ensure that no
+sensitive information lands in git.
+
+Ensuring that only checked and approved content is merged into git and
+published is the primary responsibility of :ref:`Zuul`, which also applies
+:ref:`git_control` to manage the conditions that need to be fulfilled on
+every pull request of every project.
+
+Separation
+----------
+
+The project is implemented as a combination of multiple software
+and solution components communicating with each other. Those
+components are installed physically separated from each other
+with no direct connectivity except for public HTTPS.
+
+* Web Server
+
+  * Nginx web server accepts HTTP protocol requests. It rewrites
+    the request to a destination on the Remote Storage.
+
+  * It performs a remote request to fetch the requested content
+    and serves it back to the initial requestor.
+
+  * Depending on the content accessibility, the web server is either exposed
+    directly to the web or placed behind an additional reverse proxy
+    implementing the required security limitations.
+
+* Storage
+
+  * OpenStack Swift storage. Practically this can be any other object storage
+    which allows web access.
+
+  * Zuul CI/CD writes approved content into the storage
+    destination.
+
+  * Web Server fetches the content.
+
+  * The content in the Storage for the Documentation Portal is by
+    definition public content (not protected by any additional ACLs).
+
+  * If the content is not meant to be publicly available, content in the
+    Storage must be protected by ACLs. This in turn requires enabling the web
+    server to access this content (i.e. swift-proxy in the case of using
+    OpenStack Swift).
+
+* :ref:`zuul`
+
+  * The Zuul installation manages git projects and implements
+    configured CI rules in order to ensure that only checked and
+    approved content is merged. The default configuration
+    forbids anybody (except for Zuul administrators) to bypass
+    required checks and merge content manually.
+  * Once all the prerequisites are fulfilled, Zuul merges the pull
+    request, builds the documentation and pushes it to storage with
+    dedicated credentials.
+  * Only git projects explicitly included in the Zuul tenant are
+    respected. Registered git projects with disabled
+    branch protection rules are ignored.
+
+* `GitHub `_
+
+  * An external git hosting provider.
+  * Projects in the GitHub organization are managed by a `dedicated process
+    `_
+
+Interface Description
+---------------------
+
+The only public facing interface is the regular Web using HTTPS (automatic
+forwarding from HTTP).
+
+Tenant Security
+---------------
+
+The Documentation Portal does not support a tenant concept. All documents
+hosted on the Help Center are placed in a dedicated storage (as public
+content). Instead, a dedicated instance of the Documentation Portal is
+deployed to isolate particular documentation areas.
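+
+As an illustration, a container can be made world readable in the way the
+portal's public containers are by setting a Swift read ACL (the container name
+below is hypothetical):
+
+.. code-block:: console
+
+   $ swift post --read-acl ".r:*,.rlistings" docs-html
+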
+
+O&M Access Control
+------------------
+
+Only users enabled in the :git_file:`inventory/base/group_vars/all.yaml` are
+able to log in to the underlying infrastructure. Direct access to the hosts is
+only possible through the :ref:`Bridge` host which serves as a bastion host.
+
+Logging and Monitoring
+----------------------
+
+Every component of the HelpCenter produces its own logs.
+
+* haproxy log (VM service logs)
+* nginx log (VM or Kubernetes POD log)
+* Swift proxy and storage service logs
+* Zuul logs
+
+  * public job logs (test build log file)
+  * executor log
+  * scheduler log
+
+Patch Management
+----------------
+
+The service consists of OpenSource elements only. Whenever a new release of any
+software element (haproxy, nginx, zuul) is identified, a pull request to this
+repository needs to be created to update the software. Patching of the
+underlying VM (haproxy) is executed as a regular job applying all the existing
+OS updates.
+
+Hardening
+---------
+
+All configuration files for the hosts, the Cloud Load Balancer configuration
+and the K8s configuration are part of this repository. Every VM is managed by
+the System Config project, applying the same hardening rules to every host
+according to the configuration. System hardening is thus dictated by Deutsche
+Telekom hardening policies.
+
+Certificate Handling
+--------------------
+
+SSL certificates are obtained using the Let's Encrypt certificate authority.
+The following is important:
+
+* The certificate for the K8s deployment can be managed by the
+  `CertManager `_ deployed in
+  the Kubernetes cluster. This is achieved by placing a
+  Kubernetes annotation on the deployment.
+* Alternatively, the SSL certificate for the K8s installation may be generated
+  on the deployment server and provided to K8s as secrets.
+* Certificates for the other involved components (Zuul,
+  Swift) are managed by the corresponding components
+  themselves.
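+
+As a hedged sketch, a cert-manager managed certificate for the portal Ingress
+could look like this (issuer, service and secret names are assumptions, not
+the actual deployment values):
+
+.. code-block:: yaml
+
+   apiVersion: networking.k8s.io/v1
+   kind: Ingress
+   metadata:
+     name: docsportal
+     annotations:
+       cert-manager.io/cluster-issuer: letsencrypt
+   spec:
+     tls:
+       - hosts:
+           - docs.otc-service.com
+         secretName: docsportal-tls
+     rules:
+       - host: docs.otc-service.com
+         http:
+           paths:
+             - path: /
+               pathType: Prefix
+               backend:
+                 service:
+                   name: docsportal
+                   port:
+                     number: 80
+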
+
+Backup and Restore
+------------------
+
+No backup/restore procedure exists besides Swift backup/restore. Sources for
+the documents are stored in GitHub in a raw form with all modification history.
+Whenever it is required to restore a document to a particular point in time, a
+pull request can be created restoring the current version to a particular
+state in history.
+From a disaster recovery point of view, a fresh generation of the documents
+from sources can be used. The same approach can be applied periodically to
+ensure generated documents are always up-to-date and match the current
+document styling.
+
+User and Account management
+---------------------------
+
+No user accounts exist on the Documentation Portal. Only regular anonymous
+access to the service is possible. No cookies or local web browser storage are
+used.
+
+Communication Matrix (external)
+-------------------------------
+
+All communication between Help Center elements happens as with
+external components (using HTTPS).
+
+Depending on the requirements, an additional reverse proxy may be installed in
+front of the web server to provide additional hardening or other required
+isolation measures. Also in this case, the communication between the reverse
+proxy and the web server happens as HTTPS traffic.
+
+..
list-table:: + + * - From/To + - Web Server + - Storage + - Zuul + - GitHub + * - WebServer + - N/A + - HTTPS + - N/A + - N/A + * - Storage + - N/A + - N/A + - N/A + - N/A + * - Zuul + - N/A + - HTTPS + - N/A + - HTTPS + * - GitHub + - N/A + - N/A + - HTTPS + - N/A diff --git a/doc/source/dot/docsportal.dot b/doc/source/dot/docsportal.dot new file mode 100644 index 0000000..306c734 --- /dev/null +++ b/doc/source/dot/docsportal.dot @@ -0,0 +1,13 @@ +digraph HelpCenter { + graph [bgcolor=transparent compound=true fontcolor="#2D3436" fontname="Sans-Serif" fontsize=10 rankdir=LR] + node [fixedsize=false] + user [label=Clients fixedsize=true fontsize=10 height=1.4 image="../_images/users.png" imagescale=true labelloc=b shape=none width=1] + web [label=WebServer fixedsize=true fontsize=10 height=1.4 image="../_images/nginx.png" imagescale=true labelloc=b shape=none width=1] + github [label="GitHub Projects" fixedsize=true fontsize=10 height=1.4 href="https://github.com/opentelekomcloud-docs" image="../_images/github.png" imagescale=true labelloc=b shape=none width=1] + zuul [label="Zuul CI/CD" fixedsize=true fontsize=10 height=1.4 href="https://docs.otc-service.com/system-config/zuul.html" image="../_images/zuulci.png" imagescale=true labelloc=b shape=none width=1] + swift [label="Swift Object Store" fixedsize=true fontsize=10 height=1.4 image="../_images/swift.png" imagescale=true labelloc=b shape=none width=1] + user -> web [label=Pull color=black fontsize=8] + web -> swift [label=Pull color=black fontsize=8] + github -> zuul [label=Push color=red fontsize=8] + zuul -> swift [label=Push color=red fontsize=8] +} diff --git a/doc/source/dot/docsportal_sec.dot b/doc/source/dot/docsportal_sec.dot new file mode 100644 index 0000000..86017cd --- /dev/null +++ b/doc/source/dot/docsportal_sec.dot @@ -0,0 +1,30 @@ +graph "Documentation Portal Security diagram" { + graph [bgcolor=transparent compound=true fontcolor="#2D3436" fontname="Sans-Serif" fontsize=10 rankdir=LR] + node 
[fixedsize=false] + edge [fontsize=8] + user [label=Clients fixedsize=true fontsize=10 height=1.4 image="../_images/users.png" imagescale=true labelloc=b shape=none width=1] + web [label=WebServer fixedsize=true fontsize=10 height=1.4 image="../_images/nginx.png" imagescale=true labelloc=b shape=none width=1] + subgraph cluster_storage { + graph [bgcolor="#E5F5FD" shape=box style=rounded] + label=Storage + swift [label="Swift Object Store" fixedsize=true fontsize=10 height=1.4 image="../_images/swift.png" imagescale=true labelloc=b shape=none width=1] + } + + subgraph cluster_git { + graph [bgcolor="#E5F5FD" shape=box style=rounded] + label="Git Hosting" + github1 [label="Project 1"] + github2 [label="Project 2"] + github3 [label="Project XX"] + } + + zuul [label="Zuul CI/CD" fixedsize=true fontsize=10 height=1.4 href="https://docs.otc-service.com/system-config/zuul.html" image="../_images/zuulci.png" imagescale=true labelloc=b shape=none width=1] + + github1 -- zuul [label=HTTPS dir=forward] + github2 -- zuul [label=HTTPS dir=forward] + github3 -- zuul [label=HTTPS dir=forward] + zuul -- swift [label=HTTPS dir=forward] + web -- swift [label=HTTPS dir=back] + user -- web [label=HTTPS dir=back] +} + diff --git a/doc/source/dot/helpcenter.dot b/doc/source/dot/helpcenter.dot new file mode 100644 index 0000000..306c734 --- /dev/null +++ b/doc/source/dot/helpcenter.dot @@ -0,0 +1,13 @@ +digraph HelpCenter { + graph [bgcolor=transparent compound=true fontcolor="#2D3436" fontname="Sans-Serif" fontsize=10 rankdir=LR] + node [fixedsize=false] + user [label=Clients fixedsize=true fontsize=10 height=1.4 image="../_images/users.png" imagescale=true labelloc=b shape=none width=1] + web [label=WebServer fixedsize=true fontsize=10 height=1.4 image="../_images/nginx.png" imagescale=true labelloc=b shape=none width=1] + github [label="GitHub Projects" fixedsize=true fontsize=10 height=1.4 href="https://github.com/opentelekomcloud-docs" image="../_images/github.png" imagescale=true 
labelloc=b shape=none width=1]
+    zuul [label="Zuul CI/CD" fixedsize=true fontsize=10 height=1.4 href="https://docs.otc-service.com/system-config/zuul.html" image="../_images/zuulci.png" imagescale=true labelloc=b shape=none width=1]
+    swift [label="Swift Object Store" fixedsize=true fontsize=10 height=1.4 image="../_images/swift.png" imagescale=true labelloc=b shape=none width=1]
+    user -> web [label=Pull color=black fontsize=8]
+    web -> swift [label=Pull color=black fontsize=8]
+    github -> zuul [label=Push color=red fontsize=8]
+    zuul -> swift [label=Push color=red fontsize=8]
+}
diff --git a/doc/source/gitcontrol.rst b/doc/source/gitcontrol.rst
new file mode 100644
index 0000000..7144c20
--- /dev/null
+++ b/doc/source/gitcontrol.rst
@@ -0,0 +1,155 @@
+:title: Git Control
+
+.. _git_control:
+
+Git Control
+###########
+
+Automation of GitHub organization management.
+
+At a Glance
+===========
+
+:Hosts:
+:Projects:
+ * `Ansible Collection Gitcontrol`_
+ * `Gitstyring`_
+:Configuration:
+ * https://github.com/opentelekomcloud-infra/gitstyring/tree/main/orgs
+ * supplementary closed source project (gitlab/ecosystem/gitstyring)
+:Bugs:
+:Resources:
+
+Overview
+========
+
+This project combination automates the management of the Open Telekom Cloud
+GitHub organizations. It currently takes care of the following:
+
+* project settings in the organizations
+* branch protection for the projects
+* team/collaborator permissions on the projects
+* organization team management (description, membership)
+* organization collaborator management (membership)
+
+Software Architecture
+=====================
+
+The Ansible collection (`opentelekomcloud.gitcontrol`) implements modules for
+managing GitHub organizations, projects and users. The `Gitstyring`_ project
+defines the configuration to be applied.
+
+:ref:`Zuul` jobs defined in the `Gitstyring`_ projects are responsible for
+applying the target configuration.
The workflow is implemented as follows:
+
+- A temporary VM is prepared
+- The `Ansible Collection Gitcontrol`_ collection is installed together with
+  Ansible
+- Loop over the target managed organizations:
+
+  - A temporary GitHub token is retrieved according to `gh_auth`_ for the
+    OTCBot GitHub application for the organization. The private key for token
+    signing is retrieved from Vault.
+  - The configured state of the organization members is applied using the
+    temporary token.
+  - The configured state of the organization teams is applied using the
+    temporary token.
+  - The configured state of the organization projects is applied using the
+    temporary token.
+  - The temporary token is revoked.
+
+Security Design
+===============
+
+
+Security Architecture
+---------------------
+
+GitHub organizations are managed using the OTCBot `GitHub application
+`_. This allows
+avoiding the necessity of using pre-created tokens with administration
+privileges. The private key of the GitHub application is stored in Vault, and
+a special Vault policy is defined to allow access to it. The required
+configuration projects use a dedicated `AppRole `_
+in combination with the mentioned policy to restrict which projects are able to
+access the key. Using the application private key, a JWT is generated, which
+is used to obtain an application installation token with the required scope to
+apply the configuration to the organization.
+After use, the installation token is forcibly revoked by sending a DELETE call
+to the GitHub API.
+
+As a next step for improving security, a special Vault plugin is going to
+be created that takes an organization name and a desired permission set and
+returns a dedicated installation token. This will avoid the private key ever
+leaving Vault.
+
+Every change proposed to the target configuration is applied in
+dry-run mode using a token with read-only privileges to verify the
+configuration.
+
+Separation
+----------
+
+Not applicable.
+
+Interface Description
+---------------------
+
+Not available.
+
+Tenant Security
+---------------
+
+Not applicable.
+
+O&M Access Control
+------------------
+
+Not applicable.
+
+Logging and Monitoring
+----------------------
+
+Logs for the execution can be found in the corresponding Zuul job execution
+logs.
+
+Patch Management
+----------------
+
+Not applicable.
+
+Hardening
+---------
+
+Not applicable.
+
+Certificate Handling
+--------------------
+
+Not required.
+
+The private key of the GitHub application is kept in Vault. It can be rotated
+by administrators of the GitHub opentelekomcloud-infra organization generating
+a new key and overwriting it in Vault.
+
+Backup and Restore
+------------------
+
+Not applicable.
+
+User and Account management
+---------------------------
+
+User mapping is configured by `Gitstyring`_. No password/token management is
+implemented.
+
+Communication Matrix
+--------------------
+
+Not applicable.
+
+Deployment
+==========
+
+Not applicable.
+
+.. _Gitstyring: https://github.com/opentelekomcloud-infra/gitstyring
+.. _`Ansible Collection Gitcontrol`: https://github.com/opentelekomcloud/ansible-collection-gitcontrol
+.. _gh_auth: https://docs.github.com/en/developers/apps/building-github-apps/authenticating-with-github-apps#authenticating-as-a-github-app
diff --git a/doc/source/helpcenter.rst b/doc/source/helpcenter.rst
new file mode 100644
index 0000000..c7b5cd5
--- /dev/null
+++ b/doc/source/helpcenter.rst
@@ -0,0 +1,77 @@
+:title: Help Center
+
+Help Center
+###########
+
+The Open Telekom Cloud Help Center is a web server that serves documentation
+and release notes created by various software projects of the Open Telekom
+Cloud.
+
+At a Glance
+===========
+
+:Hosts:
+ * https://docs-beta.otc.t-systems.com
+:Projects:
+ * https://github.com/opentelekomcloud/otcdocstheme
+ * https://github.com/opentelekomcloud-docs/docsportal
+ * https://github.com/opentelekomcloud-docs/
+:Configuration:
+ * :git_file:`playbooks/roles/document_hosting_k8s/templates/nginx-site.conf.j2`
+ * :git_file:`inventory/service/group_vars/k8s-controller.yaml`
+:Bugs:
+:Resources:
+
+Overview
+========
+
+Every project on GitHub under the opentelekomcloud-docs organization is
+capable of delivering documentation to the Help Center. Originally this
+documentation represents API reference documents and User Guides which need to
+be served on the Help Center for user reference. However, there is no general
+limitation on which types of documents are managed, and projects can manage
+further content (i.e. developer guides, how-tos, etc.).
+
+Every git project ideally represents a single service of the Open Telekom
+Cloud.
+
+Integration of projects under the :ref:`Zuul` allows the following:
+
+- CI for the changes in the project (i.e. only tested and approved content is
+  merged into the main branch)
+
+- CD: for merged changes, documents are built and pushed to the Help Center.
+
+The Help Center is implemented as a :ref:`docsportal` instance with no
+additional reverse proxy. Since the published content is designed to be
+public, no additional access limitations are applied.
+
+Software Architecture
+=====================
+
+A web server (nginx) listens for requests in the frontend and, based on the
+URL, decides in which container the data is actually located. It contacts the
+storage server and fetches the original content, which is then cached and
+returned to the requestor.
+
+.. graphviz:: dot/helpcenter.dot
+   :caption: Help Center software architecture from the protocol point of view
+
+..
include:: docsportal_sec.rst.inc
+
+Deployment
+==========
+
+:git_file:`playbooks/service-docs.yaml` is a playbook for the service
+configuration and deployment. It is automatically executed whenever a pull
+request touching any of the affected files (roles, inventory) is merged.
+Additionally, it is applied periodically.
+
+The deployment model of the Help Center is as
+follows:
+
+* The web server (nginx) runs as part of the
+  K8s deployment and is exposed to the public
+  internet via an Ingress.
+
+* OpenStack Swift is used as object storage
+  with a publicly readable container (in a
+  dedicated project).
diff --git a/doc/source/index.rst b/doc/source/index.rst
new file mode 100644
index 0000000..8ca0c2d
--- /dev/null
+++ b/doc/source/index.rst
@@ -0,0 +1,12 @@
+Ecosystems Infrastructure
+=========================
+
+This documentation covers the installation and maintenance of the
+infrastructure elements used by the Ecosystem team of the Open Telekom Cloud.
+
+
+.. toctree::
+   :maxdepth: 2
+
+   systems
+   roles
diff --git a/doc/source/matrix.rst b/doc/source/matrix.rst
new file mode 100644
index 0000000..459d109
--- /dev/null
+++ b/doc/source/matrix.rst
@@ -0,0 +1,30 @@
+:title: Matrix
+
+Matrix homeserver
+#################
+
+Matrix is a mesh communication network that allows joining multiple protocols
+under one roof.
+
+
+At a Glance
+===========
+
+:Hosts:
+ * https://matrix.otc-service.com
+:Projects:
+:Bugs:
+:Resources:
+:Chat:
+ * #General:matrix.otc-service.com Matrix room
+
+Overview
+========
+
+
+Deployment and Processing flow
+==============================
+
+* ``playbooks/service-matrix.yaml`` is a playbook for the service configuration
+  and deployment.
+
+
diff --git a/doc/source/proxy.rst b/doc/source/proxy.rst
new file mode 100644
index 0000000..b0fc8d7
--- /dev/null
+++ b/doc/source/proxy.rst
@@ -0,0 +1,139 @@
+:title: Proxy
+
+Reverse Proxy
+#############
+
+Multiple resources are deployed behind the reverse proxy in order to enable
+proper load balancing, failover and hybrid resource deployment (resources
+deployed in different networks without the possibility to use the Cloud Load
+Balancer).
+
+At a Glance
+===========
+
+:Hosts:
+:Projects:
+ * https://www.haproxy.org/
+:Configuration:
+ * :git_file:`inventory/service/group_vars/proxy.yaml`
+ * :git_file:`playbooks/roles/haproxy/templates/haproxy.cfg.j2`
+:Bugs:
+:Resources:
+
+Software Architecture
+=====================
+
+Regular, unmodified haproxy software is deployed in VMs and is exposed through
+the Cloud Load Balancer.
+
+Security Design
+===============
+
+Security Architecture
+---------------------
+
+* haproxy is deployed in a container on a dedicated VM
+* the firewalld component is deployed on the VM and only opens the required
+  ports (configured as part of this repository)
+* VMs do not have public IPs and can only be physically accessed through the
+  :ref:`bridge`
+* HTTP/HTTPS traffic reaches the service through the Cloud Load Balancer
+
+.. raw:: html
+
+
+
+Separation
+----------
+
+The service runs on dedicated VMs without any other additional services
+running.
+
+Interface Description
+---------------------
+
+The Cloud Load Balancer distributes load across multiple haproxy instances.
+haproxy exposes ports 80 and 443 in the internal network, where those are
+consumed by the Cloud Load Balancer.
+
+Tenant Security
+---------------
+
+No customer service is deployed in the domain dedicated to the Ecosystem
+Squad. Only members have permissions there.
+
+O&M Access Control
+------------------
+
+Only users enabled in the :git_file:`inventory/base/group_vars/all.yaml` are
+able to log in to the underlying infrastructure.
+
+Logging and Monitoring
+----------------------
+
+* haproxy logs (on the proxyX.YY VMs)
+* haproxy emits StatsD metrics into the Graphite DB and those can be observed
+  using Grafana
+
+
+Patch Management
+----------------
+
+The service consists of OpenSource elements only. Whenever a new release of
+any software element (haproxy) is identified, a pull request to this
+repository needs to be created to use it in the deployment.
+Patching of the underlying VM (haproxy) is executed as a regular job applying
+all the existing OS updates.
+
+Hardening
+---------
+
+All configuration files for the hosts are part of this repository. Every VM is
+managed by the System Config project, applying the same hardening rules to
+every host according to the configuration:
+
+* :git_file:`inventory/service/host_vars/proxy1.eco.tsi-dev.otc-service.com.yaml`
+* :git_file:`inventory/service/host_vars/proxy2.eco.tsi-dev.otc-service.com.yaml`
+
+Certificate Handling
+--------------------
+
+SSL certificates are obtained using the Let's Encrypt certificate authority
+(:git_file:`playbooks/acme-certs.yaml`).
+The following is important:
+
+* Haproxy certificates are generated using the same procedure on the haproxy
+  hosts themselves.
+* Certificate renewal and service reload happen automatically.
+
+Backup and Restore
+------------------
+
+No backup/restore procedure exists. Infrastructure deployment is automated and
+can be redeployed when necessary.
+
+
+User and Account management
+---------------------------
+
+No user accounts exist.
+
+Communication Matrix
+--------------------
+
+.. list-table::
+
+   * - From \\ To
+     - haproxy
+     - elb
+   * - haproxy
+     - N/A
+     - N/A
+   * - elb
+     - TCP(80,443)
+     - N/A
+
+
+Deployment
+==========
+
+* ``playbooks/service-proxy.yaml`` is a playbook for the service configuration
+  and deployment.
diff --git a/doc/source/roles.rst b/doc/source/roles.rst
new file mode 100644
index 0000000..085fd89
--- /dev/null
+++ b/doc/source/roles.rst
@@ -0,0 +1,28 @@
+:title: Roles
+
+Ansible Roles
+#############
+
+Documentation for roles included in `system-config`.
+
+There are two types of roles. Top-level roles, kept in the ``roles/``
+directory, are available to be used as roles in Zuul jobs. This
+places some constraints on the roles, such as not being able to use
+plugins. Add
+
+.. code-block:: yaml
+
+   roles:
+     - zuul: opentelekomcloud-infra/system-config
+
+to your job definition to source these roles.
+
+Roles in ``playbooks/roles`` are designed to be run on the
+Infrastructure control-plane (i.e. from ``bridge.eco.tsi-dev.otc-service.com``).
+These roles are not available to be shared with Zuul jobs.
+
+Role documentation
+------------------
+
+
+.. zuul:autoroles::
diff --git a/doc/source/swift.rst b/doc/source/swift.rst
new file mode 100644
index 0000000..fea01a6
--- /dev/null
+++ b/doc/source/swift.rst
@@ -0,0 +1,161 @@
+:title: Swift
+
+OpenStack Swift
+###############
+
+Open Telekom Cloud Swift does not match the upstream OpenStack software. To
+overcome compatibility issues, the real upstream software can be used with no
+code changes.
+
+At a Glance
+===========
+
+:Hosts:
+ * https://swift.eco.tsi-dev.otc-service.com
+:Projects:
+ * https://opendev.org/openstack/swift
+ * https://github.com/opentelekomcloud-infra/validatetoken
+:Configuration:
+:Bugs:
+:Resources:
+ * `OpenStack Swift documentation`_
+
+Overview
+========
+
+Upstream OpenStack Swift software is deployed in an isolated Open Telekom Cloud
+project and is exposed using the Cloud Load Balancer.
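+
+Assuming the standard Swift healthcheck middleware is enabled in the proxy
+pipeline, basic reachability of the deployment can be verified with:
+
+.. code-block:: console
+
+   $ curl https://swift.eco.tsi-dev.otc-service.com/healthcheck
+   OK
+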
+
+
+Software Architecture
+=====================
+
+Software components
+-------------------
+
+* OpenStack Swift Proxy service - Authorization and API handling
+* OpenStack Swift Storage services - Data storage
+* Keystone authentication middleware (validatetoken) - oslo middleware to
+  verify token information
+
+Network setup
+-------------
+
+* external network (API handling)
+* storage network (communication between proxy services and storage nodes)
+* replication network (data synchronization between storage nodes)
+* management network (used to provision software)
+* the cloud load balancer uses the external network to communicate with the
+  Swift proxy servers
+
+Security Design
+===============
+
+Swift does not have its own authentication database. In order to verify the
+validity of an API request, it sends a request to Keystone (IAM) to verify the
+passed token. When a positive response is received, Swift further decides
+whether the user is authorized to perform the action. This happens based on
+the roles the user has and does not require any additional (local)
+information.
+
+The software is deployed in an isolated project of the Open Telekom Cloud
+public domain and does not share the infrastructure with any other components.
+Management of the installation is achieved using VPC peering between the
+management network of the installation and the :ref:`bridge`.
+
+User data is stored unencrypted on the storage nodes. Technically it is
+possible to enable `encryption `_, but due to
+the absence of any customer or otherwise sensitive data it is not
+enabled.
+
+Separation
+----------
+
+* The software is deployed in an isolated project
+* Hosts running the software have multiple network interfaces and only
+  required traffic is allowed (default: drop)
+
+Interface Description
+---------------------
+
+The service is exposed to the internet only through the load balancer HTTPS
+port. This implements the `REST API `.
+Authorization requires passing the ``X-Auth-Token`` header with a valid Identity +token. + +Tenant Security +--------------- + +An isolated project and an isolated management user are used. + +O&M Access Control +------------------ + +Only users enabled in the :git_file:`inventory/base/group_vars/all.yaml` are +able to log in to the underlying infrastructure. + + +Logging and Monitoring +---------------------- + +There are two sets of logs available: + +* proxy logs (on the proxy VMs) +* account/container/object service logs (on the storage VMs) + +Certificate Handling +-------------------- + +SSL certificates are obtained using the Let's Encrypt Certificate Authority +(:git_file:`playbooks/acme-certs.yaml`). The certificate for Swift is generated on +the :ref:`bridge` host and is uploaded to the Cloud Load Balancer service after +rotation. + +Backup and Restore +------------------ + +No Backup and Restore functionality is currently implemented. + +User and Account Management +--------------------------- + +The official Open Telekom Cloud Identity Service (IAM) is used for user and account +management. No related data is stored in Swift. + +Communication Matrix +-------------------- + +.. list-table:: External communication matrix + + * - From/To + - Swift + - elb + * - Swift + - N/A + - N/A + * - elb + - HTTP(8080) + - N/A + + +.. list-table:: Internal communication matrix + + * - From/To + - bridge + - proxy + - storage + * - bridge + - SSH + - SSH + - SSH + * - proxy + - N/A + - N/A + - TCP(6200,6201,6202) + * - storage + - N/A + - N/A + - Rsync + +Deployment +========== + + +.. _OpenStack Swift Documentation: https://docs.openstack.org/swift/latest/overview_architecture.html diff --git a/doc/source/systems.rst b/doc/source/systems.rst new file mode 100644 index 0000000..1f5644d --- /dev/null +++ b/doc/source/systems.rst @@ -0,0 +1,16 @@ +:title: Major Systems + +Major Systems +############# + +.. 
toctree:: + :maxdepth: 2 + + bridge + zuul + docsportal + matrix + helpcenter + swift + proxy + gitcontrol diff --git a/doc/source/zuul.rst b/doc/source/zuul.rst new file mode 100644 index 0000000..e6158fa --- /dev/null +++ b/doc/source/zuul.rst @@ -0,0 +1,458 @@ +:title: Zuul CI/CD + +.. _Zuul: + +Zuul CI/CD +########## + +Zuul is a pipeline-oriented project gating system. It facilitates +running tests and automated tasks in response to Code Review events. + +At a Glance +=========== + +:Hosts: + * https://zuul.otc-service.com +:Projects: + * https://opendev.org/zuul/zuul +:Configuration: + * :git_file:`inventory/service/group_vars/zuul.yaml` +:Bugs: +:Resources: + * `Zuul Reference Manual`_ +:Chat: + * #zuul:matrix.otc-service.com Matrix room + +Overview +======== + +The Open Telekom Cloud project uses a number of pipelines in Zuul: + +**check** + Newly uploaded patchsets enter this pipeline to receive an initial + +/-1 Verified vote. + +**gate** + Changes that have been approved by core reviewers are enqueued in + order in this pipeline, and if they pass tests, will be merged. + +**post** + This pipeline runs jobs that operate after each change is merged. + +**release** + When a commit is tagged as a release, this pipeline runs jobs that + publish archives and documentation. + +**tag** + When a commit is tagged as a release (non-semantic naming scheme), this + pipeline runs jobs that publish archives and documentation. + +**periodic** + This pipeline has jobs triggered on a timer, e.g. for daily testing for + environmental changes. + +**promote** + This pipeline runs jobs that operate after each change is merged + in order to promote artifacts generated in the gate + pipeline. + +The **gate** pipeline uses speculative execution to improve +throughput. Changes are tested in parallel under the assumption that +changes ahead in the queue will merge. If they do not, Zuul will +abort and restart tests without the affected changes.
This means that +many changes may be tested in parallel while continuing to assure that +each commit is correctly tested. + +Zuul's current status may be viewed at +``_. + +Software Architecture +===================== + +Please refer to the `Zuul Reference Manual`_ for a detailed explanation of how Zuul is designed. + +.. raw:: html + + + +Security Design +--------------- + +Security Architecture +~~~~~~~~~~~~~~~~~~~~~ + +.. raw:: html + + + +Separation +~~~~~~~~~~ + +Zuul consists of the following major components: + +* nodepool-launcher + + * The main nodepool component is named nodepool-launcher and is responsible + for managing cloud instances launched from the images created and uploaded + by nodepool-builder. + +* nodepool-builder + + * The nodepool-builder builds and uploads images to providers. + +* zuul-executor + + * Executors are responsible for running jobs. At the start of each job, an + executor prepares an environment in which to run Ansible which contains all + of the git repositories specified by the job with all dependent changes + merged into their appropriate branches. + +* zuul-scheduler + + * The scheduler is the primary component of Zuul. It receives events from any + connections to remote systems which have been configured, enqueues items + into pipelines, distributes jobs to executors, and reports results. + +* zuul-merger + + * Zuul performs a lot of git operations and often needs to perform a + speculative merge in order to determine whether any further action is + required. A standalone merger reduces the load on the executors. + +* zuul-web + + * The Zuul web server serves as the single process handling all HTTP + interactions with Zuul. This includes the websocket interface for live log + streaming, the REST API and the html/javascript dashboard.
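+ +The separation above can be sketched as each component running in its own +container, with Zookeeper as the only shared dependency. The fragment below is +purely illustrative (image name, labels and secret name are assumptions, not +the actual deployment manifests): + +.. code-block:: yaml + + # Illustrative only: each Zuul component is a separate Deployment; + # components expose no ports to each other -- coordination happens + # exclusively through the Zookeeper service, authenticated with a + # client TLS certificate mounted from a secret. + apiVersion: apps/v1 + kind: Deployment + metadata: + name: zuul-scheduler + spec: + replicas: 1 + selector: + matchLabels: {app: zuul-scheduler} + template: + metadata: + labels: {app: zuul-scheduler} + spec: + containers: + - name: scheduler + image: quay.io/zuul-ci/zuul-scheduler + volumeMounts: + - name: zookeeper-client-tls + mountPath: /tls + volumes: + - name: zookeeper-client-tls + secret: + secretName: zookeeper-client-tls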
+ +In addition to the components of Zuul itself, the following external +components are used: + +* zookeeper +* SQL database +* cloud resources (for spinning up VMs or containers for job execution) + +None of the components of Zuul communicate directly with each other; they +instead rely on an external Zookeeper with TLS encryption for exchanging +information. Components use TLS certificates to authenticate to Zookeeper. + +Details can be found at `Zuul Components`_. + +Interface Description +~~~~~~~~~~~~~~~~~~~~~ + +The Zuul system implements the following interfaces for communication with +outside systems: + +* Web component (managed by the zuul-web component): + + * Web UI interface (gives user information on job status) + * REST API (allows R/O operations for querying status) + * Webhook listener (listens for events from git hosting backends) + +In addition to that, Zuul accesses the following systems: + +* Zookeeper (for internal communication) + + * protected with TLS and TLS client certificates + +* SQL Database (for storing job results) + + * protected with TLS and username/password + +* External Log Storage (Swift for storing job logs) + + * protected with TLS and username/password/token + +* Git hosting (for read and write operations) + + * Relies on SSH access protected with an SSH key + +* Cloud resources (for performing the required tests) + + * protected according to the requirements of the particular cloud provider + (username/password, token, client certificate). In general TLS is used for + API invocation (for provisioning resources) and afterwards SSH with a + private key to further execute Ansible on the resource. Once the resource + is no longer used, an API request is sent to the cloud provider via TLS to + decommission it. + +Further details can be found in the `Zuul Admin Reference`_. + +Tenant Security +~~~~~~~~~~~~~~~ + +Every tenant of Zuul is configured through the `zuul-config`_ repository.
+Every tenant includes a list of projects which are allowed to use the system. Git +projects not configured are ignored. In addition, only events from git +projects with branch protection enabled are respected by Zuul. + +During job execution by `zuul-executor +`_ +projects are tested in a completely isolated context, guaranteeing both +isolation of projects as well as protection of the system from potential +vulnerabilities or malicious actions by the projects themselves. + +Zuul jobs triggered upon corresponding git actions are executed either in +isolated dedicated VMs provisioned in the cloud or in Kubernetes pods in +isolated namespaces. + +Further details can be found in the `Zuul Tenant Configuration`_. + +O&M Access Control +~~~~~~~~~~~~~~~~~~ + +Zuul administrators have access to every component of the Zuul system. +This makes it possible to access execution logs of test jobs (which are +published at the end of the execution anyway), as well as to enqueue/dequeue +pull/merge requests in particular pipelines. This access, however, does not +make it possible to bypass the merge requirements set by a project (a Zuul +administrator is not able to force merging of a pull/merge request); this can +be done only by people with direct git hosting admin or write access. + +Logging and Monitoring +~~~~~~~~~~~~~~~~~~~~~~ + +Zuul logs all jobs being performed. This information is made public so +that pull request initiators are able to know the status of the test. It must be +noted, however, that every Zuul tenant is responsible for defining base jobs +which either make logs publicly available or not. In general those jobs +are themselves responsible for maintaining the log files (whether to put them +on some external log hosting or discard them immediately). + +Zuul internal logging is done completely independently and is produced on the +systems running the Zuul components themselves.
These logs are maintained +according to the requirements of the Zuul installation. + +In addition to the Zuul component logging, Zuul also supports emitting metrics. +It supports StatsD metrics pushing and Prometheus metric fetching. More details +can be found in `Zuul Monitoring`_. + +Patch Management +~~~~~~~~~~~~~~~~ + +Zuul administrators are responsible for updating the Zuul software and taking care +of the platform where those components are running. + +Hardening +~~~~~~~~~ + +The following measures harden the Zuul installation: + +* Zuul is deployed in a dedicated Kubernetes cluster and every component is + running as a container. + +* Access to the Zuul UI and REST API is implemented through the Cloud Load + Balancer and the K8 Ingress controller attached to it. + +* Secret data used in Zuul is stored in Vault and can be easily rotated with + the required frequency. + +* Cloud resources used by Zuul are protected by security groups. Moreover, the + connection is implemented by means of internal VPC peering connections + with no direct access using public IP addresses. + +* The Zookeeper instance used by Zuul is a dedicated instance with no external + access. + +* The SQL DB used by Zuul is a dedicated instance with no public IP address. + +* API and SSH access to the git hosting can be additionally protected by + whitelisting the Zuul external IP address. + +Backup and Restore +~~~~~~~~~~~~~~~~~~ + +Zuul is built on the principle of storing all required information in git. +This applies both to the configuration of which jobs are executed for which +project and to the Zuul configuration itself. This makes backups largely +unnecessary. There are, however, some parts of the installation that +require backups: + +* private/public keys for the project secrets (private keys are additionally + protected by a password). + +Details on the methods can be found `here +`_.
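+ +As a sketch of how such a key backup could be automated from the control +plane (the export command is documented in the Zuul client reference; the +target path and the task wrapper are assumptions): + +.. code-block:: yaml + + # Illustrative Ansible task: export the per-project secret keys from + # the scheduler so they can be archived; the exported private keys + # remain protected by the configured password. + - name: Export Zuul project keys for backup + ansible.builtin.command: + cmd: zuul-admin export-keys /var/backup/zuul-keys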
+ +Certificate Handling +~~~~~~~~~~~~~~~~~~~~ + +There are a few types of certificates used in Zuul: + +* Zookeeper client TLS certificates +* TLS certificates for the API/UI (Web access) +* API keys and private keys for SSH and API access to the git hosting. + +Those certificates must be maintained according to the security +requirements and deployment specifics. In general it is preferred to use +short-lived self-signed certificates for the Zookeeper cluster and +Let's Encrypt certificates for Web access. + +User and Account Management +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Generally Zuul does not support user accounts. It mainly communicates with +git hosting systems with appropriate credentials and has no information about +the particular users proposing changes there. + +Zuul supports an optional `Tenant Scoped REST API +`_, but +this is not enabled in the current installation. + +Operational accounts +^^^^^^^^^^^^^^^^^^^^ + +There are no granular operator accounts in the Zuul installation. There is +only one account that allows operating the system. + +Technical and M2M accounts +^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Every component of Zuul communicates only with Zookeeper. For this a Zookeeper +client TLS certificate is used. No other technical or M2M accounts exist on +the system. + +Communication Matrix (internal) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +As mentioned above, Zuul components communicate with each other only through +Zookeeper. When one component needs to communicate with another one, it places +the request in Zookeeper. + +.. 
list-table:: + + * - From \\ To + - zookeeper + - vault + * - nodepool-builder + - TLS(2281) + - TLS(8200) + * - nodepool-launcher + - TLS(2281) + - TLS(8200) + * - zuul-web + - TLS(2281) + - TLS(8200) + * - zuul-merger + - TLS(2281) + - TLS(8200) + * - zuul-executor + - TLS(2281) + - TLS(8200) + * - zuul-scheduler + - TLS(2281) + - TLS(8200) + * - zookeeper + - TLS(2888,3888) + - TLS(8200) + +Zookeeper protocol details can be found at `Zookeeper Internals +`_. + +Communication Matrix (external) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. list-table:: + + * - From \\ To + - SQL DB + - Git hosting + - Cloud + * - nodepool-builder + - N/A + - N/A + - [CLOUD_TLS]_ + * - nodepool-launcher + - N/A + - N/A + - [CLOUD_TLS]_ + * - zuul-web + - [DB_TLS]_ + - [TLS]_ + - N/A + * - zuul-merger + - N/A + - [SSH]_ + - N/A + * - zuul-executor + - N/A + - [SSH]_ + - [SSH]_ + * - zuul-scheduler + - N/A + - N/A + - N/A + +.. [TLS] HTTPS encrypted (TLS) on port 443 +.. [SSH] SSH encrypted on custom port (depends on the git provider) +.. [CLOUD_TLS] HTTPS encrypted (TLS) on port 443 +.. [DB_TLS] Database protocol, encrypted (TLS) (port depends on the concrete DB type) + +Deployment Design +================= + +Zuul is installed in an isolated Kubernetes cluster. As a means of further +security isolation, the SQL database and Zookeeper must be installed +dedicated exclusively to the Zuul instance. + +Secrets required for Zuul operation are fetched by the components from the +`Vault`_ instance. This is achieved by relying on the following items: + +* https://www.vaultproject.io/docs/auth/kubernetes + + * The service account of the Zuul user is registered in Vault for the + corresponding K8 cluster and namespace. + +* https://www.vaultproject.io/docs/secrets/kv/kv-v2 + + * A strict policy is granted to the user, giving read-only access to the + required secrets.
+ +* https://www.vaultproject.io/docs/agent + + * A Vault agent is deployed as a sidecar container for Zuul components and + is responsible for fetching the required secrets from Vault and rendering + them into the corresponding config files. + +* The Vault instance is not accessible publicly (it has no public IP address) + +.. raw:: html + + + +Network Deployment Design +------------------------- + +Zuul components are installed inside a single Kubernetes cluster. This +means all components are placed in dedicated virtual networks of +Kubernetes. Communication with Zookeeper happens through a Kubernetes +Service. + +Software Deployment Design +-------------------------- + +* nodepool-builder is deployed using + :git_file:`playbooks/roles/zuul_k8s/tasks/nodepool.yaml` +* nodepool-launcher is deployed using + :git_file:`playbooks/roles/zuul_k8s/tasks/nodepool.yaml` +* zuul-web component is deployed using + :git_file:`playbooks/roles/zuul_k8s/tasks/zuul-web.yaml` +* zuul-merger component is deployed using + :git_file:`playbooks/roles/zuul_k8s/tasks/zuul-merger.yaml` +* zuul-executor component is deployed using + :git_file:`playbooks/roles/zuul_k8s/tasks/zuul-executor.yaml` +* zuul-scheduler component is deployed using + :git_file:`playbooks/roles/zuul_k8s/tasks/zuul-scheduler.yaml` +* zookeeper is deployed using + :git_file:`playbooks/roles/zookeeper/tasks/k8s.yaml` + +.. _Zuul Reference Manual: https://zuul-ci.org/docs/zuul +.. _Zuul Status Page: http://zuul.otc-service.com +.. _zuul-config: https://github.com/opentelekomcloud-infra/zuul-config +.. _Zuul Admin Reference: https://zuul-ci.org/docs/zuul/reference/admin.html +.. _Zuul Tenant Configuration: https://zuul-ci.org/docs/zuul/reference/tenants.html +.. _Zuul Components: https://zuul-ci.org/docs/zuul/discussion/components.html +.. _Zuul Monitoring: https://zuul-ci.org/docs/zuul/reference/monitoring.html +.. 
_Vault: https://www.vaultproject.io/ diff --git a/inventory/base/group_vars/all.yaml b/inventory/base/group_vars/all.yaml new file mode 100644 index 0000000..8165025 --- /dev/null +++ b/inventory/base/group_vars/all.yaml @@ -0,0 +1,72 @@ +ansible_python_interpreter: python3 +silence_synchronize: true + +distro_lookup_path: + - "{{ ansible_facts.distribution }}.{{ ansible_facts.lsb.codename|default() }}.{{ ansible_facts.architecture }}.yaml" + - "{{ ansible_facts.distribution }}.{{ ansible_facts.lsb.codename|default() }}.yaml" + - "{{ ansible_facts.distribution }}.{{ ansible_facts.architecture }}.yaml" + - "{{ ansible_facts.distribution }}.yaml" + - "{{ ansible_facts.os_family }}.yaml" + - default.yaml + +iptables_base_allowed_hosts: [] +iptables_extra_allowed_hosts: [] +iptables_allowed_hosts: "{{ iptables_base_allowed_hosts + iptables_extra_allowed_hosts }}" + +iptables_base_allowed_groups: [] +iptables_extra_allowed_groups: [] +iptables_allowed_groups: "{{ iptables_base_allowed_groups + iptables_extra_allowed_groups }}" + +iptables_base_public_tcp_ports: [] +iptables_extra_public_tcp_ports: [] +firewalld_base_ports_enable: [] +firewalld_extra_ports_enable: [] +firewalld_base_services_enable: ['ssh'] +firewalld_extra_services_enable: [] +# iptables_test_public_tcp_ports is here only to allow the test +# framework to inject an iptables rule to allow zuul console +# streaming. Do not use it otherwise. +firewalld_ports_enable: "{{ firewalld_test_ports_enable|default([]) + firewalld_base_ports_enable + firewalld_extra_ports_enable }}" +firewalld_services_enable: "{{ firewalld_base_services_enable + firewalld_extra_services_enable }}" + +iptables_base_public_udp_ports: [] +iptables_extra_public_udp_ports: [] +iptables_public_udp_ports: "{{ iptables_base_public_udp_ports + iptables_extra_public_udp_ports }}" + +unbound_forward_zones: [] + +# When adding new users, always pick a UID larger than the last UID, do not +# fill in holes in the middle of the range. 
+all_users: + gtema: + comment: Artem Goncharov + key: | + ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDBBVL8LJ14SfFPK2zNeuO8rglURUJ32LFQXn0IzinZ7Y3ic8vtmF+UBvg+h8th56GZ3/DR9b+zcXfbA+0cdfTr+BWlDCYwcLab2vgU/S9FyQBzYr7ZWxtEFOmb5ztVp2b5wFt/DD7YBfyJNzM9SpVQDO4furwNZDq5af0+D67KOsV2BPLXL4/zMGkLR3TSFNzdJCSLrWML96NWK1FvpEjDroyKXFTVVcLBTgtBnFtpjpUzmlJSntaUxTQq1htiWLTGQL3ApLqx7YYctxDDkeBrWGSQPZgFppqhk5U8sWE9ieGztGuVyYzAhvz8YO9nm8M26izVebjwe+9u1hqa3Pk9 artem.goncharov + sk-ssh-ed25519@openssh.com AAAAGnNrLXNzaC1lZDI1NTE5QG9wZW5zc2guY29tAAAAIFhfujbhx20AzKf2okw9WnduPe2keIWkFDhsSLNlvMd6AAAABHNzaDo= gtema@yubikey + uid: 2000 + gid: 2000 + + zuul: + comment: Zuul CICD + key: | + ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqO/dXXqmBr1RP8+En5iuLDkPtk7S1jbqjD6QppHo3eKe0WDXeENydPQrXrYf1wJcRa9a8Mdxx2tSxVNqyNVLmlyzPzPc9K2TM6shtHoc3Jzd1HlmfB9MJU2amKuqePwAptCgsxxLBvK+mvh0kXmKnkfMSItCpjOyj6udwwFChJFU/2LB3X9FqLCQB7n3FYKwvbrFDtcIa1COo2h8TychwqWAPKj0Fh7M+mjaF41vcBcmz+uaNk5czC0c7b03TVjKTpYFEmZNtoc0taLP6Ya2exYdHo2uiPYmFiPdVFuv6AMpRnO9CRZzQv+1tlcEPVfsp8gHJVOI47NTx5c5PRTMl system-config + ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDKiyOg0fZzcJtk2OGmEH279Hyur9714hbyZetMV01/iMrMtxpZn0AlBVUjJlOI83Da75bRHLdTG0W4xrax8b+DFsskuuEWo/xVwli9BOYuh5yKgW1Wx/vs4OsYIkFoQIColACGIEqO/ts7xdTUdGnp2nWjBauBocgL/2uc2ytT2PjlsJPZkvDd93nZsryEyFkTKjykS/OgnYfYUcOoI5Agn4cWZSaiGWzLbSp/ebe46g4cAzrOfgYgbPFw1rfooKjyjELdvfFot7Mxj28WsTv+FIGc+vU+KMejJmD00eNBSPbZJl0ogeD0YNEq3MSuhPqOYA6WJs5Sl8tZGNTMt2hB gl-ecosystem-system-config-20220110 + ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDUM1QR8n5e20dmEjd4m556Ej6Spo4WTBI2OVzO4Rr63tylrHLEXkuzxTSPb87aBgmWulFND1+LBsivaFKdL12WF8elyy0T54cdW+O21isOLCVjRbSfjM0e8sme1lMoJXiupdAzWa3XD7cBCdRog79O/DYB/CLHq6gQuQt0a0+p0rea4dSAiXu5VYJ2IlH9hj3vmstuN8cDsGUNuqwyzFUWOgEQT0KMAjvPwoQ8Aft1LPDnEMhOk82JuQzLS8L3Vvpcwb00VqfC9eBGqBL/Rt6yWWERVxtHtdtGxzWz+5wMtUe6CK1lpa9TG2TbtBoSPoQjka8qh31M1TMRQbNvA4ap zuul-gitstyring-key-20210531 + ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQCrm6Bl1RgJ76JwwN9TBoh0FzpcwHtYOuXA3Q5XVx7hZdo8vx+/djQ0wp/LNoE+OtW9yhsZstCcLXLk1Qt28Ce4KDY4eVrj0XhuMPQ8QLSPKqNYHoI0I8/fM/iDPln47KgV59o1kb5dQ4OcGgKcCWHN4fehYLPLi9BBJ+UK5Lrf7FNzCWz9UJBZ00xpjOKOKKFKLGNo+lVIUbj6Ay1OWfa1FxaQemG22rxJU6eI/nt2CWvq8FTt2Bpe0tnnJhvbgyf7o4kE6Rb1VORxzryvN31ruR8jMDI1arW5M2qKbgbNMz/zFhSaY+ophQKbOZVEyLRxDyKCOJpSVvYal03beJGZ zuul-gitstyring2-key-20211103 + ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC1tOTit52KNxZ74KYFy8P6lLvaPy48zvCIWatfN1TcS+buj3L5abA6Vrb9JPXyIBlRW7dBUy259yTX0RLd/f7uysoXMvTAaUBNG54K+fI6HfXxhrQOEaR0dgPHLMjVucZ+Vay3SPtntwci1A6Zq9GJOeC9iBzLlu5W6Q2Eyko7tA+aB4IVVXTKbAigIsgS0bwOoBDh7nA3xbeGsQnnzvcFTXEvLpoe/e+hIS+olsNTiT6CeTjyOTDZsbZqAG9YncZzWi+KXe31EJ2y13S9zWXnwhcZY0VdJHEZFjrEsYSjjOSeaV08sl/VWtRBP9H4hREw+JwcB2MrGaoAKOSzrkLT zuul-octavia-proxy-key-20211012 + uid: 2031 + gid: 2031 + +# List of users to install on all hosts +base_users: + - gtema +# Default empty list of users to install on specific hosts or groups +extra_users: [] +# Users who should be removed +disabled_users: [] +# Default distro cloud image names to remove +disabled_distro_cloud_users: + - ubuntu + - linux + - centos + - admin diff --git a/inventory/base/hosts.yaml b/inventory/base/hosts.yaml new file mode 100644 index 0000000..33e0ae2 --- /dev/null +++ b/inventory/base/hosts.yaml @@ -0,0 +1,32 @@ +all: + hosts: + bastion.scs.otc-service.com: + ansible_host: 10.0.20.232 + ansible_user: automation + public_v4: 10.0.20.232 + host_keys: + - 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO3RHfleGRMVSNHSBg634EJwM1jYMrbsHTibECPttH1xc6Hdq5XSk/LWYYAeR8g3otMjxxwCVS13e/nMQNMlYvo=' + vault1.scs.otc-service.com: + ansible_host: 10.10.0.29 + ansible_user: automation + public_v4: 10.10.0.29 + host_keys: + - 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFiPzNLi9kxmb4FeAjpQ8GfXpUqzZxs+1L1JqYmAhaNPdy6DwcKglWde/ce3DxFA3YXGGNw8B1euq+hI/zoNVxI=' + vault2.scs.otc-service.com: + ansible_host: 10.10.0.120 + 
ansible_user: automation + public_v4: 10.10.0.120 + host_keys: + - 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNgVYQmU1AEzantVTjKpe1A6z22ve8/gMkdBFFwHgQicG6ppU+0L9LtVJsLd7xgSg8wnUGaZUotQ9sfKogwb2LQ=' + vault3.scs.otc-service.com: + ansible_host: 10.10.0.113 + ansible_user: automation + public_v4: 10.10.0.113 + host_keys: + - 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE3Afc7X7kB5v6Rre0LJRC05R/KVW5iV6q+KKyHHQWMCXTdEHRDkgXiSDwxV7FPneZB7QT42QqNfoa43Zz4ptP0=' + gitea1.scs.otc-service.com: + ansible_host: 10.10.0.6 + ansible_user: automation + public_v4: 10.10.0.6 + host_keys: + - 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA4L6C0gdxqrbueQf+cEWVHxmZmcewbYCGV5wqEayTXT4ceoktkyzHjOjk4fa91VmE5He+GkC1a88hDnWcwT2+w=' diff --git a/inventory/service/group_vars/all.yaml b/inventory/service/group_vars/all.yaml new file mode 100644 index 0000000..408bb06 --- /dev/null +++ b/inventory/service/group_vars/all.yaml @@ -0,0 +1,2 @@ +vault_image_stable: quay.io/opentelekomcloud/vault:change_668_latest +vault_image_latest: quay.io/opentelekomcloud/vault:change_668_latest diff --git a/inventory/service/group_vars/bastion.yaml b/inventory/service/group_vars/bastion.yaml new file mode 100644 index 0000000..4b7ba8e --- /dev/null +++ b/inventory/service/group_vars/bastion.yaml @@ -0,0 +1,8 @@ +bastion_key_exclusive: false + +kube_config_template: clouds/bridge_kube_config.yaml.j2 +extra_users: + - zuul + +cloud_launcher_profiles: [] +cloud_launcher_clouds: [] diff --git a/inventory/service/group_vars/cloud-launcher.yaml b/inventory/service/group_vars/cloud-launcher.yaml new file mode 100644 index 0000000..f7fceca --- /dev/null +++ b/inventory/service/group_vars/cloud-launcher.yaml @@ -0,0 +1,2 @@ +ansible_roles: + - cloud-launcher diff --git a/inventory/service/group_vars/gitea.yaml b/inventory/service/group_vars/gitea.yaml new file mode 100644 index 0000000..4828ac5 --- /dev/null +++ 
b/inventory/service/group_vars/gitea.yaml @@ -0,0 +1,30 @@ +gitea_version: "1.18.5" +gitea_checksum: "sha256:4766ad9310bd39d50676f8199563292ae0bab3a1922b461ece0feb4611e867f2" + +gitea_domain: "gitea.eco.tsi-dev.otc-service.com" +gitea_app_name: "Open Telekom Cloud: git" +gitea_root_url: "https://gitea.eco.tsi-dev.otc-service.com" +gitea_http_port: 443 +gitea_packages_enable: true + +fail2ban_filters: + - content: | + # gitea.conf + [Init] + datepattern = ^%%Y/%%m/%%d %%H:%%M:%%S + + [Definition] + failregex = .*(Failed authentication attempt|invalid credentials|Attempted access of unknown user).* from + ignoreregex = + dest: "/etc/fail2ban/filter.d/gitea.conf" + +fail2ban_jails: + - content: | + [gitea] + enabled = true + filter = gitea + logpath = /var/lib/gitea/log/gitea.log + maxretry = 10 + findtime = 3600 + bantime = 900 + dest: "/etc/fail2ban/jail.d/gitea.conf" diff --git a/inventory/service/group_vars/k8s-controller.yaml b/inventory/service/group_vars/k8s-controller.yaml new file mode 100644 index 0000000..bdc3e86 --- /dev/null +++ b/inventory/service/group_vars/k8s-controller.yaml @@ -0,0 +1,28 @@ +--- +helm_chart_instances: + otcci_cert-manager: + context: otcci + repo_url: https://charts.jetstack.io + repo_name: jetstack + name: cert-manager + ref: jetstack/cert-manager + version: v1.6.1 + namespace: cert-manager + values_template: "templates/charts/cert-manager/cert-manager-values.yaml.j2" + post_config_template: "templates/charts/cert-manager/cert-manager-post-config.yaml.j2" + otcci_nginx-ingress: + context: otcci + repo_url: https://kubernetes.github.io/ingress-nginx + repo_name: ingress-nginx + name: ingress-nginx + ref: ingress-nginx/ingress-nginx + version: 4.1.0 + namespace: default + values_template: "templates/charts/ingress-nginx/ingress-nginx-values.yaml.j2" + is_default: true + config_entries: + use-gzip: true + compute-full-forwarded-for: true + use-forwarded-headers: true + elb_id: "3d926b98-97ec-4060-be79-ac67c82298e7" + elb_eip: 
"80.158.57.224" diff --git a/inventory/service/group_vars/vault-controller.yaml b/inventory/service/group_vars/vault-controller.yaml new file mode 100644 index 0000000..adece15 --- /dev/null +++ b/inventory/service/group_vars/vault-controller.yaml @@ -0,0 +1,232 @@ +vault_policies_main: + # configure-vault playbook of the bridge to tune secret engines + - name: "sys-mounts-cru" + definition: | + path "sys/mounts/*" { capabilities = ["read", "list", "create", "update"] } + + # configure-vault playbook of the bridge to tune auth methods + - name: "sys-auth-ru" + definition: | + path "sys/mounts/auth/+/tune" { capabilities = ["read", "update"] } + + # configure-vault playbook of the bridge to tune secret engines + - name: "sys-leases-revoke" + definition: | + path "sys/leases/revoke" { capabilities = ["update"] } + + # configure-vault playbook of the bridge to maintain policies + - name: "policies-acl-rw" + definition: | + path "sys/policies/acl/*" { capabilities = ["read", "list", "create", "update", "delete"] } + + # configure-vault playbook of the bridge to maintain approles + - name: "approle-rw" + definition: | + path "auth/approle/role/*" { capabilities = ["read", "list", "create", "update", "delete"] } + + # configure-vault playbook of the bridge to maintain k8 authorizations + - name: "k8auth-rw" + definition: | + path "auth/+/config" { capabilities = ["read", "list", "create", "update", "delete"] } + + # configure-vault playbook of the bridge to maintain k8 auth roles + - name: "k8role-rw" + definition: | + path "auth/+/role/*" { capabilities = ["read", "list", "create", "update", "delete"] } + + # bridge playbooks to fetch inventory + - name: "k8-configs-ro" + definition: | + path "secret/data/kubernetes/*" { capabilities = ["read", "list"] } + + # ci cluster admin access for Zuul + - name: "ci-k8-config-ro" + definition: | + path "secret/data/kubernetes/otcci_k8s" { capabilities = ["read"] } + + # Zuul checking whether requested approle exists + - name: 
"approle-zuul-roles-read" + definition: | + path "auth/approle/role/zuul_eco_opentelekomcloud-infra_otc-zuul-jobs" { capabilities = ["read"] } + path "auth/approle/role/zuul_eco_opentelekomcloud-infra_zuul-project-config" { capabilities = ["read"] } + path "auth/approle/role/zuul_eco_opentelekomcloud-infra_gitstyring" { capabilities = ["read"] } + path "auth/approle/role/zuul_eco_opentelekomcloud-docs_doc-exports" { capabilities = ["read"] } + path "auth/approle/role/zuul_gl_ecosystem_zuul-project-config" { capabilities = ["read"] } + path "auth/approle/role/zuul_gl_ecosystem_gitstyring" { capabilities = ["read"] } + + # Zuul create new secret for the approle + - name: "approle-zuul-secret-id-w" + definition: | + path "auth/approle/role/zuul_eco_opentelekomcloud-infra_otc-zuul-jobs/secret-id" { capabilities = ["update"] } + path "auth/approle/role/zuul_eco_opentelekomcloud-infra_zuul-project-config/secret-id" { capabilities = ["update"] } + path "auth/approle/role/zuul_eco_opentelekomcloud-infra_gitstyring/secret-id" { capabilities = ["update"] } + path "auth/approle/role/zuul_eco_opentelekomcloud-docs_doc-exports/secret-id" { capabilities = ["update"] } + path "auth/approle/role/zuul_gl_ecosystem_zuul-project-config/secret-id" { capabilities = ["update"] } + path "auth/approle/role/zuul_gl_ecosystem_gitstyring/secret-id" { capabilities = ["update"] } + + # Bridge access to inventory + - name: "cloud-users-all-ro" + definition: | + path "secret/data/cloud_users/*" { capabilities = ["read", "list"] } + path "secret/metadata/cloud_users/*" { capabilities = ["read", "list"] } + path "secret/data/clouds/*" { capabilities = ["read", "list"] } + path "secret/metadata/clouds/*" { capabilities = ["read", "list"] } + + # zuul deployment to know own credentials + - name: "cloud-users-zuul-ro" + definition: | + path "secret/data/cloud_users/448_nodepool" { capabilities = ["read"] } + path "secret/metadata/cloud_users/448_nodepool" { capabilities = ["read"] } + path 
"secret/data/clouds/otcci_nodepool*" { capabilities = ["read"] } + path "secret/metadata/clouds/otcci_nodepool*" { capabilities = ["read"] } + + # Zuul want to get github token + - name: "otcci-gh-zuul" + definition: | + path "github_zuul/token" { capabilities = ["read", "create", "update"] } + + # zuul itself + - name: "zuul-app-ro" + definition: | + path "secret/data/zuul/*" {capabilities = ["read"] } + path "secret/metadata/zuul/*" {capabilities = ["read"] } + + # database secret engine mgmt + - name: "database-rw" + definition: | + path "database/*" {capabilities = ["read", "list", "create", "update", "delete"] } + + # Get credentials for databases + - name: "database-ro" + definition: | + path "database/*" {capabilities = ["read", "list"] } + + # Temporary storage of the db users (in kv store) + - name: "tmp-db-ro" + definition: | + path "secret/data/db/*" { capabilities = ["read"] } + path "secret/metadata/db/*" { capabilities = ["read"] } + + # some ssh stuff, most likely zuul + - name: "ssh-ro" + definition: | + path "secret/data/ssh/*" { capabilities = ["read"] } + path "secret/metadata/ssh/*" { capabilities = ["read"] } + + # jobs want to open PRs + - name: "gitea-cicd" + definition: | + path "secret/data/gitea_cicd" { capabilities = ["read"] } + path "secret/metadata/gitea_cicd" { capabilities = ["read"] } + + # Swift configuration + - name: "swift-ro" + definition: | + path "secret/data/swift/*" { capabilities = ["read"] } + path "secret/metadata/swift/*" { capabilities = ["read"] } + + # Get credentials for openstack cloud + - name: "openstack-ro" + definition: | + path "openstack/*" {capabilities = ["read", "list"] } + + # Maintain openstack clouds/roles + - name: "openstack-rw" + definition: | + path "openstack/*" {capabilities = ["read", "list", "create", "update", "delete"] } + + # Get password policies + - name: "pwd-policy-ro" + definition: | + path "sys/policies/password/*" {capabilities = ["read", "list"] } + + # Maintain password policies + - 
name: "pwd-policy-rw" + definition: | + path "sys/policies/password/*" {capabilities = ["read", "list", "create", "update", "delete"] } + + # Gitea configuration + - name: "gitea-ro" + definition: | + path "secret/data/gitea" { capabilities = ["read"] } + path "secret/metadata/gitea" { capabilities = ["read"] } + +vault_approles_main: + # This approle is used by bridge to provision systems + - name: "system-config-bridge" + token_policies: ["sys-mounts-cru", "sys-auth-ru", "policies-acl-rw", "approle-rw", "k8auth-rw", "k8role-rw", "cloud-users-all-ro", "tls-rw", "pki-int-zuul-rw", "k8-configs-ro", "tmp-db-ro", "grafana-config-ro", "alerta-config-ro", "oauth-ro", "ldap-ro", "database-ro", "ssh-ro", "promtail-ro", "opensearch-ro", "influxdb-ro", "swift-ro", "openstack-rw", "pwd-policy-rw", "sys-leases-revoke", "gitea-ro", "smtp-gw-ro", "keycloak-ro", "prometheus-ro", "argocd-ro"] + token_ttl: "2h" + +vault_k8roles_main: + # Zuul otcci auth + - name: "zuul" + auth_path: "kubernetes_otcci" + policies: ["tls-zuul-ro", "zuul-app-ro", "cloud-users-zuul-ro", "database-ro", "ci-k8-config-ro", "smtp-gw-ro"] + bound_service_account_names: ["zuul"] + bound_service_account_namespaces: ["zuul-ci"] + token_ttl: "3h" + +vault_pwd_policies_main: + - name: "os-policy" + policy: | + length = 20 + rule "charset" { + charset = "abcdefghijklmnopqrstuvwxyz" + min-chars = 1 + } + rule "charset" { + charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" + min-chars = 1 + } + rule "charset" { + charset = "0123456789" + min-chars = 1 + } + rule "charset" { + charset = "!@#$%^&*" + min-chars = 1 + } + +vault_os_clouds_main: +vault_os_roles_main: +vault_os_static_roles_main: +vault_instances: + # main redundancy cluster + main: + vault_addr: "https://vault-lb.scs.otc-service.com:8200" + vault_token: "{{ lookup('community.hashi_vault.hashi_vault', 'auth/token/lookup-self').id }}" + policies: "{{ vault_policies_main }}" + approle: + roles: "{{ vault_approles_main }}" + kubernetes: + auths: + - path: 
"kubernetes_otcci" + kubernetes_host: "{{ otcci_k8s.server }}" + kubernetes_ca_cert: "{{ otcci_k8s.secrets['ca.crt'] }}" + roles: "{{ vault_k8roles_main }}" + github: + auths: [] + pki: + # Admin settings + # Secret engines + secret_engines: + - path: "secret" + type: "kv" + description: "KV Secrets Engine" + options: + version: "2" + - path: "database" + type: "database" + description: "Database secrets Engine" + auths: + - path: "approle" + type: "approle" + description: "AppRole authorization" + - path: "kubernetes_otcci" + type: "kubernetes" + description: "OTC CI K8 cluster authorization" + pwd_policies: "{{ vault_pwd_policies_main }}" + # Opestack cloud/role definition + os_clouds: "{{ vault_os_clouds_main }}" + os_roles: "{{ vault_os_roles_main }}" + os_static_roles: "{{ vault_os_static_roles_main }}" diff --git a/inventory/service/group_vars/vault.yaml b/inventory/service/group_vars/vault.yaml new file mode 100644 index 0000000..5f27ce0 --- /dev/null +++ b/inventory/service/group_vars/vault.yaml @@ -0,0 +1,16 @@ +# Vault settings +vault_plugins: + - url: "https://github.com/opentelekomcloud-infra/vault-plugin-secrets-github/releases/download/v1.2.1/vault-plugin-secrets-github_1.2.1_linux_amd64.zip" + sha256: "9acd271a264a48cb8dfac055bb9849b3938fe8afbc794a2d81d14be1357cbcf5" + name: "vault-plugin-secrets-github" + type: "secret" + paths: + - "github" + - "github_otcbot" + - "github_zuul" + - url: "https://github.com/opentelekomcloud/vault-plugin-secrets-openstack/releases/download/v1.3.0/vault-plugin-secrets-openstack_1.3.0_linux_amd64.tar.gz" + sha256: "2f48d3011a0cc0ce4726e889f5d4103446eb820cdcc0ecb89deb03757e42568e" + name: "vault-plugin-secrets-openstack" + type: "secret" + paths: + - "openstack" diff --git a/inventory/service/groups.yaml b/inventory/service/groups.yaml new file mode 100644 index 0000000..c7addac --- /dev/null +++ b/inventory/service/groups.yaml @@ -0,0 +1,31 @@ +plugin: yamlgroup +groups: + bastion: + - bastion*.scs.otc-service.com + - 
bridge*.scs.otc-service.com + + ssl_certs: + - bridge.scs.otc-service.com + - vault1.scs.otc-service.com + - vault2.scs.otc-service.com + - vault3.scs.otc-service.com + - gitea1.scs.otc-service.com + + k8s-controller: + - bridge.scs.otc-service.com + + vault: + - vault1.scs.otc-service.com + - vault2.scs.otc-service.com + - vault3.scs.otc-service.com + + vault-controller: + - bridge.scs.otc-service.com + + gitea: + - gitea1.scs.otc-service.com + + prod_bastion: + - bridge.scs.otc-service.com + + disabled: [] diff --git a/inventory/service/host_vars/bastion.scs.otc-service.com.yaml b/inventory/service/host_vars/bastion.scs.otc-service.com.yaml new file mode 100644 index 0000000..121e28b --- /dev/null +++ b/inventory/service/host_vars/bastion.scs.otc-service.com.yaml @@ -0,0 +1,5 @@ +firewalld_extra_ports_enable: [] + +# Allow tcp and agent forwarding on the jump host. Aligned with DT 3.04-19/20 +ssh_allow_tcp_forwarding: true +ssh_allow_agent_forwarding: true diff --git a/inventory/service/host_vars/vault1.scs.otc-service.com.yaml b/inventory/service/host_vars/vault1.scs.otc-service.com.yaml new file mode 100644 index 0000000..df0ddee --- /dev/null +++ b/inventory/service/host_vars/vault1.scs.otc-service.com.yaml @@ -0,0 +1,10 @@ +ssl_certs: + vault: + - "vault1.scs.otc-service.com" +vault_cert: "vault" + +vault_proxy_protocol_behavior: "allow_authorized" +# vault_proxy_protocol_authorized_addrs: "192.168.110.151,192.168.110.160" +# vault_x_forwarded_for_authorized_addrs: "192.168.110.151,192.168.110.160" + +firewalld_extra_ports_enable: ['8200/tcp', '8201/tcp'] diff --git a/inventory/service/host_vars/vault2.scs.otc-service.com.yaml b/inventory/service/host_vars/vault2.scs.otc-service.com.yaml new file mode 100644 index 0000000..09cf905 --- /dev/null +++ b/inventory/service/host_vars/vault2.scs.otc-service.com.yaml @@ -0,0 +1,10 @@ +ssl_certs: + vault: + - "vault2.scs.otc-service.com" +vault_cert: "vault" + +vault_proxy_protocol_behavior: "allow_authorized" +# 
vault_proxy_protocol_authorized_addrs: "192.168.110.151,192.168.110.160" +# vault_x_forwarded_for_authorized_addrs: "192.168.110.151,192.168.110.160" + +firewalld_extra_ports_enable: ['8200/tcp', '8201/tcp'] diff --git a/inventory/service/host_vars/vault3.scs.otc-service.com.yaml b/inventory/service/host_vars/vault3.scs.otc-service.com.yaml new file mode 100644 index 0000000..9583a6b --- /dev/null +++ b/inventory/service/host_vars/vault3.scs.otc-service.com.yaml @@ -0,0 +1,10 @@ +ssl_certs: + vault: + - "vault3.scs.otc-service.com" +vault_cert: "vault" + +vault_proxy_protocol_behavior: "allow_authorized" +# vault_proxy_protocol_authorized_addrs: "192.168.110.151,192.168.110.160" +# vault_x_forwarded_for_authorized_addrs: "192.168.110.151,192.168.110.160" + +firewalld_extra_ports_enable: ['8200/tcp', '8201/tcp'] diff --git a/kubernetes/zuul/README.md b/kubernetes/zuul/README.md new file mode 100644 index 0000000..dd0f036 --- /dev/null +++ b/kubernetes/zuul/README.md @@ -0,0 +1,72 @@ +# Kustomize stack for installing Zuul + +This folder contains Kubernetes manifests that are processed by Kustomize to +generate the final set of manifests for installing Zuul into Kubernetes. + +## Components + +The installation is split into individual components so that each deployment +can configure which pieces it uses: + +### ca + +Zuul requires Zookeeper in HA mode with TLS enabled. TLS can be handled +outside of the cluster, but it is also possible to rely on cert-manager's +ability to act as its own CA and issue certificates on request. At the moment +this is a hard dependency of the remaining components, but it would be +relatively easy to make it a truly optional component. + +### Zookeeper + +This represents a Zookeeper cluster installation.
Nothing fancy; it is pretty +straightforward. + +### zuul-scheduler + +Zuul scheduler + +### zuul-executor + +Zuul executor + +### zuul-merger + +Optional zuul-merger + +### zuul-web + +Zuul web frontend + +### nodepool-launcher + +Launcher for VMs or pods + +### nodepool-builder + +Optional builder for VM images. At the moment it is not possible to build all +types of images inside of Kubernetes, since running podman under Docker in +Kubernetes does not work smoothly on every installation. + +## Layers + +- `base` layer represents an absolutely minimal installation. Its + kustomization.yaml links to a zuul-config repository, which must contain + `nodepool/nodepool.yaml` (the nodepool config) and `zuul/main.yaml` (the + tenant info). This link is given by the `zuul_instance_config` configmap with + ZUUL_CONFIG_REPO=https://gitea.eco.tsi-dev.otc-service.com/scs/zuul-config.git + +- `zuul_ci` - the zuul.otc-service.com installation + +## Versions + +The Zookeeper version is controlled through +`components/zookeeper/kustomization.yaml`. + +The Zuul version by default points to the latest version in the Docker +registry; every overlay is expected to pin the desired version. + +Production overlays also rely on HashiCorp Vault for providing installation +secrets. The Vault agent version is controlled in the overlay itself, e.g. +with a variable pointing to the Vault installation in the overlay patch. diff --git a/kubernetes/zuul/base/ca.yaml b/kubernetes/zuul/base/ca.yaml new file mode 100644 index 0000000..0b3af44 --- /dev/null +++ b/kubernetes/zuul/base/ca.yaml @@ -0,0 +1,37 @@ +--- +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: selfsigned-issuer +spec: + selfSigned: {} +--- +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: ca-cert +spec: + # Secret names are always required.
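+ # Note (explanatory comment): the three resources in this file form a
+ # bootstrap chain: the self-signed issuer signs this ca-cert, and ca-cert
+ # in turn backs the ca-issuer that the other certificates in this stack
+ # reference.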
+ secretName: ca-cert + duration: 87600h # 10y + renewBefore: 360h # 15d + isCA: true + privateKey: + size: 2048 + algorithm: RSA + encoding: PKCS1 + commonName: cacert + # At least one of a DNS Name, URI, or IP address is required. + dnsNames: + - caroot + # Issuer references are always required. + issuerRef: + name: selfsigned-issuer +--- +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: ca-issuer +spec: + ca: + secretName: ca-cert diff --git a/kubernetes/zuul/base/cert.yaml b/kubernetes/zuul/base/cert.yaml new file mode 100644 index 0000000..84df7d0 --- /dev/null +++ b/kubernetes/zuul/base/cert.yaml @@ -0,0 +1,22 @@ +--- +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: zookeeper-client + labels: + app.kubernetes.io/name: zookeeper-client-certificate + app.kubernetes.io/part-of: zuul + app.kubernetes.io/component: zookeeper-client-certificate +spec: + privateKey: + encoding: PKCS8 + secretName: zookeeper-client-tls + commonName: client + usages: + - digital signature + - key encipherment + - server auth + - client auth + issuerRef: + name: ca-issuer + kind: Issuer diff --git a/kubernetes/zuul/base/configs/zuul.conf b/kubernetes/zuul/base/configs/zuul.conf new file mode 100644 index 0000000..1fae2fe --- /dev/null +++ b/kubernetes/zuul/base/configs/zuul.conf @@ -0,0 +1,21 @@ +[zookeeper] +hosts=zookeeper.zuul.svc.cluster.local:2281 +tls_cert=/tls/client/zk.crt +tls_key=/tls/client/zk.key +tls_ca=/tls/client/ca.crt +session_timeout=40 + +[scheduler] +tenant_config=/etc/zuul-config/zuul/main.yaml +state_dir=/var/lib/zuul +relative_priority=true +prometheus_port=9091 + +[web] +listen_address=0.0.0.0 +port=9000 +prometheus_port=9091 + +[fingergw] +port=9079 +user=zuul diff --git a/kubernetes/zuul/base/kustomization.yaml b/kubernetes/zuul/base/kustomization.yaml new file mode 100644 index 0000000..fac6715 --- /dev/null +++ b/kubernetes/zuul/base/kustomization.yaml @@ -0,0 +1,36 @@ +--- +apiVersion: kustomize.config.k8s.io/v1beta1 
+kind: Kustomization + +components: + - ../components/ca + - ../components/zookeeper + - ../components/zuul-config + - ../components/zuul-executor + - ../components/zuul-scheduler + - ../components/zuul-web + - ../components/nodepool-launcher + +configMapGenerator: + - name: zuul-instance-config + literals: + - ZUUL_CONFIG_REPO=https://gitea.eco.tsi-dev.otc-service.com/scs/zuul-config.git + +labels: + - includeSelectors: true + pairs: + app.kubernetes.io/instance: "base" + app.kubernetes.io/managed-by: "kustomize" + +# images: + +resources: + - sa.yaml + - cert.yaml + +secretGenerator: + - name: "zuul-config" + files: + - "configs/zuul.conf" + - name: "nodepool-config" + files: [] diff --git a/kubernetes/zuul/base/sa.yaml b/kubernetes/zuul/base/sa.yaml new file mode 100644 index 0000000..85ff9fc --- /dev/null +++ b/kubernetes/zuul/base/sa.yaml @@ -0,0 +1,5 @@ +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: zuul diff --git a/kubernetes/zuul/components/ca/all.yaml b/kubernetes/zuul/components/ca/all.yaml new file mode 100644 index 0000000..c079bad --- /dev/null +++ b/kubernetes/zuul/components/ca/all.yaml @@ -0,0 +1,37 @@ +--- +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: selfsigned-issuer +spec: + selfSigned: {} +--- +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: ca-cert +spec: + # Secret names are always required. + secretName: ca-cert + duration: 87600h # 10y + renewBefore: 360h # 15d + isCA: true + privateKey: + size: 2048 + algorithm: RSA + encoding: PKCS1 + commonName: cacert + # At least one of a DNS Name, URI, or IP address is required. + dnsNames: + - caroot + # Issuer references are always required. 
+ issuerRef: + name: selfsigned-issuer +--- +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: ca-issuer +spec: + ca: + secretName: ca-cert diff --git a/kubernetes/zuul/components/ca/kustomization.yaml b/kubernetes/zuul/components/ca/kustomization.yaml new file mode 100644 index 0000000..b883c4c --- /dev/null +++ b/kubernetes/zuul/components/ca/kustomization.yaml @@ -0,0 +1,11 @@ +--- +apiVersion: kustomize.config.k8s.io/v1alpha1 +kind: Component + +labels: + - includeSelectors: true + pairs: + app.kubernetes.io/name: "ca" + +resources: + - all.yaml diff --git a/kubernetes/zuul/components/nodepool-builder/kustomization.yaml b/kubernetes/zuul/components/nodepool-builder/kustomization.yaml new file mode 100644 index 0000000..61f0ad9 --- /dev/null +++ b/kubernetes/zuul/components/nodepool-builder/kustomization.yaml @@ -0,0 +1,6 @@ +--- +apiVersion: kustomize.config.k8s.io/v1alpha1 +kind: Component + +resources: + - statefulset.yaml diff --git a/kubernetes/zuul/components/nodepool-builder/statefulset.yaml b/kubernetes/zuul/components/nodepool-builder/statefulset.yaml new file mode 100644 index 0000000..d238518 --- /dev/null +++ b/kubernetes/zuul/components/nodepool-builder/statefulset.yaml @@ -0,0 +1,108 @@ +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: nodepool-builder + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: "zuul" + app.kubernetes.io/component: "nodepool-builder" +spec: + replicas: 1 + selector: + matchLabels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: "zuul" + app.kubernetes.io/component: "nodepool-builder" + serviceName: "nodepool-builder" + template: + metadata: + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: "zuul" + app.kubernetes.io/component: "nodepool-builder" + spec: + containers: + - name: "nodepool" + image: "zuul/nodepool-builder" + command: + - "/usr/local/bin/nodepool-builder" + - "-f" + - "-d" + - "-c" + - "/data/nodepool/nodepool.yaml" + + 
resources: + limits: + cpu: "300m" + memory: "500Mi" + requests: + cpu: "100m" + memory: "200Mi" + + securityContext: + privileged: true + # runAsUser: 10001 + # runAsGroup: 10001 + + volumeMounts: + - name: "dev" + mountPath: "/dev" + + - name: "dib-tmp" + mountPath: "/opt/dib_tmp" + + - name: "dib-cache" + mountPath: "/opt/dib_cache" + + - name: "nodepool-images-dir" + mountPath: "/opt/nodepool/images" + + # Podman needs a non-overlayfs-backed directory + - name: "nodepool-containers" + mountPath: "/var/lib/containers" + + - name: "zookeeper-client-tls" + mountPath: "/tls/client" + readOnly: true + + - name: "zuul-config-data" + mountPath: "/data" + + serviceAccountName: "zuul" + volumes: + - name: "dev" + hostPath: + path: "/dev" + + - name: "dib-cache" + emptyDir: {} + + - name: "dib-tmp" + emptyDir: {} + + - name: "nodepool-config" + secret: + secretName: "nodepool-config" + + - name: "nodepool-containers" + emptyDir: {} + + - name: "zookeeper-client-tls" + secret: + secretName: "zookeeper-client-tls" + + - name: "zuul-config-data" + persistentVolumeClaim: + claimName: "zuul-config" + + volumeClaimTemplates: + - metadata: + name: "nodepool-images-dir" + spec: + accessModes: + - "ReadWriteOnce" + storageClassName: "csi-disk" + resources: + requests: + storage: "80G" diff --git a/kubernetes/zuul/components/nodepool-launcher/deployment.yaml b/kubernetes/zuul/components/nodepool-launcher/deployment.yaml new file mode 100644 index 0000000..043544f --- /dev/null +++ b/kubernetes/zuul/components/nodepool-launcher/deployment.yaml @@ -0,0 +1,73 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nodepool-launcher + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: zuul + app.kubernetes.io/component: "nodepool-launcher" +spec: + replicas: 1 + selector: + matchLabels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: zuul + app.kubernetes.io/component: "nodepool-launcher" + template: + metadata: + labels: + app.kubernetes.io/name: "zuul" +
app.kubernetes.io/part-of: zuul + app.kubernetes.io/component: "nodepool-launcher" + spec: + containers: + - name: "nodepool" + image: "zuul/nodepool-launcher" + command: + - "/usr/local/bin/nodepool-launcher" + - "-f" + - "-d" + - "-c" + - "/data/nodepool/nodepool.yaml" + + resources: + limits: + cpu: "300m" + memory: "500Mi" + requests: + cpu: "100m" + memory: "200Mi" + + securityContext: + runAsUser: 10001 + runAsGroup: 10001 + + volumeMounts: + - name: "zookeeper-client-tls" + mountPath: "/tls/client" + readOnly: true + + - name: "zuul-config-data" + mountPath: "/data" + + - name: "nodepool-lib" + mountPath: "/var/lib/nodepool" + + serviceAccountName: "zuul" + volumes: + - name: "nodepool-config" + secret: + secretName: "nodepool-config" + + - name: "zookeeper-client-tls" + secret: + secretName: "zookeeper-client-tls" + + - name: "zuul-config-data" + persistentVolumeClaim: + claimName: "zuul-config" + + - name: "nodepool-lib" + emptyDir: {} + revisionHistoryLimit: 2 diff --git a/kubernetes/zuul/components/nodepool-launcher/kustomization.yaml b/kubernetes/zuul/components/nodepool-launcher/kustomization.yaml new file mode 100644 index 0000000..fbc3362 --- /dev/null +++ b/kubernetes/zuul/components/nodepool-launcher/kustomization.yaml @@ -0,0 +1,6 @@ +--- +apiVersion: kustomize.config.k8s.io/v1alpha1 +kind: Component + +resources: + - deployment.yaml diff --git a/kubernetes/zuul/components/zookeeper/cert.yaml b/kubernetes/zuul/components/zookeeper/cert.yaml new file mode 100644 index 0000000..da205e9 --- /dev/null +++ b/kubernetes/zuul/components/zookeeper/cert.yaml @@ -0,0 +1,25 @@ +--- +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: zookeeper-server +spec: + privateKey: + encoding: PKCS8 + secretName: zookeeper-server-tls + commonName: server + usages: + - digital signature + - key encipherment + - server auth + - client auth + dnsNames: + - zookeeper-0.zookeeper-headless.zuul-ci.svc.cluster.local + - zookeeper-0 + - 
zookeeper-1.zookeeper-headless.zuul-ci.svc.cluster.local + - zookeeper-1 + - zookeeper-2.zookeeper-headless.zuul-ci.svc.cluster.local + - zookeeper-2 + issuerRef: + name: ca-issuer + kind: Issuer diff --git a/kubernetes/zuul/components/zookeeper/kustomization.yaml b/kubernetes/zuul/components/zookeeper/kustomization.yaml new file mode 100644 index 0000000..bf96e4f --- /dev/null +++ b/kubernetes/zuul/components/zookeeper/kustomization.yaml @@ -0,0 +1,29 @@ +--- +apiVersion: kustomize.config.k8s.io/v1alpha1 +kind: Component + +configMapGenerator: + - name: "zookeeper-config" + files: + - scripts/ok + - scripts/run + - scripts/ready + +labels: + - includeSelectors: true + pairs: + app.kubernetes.io/name: "zookeeper" + app.kubernetes.io/version: "3.8.0" + app.kubernetes.io/part-of: "zuul" + +images: + - name: "zookeeper" + newName: "quay.io/opentelekomcloud/zookeeper" + newTag: "3.8.0" + +resources: + - cert.yaml + - sa.yaml + - service.yaml + - statefulset.yaml + - pdb.yaml diff --git a/kubernetes/zuul/components/zookeeper/pdb.yaml b/kubernetes/zuul/components/zookeeper/pdb.yaml new file mode 100644 index 0000000..19c17bf --- /dev/null +++ b/kubernetes/zuul/components/zookeeper/pdb.yaml @@ -0,0 +1,14 @@ +--- +apiVersion: policy/v1 +kind: PodDisruptionBudget +metadata: + name: zookeeper + labels: + app.kubernetes.io/name: "zookeeper" + app.kubernetes.io/component: "server" +spec: + selector: + matchLabels: + app.kubernetes.io/name: "zookeeper" + app.kubernetes.io/component: "server" + maxUnavailable: 1 diff --git a/kubernetes/zuul/components/zookeeper/sa.yaml b/kubernetes/zuul/components/zookeeper/sa.yaml new file mode 100644 index 0000000..0cb7f33 --- /dev/null +++ b/kubernetes/zuul/components/zookeeper/sa.yaml @@ -0,0 +1,5 @@ +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: zookeeper diff --git a/kubernetes/zuul/components/zookeeper/scripts/ok b/kubernetes/zuul/components/zookeeper/scripts/ok new file mode 100644 index 0000000..fd8de36 --- /dev/null +++ 
b/kubernetes/zuul/components/zookeeper/scripts/ok @@ -0,0 +1,6 @@ +#!/bin/sh +if [ -f /tls/client/ca.crt ]; then + echo "srvr" | openssl s_client -CAfile /tls/client/ca.crt -cert /tls/client/tls.crt -key /tls/client/tls.key -connect 127.0.0.1:${1:-2281} -quiet -ign_eof 2>/dev/null | grep Mode +else + zkServer.sh status +fi diff --git a/kubernetes/zuul/components/zookeeper/scripts/ready b/kubernetes/zuul/components/zookeeper/scripts/ready new file mode 100644 index 0000000..3035bed --- /dev/null +++ b/kubernetes/zuul/components/zookeeper/scripts/ready @@ -0,0 +1,6 @@ +#!/bin/sh +if [ -f /tls/client/ca.crt ]; then + echo "ruok" | openssl s_client -CAfile /tls/client/ca.crt -cert /tls/client/tls.crt -key /tls/client/tls.key -connect 127.0.0.1:${1:-2281} -quiet -ign_eof 2>/dev/null +else + echo ruok | nc 127.0.0.1 ${1:-2181} +fi diff --git a/kubernetes/zuul/components/zookeeper/scripts/run b/kubernetes/zuul/components/zookeeper/scripts/run new file mode 100644 index 0000000..c971582 --- /dev/null +++ b/kubernetes/zuul/components/zookeeper/scripts/run @@ -0,0 +1,115 @@ +#!/bin/bash + +set -a +ROOT=$(echo /apache-zookeeper-*) + +ZK_USER=${ZK_USER:-"zookeeper"} +ZK_LOG_LEVEL=${ZK_LOG_LEVEL:-"INFO"} +ZK_DATA_DIR=${ZK_DATA_DIR:-"/data"} +ZK_DATA_LOG_DIR=${ZK_DATA_LOG_DIR:-"/data/log"} +ZK_CONF_DIR=${ZK_CONF_DIR:-"/conf"} +ZK_CLIENT_PORT=${ZK_CLIENT_PORT:-2181} +ZK_SSL_CLIENT_PORT=${ZK_SSL_CLIENT_PORT:-2281} +ZK_SERVER_PORT=${ZK_SERVER_PORT:-2888} +ZK_ELECTION_PORT=${ZK_ELECTION_PORT:-3888} +ZK_TICK_TIME=${ZK_TICK_TIME:-2000} +ZK_INIT_LIMIT=${ZK_INIT_LIMIT:-10} +ZK_SYNC_LIMIT=${ZK_SYNC_LIMIT:-5} +ZK_HEAP_SIZE=${ZK_HEAP_SIZE:-2G} +ZK_MAX_CLIENT_CNXNS=${ZK_MAX_CLIENT_CNXNS:-60} +ZK_MIN_SESSION_TIMEOUT=${ZK_MIN_SESSION_TIMEOUT:- $((ZK_TICK_TIME*2))} +ZK_MAX_SESSION_TIMEOUT=${ZK_MAX_SESSION_TIMEOUT:- $((ZK_TICK_TIME*20))} +ZK_SNAP_RETAIN_COUNT=${ZK_SNAP_RETAIN_COUNT:-3} +ZK_PURGE_INTERVAL=${ZK_PURGE_INTERVAL:-0} +ID_FILE="$ZK_DATA_DIR/myid" +ZK_CONFIG_FILE="$ZK_CONF_DIR/zoo.cfg" 
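+# Worked example (explanatory comment only, not used by the script): with the
+# default ZK_TICK_TIME of 2000 ms, the derived session bounds above are
+# minSessionTimeout = 2 * tickTime = 4000 ms and
+# maxSessionTimeout = 20 * tickTime = 40000 ms.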
+LOG4J_PROPERTIES="$ZK_CONF_DIR/log4j.properties" +HOST=$(hostname) +DOMAIN=$(hostname -d) +JVMFLAGS="-Xmx$ZK_HEAP_SIZE -Xms$ZK_HEAP_SIZE" + +APPJAR=$(echo $ROOT/*jar) +CLASSPATH="${ROOT}/lib/*:${APPJAR}:${ZK_CONF_DIR}:" + +if [[ $HOST =~ (.*)-([0-9]+)$ ]]; then + NAME=${BASH_REMATCH[1]} + ORD=${BASH_REMATCH[2]} + MY_ID=$((ORD+1)) +else + echo "Failed to extract ordinal from hostname $HOST" + exit 1 +fi + +mkdir -p $ZK_DATA_DIR +mkdir -p $ZK_DATA_LOG_DIR +# Overwrite (do not append): appending on restarts would accumulate +# duplicate id lines in the myid file +echo $MY_ID > $ID_FILE + +if [[ -f /tls/server/ca.crt ]]; then + cp /tls/server/ca.crt /data/server-ca.pem + cat /tls/server/tls.crt /tls/server/tls.key > /data/server.pem +fi +if [[ -f /tls/client/ca.crt ]]; then + cp /tls/client/ca.crt /data/client-ca.pem + cat /tls/client/tls.crt /tls/client/tls.key > /data/client.pem +fi + +echo "dataDir=$ZK_DATA_DIR" >> $ZK_CONFIG_FILE +echo "dataLogDir=$ZK_DATA_LOG_DIR" >> $ZK_CONFIG_FILE +echo "tickTime=$ZK_TICK_TIME" >> $ZK_CONFIG_FILE +echo "initLimit=$ZK_INIT_LIMIT" >> $ZK_CONFIG_FILE +echo "syncLimit=$ZK_SYNC_LIMIT" >> $ZK_CONFIG_FILE +echo "maxClientCnxns=$ZK_MAX_CLIENT_CNXNS" >> $ZK_CONFIG_FILE +echo "minSessionTimeout=$ZK_MIN_SESSION_TIMEOUT" >> $ZK_CONFIG_FILE +echo "maxSessionTimeout=$ZK_MAX_SESSION_TIMEOUT" >> $ZK_CONFIG_FILE +echo "autopurge.snapRetainCount=$ZK_SNAP_RETAIN_COUNT" >> $ZK_CONFIG_FILE +echo "autopurge.purgeInterval=$ZK_PURGE_INTERVAL" >> $ZK_CONFIG_FILE +echo "4lw.commands.whitelist=*" >> $ZK_CONFIG_FILE + +# Client TLS configuration +if [[ -f /tls/client/ca.crt ]]; then + echo "secureClientPort=$ZK_SSL_CLIENT_PORT" >> $ZK_CONFIG_FILE + echo "ssl.keyStore.location=/data/client.pem" >> $ZK_CONFIG_FILE + echo "ssl.trustStore.location=/data/client-ca.pem" >> $ZK_CONFIG_FILE +else + echo "clientPort=$ZK_CLIENT_PORT" >> $ZK_CONFIG_FILE +fi + +# Server TLS configuration +if [[ -f /tls/server/ca.crt ]]; then + echo "serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory" >> $ZK_CONFIG_FILE + echo "sslQuorum=true" >> $ZK_CONFIG_FILE + echo
"ssl.quorum.keyStore.location=/data/server.pem" >> $ZK_CONFIG_FILE + echo "ssl.quorum.trustStore.location=/data/server-ca.pem" >> $ZK_CONFIG_FILE +fi + +for (( i=1; i<=$ZK_REPLICAS; i++ )) +do + echo "server.$i=$NAME-$((i-1)).$DOMAIN:$ZK_SERVER_PORT:$ZK_ELECTION_PORT" >> $ZK_CONFIG_FILE +done + +rm -f $LOG4J_PROPERTIES + +echo "zookeeper.root.logger=$ZK_LOG_LEVEL, CONSOLE" >> $LOG4J_PROPERTIES +echo "zookeeper.console.threshold=$ZK_LOG_LEVEL" >> $LOG4J_PROPERTIES +echo "zookeeper.log.threshold=$ZK_LOG_LEVEL" >> $LOG4J_PROPERTIES +echo "zookeeper.log.dir=$ZK_DATA_LOG_DIR" >> $LOG4J_PROPERTIES +echo "zookeeper.log.file=zookeeper.log" >> $LOG4J_PROPERTIES +echo "zookeeper.log.maxfilesize=256MB" >> $LOG4J_PROPERTIES +echo "zookeeper.log.maxbackupindex=10" >> $LOG4J_PROPERTIES +echo "zookeeper.tracelog.dir=$ZK_DATA_LOG_DIR" >> $LOG4J_PROPERTIES +echo "zookeeper.tracelog.file=zookeeper_trace.log" >> $LOG4J_PROPERTIES +echo "log4j.rootLogger=\${zookeeper.root.logger}" >> $LOG4J_PROPERTIES +echo "log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender" >> $LOG4J_PROPERTIES +echo "log4j.appender.CONSOLE.Threshold=\${zookeeper.console.threshold}" >> $LOG4J_PROPERTIES +echo "log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout" >> $LOG4J_PROPERTIES +echo "log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n" >> $LOG4J_PROPERTIES + +if [ -n "$JMXDISABLE" ] +then + MAIN=org.apache.zookeeper.server.quorum.QuorumPeerMain +else + MAIN="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=$JMXPORT -Dcom.sun.management.jmxremote.authenticate=$JMXAUTH -Dcom.sun.management.jmxremote.ssl=$JMXSSL -Dzookeeper.jmx.log4j.disable=$JMXLOG4J org.apache.zookeeper.server.quorum.QuorumPeerMain" +fi + +set -x +exec java -cp "$CLASSPATH" $JVMFLAGS $MAIN $ZK_CONFIG_FILE diff --git a/kubernetes/zuul/components/zookeeper/service.yaml b/kubernetes/zuul/components/zookeeper/service.yaml new file mode 100644 index 0000000..8d66781 
--- /dev/null +++ b/kubernetes/zuul/components/zookeeper/service.yaml @@ -0,0 +1,45 @@ +--- +apiVersion: v1 +kind: Service +metadata: + name: zookeeper-headless + labels: + app.kubernetes.io/name: "zookeeper" + app.kubernetes.io/component: "server" +spec: + clusterIP: None + ports: + - name: client + port: 2281 + protocol: TCP + targetPort: client + - name: server + port: 2888 + protocol: TCP + targetPort: server + - name: election + port: 3888 + protocol: TCP + targetPort: election + selector: + app.kubernetes.io/name: "zookeeper" + app.kubernetes.io/component: "server" + publishNotReadyAddresses: true +--- +apiVersion: v1 +kind: Service +metadata: + name: zookeeper + labels: + app.kubernetes.io/name: "zookeeper" + app.kubernetes.io/component: "server" +spec: + ports: + - name: client + port: 2281 + protocol: TCP + targetPort: client + selector: + app.kubernetes.io/name: "zookeeper" + app.kubernetes.io/component: "server" + type: ClusterIP diff --git a/kubernetes/zuul/components/zookeeper/statefulset.yaml b/kubernetes/zuul/components/zookeeper/statefulset.yaml new file mode 100644 index 0000000..6959d58 --- /dev/null +++ b/kubernetes/zuul/components/zookeeper/statefulset.yaml @@ -0,0 +1,144 @@ +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: "zookeeper" + labels: + app.kubernetes.io/name: "zookeeper" + app.kubernetes.io/component: "server" +spec: + podManagementPolicy: "Parallel" + replicas: 1 + serviceName: "zookeeper-headless" + template: + metadata: + labels: + app.kubernetes.io/component: "server" + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + matchExpressions: + - key: "app.kubernetes.io/name" + operator: In + values: + - "zookeeper" + - key: "app.kubernetes.io/component" + operator: In + values: + - "server" + topologyKey: "kubernetes.io/hostname" + + terminationGracePeriodSeconds: 1800 + serviceAccountName: "zookeeper" + containers: + - name: 
"zookeeper" + securityContext: + runAsUser: 1000 + runAsGroup: 1000 + image: "zookeeper" + resources: + limits: + cpu: "500m" + memory: "4Gi" + requests: + cpu: "100m" + memory: "1Gi" + command: + - "/bin/bash" + - "-xec" + - "/config-scripts/run" + ports: + - containerPort: 2281 + name: "client" + - containerPort: 2888 + name: "server" + - containerPort: 3888 + name: "election" + livenessProbe: + exec: + command: + - sh + - /config-scripts/ok + initialDelaySeconds: 20 + periodSeconds: 30 + timeoutSeconds: 5 + failureThreshold: 2 + successThreshold: 1 + readinessProbe: + exec: + command: + - sh + - /config-scripts/ready + initialDelaySeconds: 20 + periodSeconds: 30 + timeoutSeconds: 5 + failureThreshold: 2 + successThreshold: 1 + env: + - name: ZK_REPLICAS + value: "3" + - name: JMXAUTH + value: "false" + - name: JMXDISABLE + value: "false" + - name: JMXPORT + value: "1099" + - name: JMXSSL + value: "false" + - name: ZK_SYNC_LIMIT + value: "10" + - name: ZK_TICK_TIME + value: "2000" + - name: ZOO_AUTOPURGE_PURGEINTERVAL + value: "6" + - name: ZOO_AUTOPURGE_SNAPRETAINCOUNT + value: "3" + - name: ZOO_INIT_LIMIT + value: "5" + - name: ZOO_MAX_CLIENT_CNXNS + value: "60" + - name: ZOO_PORT + value: "2181" + - name: ZOO_STANDALONE_ENABLED + value: "false" + - name: ZOO_TICK_TIME + value: "2000" + + volumeMounts: + - name: data + mountPath: /data + - name: zookeeper-server-tls + mountPath: /tls/server + readOnly: true + - name: zookeeper-client-tls + mountPath: /tls/client + readOnly: true + - name: config + mountPath: /config-scripts + + volumes: + - name: config + configMap: + name: zookeeper-config + defaultMode: 0555 + - name: zookeeper-server-tls + secret: + secretName: zookeeper-server-tls + - name: zookeeper-client-tls + secret: + secretName: zookeeper-server-tls + + updateStrategy: + type: "RollingUpdate" + volumeClaimTemplates: + - metadata: + name: "data" + spec: + accessModes: ["ReadWriteOnce"] + resources: + requests: + storage: "1Gi" diff --git 
a/kubernetes/zuul/components/zuul-client/deployment.yaml b/kubernetes/zuul/components/zuul-client/deployment.yaml new file mode 100644 index 0000000..85f3c18 --- /dev/null +++ b/kubernetes/zuul/components/zuul-client/deployment.yaml @@ -0,0 +1,69 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: zuul-client + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/component: "zuul-client" +spec: + replicas: 1 + selector: + matchLabels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/component: "zuul-client" + template: + metadata: + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/component: "zuul-client" + spec: + serviceAccountName: "zuul" + + containers: + # Zuul-client is a regular zuul-web image doing nothing. + # We use it only to have completely independent pod serving as + # zuul client for i.e. maintenance. + - name: "zuul-client" + image: "zuul/zuul-web" + command: + - "sh" + - "-c" + - "while :; do sleep 60; done" + + resources: + limits: + cpu: "50m" + memory: "200Mi" + requests: + cpu: "20m" + memory: "100Mi" + + securityContext: + runAsUser: 10001 + runAsGroup: 10001 + + volumeMounts: + - name: "zuul-config" + mountPath: "/etc/zuul" + readOnly: true + - name: "zookeeper-client-tls" + mountPath: "/tls/client" + readOnly: true + - name: "zuul-config-data" + mountPath: "/etc/zuul-config" + + volumes: + - name: "zuul-config" + secret: + secretName: "zuul-config" + + - name: "zookeeper-client-tls" + secret: + secretName: "zookeeper-client-tls" + + - name: "zuul-config-data" + persistentVolumeClaim: + claimName: "zuul-config" + + revisionHistoryLimit: 2 diff --git a/kubernetes/zuul/components/zuul-client/kustomization.yaml b/kubernetes/zuul/components/zuul-client/kustomization.yaml new file mode 100644 index 0000000..fbc3362 --- /dev/null +++ b/kubernetes/zuul/components/zuul-client/kustomization.yaml @@ -0,0 +1,6 @@ +--- +apiVersion: kustomize.config.k8s.io/v1alpha1 +kind: Component + +resources: + - deployment.yaml 
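The Zookeeper `run` script above derives each server's `myid` from the statefulset pod hostname: pods are named `<name>-<ordinal>`, and Zookeeper ids are 1-based, so pod `zookeeper-2` must write `3` into its myid file. A minimal standalone sketch of that parsing (the `derive_myid` helper name is illustrative; the script itself uses a bash regex, which is equivalent to this POSIX suffix extraction for such hostnames):

```shell
#!/bin/sh
# Derive a 1-based Zookeeper server id from a statefulset pod hostname,
# mirroring the ordinal parsing done in components/zookeeper/scripts/run.
derive_myid() {
  host="$1"
  ord="${host##*-}"            # text after the last hyphen
  case "$ord" in
    ''|*[!0-9]*)               # empty or non-numeric: no ordinal suffix
      echo "Failed to extract ordinal from hostname $host" >&2
      return 1 ;;
  esac
  echo $(( ord + 1 ))
}

derive_myid "zookeeper-0"   # prints 1
derive_myid "zookeeper-2"   # prints 3
```

A hostname without a numeric suffix makes the helper fail, matching the run script's `exit 1` branch.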
diff --git a/kubernetes/zuul/components/zuul-config/deployment.yaml b/kubernetes/zuul/components/zuul-config/deployment.yaml new file mode 100644 index 0000000..abafa53 --- /dev/null +++ b/kubernetes/zuul/components/zuul-config/deployment.yaml @@ -0,0 +1,64 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: "zuul-config" + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/component: "zuul-config" +spec: + replicas: 1 + selector: + matchLabels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/component: "zuul-config" + template: + metadata: + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/component: "zuul-config" + spec: + initContainers: + + - name: "git-fetcher" + image: "zuul/nodepool-builder" + args: + - "cd /data && git clone $ZUUL_CONFIG_REPO . || true" + command: ["/bin/sh", "-ec"] + env: + - name: "ZUUL_CONFIG_REPO" + valueFrom: + configMapKeyRef: + name: "zuul-instance-config" + key: "ZUUL_CONFIG_REPO" + volumeMounts: + - name: "zuul-config-data" + mountPath: "/data" + + containers: + + - name: "git-syncer" + args: + - "while :; do cd /data/; git pull; sleep 60; done" + command: ["/bin/sh", "-ec"] + image: "zuul/nodepool-builder" + resources: + limits: + cpu: "100m" + memory: "128Mi" + requests: + cpu: "10m" + memory: "64Mi" + + volumeMounts: + - name: "zuul-config-data" + mountPath: "/data" + + volumes: + - name: "zuul-instance-config" + secret: + secretName: "zuul-instance-config" + + - name: "zuul-config-data" + persistentVolumeClaim: + claimName: "zuul-config" diff --git a/kubernetes/zuul/components/zuul-config/kustomization.yaml b/kubernetes/zuul/components/zuul-config/kustomization.yaml new file mode 100644 index 0000000..a5bb0ad --- /dev/null +++ b/kubernetes/zuul/components/zuul-config/kustomization.yaml @@ -0,0 +1,7 @@ +--- +apiVersion: kustomize.config.k8s.io/v1alpha1 +kind: Component + +resources: + - pvc.yaml + - deployment.yaml diff --git a/kubernetes/zuul/components/zuul-config/pvc.yaml 
b/kubernetes/zuul/components/zuul-config/pvc.yaml new file mode 100644 index 0000000..1fb12bc --- /dev/null +++ b/kubernetes/zuul/components/zuul-config/pvc.yaml @@ -0,0 +1,14 @@ +apiVersion: "v1" +kind: "PersistentVolumeClaim" +metadata: + name: zuul-config + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/component: "zuul-config" +spec: + storageClassName: "csi-nas" + accessModes: + - ReadWriteMany + resources: + requests: + storage: 1Gi diff --git a/kubernetes/zuul/components/zuul-executor/kustomization.yaml b/kubernetes/zuul/components/zuul-executor/kustomization.yaml new file mode 100644 index 0000000..4c429a2 --- /dev/null +++ b/kubernetes/zuul/components/zuul-executor/kustomization.yaml @@ -0,0 +1,7 @@ +--- +apiVersion: kustomize.config.k8s.io/v1alpha1 +kind: Component + +resources: + - service.yaml + - statefulset.yaml diff --git a/kubernetes/zuul/components/zuul-executor/service.yaml b/kubernetes/zuul/components/zuul-executor/service.yaml new file mode 100644 index 0000000..67ee401 --- /dev/null +++ b/kubernetes/zuul/components/zuul-executor/service.yaml @@ -0,0 +1,21 @@ +--- +apiVersion: v1 +kind: Service +metadata: + name: zuul-executor + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: "zuul" + app.kubernetes.io/component: "zuul-executor" +spec: + type: "ClusterIP" + clusterIP: None + ports: + - name: "logs" + port: 7900 + protocol: "TCP" + targetPort: "logs" + selector: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: "zuul" + app.kubernetes.io/component: "zuul-executor" diff --git a/kubernetes/zuul/components/zuul-executor/statefulset.yaml b/kubernetes/zuul/components/zuul-executor/statefulset.yaml new file mode 100644 index 0000000..d280d75 --- /dev/null +++ b/kubernetes/zuul/components/zuul-executor/statefulset.yaml @@ -0,0 +1,102 @@ +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: zuul-executor + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: "zuul" + 
app.kubernetes.io/component: "zuul-executor" +spec: + replicas: 1 + serviceName: "zuul-executor" + selector: + matchLabels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: "zuul" + app.kubernetes.io/component: "zuul-executor" + template: + metadata: + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: "zuul" + app.kubernetes.io/component: "zuul-executor" + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + matchExpressions: + - key: "app.kubernetes.io/name" + operator: In + values: + - "zuul" + - key: "app.kubernetes.io/component" + operator: In + values: + - "zuul-executor" + topologyKey: "kubernetes.io/hostname" + + containers: + - name: "executor" + image: "zuul/zuul-executor" + args: ["/usr/local/bin/zuul-executor", "-f", "-d"] + env: + - name: "ZUUL_EXECUTOR_SIGTERM_GRACEFUL" + value: "1" + + lifecycle: + preStop: + exec: + command: [ + "/usr/local/bin/zuul-executor", "graceful" + ] + + ports: + - containerPort: 7900 + name: "logs" + protocol: "TCP" + + resources: + limits: + cpu: "2" + memory: "8G" + requests: + cpu: "1" + memory: "1G" + + securityContext: + privileged: true + + + volumeMounts: + - name: "zuul-config" + mountPath: "/etc/zuul" + readOnly: true + - name: "zookeeper-client-tls" + mountPath: "/tls/client" + readOnly: true + - name: "zuul-config-data" + mountPath: "/etc/zuul-config" + - name: "zuul-var" + mountPath: "/var/lib/zuul" + + serviceAccountName: "zuul" + terminationGracePeriodSeconds: 120 + volumes: + - name: "zuul-config" + secret: + secretName: "zuul-config" + + - name: "zookeeper-client-tls" + secret: + secretName: "zookeeper-client-tls" + + - name: "zuul-config-data" + persistentVolumeClaim: + claimName: "zuul-config" + + - name: "zuul-var" + emptyDir: {} diff --git a/kubernetes/zuul/components/zuul-merger/kustomization.yaml b/kubernetes/zuul/components/zuul-merger/kustomization.yaml new file mode 100644 index 
0000000..61f0ad9 --- /dev/null +++ b/kubernetes/zuul/components/zuul-merger/kustomization.yaml @@ -0,0 +1,6 @@ +--- +apiVersion: kustomize.config.k8s.io/v1alpha1 +kind: Component + +resources: + - statefulset.yaml diff --git a/kubernetes/zuul/components/zuul-merger/statefulset.yaml b/kubernetes/zuul/components/zuul-merger/statefulset.yaml new file mode 100644 index 0000000..4fca57d --- /dev/null +++ b/kubernetes/zuul/components/zuul-merger/statefulset.yaml @@ -0,0 +1,87 @@ +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: "zuul-merger" + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: "zuul" + app.kubernetes.io/component: "zuul-merger" +spec: + replicas: 1 + serviceName: "zuul-merger" + selector: + matchLabels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: "zuul" + app.kubernetes.io/component: "zuul-merger" + template: + metadata: + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: "zuul" + app.kubernetes.io/component: "zuul-merger" + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + matchExpressions: + - key: "app.kubernetes.io/name" + operator: "In" + values: + - "zuul" + - key: "app.kubernetes.io/component" + operator: "In" + values: + - "zuul-merger" + topologyKey: "kubernetes.io/hostname" + + containers: + - name: "merger" + image: "zuul/zuul-merger" + args: ["/usr/local/bin/zuul-merger", "-f", "-d"] + + resources: + limits: + cpu: "200m" + memory: "400Mi" + requests: + cpu: "50m" + memory: "200Mi" + + securityContext: + runAsUser: 10001 + runAsGroup: 10001 + + volumeMounts: + - name: "zuul-config" + mountPath: "/etc/zuul" + readOnly: true + - name: "zookeeper-client-tls" + mountPath: "/tls/client" + readOnly: true + - name: "zuul-config-data" + mountPath: "/etc/zuul-config" + - name: "zuul-var" + mountPath: "/var/lib/zuul" + + serviceAccountName: "zuul" + terminationGracePeriodSeconds: 120 + 
volumes: + - name: "zuul-config" + secret: + secretName: "zuul-config" + + - name: "zookeeper-client-tls" + secret: + secretName: "zookeeper-client-tls" + + - name: "zuul-config-data" + persistentVolumeClaim: + claimName: "zuul-config" + + - name: "zuul-var" + emptyDir: {} diff --git a/kubernetes/zuul/components/zuul-scheduler/kustomization.yaml b/kubernetes/zuul/components/zuul-scheduler/kustomization.yaml new file mode 100644 index 0000000..61f0ad9 --- /dev/null +++ b/kubernetes/zuul/components/zuul-scheduler/kustomization.yaml @@ -0,0 +1,6 @@ +--- +apiVersion: kustomize.config.k8s.io/v1alpha1 +kind: Component + +resources: + - statefulset.yaml diff --git a/kubernetes/zuul/components/zuul-scheduler/statefulset.yaml b/kubernetes/zuul/components/zuul-scheduler/statefulset.yaml new file mode 100644 index 0000000..6292079 --- /dev/null +++ b/kubernetes/zuul/components/zuul-scheduler/statefulset.yaml @@ -0,0 +1,91 @@ +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: zuul-scheduler + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/component: "zuul-scheduler" +spec: + replicas: 1 + serviceName: "zuul-scheduler" + selector: + matchLabels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/component: "zuul-scheduler" + template: + metadata: + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/component: "zuul-scheduler" + spec: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + matchExpressions: + - key: "app.kubernetes.io/name" + operator: In + values: + - "zuul" + - key: "app.kubernetes.io/component" + operator: In + values: + - "zuul-scheduler" + topologyKey: "kubernetes.io/hostname" + + containers: + - name: "scheduler" + image: "zuul/zuul-scheduler" + args: ["/usr/local/bin/zuul-scheduler", "-f", "-d"] + + resources: + limits: + cpu: "2" + memory: "2G" + requests: + cpu: "100m" + memory: "200Mi" + + securityContext: + runAsUser: 10001 + 
runAsGroup: 10001 + + volumeMounts: + - name: "zuul-config" + mountPath: "/etc/zuul" + readOnly: true + - name: "zookeeper-client-tls" + mountPath: "/tls/client" + readOnly: true + - name: "zuul-config-data" + mountPath: "/etc/zuul-config" + - name: "zuul-scheduler-state-dir" + mountPath: "/var/lib/zuul" + + serviceAccountName: "zuul" + volumes: + - name: "zuul-config" + secret: + secretName: "zuul-config" + + - name: "zookeeper-client-tls" + secret: + secretName: "zookeeper-client-tls" + + - name: "zuul-config-data" + persistentVolumeClaim: + claimName: "zuul-config" + + volumeClaimTemplates: + - metadata: + name: "zuul-scheduler-state-dir" + spec: + accessModes: + - "ReadWriteOnce" + storageClassName: "csi-disk" + resources: + requests: + storage: "5G" diff --git a/kubernetes/zuul/components/zuul-web/deployment.yaml b/kubernetes/zuul/components/zuul-web/deployment.yaml new file mode 100644 index 0000000..1e983a7 --- /dev/null +++ b/kubernetes/zuul/components/zuul-web/deployment.yaml @@ -0,0 +1,69 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: zuul-web + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: zuul + app.kubernetes.io/component: "zuul-web" +spec: + replicas: 1 + selector: + matchLabels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: "zuul" + app.kubernetes.io/component: "zuul-web" + template: + metadata: + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: "zuul" + app.kubernetes.io/component: "zuul-web" + spec: + containers: + - name: "web" + image: "zuul/zuul-web" + args: ["/usr/local/bin/zuul-web", "-f", "-d"] + + ports: + - containerPort: 9000 + name: "web" + protocol: "TCP" + + resources: + limits: + cpu: "50m" + memory: "500Mi" + requests: + cpu: "20m" + memory: "200Mi" + + securityContext: + runAsUser: 10001 + runAsGroup: 10001 + + volumeMounts: + - name: "zuul-config" + mountPath: "/etc/zuul" + readOnly: true + - name: "zookeeper-client-tls" + mountPath: "/tls/client" + 
readOnly: true + - name: "zuul-config-data" + mountPath: "/etc/zuul-config" + + serviceAccountName: "zuul" + volumes: + - name: "zuul-config" + secret: + secretName: "zuul-config" + + - name: "zookeeper-client-tls" + secret: + secretName: "zookeeper-client-tls" + + - name: "zuul-config-data" + persistentVolumeClaim: + claimName: "zuul-config" + revisionHistoryLimit: 2 diff --git a/kubernetes/zuul/components/zuul-web/hpa.yaml b/kubernetes/zuul/components/zuul-web/hpa.yaml new file mode 100644 index 0000000..fd7b19d --- /dev/null +++ b/kubernetes/zuul/components/zuul-web/hpa.yaml @@ -0,0 +1,23 @@ +--- +apiVersion: autoscaling/v2 +kind: "HorizontalPodAutoscaler" +metadata: + name: "zuul-web" + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: "zuul" + app.kubernetes.io/component: "zuul-web" +spec: + scaleTargetRef: + kind: "Deployment" + name: "zuul-web" + apiVersion: "apps/v1" + minReplicas: 1 + maxReplicas: 2 + metrics: + - type: "Resource" + resource: + name: "cpu" + target: + type: "Utilization" + averageUtilization: 70 diff --git a/kubernetes/zuul/components/zuul-web/ingress.yaml b/kubernetes/zuul/components/zuul-web/ingress.yaml new file mode 100644 index 0000000..3231654 --- /dev/null +++ b/kubernetes/zuul/components/zuul-web/ingress.yaml @@ -0,0 +1,21 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: "zuul-web" + labels: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: "zuul" + app.kubernetes.io/component: "zuul-web" +spec: + rules: + - host: "zuul" + http: + paths: + - backend: + service: + name: "zuul-web" + port: + number: 9000 + path: "/" + pathType: "Prefix" diff --git a/kubernetes/zuul/components/zuul-web/kustomization.yaml b/kubernetes/zuul/components/zuul-web/kustomization.yaml new file mode 100644 index 0000000..845940d --- /dev/null +++ b/kubernetes/zuul/components/zuul-web/kustomization.yaml @@ -0,0 +1,9 @@ +--- +apiVersion: kustomize.config.k8s.io/v1alpha1 +kind: Component + +resources: + - 
service.yaml + - deployment.yaml + - ingress.yaml + - hpa.yaml diff --git a/kubernetes/zuul/components/zuul-web/service.yaml b/kubernetes/zuul/components/zuul-web/service.yaml new file mode 100644 index 0000000..68eff83 --- /dev/null +++ b/kubernetes/zuul/components/zuul-web/service.yaml @@ -0,0 +1,21 @@ +--- +apiVersion: v1 +kind: Service +metadata: + name: zuul-web + labels: + app.kubernetes.io/name: zuul + app.kubernetes.io/part-of: zuul + app.kubernetes.io/component: zuul-web +spec: + type: "ClusterIP" + clusterIP: None + ports: + - name: "web" + port: 9000 + protocol: "TCP" + targetPort: "web" + selector: + app.kubernetes.io/name: "zuul" + app.kubernetes.io/part-of: "zuul" + app.kubernetes.io/component: "zuul-web" diff --git a/kubernetes/zuul/overlays/scs/configs/kube.config.hcl b/kubernetes/zuul/overlays/scs/configs/kube.config.hcl new file mode 100644 index 0000000..58565ae --- /dev/null +++ b/kubernetes/zuul/overlays/scs/configs/kube.config.hcl @@ -0,0 +1,24 @@ +apiVersion: v1 +kind: Config +current-context: otcci +preferences: {} + +clusters: + - name: otcci + cluster: + server: "https://10.10.0.32:5443" + insecure-skip-tls-verify: true + +contexts: + - name: otcci + context: + cluster: otcci + user: otcci-admin + +users: + - name: otcci-admin + user: +{{- with secret "secret/kubernetes/otcci_k8s" }} + client-certificate-data: "{{ base64Encode .Data.data.client_crt }}" + client-key-data: "{{ base64Encode .Data.data.client_key }}" +{{- end }} diff --git a/kubernetes/zuul/overlays/scs/configs/openstack/clouds.yaml.hcl b/kubernetes/zuul/overlays/scs/configs/openstack/clouds.yaml.hcl new file mode 100644 index 0000000..b7be0fd --- /dev/null +++ b/kubernetes/zuul/overlays/scs/configs/openstack/clouds.yaml.hcl @@ -0,0 +1,16 @@ +--- +# Nodepool openstacksdk configuration +# +# This file is deployed to nodepool launcher and builder hosts +# and is used there to authenticate nodepool operations to clouds. 
+# This file only contains projects we are launching test nodes in, and +# the naming should correspond to that used in nodepool configuration +# files. +# +# Generated automatically, please do not edit directly! +cache: + expiration: + server: 5 + port: 5 + floating-ip: 5 +clouds: diff --git a/kubernetes/zuul/overlays/scs/configs/vault-agent/config-nodepool.hcl b/kubernetes/zuul/overlays/scs/configs/vault-agent/config-nodepool.hcl new file mode 100644 index 0000000..a39b754 --- /dev/null +++ b/kubernetes/zuul/overlays/scs/configs/vault-agent/config-nodepool.hcl @@ -0,0 +1,31 @@ +pid_file = "/home/vault/.pid" +"auto_auth" = { + "method" = { + "mount_path" = "auth/kubernetes_otcci" + "config" = { + "role" = "zuul" + } + "type" = "kubernetes" + } + sink "file" { + config = { + path = "/home/vault/.token" + } + } +} + +cache { + use_auto_auth_token = true +} + +template { + destination = "/vault/secrets/openstack/clouds.yaml" + source = "/vault/custom/clouds.yaml.hcl" + perms = "0640" +} + +template { + destination = "/vault/secrets/.kube/config" + source = "/vault/custom/kube.config.hcl" + perms = "0640" +} diff --git a/kubernetes/zuul/overlays/scs/configs/vault-agent/config-zuul.hcl b/kubernetes/zuul/overlays/scs/configs/vault-agent/config-zuul.hcl new file mode 100644 index 0000000..9776a63 --- /dev/null +++ b/kubernetes/zuul/overlays/scs/configs/vault-agent/config-zuul.hcl @@ -0,0 +1,55 @@ +pid_file = "/home/vault/.pid" +"auto_auth" = { + "method" = { + "mount_path" = "auth/kubernetes_otcci" + "config" = { + "role" = "zuul" + } + "type" = "kubernetes" + } + sink "file" { + config = { + path = "/home/vault/.token" + } + } +} + +cache { + use_auto_auth_token = true +} + +template { + destination = "/vault/secrets/connections/github.key" + contents = <= 400: + self.ansible.fail_json( + msg=f"Error creating PR: {response.text}") + + + def get_repo_and_set_config(self, username: str, email: str): + """ + Get repository in current directory + and fill up .gitconfig
with required variables + + :param username: Git username + :param email: Git email + """ + os.system(f"git config --global user.name {username}") + os.system(f"git config --global user.email {email}") + + def __call__(self): + repo_location = self.params['repo_location'] + path = self.params['path'] + key = self.params['key'] + value = self.params['value'] + username = self.params.get('username') + email = self.params.get('email') + token = self.params.get('token') + + with open(f"{repo_location}/{path}") as f: + yaml_data = yaml.safe_load(f) + + self.update_yaml_value( + yaml_data, key, value) + + with open(f"{repo_location}/{path}", "w") as f: + yaml.safe_dump(yaml_data, f, sort_keys=False) + + if all(v is not None for v in (username, email)): + proposal_branch = self.get_proposal_branch_name(key, value) + self.get_repo_and_set_config(username, email) + self.commit_changes( + repo_location, proposal_branch, path, key, value, token) + + self.ansible.exit_json(changed=True) + + +def main(): + module = ProposeModule() + module() + + +if __name__ == '__main__': + main() diff --git a/playbooks/module_utils/facts/system/pkg_mgr.py b/playbooks/module_utils/facts/system/pkg_mgr.py new file mode 100644 index 0000000..50c20bf --- /dev/null +++ b/playbooks/module_utils/facts/system/pkg_mgr.py @@ -0,0 +1,145 @@ +# Collect facts related to the system package manager +# +# This file is part of Ansible +# +# Ansible is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# Ansible is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details.
+# +# You should have received a copy of the GNU General Public License +# along with Ansible. If not, see . + +from __future__ import (absolute_import, division, print_function) +__metaclass__ = type + +import os +import subprocess + +from ansible.module_utils.facts.collector import BaseFactCollector + +# A list of dicts. If there is a platform with more than one +# package manager, put the preferred one last. If there is an +# ansible module, use that as the value for the 'name' key. +PKG_MGRS = [{'path': '/usr/bin/yum', 'name': 'yum'}, + {'path': '/usr/bin/dnf', 'name': 'dnf'}, + {'path': '/usr/bin/apt-get', 'name': 'apt'}, + {'path': '/usr/bin/zypper', 'name': 'zypper'}, + {'path': '/usr/sbin/urpmi', 'name': 'urpmi'}, + {'path': '/usr/bin/pacman', 'name': 'pacman'}, + {'path': '/bin/opkg', 'name': 'opkg'}, + {'path': '/usr/pkg/bin/pkgin', 'name': 'pkgin'}, + {'path': '/opt/local/bin/pkgin', 'name': 'pkgin'}, + {'path': '/opt/tools/bin/pkgin', 'name': 'pkgin'}, + {'path': '/opt/local/bin/port', 'name': 'macports'}, + {'path': '/usr/local/bin/brew', 'name': 'homebrew'}, + {'path': '/sbin/apk', 'name': 'apk'}, + {'path': '/usr/sbin/pkg', 'name': 'pkgng'}, + {'path': '/usr/sbin/swlist', 'name': 'HP-UX'}, + {'path': '/usr/bin/emerge', 'name': 'portage'}, + {'path': '/usr/sbin/pkgadd', 'name': 'svr4pkg'}, + {'path': '/usr/bin/pkg', 'name': 'pkg5'}, + {'path': '/usr/bin/xbps-install', 'name': 'xbps'}, + {'path': '/usr/local/sbin/pkg', 'name': 'pkgng'}, + {'path': '/usr/bin/swupd', 'name': 'swupd'}, + {'path': '/usr/sbin/sorcery', 'name': 'sorcery'}, + {'path': '/usr/bin/rpm-ostree', 'name': 'atomic_container'}, + ] + + +class OpenBSDPkgMgrFactCollector(BaseFactCollector): + name = 'pkg_mgr' + _fact_ids = set() + _platform = 'OpenBSD' + + def collect(self, module=None, collected_facts=None): + facts_dict = {} + + facts_dict['pkg_mgr'] = 'openbsd_pkg' + return facts_dict + + +# the fact ends up being 'pkg_mgr' so stick with that naming/spelling +class 
PkgMgrFactCollector(BaseFactCollector): + name = 'pkg_mgr' + _fact_ids = set() + _platform = 'Generic' + required_facts = set(['distribution']) + + def _check_rh_versions(self, pkg_mgr_name, collected_facts): + if collected_facts['ansible_distribution'] == 'Fedora': + try: + if int(collected_facts['ansible_distribution_major_version']) < 23: + for yum in [pkg_mgr for pkg_mgr in PKG_MGRS if pkg_mgr['name'] == 'yum']: + if os.path.exists(yum['path']): + pkg_mgr_name = 'yum' + break + else: + for dnf in [pkg_mgr for pkg_mgr in PKG_MGRS if pkg_mgr['name'] == 'dnf']: + if os.path.exists(dnf['path']): + pkg_mgr_name = 'dnf' + break + except ValueError: + # If there's some new magical Fedora version in the future, + # just default to dnf + pkg_mgr_name = 'dnf' + return pkg_mgr_name + + def _check_apt_flavor(self, pkg_mgr_name): + # Check if '/usr/bin/apt' is APT-RPM or an ordinary (dpkg-based) APT. + # There's rpm package on Debian, so checking if /usr/bin/rpm exists + # is not enough. Instead ask RPM if /usr/bin/apt-get belongs to some + # RPM package. + rpm_query = '/usr/bin/rpm -q --whatprovides /usr/bin/apt-get'.split() + if os.path.exists('/usr/bin/rpm'): + with open(os.devnull, 'w') as null: + try: + subprocess.check_call(rpm_query, stdout=null, stderr=null) + pkg_mgr_name = 'apt_rpm' + except subprocess.CalledProcessError: + # No apt-get in RPM database. Looks like Debian/Ubuntu + # with rpm package installed + pkg_mgr_name = 'apt' + return pkg_mgr_name + + def collect(self, module=None, collected_facts=None): + facts_dict = {} + collected_facts = collected_facts or {} + + pkg_mgr_name = 'unknown' + for pkg in PKG_MGRS: + if os.path.exists(pkg['path']): + pkg_mgr_name = pkg['name'] + + # Handle distro family defaults when more than one package manager is + # installed, the ansible_fact entry should be the default package + # manager provided by the distro. 
+ if collected_facts['ansible_os_family'] == "RedHat": + if pkg_mgr_name not in ('yum', 'dnf'): + pkg_mgr_name = self._check_rh_versions(pkg_mgr_name, collected_facts) + elif collected_facts['ansible_os_family'] == 'Altlinux': + if pkg_mgr_name == 'apt': + pkg_mgr_name = 'apt_rpm' + + elif collected_facts['ansible_os_family'] == 'Debian' and pkg_mgr_name != 'apt': + # It's possible to install yum, dnf, zypper, rpm, etc inside of + # Debian. Doing so does not mean the system wants to use them. + pkg_mgr_name = 'apt' + + # Check if /usr/bin/apt-get is ordinary (dpkg-based) APT or APT-RPM + if pkg_mgr_name == 'apt': + pkg_mgr_name = self._check_apt_flavor(pkg_mgr_name) + + # pacman has become available by distros other than those that are Arch + # based by virtue of a dependency to the systemd mkosi project, this + # handles some of those scenarios as they are reported/requested + if pkg_mgr_name == 'pacman' and collected_facts['ansible_os_family'] in ["RedHat"]: + pkg_mgr_name = self._check_rh_versions(pkg_mgr_name, collected_facts) + + facts_dict['pkg_mgr'] = pkg_mgr_name + return facts_dict diff --git a/playbooks/roles/acme_create_certs/README.rst b/playbooks/roles/acme_create_certs/README.rst new file mode 100644 index 0000000..e69de29 diff --git a/playbooks/roles/acme_create_certs/defaults/main.yaml b/playbooks/roles/acme_create_certs/defaults/main.yaml new file mode 100644 index 0000000..0b1a54e --- /dev/null +++ b/playbooks/roles/acme_create_certs/defaults/main.yaml @@ -0,0 +1,4 @@ +certs_path: "/etc/ssl/{{ inventory_hostname }}" +acme_directory: "https://acme-v02.api.letsencrypt.org/directory" +acme_account_contact: + - "mailto:DL-PBCOTCDELECOCERT@t-systems.com" diff --git a/playbooks/roles/acme_create_certs/handlers/main.yaml b/playbooks/roles/acme_create_certs/handlers/main.yaml new file mode 100644 index 0000000..c6fb66a --- /dev/null +++ b/playbooks/roles/acme_create_certs/handlers/main.yaml @@ -0,0 +1,14 @@ +- name: Restart haproxy + include_tasks:
roles/acme_create_certs/handlers/restart-haproxy.yaml + +- name: Restart graphite + include_tasks: roles/acme_create_certs/handlers/restart-graphite.yaml + +- name: Reload vault + include_tasks: roles/acme_create_certs/handlers/reload-vault.yaml + +- name: Restart gitea + include_tasks: roles/acme_create_certs/handlers/restart-gitea.yaml + +- name: Restart keycloak + include_tasks: roles/acme_create_certs/handlers/restart-keycloak.yaml diff --git a/playbooks/roles/acme_create_certs/handlers/reload-vault.yaml b/playbooks/roles/acme_create_certs/handlers/reload-vault.yaml new file mode 100644 index 0000000..5fe3d7e --- /dev/null +++ b/playbooks/roles/acme_create_certs/handlers/reload-vault.yaml @@ -0,0 +1,10 @@ +- name: Check vault process + command: pgrep -f vault + ignore_errors: yes + register: vault_pids + +- name: Reload Vault + ansible.builtin.service: + name: "vault" + state: "reloaded" + when: vault_pids.rc == 0 diff --git a/playbooks/roles/acme_create_certs/handlers/restart-gitea.yaml b/playbooks/roles/acme_create_certs/handlers/restart-gitea.yaml new file mode 100644 index 0000000..a285bfe --- /dev/null +++ b/playbooks/roles/acme_create_certs/handlers/restart-gitea.yaml @@ -0,0 +1,10 @@ +- name: Check gitea process + command: pgrep -f gitea + ignore_errors: yes + register: gitea_pids + +- name: Restart Gitea + ansible.builtin.service: + name: "gitea" + state: "restarted" + when: gitea_pids.rc == 0 diff --git a/playbooks/roles/acme_create_certs/handlers/restart-graphite.yaml b/playbooks/roles/acme_create_certs/handlers/restart-graphite.yaml new file mode 100644 index 0000000..892d0fc --- /dev/null +++ b/playbooks/roles/acme_create_certs/handlers/restart-graphite.yaml @@ -0,0 +1,10 @@ +- name: Check graphite process + command: pgrep -f graphite + ignore_errors: yes + register: graphite_pids + +- name: Restart Graphite + ansible.builtin.service: + name: "graphite" + state: "restarted" + when: graphite_pids.rc == 0 diff --git 
a/playbooks/roles/acme_create_certs/handlers/restart-haproxy.yaml b/playbooks/roles/acme_create_certs/handlers/restart-haproxy.yaml new file mode 100644 index 0000000..23c2472 --- /dev/null +++ b/playbooks/roles/acme_create_certs/handlers/restart-haproxy.yaml @@ -0,0 +1,10 @@ +- name: Check haproxy process + command: pgrep -f haproxy + ignore_errors: yes + register: haproxy_pids + +- name: Restart Haproxy + ansible.builtin.service: + name: "haproxy" + state: "restarted" + when: haproxy_pids.rc == 0 diff --git a/playbooks/roles/acme_create_certs/handlers/restart-keycloak.yaml b/playbooks/roles/acme_create_certs/handlers/restart-keycloak.yaml new file mode 100644 index 0000000..e3bc9cc --- /dev/null +++ b/playbooks/roles/acme_create_certs/handlers/restart-keycloak.yaml @@ -0,0 +1,10 @@ +- name: Check keycloak process + command: pgrep -f keycloak + ignore_errors: yes + register: keycloak_pids + +- name: Restart keycloak + ansible.builtin.service: + name: "keycloak" + state: "restarted" + when: keycloak_pids.rc == 0 diff --git a/playbooks/roles/acme_create_certs/tasks/acme.yaml b/playbooks/roles/acme_create_certs/tasks/acme.yaml new file mode 100644 index 0000000..3ffadef --- /dev/null +++ b/playbooks/roles/acme_create_certs/tasks/acme.yaml @@ -0,0 +1,17 @@ +- name: Validate acme challenge + community.crypto.acme_certificate: + acme_version: 2 + acme_directory: "{{ acme_directory }}" + account_key_src: "{{ certs_path }}/account-key.pem" + src: "{{ certs_path }}/{{ cert.key }}.csr" + cert: "{{ certs_path }}/{{ cert.key }}.crt" + fullchain: "{{ certs_path }}/{{ cert.key }}-fullchain.crt" + chain: "{{ certs_path }}/{{ cert.key }}-intermediate.crt" + challenge: "dns-01" + remaining_days: 60 + data: "{{ acme_challenge[cert.key] }}" + terms_agreed: true + when: acme_challenge[cert.key] is defined and acme_challenge[cert.key] is changed + notify: + # Need to restart all known services + - Restart graphite diff --git a/playbooks/roles/acme_create_certs/tasks/main.yaml 
b/playbooks/roles/acme_create_certs/tasks/main.yaml new file mode 100644 index 0000000..cc7d60e --- /dev/null +++ b/playbooks/roles/acme_create_certs/tasks/main.yaml @@ -0,0 +1,133 @@ +- name: Generate list of changed certificates + set_fact: + acme_txt_changed: '{{ acme_txt_required|map("first")|list|unique }}' + +- name: Include ACME validation + include_tasks: acme.yaml + loop: "{{ query('dict', ssl_certs) }}" + loop_control: + loop_var: cert + #when: item.key in acme_txt_changed + +- name: Create haproxy certs directory + ansible.builtin.file: + path: "/etc/ssl/{{ inventory_hostname }}/haproxy" + state: "directory" + mode: "0755" + +- name: Check vault user + ansible.builtin.user: + name: "vault" + register: "vault_user" + when: "'vault' in group_names" + +- name: Create vault certs directory + ansible.builtin.file: + path: "/etc/ssl/{{ inventory_hostname }}/vault" + state: "directory" + mode: "0755" + owner: "{{ vault_user.name | default(omit) }}" + group: "{{ vault_user.group | default(omit) }}" + when: "'vault' in group_names" + +- name: Copy vault certs + ansible.builtin.copy: + src: "{{ certs_path }}/{{ cert }}" + dest: "{{ certs_path }}/vault/{{ cert }}" + mode: "0440" + owner: "{{ vault_user.name | default(omit) }}" + group: "{{ vault_user.group | default(omit) }}" + remote_src: true + loop: + - "{{ vault_cert }}.pem" + - "{{ vault_cert }}-fullchain.crt" + loop_control: + loop_var: "cert" + notify: + - Reload vault + when: + - "'vault' in group_names" + - "vault_cert is defined" + +- name: Check gitea user + ansible.builtin.user: + name: "git" + register: "gitea_user" + when: "'gitea' in group_names" + +- name: Create gitea certs directory + ansible.builtin.file: + path: "/etc/ssl/{{ inventory_hostname }}/gitea" + state: "directory" + mode: "0755" + owner: "{{ gitea_user.name | default(omit) }}" + group: "{{ gitea_user.group | default(omit) }}" + when: "'gitea' in group_names" + +- name: Copy gitea certs + ansible.builtin.copy: + src: "{{ certs_path 
}}/{{ cert }}" + dest: "{{ certs_path }}/gitea/{{ cert }}" + mode: "0440" + owner: "{{ gitea_user.name | default(omit) }}" + group: "{{ gitea_user.group | default(omit) }}" + remote_src: true + loop: + - "{{ gitea_cert }}.pem" + - "{{ gitea_cert }}-fullchain.crt" + loop_control: + loop_var: "cert" + notify: + - Restart gitea + when: + - "'gitea' in group_names" + - "gitea_cert is defined" + +- name: Check keycloak user + ansible.builtin.user: + name: "keycloak" + register: "keycloak_user" + when: "'keycloak' in group_names" + +- name: Create keycloak certs directory + ansible.builtin.file: + path: "/etc/ssl/{{ inventory_hostname }}/keycloak" + state: "directory" + mode: "0755" + owner: "{{ keycloak_user.name | default(omit) }}" + group: "{{ keycloak_user.group | default(omit) }}" + when: "'keycloak' in group_names" + +- name: Copy keycloak certs + ansible.builtin.copy: + src: "{{ certs_path }}/{{ cert }}" + dest: "{{ certs_path }}/keycloak/{{ cert }}" + mode: "0440" + owner: "{{ keycloak_user.name | default(omit) }}" + group: "{{ keycloak_user.group | default(omit) }}" + remote_src: true + loop: + - "{{ keycloak_cert }}.pem" + - "{{ keycloak_cert }}-fullchain.crt" + loop_control: + loop_var: "cert" + notify: + - Restart keycloak + when: + - "'keycloak' in group_names" + - "keycloak_cert is defined" + +# we only restart haproxy if its cert files got modified +- name: Prepare haproxy certs + ansible.builtin.assemble: + src: "{{ certs_path }}/" + regexp: ".*{{ cert.key }}(-fullchain.crt|.pem)" + dest: "{{ certs_path }}/haproxy/{{ cert.key }}.pem" + group: "{{ haproxy_group | default('99') }}" + owner: "{{ haproxy_user | default('99') }}" + loop: "{{ query('dict', ssl_certs) }}" + loop_control: + loop_var: cert + when: "'proxy' in group_names" + notify: + - Restart haproxy diff --git
a/playbooks/roles/acme_drop_txt_records/defaults/main.yaml b/playbooks/roles/acme_drop_txt_records/defaults/main.yaml new file mode 100644 index 0000000..15310ab --- /dev/null +++ b/playbooks/roles/acme_drop_txt_records/defaults/main.yaml @@ -0,0 +1 @@ +dns_cloud: "otc-dns" diff --git a/playbooks/roles/acme_drop_txt_records/tasks/main.yaml b/playbooks/roles/acme_drop_txt_records/tasks/main.yaml new file mode 100644 index 0000000..4d88818 --- /dev/null +++ b/playbooks/roles/acme_drop_txt_records/tasks/main.yaml @@ -0,0 +1,26 @@ +- name: Make key list + set_fact: + acme_txt_keys: {} + +- name: Build key list + set_fact: + acme_txt_keys: "{{ acme_txt_keys | combine(hostvars[item]['acme_txt_required'], list_merge='append') }}" + with_inventory_hostnames: + - ssl_certs:!disabled + when: + - "item in hostvars" + - "'acme_txt_required' in hostvars[item]" + +- name: Final list + debug: + var: acme_txt_keys + +- name: Drop dns rec + openstack.cloud.recordset: + cloud: "{{ dns_cloud }}" + state: "absent" + zone: "{{ item.key | regex_replace('^_acme-challenge\\.([a-z0-9-_]*)\\.(.*)$', '\\2') }}." + name: "{{ item.key }}." 
+ recordset_type: "txt" + records: "{{ item.value }}" + loop: "{{ acme_txt_keys | dict2items }}" diff --git a/playbooks/roles/acme_install_txt_records/README.rst b/playbooks/roles/acme_install_txt_records/README.rst new file mode 100644 index 0000000..e69de29 diff --git a/playbooks/roles/acme_install_txt_records/defaults/main.yaml b/playbooks/roles/acme_install_txt_records/defaults/main.yaml new file mode 100644 index 0000000..15310ab --- /dev/null +++ b/playbooks/roles/acme_install_txt_records/defaults/main.yaml @@ -0,0 +1 @@ +dns_cloud: "otc-dns" diff --git a/playbooks/roles/acme_install_txt_records/tasks/main.yaml b/playbooks/roles/acme_install_txt_records/tasks/main.yaml new file mode 100644 index 0000000..6b51be3 --- /dev/null +++ b/playbooks/roles/acme_install_txt_records/tasks/main.yaml @@ -0,0 +1,25 @@ +- name: Make key list + set_fact: + acme_txt_keys: {} + +- name: Build key list + set_fact: + acme_txt_keys: "{{ acme_txt_keys | combine(hostvars[item]['acme_txt_required'], list_merge='append') }}" + with_inventory_hostnames: + - ssl_certs:!disabled + when: + - "item in hostvars" + - "'acme_txt_required' in hostvars[item]" + +- name: Final list + debug: + var: acme_txt_keys + +- name: Create dns rec + openstack.cloud.recordset: + cloud: "{{ dns_cloud }}" + zone: "{{ item.key | regex_replace('^_acme-challenge\\.([a-z0-9-_]*)\\.(.*)$', '\\2') }}." + name: "{{ item.key }}." 
+ recordset_type: "txt" + records: "{{ item.value }}" + loop: "{{ acme_txt_keys | dict2items }}" diff --git a/playbooks/roles/acme_request_certs/README.rst b/playbooks/roles/acme_request_certs/README.rst new file mode 100644 index 0000000..e69de29 diff --git a/playbooks/roles/acme_request_certs/defaults/main.yaml b/playbooks/roles/acme_request_certs/defaults/main.yaml new file mode 100644 index 0000000..2bc6285 --- /dev/null +++ b/playbooks/roles/acme_request_certs/defaults/main.yaml @@ -0,0 +1,5 @@ +certs_path: "/etc/ssl/{{ inventory_hostname }}" +acme_directory: "https://acme-v02.api.letsencrypt.org/directory" +acme_account_contact: + - "mailto:DL-PBCOTCDELECOCERT@t-systems.com" +ssl_cert_selfsign: false diff --git a/playbooks/roles/acme_request_certs/tasks/acme.yaml b/playbooks/roles/acme_request_certs/tasks/acme.yaml new file mode 100644 index 0000000..a378828 --- /dev/null +++ b/playbooks/roles/acme_request_certs/tasks/acme.yaml @@ -0,0 +1,25 @@ +- include_tasks: common.yaml + +- name: Create acme challenge + community.crypto.acme_certificate: + acme_version: 2 + acme_directory: "{{ acme_directory }}" + terms_agreed: "yes" + account_key_src: "{{ certs_path }}/account-key.pem" + src: "{{ certs_path }}/{{ cert.key }}.csr" + cert: "{{ certs_path }}/{{ cert.key }}.crt" + challenge: "dns-01" + remaining_days: 60 + force: "{{ csr_result is changed }}" + register: challenge + +- name: Save acme challenge + set_fact: + acme_challenge: "{{ acme_challenge | combine({cert.key: challenge}) }}" + when: challenge is defined and challenge is changed + +- name: Construct TXT + set_fact: + acme_txt_required: "{{ acme_txt_required | combine({item.key: ['\"'+item.value[0]+'\"']}) }}" + loop: "{{ challenge['challenge_data_dns'] | dict2items }}" + when: challenge is defined and challenge is changed diff --git a/playbooks/roles/acme_request_certs/tasks/common.yaml b/playbooks/roles/acme_request_certs/tasks/common.yaml new file mode 100644 index 0000000..649271c --- /dev/null +++ 
b/playbooks/roles/acme_request_certs/tasks/common.yaml @@ -0,0 +1,12 @@ +- name: Generate signing key + community.crypto.openssl_privatekey: + path: "{{ certs_path }}/{{ cert.key }}.pem" + size: 4096 + +- name: Generate csr + community.crypto.openssl_csr: + path: "{{ certs_path }}/{{ cert.key }}.csr" + privatekey_path: "{{ certs_path }}/{{ cert.key }}.pem" + common_name: "{{ cert.value[0] }}" + subject_alt_name: "DNS:{{ cert.value | join(',DNS:') }}" + register: csr_result diff --git a/playbooks/roles/acme_request_certs/tasks/main.yaml b/playbooks/roles/acme_request_certs/tasks/main.yaml new file mode 100644 index 0000000..cd2b471 --- /dev/null +++ b/playbooks/roles/acme_request_certs/tasks/main.yaml @@ -0,0 +1,56 @@ +- name: Include variables + include_vars: "{{ lookup('first_found', params) }}" + vars: + params: + files: "{{ distro_lookup_path }}" + paths: + - "vars" + +- name: Install required packages + become: true + ansible.builtin.package: + state: present + name: "{{ item }}" + loop: + - "{{ packages }}" + when: "ansible_facts.pkg_mgr != 'atomic_container'" + register: task_result + until: task_result is success + retries: 5 + +- name: Initialise ACME facts + set_fact: + # both facts are dicts that are extended with combine() in acme.yaml + acme_txt_required: {} + acme_challenge: {} + +- name: Create directory to store certs + file: + path: "{{ certs_path }}" + state: "directory" + mode: "0755" + +- name: Generate account key + community.crypto.openssl_privatekey: + path: "{{ certs_path }}/account-key.pem" + size: 4096 + +- name: Create account + community.crypto.acme_account: + account_key_src: "{{ certs_path }}/account-key.pem" + acme_directory: "{{ acme_directory }}" + acme_version: 2 + state: present + terms_agreed: yes + contact: "{{ acme_account_contact | default(omit) }}" + +- include_tasks: acme.yaml + loop: "{{ query('dict', ssl_certs) }}" + loop_control: + loop_var: cert + when: not ssl_cert_selfsign + +- include_tasks: selfsign.yaml + loop: "{{ query('dict', ssl_certs) }}" + loop_control: + loop_var: cert + when: ssl_cert_selfsign + diff --git
a/playbooks/roles/acme_request_certs/tasks/selfsign.yaml b/playbooks/roles/acme_request_certs/tasks/selfsign.yaml new file mode 100644 index 0000000..ce3431f --- /dev/null +++ b/playbooks/roles/acme_request_certs/tasks/selfsign.yaml @@ -0,0 +1,14 @@ +- include_tasks: common.yaml + +- name: Create selfsigned certificate + community.crypto.x509_certificate: + path: "{{ certs_path }}/{{ cert.key }}.crt" + privatekey_path: "{{ certs_path }}/{{ cert.key }}.pem" + csr_path: "{{ certs_path }}/{{ cert.key }}.csr" + provider: "selfsigned" + +- name: Create fullchain cert for haproxy + ansible.builtin.copy: + src: "{{ certs_path }}/{{ cert.key }}.crt" + dest: "{{ certs_path }}/{{ cert.key }}-fullchain.crt" + remote_src: true diff --git a/playbooks/roles/acme_request_certs/vars/Debian.yaml b/playbooks/roles/acme_request_certs/vars/Debian.yaml new file mode 100644 index 0000000..2310804 --- /dev/null +++ b/playbooks/roles/acme_request_certs/vars/Debian.yaml @@ -0,0 +1,3 @@ +--- +packages: + - python3-cryptography diff --git a/playbooks/roles/acme_request_certs/vars/RedHat.yaml b/playbooks/roles/acme_request_certs/vars/RedHat.yaml new file mode 100644 index 0000000..2310804 --- /dev/null +++ b/playbooks/roles/acme_request_certs/vars/RedHat.yaml @@ -0,0 +1,3 @@ +--- +packages: + - python3-cryptography diff --git a/playbooks/roles/add-inventory-known-hosts/README.rst b/playbooks/roles/add-inventory-known-hosts/README.rst new file mode 100644 index 0000000..c283a86 --- /dev/null +++ b/playbooks/roles/add-inventory-known-hosts/README.rst @@ -0,0 +1 @@ +Add the host keys from inventory to global known_hosts diff --git a/playbooks/roles/add-inventory-known-hosts/tasks/main.yaml b/playbooks/roles/add-inventory-known-hosts/tasks/main.yaml new file mode 100644 index 0000000..960a0a7 --- /dev/null +++ b/playbooks/roles/add-inventory-known-hosts/tasks/main.yaml @@ -0,0 +1,40 @@ +- name: Load the current inventory from bridge + slurp: + src: 
'/home/zuul/src/github.com/opentelekomcloud-infra/system-config/inventory/base/hosts.yaml' + register: _bridge_inventory_encoded + +- name: Turn inventory into variable + set_fact: + _bridge_inventory: '{{ _bridge_inventory_encoded.content | b64decode | from_yaml }}' + +- name: Build known_hosts list + set_fact: + bastion_known_hosts: >- + [ + {%- for host, values in _bridge_inventory['all']['hosts'].items() -%} + {% for key in values['host_keys'] %} + '{{ host }},{{ values.public_v4 }}{{ "," + values.public_v6 if 'public_v6' in values}} {{ key }}', + {% endfor %} + {%- endfor -%} + ] + +- name: Write out values to /etc/ssh/ssh_known_hosts + blockinfile: + path: '/etc/ssh/ssh_known_hosts' + block: | + {% for entry in bastion_known_hosts %} + {{ entry }} + {% endfor %} + owner: root + group: root + mode: 0644 + create: yes + +# Disable writing out known_hosts globally on the bastion host. +# Nothing on this host should be connecting to somewhere not codified +# above; this prevents us possibly hiding that by caching values. +- name: Disable known_hosts caching + lineinfile: + path: /etc/ssh/ssh_config + regexp: 'UserKnownHostsFile' + line: ' UserKnownHostsFile /dev/null' diff --git a/playbooks/roles/base/README.rst b/playbooks/roles/base/README.rst new file mode 100644 index 0000000..984908b --- /dev/null +++ b/playbooks/roles/base/README.rst @@ -0,0 +1 @@ +Directory to hold base roles. 
diff --git a/playbooks/roles/base/audit/README.rst b/playbooks/roles/base/audit/README.rst new file mode 100644 index 0000000..38bb161 --- /dev/null +++ b/playbooks/roles/base/audit/README.rst @@ -0,0 +1 @@ +Audit service installation/configuration role diff --git a/playbooks/roles/base/audit/defaults/main.yaml b/playbooks/roles/base/audit/defaults/main.yaml new file mode 100644 index 0000000..94409c7 --- /dev/null +++ b/playbooks/roles/base/audit/defaults/main.yaml @@ -0,0 +1,101 @@ +auditd_file_size: 10 +auditd_num_logs: 5 +auditd_rotate_action: "ROTATE" + +os_audit_deamon: "auditd" + +os_audit_rules_file: "/etc/audit/rules.d/audit.rules" + +config_system_events: true +system_events: + # System reboot + - "-a always,exit -F arch=b64 -S execve -F path=/sbin/reboot -k reboot" + - "-a always,exit -F arch=b64 -S execve -F path=/sbin/poweroff -k reboot" + - "-a always,exit -F arch=b64 -S execve -F path=/sbin/shutdown -k reboot" + # Change of scheduled jobs + - "-w /etc/at.allow" + - "-w /etc/at.deny" + - "-w /var/spool/at/" + - "-w /etc/crontab" + - "-w /etc/anacrontab" + - "-w /etc/cron.allow" + - "-w /etc/cron.deny" + - "-w /etc/cron.d/" + - "-w /etc/cron.hourly/" + - "-w /etc/cron.daily" + - "-w /etc/cron.weekly/" + - "-w /etc/cron.monthly/" + # Change of system time + - "-a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time-change" + - "-a always,exit -F arch=b64 -S clock_settime -k time-change" + - "-w /etc/localtime -p wa -k time-change" + # Connection of external device (storage) + - "-a always,exit -F arch=b64 -S mount -F auid>=1000 -F auid!=4294967295 -k mounts" + - "-a always,exit -F arch=b64 -S mount -F auid>=500 -F auid!=4294967295 -k export" + # Loading/unloading of kernel modules + - "-w /sbin/insmod -p x -k modules" + - "-w /sbin/rmmod -p x -k modules" + - "-w /sbin/modprobe -p x -k modules" + - "-a always,exit -F arch=b64 -S init_module -S delete_module -k modules" + +config_access_events: true +access_events: + # Logon and Logoff + - "-w 
/var/log/lastlog -p wa -k logins" + # Password Change + - "-w /etc/shadow -p wa -k identity" + - "-w /etc/gshadow -p wa -k identity" + - "-w /etc/security/opasswd -p wa -k identity" + # Escalation of privileges + - "-w /etc/sudoers -p wa -k scope" + - "-w /etc/sudoers.d -p wa -k scope" + - "-w /var/log/sudo.log -p wa -k actions" + # Modification of discretionary access control permissions + - "-a always,exit -F arch=b64 -S chmod -S fchmod -S fchmodat -F auid>=1000 -F auid!=4294967295 -k perm_mod" + - "-a always,exit -F arch=b64 -S chown -S fchown -S fchownat -S lchown -F auid>=1000 -F auid!=4294967295 -k perm_mod" + - "-a always,exit -F arch=b64 -S setxattr -S lsetxattr -S fsetxattr -S removexattr -S lremovexattr -S fremovexattr -F auid>=1000 -F auid!=4294967295 -k perm_mod" + + +config_account_group_mgmt_events: true +account_group_mgmt_events: +# Create/modify/delete users + - "-w /etc/passwd -p wa -k identity" +# Create/modify/delete groups + - "-w /etc/group -p wa -k identity" + +config_change_events: true +change_events: + # Deletion of logs + - "-w /var/log/audit/audit.log" + - "-w /var/log/audit/audit[1-4].log" + # Change of logging configuration + - "-w /etc/syslog." 
+ - "-w /etc/rsyslog.conf" + - "-w /etc/rsyslog.d/conf" + - "-w /etc/audit/auditd.conf -p wa" + - "-w /etc/audit/audit.rules -p wa" + # Network configuration change + - "-a always,exit -F arch=b64 -S sethostname -S setdomainname -k system-locale" + - "-w /etc/issue -p wa -k system-locale" + - "-w /etc/issue.net -p wa -k system-locale" + - "-w /etc/hosts -p wa -k system-locale" + - "-w /etc/network -p wa -k system-locale" + - "-w /etc/networks -p wa -k system-locale" + # Authentication Subsystem changes + - "-w /etc/pam.d/" + - "-w /etc/nsswitch.conf" + # Critical File changes + - "-w /etc/ssh/sshd_config" + - "-w /etc/sysctl.conf" + - "-w /etc/modprobe.conf" + - "-w /etc/profile.d/" + - "-w /etc/profile" + - "-w /etc/shells" + # Promtail/Telegraf logging service/config + - "-w /etc/promtail/" + - "-w /etc/systemd/system/promtail.service" + - "-w /etc/telegraf/" + - "-w /etc/systemd/system/telegraf.service" + +find_exclude_mountpoints: [] + diff --git a/playbooks/roles/base/audit/handlers/main.yaml b/playbooks/roles/base/audit/handlers/main.yaml new file mode 100644 index 0000000..3072821 --- /dev/null +++ b/playbooks/roles/base/audit/handlers/main.yaml @@ -0,0 +1,9 @@ +- name: restart auditd + ansible.builtin.command: "service {{ os_audit_deamon }} restart" + listen: restart auditd + when: not ansible_check_mode + +- name: update grub + ansible.builtin.command: "{{ os_grub_config_update }}" + changed_when: false + when: not ansible_check_mode diff --git a/playbooks/roles/base/audit/tasks/main.yaml b/playbooks/roles/base/audit/tasks/main.yaml new file mode 100644 index 0000000..edfa8bf --- /dev/null +++ b/playbooks/roles/base/audit/tasks/main.yaml @@ -0,0 +1,115 @@ +- name: Include OS-specific variables + include_vars: "{{ lookup('first_found', params) }}" + vars: + params: + files: "{{ distro_lookup_path }}" + paths: + - 'vars' + +- name: Install distro specific audit package + ansible.builtin.package: + state: present + name: "{{ distro_packages }}" + when: 
"ansible_facts.pkg_mgr != 'atomic_container'" + +- name: Check if GRUB_CMDLINE_LINUX exists + ansible.builtin.shell: grep -c "^GRUB_CMDLINE_LINUX=" /etc/default/grub || true + register: check_grub_config + changed_when: false + check_mode: no + when: enable_audit_in_grub | default(true) + +- name: Add GRUB_CMDLINE_LINUX line if not present + ansible.builtin.lineinfile: + path: "/etc/default/grub" + line: 'GRUB_CMDLINE_LINUX="audit=1"' + notify: update grub + when: + - enable_audit_in_grub | default(true) + - check_grub_config.stdout == "0" + +- name: Enable audit in config for grub + ansible.builtin.lineinfile: + path: "/etc/default/grub" + regexp: '^({{ item }}=(?!.*audit)\"[^\"]*)(\".*)' + line: '\1 audit=1\2' + backrefs: yes + notify: update grub + with_items: + - GRUB_CMDLINE_LINUX + - GRUB_CMDLINE_LINUX_DEFAULT + when: enable_audit_in_grub | default(true) + +- name: Configure max_log_file for auditd log rotation + ansible.builtin.lineinfile: + path: '/etc/audit/auditd.conf' + regexp: '^max_log_file =' + line: 'max_log_file = {{ auditd_file_size }}' + state: present + notify: restart auditd + when: configure_audit | default(true) + +- name: Configure num_logs for auditd log rotation + ansible.builtin.lineinfile: + path: '/etc/audit/auditd.conf' + regexp: '^num_logs =' + line: 'num_logs = {{ auditd_num_logs }}' + state: present + notify: restart auditd + when: configure_audit | default(true) + +- name: Configure max_log_file_action for auditd log rotation + ansible.builtin.lineinfile: + path: '/etc/audit/auditd.conf' + regexp: '^max_log_file_action =' + line: 'max_log_file_action = {{ auditd_rotate_action }}' + state: present + notify: restart auditd + when: configure_audit | default(true) + +- name: Configure logging events + ansible.builtin.template: + src: 'audit-rules.j2' + dest: "{{ os_audit_rules_file }}" + owner: root + group: root + mode: 0640 + notify: restart auditd + when: + - configure_audit_rules | default(true) + +- name: Build find command excluding mountpoints +
set_fact: + find_command: "{{ lookup('template', 'find_command.j2') }}" + when: + - configure_privileged_commands | default(true) + +- name: Search for privileged commands + ansible.builtin.shell: "{{ find_command }}" + register: priv_commands + changed_when: false + check_mode: no + when: + - configure_privileged_commands | default(true) + +- name: Configure logging for privileged commands + ansible.builtin.lineinfile: + path: "{{ os_audit_rules_file }}" + line: '-a always,exit -F path={{ item }} -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged' + state: present + with_items: '{{ priv_commands.stdout_lines }}' + notify: restart auditd + when: + - configure_privileged_commands | default(true) + +- name: Make audit configuration immutable + ansible.builtin.lineinfile: + path: "{{ os_audit_rules_file }}" + line: '-e 2' + state: present + when: configure_audit_immutable | default(true) + +- name: Ensure auditd is running + ansible.builtin.service: + name: "{{ os_audit_deamon }}" + enabled: yes + state: started diff --git a/playbooks/roles/base/audit/templates/audit-rules.j2 b/playbooks/roles/base/audit/templates/audit-rules.j2 new file mode 100644 index 0000000..a6d24c4 --- /dev/null +++ b/playbooks/roles/base/audit/templates/audit-rules.j2 @@ -0,0 +1,48 @@ +# This file contains the auditctl rules that are loaded +# whenever the audit daemon is started via the initscripts. +# The rules are simply the parameters that would be passed +# to auditctl. + +# First rule - delete all +-D + +# Increase the buffers to survive stress events.
+# Make this bigger for busy systems +-b 8192 + +# System events +{% if config_system_events %} +{% for sys in system_events %} +{{sys}} +{% endfor %} +{% for ossys in os_specific_system_events %} +{{ossys}} +{% endfor %} +{% endif %} + +# Access and authentication events +{% if config_access_events %} +{% for acc in access_events %} +{{acc}} +{% endfor %} +{% for osacc in os_specific_access_events %} +{{osacc}} +{% endfor %} +{% endif %} + +# Account and group management events +{% if config_account_group_mgmt_events %} +{% for agm in account_group_mgmt_events %} +{{agm}} +{% endfor %} +{% endif %} + +# Configuration change events +{% if config_change_events %} +{% for chg in change_events %} +{{chg}} +{% endfor %} +{% for oschg in os_specific_change_events %} +{{oschg}} +{% endfor %} +{% endif %} diff --git a/playbooks/roles/base/audit/templates/find_command.j2 b/playbooks/roles/base/audit/templates/find_command.j2 new file mode 100644 index 0000000..48a990e --- /dev/null +++ b/playbooks/roles/base/audit/templates/find_command.j2 @@ -0,0 +1,8 @@ +df --local -P | awk '{if (NR!=1) print $6}' \ +{% if find_exclude_mountpoints is defined and find_exclude_mountpoints|length %} +| grep -v \ +{% for dir in find_exclude_mountpoints %} +-e {{ dir }} \ +{% endfor %} +{% endif %} +| xargs -I '{}' find '{}' -xdev -type f \( -perm -4000 -o -perm -2000 \) -print 2>/dev/null diff --git a/playbooks/roles/base/audit/vars/Debian.yaml b/playbooks/roles/base/audit/vars/Debian.yaml new file mode 100644 index 0000000..35c8cea --- /dev/null +++ b/playbooks/roles/base/audit/vars/Debian.yaml @@ -0,0 +1,26 @@ +distro_packages: + - auditd +os_grub_config_update: "update-grub" + +os_specific_system_events: + # (Un)Installation of software + - "-w /usr/bin/dpkg -p x -k software_mgmt" + - "-w /usr/bin/apt-add-repository -p x -k software_mgmt" + - "-w /usr/bin/apt-get -p x -k software_mgmt" + - "-w /usr/bin/aptitude -p x -k software_mgmt" + +os_specific_access_events: + # Logon and Logoff + -
"-w /var/log/faillog -p wa -k logins" + - "-w /var/log/tallylog -p wa -k logins" + # AppArmor events + - "-w /etc/apparmor/ -p wa -k MAC-policy" + - "-w /etc/apparmor.d/ -p wa -k MAC-policy" + +os_specific_change_events: + # Modification of logs + - "-w /var/log/auth.log" + - "-w /var/log/system.log" + # Network configuration change + - "-w /etc/network/interfaces -p wa -k system-locale" + diff --git a/playbooks/roles/base/audit/vars/RedHat.yaml b/playbooks/roles/base/audit/vars/RedHat.yaml new file mode 100644 index 0000000..3a7bbf7 --- /dev/null +++ b/playbooks/roles/base/audit/vars/RedHat.yaml @@ -0,0 +1,24 @@ +distro_packages: + - audit +os_grub_config_update: "grub2-mkconfig -o /boot/grub2/grub.cfg" + + +os_specific_system_events: + # (Un)Installation of software + - "-w /usr/bin/rpm -p x -k software_mgmt" + - "-w /usr/bin/yum -p x -k software_mgmt" + - "-w /usr/bin/dnf -p x -k software_mgmt" + +os_specific_access_events: + # Logon and Logoff + - "-w /var/run/faillock/ -p wa -k logins" + # SELinux events + - "-w /etc/selinux/ -p wa -k MAC-policy" + +os_specific_change_events: + # Modification of logs + - "-w /var/log/messages" + # Network configuration change + - "-w /etc/sysconfig/network -p wa -k system-locale" + - "-w /etc/sysconfig/network-scripts/ -p wa -k system-locale" + diff --git a/playbooks/roles/base/repos/README.rst b/playbooks/roles/base/repos/README.rst new file mode 100644 index 0000000..7a21030 --- /dev/null +++ b/playbooks/roles/base/repos/README.rst @@ -0,0 +1,5 @@ +Set basic repository sources + +**Role Variables** + +* None diff --git a/playbooks/roles/base/repos/files/80retry b/playbooks/roles/base/repos/files/80retry new file mode 100644 index 0000000..8ebe6de --- /dev/null +++ b/playbooks/roles/base/repos/files/80retry @@ -0,0 +1 @@ +APT::Acquire::Retries "20"; diff --git a/playbooks/roles/base/repos/files/90no-translations b/playbooks/roles/base/repos/files/90no-translations new file mode 100644 index 0000000..2318f84 --- /dev/null +++ 
b/playbooks/roles/base/repos/files/90no-translations @@ -0,0 +1 @@ +Acquire::Languages "none"; diff --git a/playbooks/roles/base/repos/files/sources.list.bionic.aarch64 b/playbooks/roles/base/repos/files/sources.list.bionic.aarch64 new file mode 100644 index 0000000..30a09d2 --- /dev/null +++ b/playbooks/roles/base/repos/files/sources.list.bionic.aarch64 @@ -0,0 +1,8 @@ +# This file is kept updated by ansible, adapted from +# https://help.ubuntu.com/lts/serverguide/configuration.html +# Note the use of ports.ubuntu.com. + +deb http://ports.ubuntu.com/ubuntu-ports/ bionic main universe +deb http://ports.ubuntu.com/ubuntu-ports/ bionic-updates main universe +deb http://ports.ubuntu.com/ubuntu-ports/ bionic-backports main universe +deb http://ports.ubuntu.com/ubuntu-ports/ bionic-security main universe diff --git a/playbooks/roles/base/repos/files/sources.list.bionic.x86_64 b/playbooks/roles/base/repos/files/sources.list.bionic.x86_64 new file mode 100644 index 0000000..37969b7 --- /dev/null +++ b/playbooks/roles/base/repos/files/sources.list.bionic.x86_64 @@ -0,0 +1,7 @@ +# This file is kept updated by ansible, adapted from +# https://help.ubuntu.com/lts/serverguide/configuration.html + +deb http://us.archive.ubuntu.com/ubuntu bionic main universe +deb http://us.archive.ubuntu.com/ubuntu bionic-updates main universe +deb http://us.archive.ubuntu.com/ubuntu bionic-backports main universe +deb http://security.ubuntu.com/ubuntu bionic-security main universe diff --git a/playbooks/roles/base/repos/files/sources.list.bullseye.x86_64 b/playbooks/roles/base/repos/files/sources.list.bullseye.x86_64 new file mode 100644 index 0000000..8da7e8f --- /dev/null +++ b/playbooks/roles/base/repos/files/sources.list.bullseye.x86_64 @@ -0,0 +1,8 @@ +deb [arch=amd64] http://ftp.de.debian.org/debian/ bullseye main contrib non-free +deb-src [arch=amd64] http://ftp.de.debian.org/debian/ bullseye main contrib non-free + +deb [arch=amd64] http://ftp.de.debian.org/debian/ bullseye-updates 
main contrib non-free +deb-src [arch=amd64] http://ftp.de.debian.org/debian/ bullseye-updates main contrib non-free + +#deb [arch=amd64] http://security.debian.org/ bullseye/updates main contrib non-free +#deb-src [arch=amd64] http://security.debian.org/ bullseye/updates main contrib non-free diff --git a/playbooks/roles/base/repos/files/sources.list.focal.aarch64 b/playbooks/roles/base/repos/files/sources.list.focal.aarch64 new file mode 100644 index 0000000..222a08f --- /dev/null +++ b/playbooks/roles/base/repos/files/sources.list.focal.aarch64 @@ -0,0 +1,8 @@ +# This file is kept updated by ansible, adapted from +# https://help.ubuntu.com/lts/serverguide/configuration.html +# Note the use of ports.ubuntu.com. + +deb http://ports.ubuntu.com/ubuntu-ports/ focal main universe +deb http://ports.ubuntu.com/ubuntu-ports/ focal-updates main universe +deb http://ports.ubuntu.com/ubuntu-ports/ focal-backports main universe +deb http://ports.ubuntu.com/ubuntu-ports/ focal-security main universe diff --git a/playbooks/roles/base/repos/files/sources.list.focal.x86_64 b/playbooks/roles/base/repos/files/sources.list.focal.x86_64 new file mode 100644 index 0000000..0435b34 --- /dev/null +++ b/playbooks/roles/base/repos/files/sources.list.focal.x86_64 @@ -0,0 +1,7 @@ +# This file is kept updated by ansible, adapted from +# https://help.ubuntu.com/lts/serverguide/configuration.html + +deb http://de.archive.ubuntu.com/ubuntu focal main universe +deb http://de.archive.ubuntu.com/ubuntu focal-updates main universe +deb http://de.archive.ubuntu.com/ubuntu focal-backports main universe +deb http://de.archive.ubuntu.com/ubuntu focal-security main universe diff --git a/playbooks/roles/base/repos/files/sources.list.jammy.x86_64 b/playbooks/roles/base/repos/files/sources.list.jammy.x86_64 new file mode 100644 index 0000000..3813b3d --- /dev/null +++ b/playbooks/roles/base/repos/files/sources.list.jammy.x86_64 @@ -0,0 +1,7 @@ +# This file is kept updated by ansible, adapted from +# 
https://help.ubuntu.com/lts/serverguide/configuration.html + +deb http://de.archive.ubuntu.com/ubuntu jammy main universe +deb http://de.archive.ubuntu.com/ubuntu jammy-updates main universe +deb http://de.archive.ubuntu.com/ubuntu jammy-backports main universe +deb http://de.archive.ubuntu.com/ubuntu jammy-security main universe diff --git a/playbooks/roles/base/repos/files/sources.list.sid.x86_64 b/playbooks/roles/base/repos/files/sources.list.sid.x86_64 new file mode 100644 index 0000000..eef96b7 --- /dev/null +++ b/playbooks/roles/base/repos/files/sources.list.sid.x86_64 @@ -0,0 +1,10 @@ +deb http://ftp.de.debian.org/debian/ sid main contrib non-free +deb-src http://ftp.de.debian.org/debian/ sid main contrib non-free + +# Updates +# deb http://ftp.de.debian.org/debian/ sid-updates main contrib non-free +# deb-src http://ftp.de.debian.org/debian/ sid-updates main contrib non-free + +# Security Updates http://www.debian.org/security/ +# deb http://security.debian.org/ sid/updates main contrib non-free +# deb-src http://security.debian.org/ sid/updates main contrib non-free diff --git a/playbooks/roles/base/repos/files/sources.list.trusty.x86_64 b/playbooks/roles/base/repos/files/sources.list.trusty.x86_64 new file mode 100644 index 0000000..02458a5 --- /dev/null +++ b/playbooks/roles/base/repos/files/sources.list.trusty.x86_64 @@ -0,0 +1,10 @@ +# This file is kept updated by ansible, adapted from +# http://ubuntuguide.org/wiki/Ubuntu_Trusty_Packages_and_Repositories + +deb http://archive.ubuntu.com/ubuntu trusty main +deb http://archive.ubuntu.com/ubuntu trusty-updates main +deb http://archive.ubuntu.com/ubuntu trusty universe +deb http://archive.ubuntu.com/ubuntu trusty-updates universe +deb http://archive.ubuntu.com/ubuntu trusty-backports main universe +deb http://security.ubuntu.com/ubuntu trusty-security main +deb http://security.ubuntu.com/ubuntu trusty-security universe diff --git a/playbooks/roles/base/repos/handlers/main.yaml 
b/playbooks/roles/base/repos/handlers/main.yaml new file mode 100644 index 0000000..603689e --- /dev/null +++ b/playbooks/roles/base/repos/handlers/main.yaml @@ -0,0 +1,3 @@ +- name: Update apt cache + apt: + update_cache: true diff --git a/playbooks/roles/base/repos/tasks/CentOS.yaml b/playbooks/roles/base/repos/tasks/CentOS.yaml new file mode 100644 index 0000000..8610c9b --- /dev/null +++ b/playbooks/roles/base/repos/tasks/CentOS.yaml @@ -0,0 +1,23 @@ +- name: Install epel-release + yum: + name: epel-release + +# there is a bug (rhbz#1261747) where systemd can fail to enable +# services due to selinux errors after upgrade. A work-around is +# to install the latest version of selinux and systemd here and +# restart the daemon for good measure after it is upgraded. +- name: Install latest selinux-policy and systemd + yum: + name: "{{ package_item }}" + state: latest + loop: + - selinux-policy + - systemd + loop_control: + loop_var: package_item + register: systemd_updated + +- name: Restart systemd + systemd: + daemon_reload: yes + when: systemd_updated is changed diff --git a/playbooks/roles/base/repos/tasks/Debian.yaml b/playbooks/roles/base/repos/tasks/Debian.yaml new file mode 100644 index 0000000..fcf6284 --- /dev/null +++ b/playbooks/roles/base/repos/tasks/Debian.yaml @@ -0,0 +1,22 @@ +- name: Configure apt retries + copy: + mode: 0444 + src: 80retry + dest: /etc/apt/apt.conf.d/80retry + +- name: Disable apt translations + copy: + mode: 0444 + src: 90no-translations + dest: /etc/apt/apt.conf.d/90no-translations + +- name: Make /etc/apt/sources.list.d + ansible.builtin.file: + state: "directory" + path: "/etc/apt/sources.list.d" + +- name: Replace sources.list file + copy: + src: 'sources.list.{{ ansible_facts.lsb.codename }}.{{ ansible_facts.architecture }}' + dest: "/etc/apt/sources.list.d/{{ ansible_facts.lsb.codename }}.list" + notify: Update apt cache diff --git a/playbooks/roles/base/repos/tasks/main.yaml b/playbooks/roles/base/repos/tasks/main.yaml new 
file mode 100644 index 0000000..e682b7e --- /dev/null +++ b/playbooks/roles/base/repos/tasks/main.yaml @@ -0,0 +1,11 @@ +- name: Set up additional repos + include_tasks: "{{ item }}" + vars: + params: + files: + - "{{ ansible_facts.distribution }}.yaml" + - "{{ ansible_facts.os_family }}.yaml" + loop: "{{ query('first_found', params, errors='ignore') }}" + when: "ansible_facts.pkg_mgr != 'atomic_container'" + +- meta: flush_handlers diff --git a/playbooks/roles/base/server/README.rst b/playbooks/roles/base/server/README.rst new file mode 100644 index 0000000..e61f5c0 --- /dev/null +++ b/playbooks/roles/base/server/README.rst @@ -0,0 +1,9 @@ +Basic common server configuration + +**Role Variables** + +.. zuul:rolevar:: bastion_key_exclusive + :default: True + + Whether the bastion ssh key is the only key allowed to ssh in as + root. diff --git a/playbooks/roles/base/server/defaults/main.yaml b/playbooks/roles/base/server/defaults/main.yaml new file mode 100644 index 0000000..3bb2f01 --- /dev/null +++ b/playbooks/roles/base/server/defaults/main.yaml @@ -0,0 +1,61 @@ +bastion_ipv4: 192.168.0.239 +bastion_ipv6: fe80::3709:495b:66ac:8875 +bastion_public_key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDYnt1MAAdWM8xW3Y5SeUhk+5cMZbpsz+juSUrbTsKDFx32BhsXtKR1sk8pCoKXTkvU+tD4PF3rI+7R8M/gI9MB8VM7l9BweZ1EDwVrDuiSxtl0HtYV+E+a7Fd0jDytGrzUloQtBqYqfPTOegU8APIOKOygrrQurBnZaCJfQoRtPEfoyuzL0maLAQ5AWATAwwbCJsQsgxVEL8flW2lt2r+JiQfXx8OGb/uUUzlDpwGdhZHa5l2VY+huShL7ij2LHxVuw3OExAzRdgPduqQG/xJGZZJsj5Vyorvkd0K78J2a+mEvNCIl0BjCfPhOJYk9FGB5Ne9XLCcwUVHImqPZpJIFTWW/Ic7/m73lqHNGGi9dem7aXCMQlltAvegL8H6JY6LhcIODB8zTDQ81mk1kmO0PTotujKWYLb9ptw1AuScvPAxcRHjKsSnlkUOK/oy3oE/J6LO/fUTay7DH34DqaqLNKGLXvYfmzkrqTi5IrCb0JE1HAcbelmRh/s5+CMxuOVk= root@bastion1.eco.tsi-dev.otc-service.com +bastion_key_exclusive: true +base_packages: + - at + - git + - logrotate + - lvm2 + - openssh-server + - parted + - rsync + - rsyslog + - strace + - tcpdump + - wget +logrotate_maxsize: "300M" + +# DT 3.04-3 +ssh_key_ex: + - 
curve25519-sha256@libssh.org + - diffie-hellman-group-exchange-sha256 + - ecdh-sha2-nistp521 + - ecdh-sha2-nistp384 + - ecdh-sha2-nistp256 + +# DT 3.04-4 +ssh_ciphers: + - chacha20-poly1305@openssh.com + - aes256-gcm@openssh.com + - aes128-gcm@openssh.com + - aes256-ctr + - aes192-ctr + - aes128-ctr + +# DT 3.04-5 +ssh_macs: + - hmac-sha2-512-etm@openssh.com + - hmac-sha2-256-etm@openssh.com + - hmac-sha2-512 + - hmac-sha2-256 + +# DT 3.04-6 +ssh_hostkey_algorithm: + - ecdsa-sha2-nistp256 + - ecdsa-sha2-nistp256-cert-v01@openssh.com + - ecdsa-sha2-nistp384 + - ecdsa-sha2-nistp384-cert-v01@openssh.com + - ecdsa-sha2-nistp521 + - ecdsa-sha2-nistp521-cert-v01@openssh.com + - ssh-ed25519 + - ssh-ed25519-cert-v01@openssh.com + # SKs are not supported by all OSs, disable them for now + # - sk-ssh-ed25519@openssh.com + # - sk-ssh-ed25519-cert-v01@openssh.com + # - sk-ecdsa-sha2-nistp256@openssh.com + # - sk-ecdsa-sha2-nistp256-cert-v01@openssh.com + +ssh_disable_forwarding: true +ssh_allow_tcp_forwarding: false +ssh_allow_agent_forwarding: false diff --git a/playbooks/roles/base/server/files/10periodic b/playbooks/roles/base/server/files/10periodic new file mode 100644 index 0000000..83f51c6 --- /dev/null +++ b/playbooks/roles/base/server/files/10periodic @@ -0,0 +1,6 @@ +APT::Periodic::Enable "1"; +APT::Periodic::Update-Package-Lists "1"; +APT::Periodic::Download-Upgradeable-Packages "1"; +APT::Periodic::AutocleanInterval "5"; +APT::Periodic::Unattended-Upgrade "1"; +APT::Periodic::RandomSleep "1800"; diff --git a/playbooks/roles/base/server/files/50unattended-upgrades b/playbooks/roles/base/server/files/50unattended-upgrades new file mode 100644 index 0000000..2634307 --- /dev/null +++ b/playbooks/roles/base/server/files/50unattended-upgrades @@ -0,0 +1,30 @@ +// Automatically upgrade packages from these (origin, archive) pairs +Unattended-Upgrade::Allowed-Origins { + // ${distro_id} and ${distro_codename} will be automatically expanded + "${distro_id} stable"; + 
"${distro_id} ${distro_codename}-security"; + "${distro_id} ${distro_codename}-updates"; +// "${distro_id} ${distro_codename}-proposed-updates"; +}; + +// List of packages to not update +Unattended-Upgrade::Package-Blacklist { +// "vim"; +// "libc6"; +// "libc6-dev"; +// "libc6-i686"; +}; + +// Send email to this address for problems or package upgrades. +// If empty or unset then no email is sent; make sure that you +// have a working mail setup on your system. The package 'mailx' +// must be installed, or anything that provides /usr/bin/mail. +Unattended-Upgrade::Mail "root"; + +// Do automatic removal of new unused dependencies after the upgrade +// (equivalent to apt-get autoremove) +Unattended-Upgrade::Remove-Unused-Dependencies "true"; + +// Automatically reboot *WITHOUT CONFIRMATION* if +// the file /var/run/reboot-required is found after the upgrade +//Unattended-Upgrade::Automatic-Reboot "false"; diff --git a/playbooks/roles/base/server/files/95disable-recommends b/playbooks/roles/base/server/files/95disable-recommends new file mode 100644 index 0000000..c378775 --- /dev/null +++ b/playbooks/roles/base/server/files/95disable-recommends @@ -0,0 +1,2 @@ +APT::Install-Recommends "0"; +APT::Install-Suggests "0"; \ No newline at end of file diff --git a/playbooks/roles/base/server/files/bash-history.sh b/playbooks/roles/base/server/files/bash-history.sh new file mode 100644 index 0000000..e3f56e6 --- /dev/null +++ b/playbooks/roles/base/server/files/bash-history.sh @@ -0,0 +1 @@ +export HISTTIMEFORMAT="%Y-%m-%dT%T%z " diff --git a/playbooks/roles/base/server/files/debian_limits.conf b/playbooks/roles/base/server/files/debian_limits.conf new file mode 100644 index 0000000..860c08b --- /dev/null +++ b/playbooks/roles/base/server/files/debian_limits.conf @@ -0,0 +1,4 @@ +# Original 1024 +* soft nofile 4096 +# Original 4096 +* hard nofile 8192 diff --git a/playbooks/roles/base/server/files/yum/yum-cron.conf b/playbooks/roles/base/server/files/yum/yum-cron.conf new
file mode 100644 index 0000000..bd1ec68 --- /dev/null +++ b/playbooks/roles/base/server/files/yum/yum-cron.conf @@ -0,0 +1,81 @@ +[commands] +# What kind of update to use: +# default = yum upgrade +# security = yum --security upgrade +# security-severity:Critical = yum --sec-severity=Critical upgrade +# minimal = yum --bugfix update-minimal +# minimal-security = yum --security update-minimal +# minimal-security-severity:Critical = yum --sec-severity=Critical update-minimal +update_cmd = default + +# Whether a message should be emitted when updates are available, +# were downloaded, or applied. +update_messages = yes + +# Whether updates should be downloaded when they are available. +download_updates = yes + +# Whether updates should be applied when they are available. Note +# that download_updates must also be yes for the update to be applied. +apply_updates = yes + +# Maximum amount of time to randomly sleep, in minutes. The program +# will sleep for a random amount of time between 0 and random_sleep +# minutes before running. This is useful for e.g. staggering the +# times that multiple systems will access update servers. If +# random_sleep is 0 or negative, the program will run immediately. +# 6*60 = 360 +random_sleep = 360 + + +[emitters] +# Name to use for this system in messages that are emitted. If +# system_name is None, the hostname will be used. +system_name = None + +# How to send messages. Valid options are stdio and email. If +# emit_via includes stdio, messages will be sent to stdout; this is useful +# to have cron send the messages. If emit_via includes email, this +# program will send email itself according to the configured options. +# If emit_via is None or left blank, no messages will be sent. +emit_via = stdio + +# The width, in characters, that messages that are emitted should be +# formatted to. +output_width = 80 + + +[email] +# The address to send email messages from. +# NOTE: 'localhost' will be replaced with the value of system_name.
+email_from = root@localhost + +# List of addresses to send messages to. +email_to = root + +# Name of the host to connect to to send email messages. +email_host = localhost + + +[groups] +# NOTE: This only works when group_command != objects, which is now the default +# List of groups to update +group_list = None + +# The types of group packages to install +group_package_types = mandatory, default + +[base] +# This section overrides yum.conf + +# Use this to filter Yum core messages +# -4: critical +# -3: critical+errors +# -2: critical+errors+warnings (default) +debuglevel = -2 + +# skip_broken = True +mdpolicy = group:main + +# Uncomment to auto-import new gpg keys (dangerous) +# assumeyes = True diff --git a/playbooks/roles/base/server/handlers/main.yaml b/playbooks/roles/base/server/handlers/main.yaml new file mode 100644 index 0000000..4c60d26 --- /dev/null +++ b/playbooks/roles/base/server/handlers/main.yaml @@ -0,0 +1,17 @@ +- name: Restart rsyslog + service: + name: rsyslog + state: restarted + +- name: Restart logrotate + service: + name: logrotate + enabled: yes + state: restarted + ignore_errors: true + +- name: Restart ssh + service: + name: '{{ ssh_service_name }}' + state: restarted + when: not ansible_facts.is_chroot diff --git a/playbooks/roles/base/server/tasks/Debian.yaml b/playbooks/roles/base/server/tasks/Debian.yaml new file mode 100644 index 0000000..3a6f4cd --- /dev/null +++ b/playbooks/roles/base/server/tasks/Debian.yaml @@ -0,0 +1,65 @@ +- name: Disable install of additional recommends and suggests packages + copy: + mode: 0444 + src: 95disable-recommends + dest: /etc/apt/apt.conf.d/ + owner: root + group: root + +- name: Remove ntp and run timesyncd + block: + - name: Remove ntp + package: + name: ntp + state: absent + + - name: Ensure systemd-timesyncd is running + service: + name: systemd-timesyncd + enabled: yes + state: started + +- name: Remove packages that make no sense for our servers + package: + name: + - apport + - whoopsie + - 
popularity-contest + - lxd + - lxd-client + # - cloud-init + state: absent + +- name: Get rid of extra depends + command: apt-get autoremove -y + +- name: Configure file limits + copy: + mode: 0644 + src: debian_limits.conf + dest: /etc/security/limits.d/60-nofile-limit.conf + +- name: Install apt-daily 10periodic file for unattended-upgrades + copy: + mode: 0444 + src: 10periodic + dest: /etc/apt/apt.conf.d/10periodic + owner: root + group: root + +- name: Install 50unattended-upgrades file for unattended-upgrades + copy: + mode: 0444 + src: 50unattended-upgrades + dest: /etc/apt/apt.conf.d/50unattended-upgrades + owner: root + group: root + +- name: Ensure required build packages + apt: + update_cache: yes + name: + - libffi-dev + - libssl-dev + - build-essential + when: ansible_architecture == 'aarch64' diff --git a/playbooks/roles/base/server/tasks/RedHat.yaml b/playbooks/roles/base/server/tasks/RedHat.yaml new file mode 100644 index 0000000..580749d --- /dev/null +++ b/playbooks/roles/base/server/tasks/RedHat.yaml @@ -0,0 +1,27 @@ +- name: Remove ntp and run timesyncd + block: + - name: Remove ntp + ansible.builtin.package: + name: ntp + state: absent + + - name: Ensure chrony is running + ansible.builtin.systemd: + name: chronyd + enabled: yes + state: started + +- name: Ensure dnf-automatic updates the system + community.general.ini_file: + path: "/etc/dnf/automatic.conf" + section: "commands" + option: "apply_updates" + value: "yes" + when: "ansible_facts.pkg_mgr != 'atomic_container'" + +- name: Ensure dnf-automatic service is running + ansible.builtin.systemd: + name: dnf-automatic.timer + enabled: yes + state: started + when: "ansible_facts.pkg_mgr != 'atomic_container'" diff --git a/playbooks/roles/base/server/tasks/main.yaml b/playbooks/roles/base/server/tasks/main.yaml new file mode 100644 index 0000000..518b295 --- /dev/null +++ b/playbooks/roles/base/server/tasks/main.yaml @@ -0,0 +1,129 @@ +- name: Install base packages + ansible.builtin.package: + 
state: present + name: "{{ base_packages }}" + when: "ansible_facts.pkg_mgr != 'atomic_container'" + +- name: Install fallback packages + ansible.builtin.package: + state: "present" + name: "{{ item }}" + ignore_errors: true + loop: + - redhat-lsb-core + when: "ansible_facts.pkg_mgr != 'atomic_container'" + +- name: Include OS-specific variables + include_vars: "{{ lookup('first_found', params) }}" + vars: + params: + files: "{{ distro_lookup_path }}" + paths: + - 'vars' + +- name: Install distro specific packages + ansible.builtin.package: + state: present + name: "{{ distro_packages }}" + when: "ansible_facts.pkg_mgr != 'atomic_container'" + +- name: Increase syslog message size in order to capture python tracebacks + copy: + content: '$MaxMessageSize 6k' + dest: /etc/rsyslog.d/99-maxsize.conf + mode: 0644 + when: "ansible_facts.pkg_mgr != 'atomic_container'" + notify: Restart rsyslog + +- name: Deploy SystemConfig logrotate config + ansible.builtin.template: + dest: "/etc/logrotate.d/1system-config" + src: "logrotate.j2" + notify: Restart logrotate + +- name: Ensure logrotate.timer is running + ansible.builtin.service: + name: logrotate.timer + enabled: yes + state: started + ignore_errors: true + +- name: Ensure rsyslog is running + ansible.builtin.service: + name: rsyslog + enabled: yes + state: started + when: "ansible_facts.pkg_mgr != 'atomic_container'" + +# TODO: remove this once we are sure automation user is properly deployed +# everywhere +- name: Set ssh key for management + ansible.builtin.authorized_key: + state: present + user: root + exclusive: "{{ bastion_key_exclusive }}" + key: "{{ bastion_public_key }}" + key_options: | + from="{{ bastion_ipv4 }},{{ bastion_ipv6 }},localhost" + +- name: Set ssh key for management + ansible.builtin.authorized_key: + state: present + user: "{{ ansible_user }}" + exclusive: "{{ bastion_key_exclusive }}" + key: "{{ bastion_public_key }}" + key_options: | + from="{{ bastion_ipv4 }},{{ bastion_ipv6 }},localhost" + 
when: "ansible_user is defined" + +# - name: Check presence of /etc/ssh/sshd_config.d directory +# ansible.builtin.stat: +# path: /etc/ssh/sshd_config.d/ +# register: sshd_config_dir + +- name: Install sshd config + ansible.builtin.template: + src: "sshd_config.j2" + dest: "/etc/ssh/sshd_config" + owner: "root" + group: "root" + mode: 0444 + validate: "/usr/sbin/sshd -t -f %s" + # when: not sshd_config_dir.stat.exists + when: "ansible_facts.pkg_mgr != 'atomic_container'" + notify: Restart ssh + +# #Some OS want us to place content under /etc/ssh/sshd_config.d/*.conf +# but then we seem not to have possibility to allow root login from bridge +# - name: Install sshd part config +# ansible.builtin.template: +# src: "90-eco.conf.j2" +# dest: "/etc/ssh/sshd_config.d/90-eco.conf" +# owner: "root" +# group: "root" +# mode: 0444 +# validate: "/usr/sbin/sshd -t -f %s" +# when: sshd_config_dir.stat.isdir is defined and sshd_config_dir.stat.isdir +# notify: Restart ssh + +- name: Disable byobu + file: + path: /etc/profile.d/Z98-byobu.sh + state: absent + +- name: Setup RFC3339 bash history timestamps + copy: + mode: 0644 + src: bash-history.sh + dest: /etc/profile.d/bash-history.sh + +- name: Ensure root cache directory + file: + path: /root/.cache + state: directory + mode: 0700 + +- name: Include OS-specific tasks + include_tasks: "{{ lookup('first_found', file_list) }}" + vars: + file_list: "{{ distro_lookup_path }}" diff --git a/playbooks/roles/base/server/templates/90-eco.conf.j2 b/playbooks/roles/base/server/templates/90-eco.conf.j2 new file mode 100644 index 0000000..865e6ee --- /dev/null +++ b/playbooks/roles/base/server/templates/90-eco.conf.j2 @@ -0,0 +1,31 @@ +PermitRootLogin no +StrictModes yes + +PubkeyAuthentication yes + +PermitEmptyPasswords no + +# Change to yes to enable challenge-response passwords (beware issues with +# some PAM modules and threads) +ChallengeResponseAuthentication no + +# Change to no to disable tunnelled clear text passwords 
+PasswordAuthentication no + +# Set this to 'yes' to enable PAM authentication, account processing, +# and session processing. If this is enabled, PAM authentication will +# be allowed through the ChallengeResponseAuthentication and +# PasswordAuthentication. Depending on your PAM configuration, +# PAM authentication via ChallengeResponseAuthentication may bypass +# the setting of "PermitRootLogin without-password". +# If you just want the PAM account and session checks to run without +# PAM authentication, then enable this but set PasswordAuthentication +# and ChallengeResponseAuthentication to 'no'. +UsePAM yes + +# allow ansible connections from bridge host +Match address {{ bastion_ipv4 }},{{ bastion_ipv6 }} + PermitRootLogin without-password +# allow ansible connections from localhost +Match host localhost + PermitRootLogin without-password diff --git a/playbooks/roles/base/server/templates/logrotate.j2 b/playbooks/roles/base/server/templates/logrotate.j2 new file mode 100644 index 0000000..79e3385 --- /dev/null +++ b/playbooks/roles/base/server/templates/logrotate.j2 @@ -0,0 +1,5 @@ +compress + +{% if logrotate_maxsize is defined %} +maxsize {{ logrotate_maxsize }} +{% endif %} diff --git a/playbooks/roles/base/server/templates/sshd_config.j2 b/playbooks/roles/base/server/templates/sshd_config.j2 new file mode 100644 index 0000000..d83c6a7 --- /dev/null +++ b/playbooks/roles/base/server/templates/sshd_config.j2 @@ -0,0 +1,127 @@ +# Package generated configuration file +# See the sshd_config(5) manpage for details + +# What ports, IPs and protocols we listen for +Port 22 +# Use these options to restrict which interfaces/protocols sshd will bind to +# DT 3.04-1 +Protocol 2 +# HostKeys for protocol version 2 +HostKey /etc/ssh/ssh_host_rsa_key +# HostKey /etc/ssh/ssh_host_dsa_key +HostKey /etc/ssh/ssh_host_ecdsa_key +HostKey /etc/ssh/ssh_host_ed25519_key +#Privilege Separation is turned on for security +#UsePrivilegeSeparation yes + +# DT 3.04-3 +{% if ssh_key_ex 
is defined %} +# Key Exchange Algorithms +KexAlgorithms {{ ssh_key_ex | join(',') }} +{% endif %} +# DT 3.04-4 +{% if ssh_ciphers %} +# Ciphers +Ciphers {{ ssh_ciphers | join(',') }} +{% endif %} +# DT 3.04-5 +{% if ssh_macs %} +# MACs +MACs {{ ssh_macs | join(',') }} +{% endif %} +# DT 3.04-6 +{% if ssh_hostkey_algorithm %} +# Host Key Algorithms +HostKeyAlgorithms {{ ssh_hostkey_algorithm | join(',') }} +{% endif %} + +# Logging +# DT 3.04-7 +SyslogFacility AUTH +LogLevel INFO + +# Authentication: +# DT 3.04-8 +LoginGraceTime 60 +# DT 3.04-9 +MaxAuthTries 5 +# DT 3.04-10 +PermitRootLogin no +# DT 3.04-11 +StrictModes yes +# DT 3.04-12 +PubkeyAuthentication yes +#AuthorizedKeysFile %h/.ssh/authorized_keys +# DT 3.04-13 +# Change to no to disable tunnelled clear text passwords +PasswordAuthentication no + +# DT 3.04-14 +# Don't read the user's ~/.rhosts and ~/.shosts files +IgnoreRhosts yes +# DT 3.04-15 +# similar for protocol version 2 +HostbasedAuthentication no +# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication +#IgnoreUserKnownHosts yes + +# DT 3.04-17 +# DT recommends 60, but it is too low +ClientAliveInterval 300 +ClientAliveCountMax 10 +TCPKeepAlive yes + +# DT 3.04-18 +PermitTunnel no + +# DT 3.04-19 +AllowTcpForwarding {{ ssh_allow_tcp_forwarding | ternary('yes', 'no') }} + +# DT 3.04-20 +AllowAgentForwarding {{ ssh_allow_agent_forwarding | ternary('yes', 'no') }} + +# DT 3.04-21 +GatewayPorts no + +# DT 3.04-22 +X11Forwarding no + +# DT 3.04-23 +PermitUserEnvironment no + +# DT 3.04-24 +PermitEmptyPasswords no + +# Change to yes to enable challenge-response passwords (beware issues with +# some PAM modules and threads) +ChallengeResponseAuthentication no + +PrintMotd no +PrintLastLog yes +#UseLogin no + +#MaxStartups 10:30:60 +#Banner /etc/issue.net + +# Allow client to pass locale environment variables +# AcceptEnv LANG LC_* + +Subsystem sftp {{ sftp_path }} + +# Set this to 'yes' to enable PAM authentication, account 
processing, +# and session processing. If this is enabled, PAM authentication will +# be allowed through the ChallengeResponseAuthentication and +# PasswordAuthentication. Depending on your PAM configuration, +# PAM authentication via ChallengeResponseAuthentication may bypass +# the setting of "PermitRootLogin without-password". +# If you just want the PAM account and session checks to run without +# PAM authentication, then enable this but set PasswordAuthentication +# and ChallengeResponseAuthentication to 'no'. +UsePAM yes + +# allow ansible connections from bridge host +Match address {{ bastion_ipv4 }},{{ bastion_ipv6 }} + PermitRootLogin without-password +# allow ansible connections from localhost +Match host localhost + PermitRootLogin without-password diff --git a/playbooks/roles/base/server/vars/Debian.yaml b/playbooks/roles/base/server/vars/Debian.yaml new file mode 100644 index 0000000..fc3cceb --- /dev/null +++ b/playbooks/roles/base/server/vars/Debian.yaml @@ -0,0 +1,13 @@ +distro_packages: + - dnsutils + # - emacs-nox + # - yaml-mode + - iputils-ping + - vim-nox + - unattended-upgrades + - mailutils + - gnupg + - systemd-timesyncd + - apparmor-utils +sftp_path: /usr/lib/openssh/sftp-server +ssh_service_name: ssh diff --git a/playbooks/roles/base/server/vars/RedHat.yaml b/playbooks/roles/base/server/vars/RedHat.yaml new file mode 100644 index 0000000..9ccceed --- /dev/null +++ b/playbooks/roles/base/server/vars/RedHat.yaml @@ -0,0 +1,10 @@ +distro_packages: + - bind-utils + - emacs-nox + - iputils + - chrony + - vim-minimal + - dnf-automatic +sftp_path: /usr/libexec/openssh/sftp-server +ssh_service_name: sshd + diff --git a/playbooks/roles/base/server/vars/Ubuntu.trusty.yaml b/playbooks/roles/base/server/vars/Ubuntu.trusty.yaml new file mode 100644 index 0000000..36b9475 --- /dev/null +++ b/playbooks/roles/base/server/vars/Ubuntu.trusty.yaml @@ -0,0 +1,10 @@ +distro_packages: + - dnsutils + - emacs23-nox + - yaml-mode + - iputils-ping + - vim-nox + - 
unattended-upgrades + - mailutils +sftp_path: /usr/lib/openssh/sftp-server +ssh_service_name: ssh diff --git a/playbooks/roles/base/server/vars/Ubuntu.xenial.yaml b/playbooks/roles/base/server/vars/Ubuntu.xenial.yaml new file mode 100644 index 0000000..2d13214 --- /dev/null +++ b/playbooks/roles/base/server/vars/Ubuntu.xenial.yaml @@ -0,0 +1,15 @@ +distro_packages: + - dnsutils + - emacs-nox + - yaml-mode + - iputils-ping + - vim-nox + - unattended-upgrades + - mailutils + # Install this to make transitioning ansible on python2 from + # trusty to xenial easier; then we can switch all xenial nodes + # to python3 + - python2.7 + - python +sftp_path: /usr/lib/openssh/sftp-server +ssh_service_name: ssh diff --git a/playbooks/roles/base/snmpd/README.rst b/playbooks/roles/base/snmpd/README.rst new file mode 100644 index 0000000..c9c625c --- /dev/null +++ b/playbooks/roles/base/snmpd/README.rst @@ -0,0 +1 @@ +Installs and configures the net-snmp daemon diff --git a/playbooks/roles/base/snmpd/handlers/main.yaml b/playbooks/roles/base/snmpd/handlers/main.yaml new file mode 100644 index 0000000..5fa7c5a --- /dev/null +++ b/playbooks/roles/base/snmpd/handlers/main.yaml @@ -0,0 +1,4 @@ +- name: Restart snmpd + service: + name: "{{ service_name }}" + state: restarted diff --git a/playbooks/roles/base/snmpd/tasks/main.yaml b/playbooks/roles/base/snmpd/tasks/main.yaml new file mode 100644 index 0000000..10dc4ac --- /dev/null +++ b/playbooks/roles/base/snmpd/tasks/main.yaml @@ -0,0 +1,28 @@ +- name: Include OS-specific variables + include_vars: "{{ lookup('first_found', params) }}" + vars: + params: + files: "{{ distro_lookup_path }}" + paths: + - 'vars' + +- name: Install snmpd + package: + state: present + name: '{{ package }}' + +- name: Write snmpd config file + template: + src: snmpd.conf + dest: /etc/snmp/snmpd.conf + mode: 0444 + notify: + - Restart snmpd + +# We don't usually ensure services are running, but snmp is generally +# not public facing and is easy to overlook.
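The "Write snmpd config file" task above relies on Ansible's change-driven handler mechanism: `notify` queues the handler only when the rendered template differs from the file on disk, and the handler runs once at the end of the play. A minimal standalone sketch of the pattern (the literal service name is illustrative only; the role resolves it per distribution via `service_name`):

```yaml
# Self-contained sketch of the template + notify + handler pattern used by
# the snmpd role. The hard-coded "snmpd" service name is an assumption for
# illustration; the real role uses the per-OS {{ service_name }} variable.
- hosts: all
  become: true
  tasks:
    - name: Write snmpd config file
      ansible.builtin.template:
        src: snmpd.conf
        dest: /etc/snmp/snmpd.conf
        mode: "0444"
      # Queued only if the rendered file differs from what is on disk
      notify: Restart snmpd
  handlers:
    - name: Restart snmpd
      # Runs at most once, after all tasks in the play have finished
      ansible.builtin.service:
        name: snmpd
        state: restarted
```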
+- name: Enable snmpd + service: + name: "{{ service_name }}" + enabled: true + state: started diff --git a/playbooks/roles/base/snmpd/templates/snmpd.conf b/playbooks/roles/base/snmpd/templates/snmpd.conf new file mode 100644 index 0000000..c791ed5 --- /dev/null +++ b/playbooks/roles/base/snmpd/templates/snmpd.conf @@ -0,0 +1,195 @@ +############################################################################### +# +# EXAMPLE.conf: +# An example configuration file for configuring the Net-SNMP agent ('snmpd') +# See the 'snmpd.conf(5)' man page for details +# +# Some entries are deliberately commented out, and will need to be explicitly activated +# +############################################################################### +# +# AGENT BEHAVIOUR +# + +# Listen for connections from the local system only +#agentAddress udp:127.0.0.1:161 +# Listen for connections on all interfaces (both IPv4 *and* IPv6) +#agentAddress udp:161,udp6:[::1]:161 +agentAddress udp:161,udp6:161 + + + +############################################################################### +# +# SNMPv3 AUTHENTICATION +# +# Note that these particular settings don't actually belong here. +# They should be copied to the file /var/lib/snmp/snmpd.conf +# and the passwords changed, before being uncommented in that file *only*. +# Then restart the agent + +# createUser authOnlyUser MD5 "remember to change this password" +# createUser authPrivUser SHA "remember to change this one too" DES +# createUser internalUser MD5 "this is only ever used internally, but still change the password" + +# If you also change the usernames (which might be sensible), +# then remember to update the other occurrences in this example config file to match.
+ + + +############################################################################### +# +# ACCESS CONTROL +# + + # system + hrSystem groups only +view systemonly included .1.3.6.1.2.1.1 +view systemonly included .1.3.6.1.2.1.25.1 + + # Full access from the local host +#rocommunity public localhost + # Default access to basic system info +rocommunity public default +rocommunity6 public default + + # Full access from an example network + # Adjust this network address to match your local + # settings, change the community string, + # and check the 'agentAddress' setting above +#rocommunity secret 10.0.0.0/16 + + # Full read-only access for SNMPv3 +# rouser authOnlyUser + # Full write access for encrypted requests + # Remember to activate the 'createUser' lines above +#rwuser authPrivUser priv + +# It's no longer typically necessary to use the full 'com2sec/group/access' configuration +# r[ou]user and r[ow]community, together with suitable views, should cover most requirements + + + +############################################################################### +# +# SYSTEM INFORMATION +# + +# Note that setting these values here, results in the corresponding MIB objects being 'read-only' +# See snmpd.conf(5) for more details +sysLocation Sitting on the Dock of the Bay +sysContact Me + # Application + End-to-End layers +sysServices 72 + + +# +# Process Monitoring +# + # At least one 'mountd' process +proc mountd + # No more than 4 'ntalkd' processes - 0 is OK +proc ntalkd 4 + # At least one 'sendmail' process, but no more than 10 +proc sendmail 10 1 + +# Walk the UCD-SNMP-MIB::prTable to see the resulting output +# Note that this table will be empty if there are no "proc" entries in the snmpd.conf file + + +# +# Disk Monitoring +# + # 10MBs required on root disk, 5% free on /var, 10% free on all other disks +disk / 10000 +disk /var 5% +includeAllDisks 10% + +# Walk the UCD-SNMP-MIB::dskTable to see the resulting output +# Note that this table will be empty if there 
are no "disk" entries in the snmpd.conf file + + +# +# System Load +# + # Unacceptable 1-, 5-, and 15-minute load averages +load 12 10 5 + +# Walk the UCD-SNMP-MIB::laTable to see the resulting output +# Note that this table *will* be populated, even without a "load" entry in the snmpd.conf file + + + +############################################################################### +# +# ACTIVE MONITORING +# + + # send SNMPv1 traps +# trapsink localhost public + # send SNMPv2c traps +#trap2sink localhost public + # send SNMPv2c INFORMs +#informsink localhost public + +# Note that you typically only want *one* of these three lines +# Uncommenting two (or all three) will result in multiple copies of each notification. + + +# +# Event MIB - automatically generate alerts +# + # Remember to activate the 'createUser' lines above +#iquerySecName internalUser +#rouser internalUser + # generate traps on UCD error conditions +#defaultMonitors yes + # generate traps on linkUp/Down +#linkUpDownNotifications yes + + + +############################################################################### +# +# EXTENDING THE AGENT +# + +# +# Arbitrary extension commands +# +# extend test1 /bin/echo Hello, world! +# extend-sh test2 echo Hello, world! ; echo Hi there ; exit 35 +#extend-sh test3 /bin/sh /tmp/shtest + +# Note that this last entry requires the script '/tmp/shtest' to be created first, +# containing the same three shell commands, before the line is uncommented + +# Walk the NET-SNMP-EXTEND-MIB tables (nsExtendConfigTable, nsExtendOutput1Table +# and nsExtendOutput2Table) to see the resulting output + +# Note that the "extend" directive supersedes the previous "exec" and "sh" directives +# However, walking the UCD-SNMP-MIB::extTable should still return the same output, +# as well as the fuller results in the above tables.
+ + +# +# "Pass-through" MIB extension command +# +#pass .1.3.6.1.4.1.8072.2.255 /bin/sh PREFIX/local/passtest +#pass .1.3.6.1.4.1.8072.2.255 /usr/bin/perl PREFIX/local/passtest.pl + +# Note that this requires one of the two 'passtest' scripts to be installed first, +# before the appropriate line is uncommented. +# These scripts can be found in the 'local' directory of the source distribution, +# and are not installed automatically. + +# Walk the NET-SNMP-PASS-MIB::netSnmpPassExamples subtree to see the resulting output + + +# +# AgentX Sub-agents +# + # Run as an AgentX master agent +# master agentx + # Listen for network connections (from localhost) + # rather than the default named socket /var/agentx/master +#agentXSocket tcp:localhost:705 diff --git a/playbooks/roles/base/snmpd/vars/Debian.yaml b/playbooks/roles/base/snmpd/vars/Debian.yaml new file mode 100644 index 0000000..4b7e2fa --- /dev/null +++ b/playbooks/roles/base/snmpd/vars/Debian.yaml @@ -0,0 +1,2 @@ +package: snmpd +service_name: snmpd diff --git a/playbooks/roles/base/snmpd/vars/RedHat.yaml b/playbooks/roles/base/snmpd/vars/RedHat.yaml new file mode 100644 index 0000000..e4fd4eb --- /dev/null +++ b/playbooks/roles/base/snmpd/vars/RedHat.yaml @@ -0,0 +1,2 @@ +package: net-snmp +service_name: snmpd diff --git a/playbooks/roles/base/timezone/README.rst b/playbooks/roles/base/timezone/README.rst new file mode 100644 index 0000000..19b8aaa --- /dev/null +++ b/playbooks/roles/base/timezone/README.rst @@ -0,0 +1,5 @@ +Configures timezone to Etc/UTC and restarts crond when changed. 
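The timezone role, like the other base roles here, resolves its per-OS differences through `include_vars` with a `first_found` lookup: `cron` on Debian-family hosts versus `crond` on RedHat-family hosts. A sketch of the resolution, under the assumption that `distro_lookup_path` (defined outside this chunk) expands to candidate file names ordered from most to least specific:

```yaml
# Sketch of the per-OS variable resolution used across these base roles.
# distro_lookup_path is not shown in this diff; it is assumed to expand to
# candidates ordered most-specific first, e.g.
#   - "{{ ansible_facts.distribution }}.{{ ansible_facts.distribution_release }}.yaml"
#   - "{{ ansible_facts.distribution }}.yaml"
#   - "{{ ansible_facts.os_family }}.yaml"
# The first file that exists under vars/ wins, so vars/Debian.yaml yields
# cron_service_name: cron and vars/RedHat.yaml yields cron_service_name: crond
# for the "restart cron" handler.
- name: Include OS-specific variables
  include_vars: "{{ lookup('first_found', params) }}"
  vars:
    params:
      files: "{{ distro_lookup_path }}"
      paths:
        - 'vars'
```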
+ +**Role Variables** + +* None diff --git a/playbooks/roles/base/timezone/handlers/main.yaml b/playbooks/roles/base/timezone/handlers/main.yaml new file mode 100644 index 0000000..5ec3801 --- /dev/null +++ b/playbooks/roles/base/timezone/handlers/main.yaml @@ -0,0 +1,4 @@ +- name: restart cron + service: + name: "{{ cron_service_name }}" + state: restarted diff --git a/playbooks/roles/base/timezone/tasks/main.yaml b/playbooks/roles/base/timezone/tasks/main.yaml new file mode 100644 index 0000000..26bcffb --- /dev/null +++ b/playbooks/roles/base/timezone/tasks/main.yaml @@ -0,0 +1,14 @@ +- name: Include OS-specific variables + include_vars: "{{ lookup('first_found', params) }}" + vars: + params: + files: "{{ distro_lookup_path }}" + paths: + - 'vars' + +- name: Set timezone to Etc/UTC + timezone: + name: Etc/UTC + # The timezone Ansible module recommends restarting cron after tz change. + notify: + - restart cron diff --git a/playbooks/roles/base/timezone/vars/Debian.yaml b/playbooks/roles/base/timezone/vars/Debian.yaml new file mode 100644 index 0000000..983c51a --- /dev/null +++ b/playbooks/roles/base/timezone/vars/Debian.yaml @@ -0,0 +1 @@ +cron_service_name: cron diff --git a/playbooks/roles/base/timezone/vars/RedHat.yaml b/playbooks/roles/base/timezone/vars/RedHat.yaml new file mode 100644 index 0000000..533b421 --- /dev/null +++ b/playbooks/roles/base/timezone/vars/RedHat.yaml @@ -0,0 +1 @@ +cron_service_name: crond diff --git a/playbooks/roles/base/unbound/README.rst b/playbooks/roles/base/unbound/README.rst new file mode 100644 index 0000000..095dc57 --- /dev/null +++ b/playbooks/roles/base/unbound/README.rst @@ -0,0 +1 @@ +Installs and configures the unbound DNS resolver diff --git a/playbooks/roles/base/unbound/files/dhclient.conf b/playbooks/roles/base/unbound/files/dhclient.conf new file mode 100644 index 0000000..1eac762 --- /dev/null +++ b/playbooks/roles/base/unbound/files/dhclient.conf @@ -0,0 +1,7 @@ +option rfc3442-classless-static-routes code 121 
= array of unsigned integer 8; +send host-name ""; +request subnet-mask, broadcast-address, routers, + interface-mtu, rfc3442-classless-static-routes; +supersede domain-name-servers 127.0.0.1; +supersede domain-search ""; +supersede domain-name ""; diff --git a/playbooks/roles/base/unbound/files/resolv.conf b/playbooks/roles/base/unbound/files/resolv.conf new file mode 100644 index 0000000..bbc8559 --- /dev/null +++ b/playbooks/roles/base/unbound/files/resolv.conf @@ -0,0 +1 @@ +nameserver 127.0.0.1 diff --git a/playbooks/roles/base/unbound/files/unbound.default b/playbooks/roles/base/unbound/files/unbound.default new file mode 100644 index 0000000..784cb4c --- /dev/null +++ b/playbooks/roles/base/unbound/files/unbound.default @@ -0,0 +1,18 @@ +# If set, the unbound daemon will be started and stopped by the init script. +UNBOUND_ENABLE=true + +# Whether to automatically update the root trust anchor file. +ROOT_TRUST_ANCHOR_UPDATE=true + +# File in which to store the root trust anchor. +ROOT_TRUST_ANCHOR_FILE=/var/lib/unbound/root.key + +# If set, the unbound init script will provide unbound's listening +# IP addresses as nameservers to resolvconf. +RESOLVCONF=true + +# If set, resolvconf nameservers will be configured as forwarders +# to be used by unbound. 
+RESOLVCONF_FORWARDERS=false + +#DAEMON_OPTS="-c /etc/unbound/unbound.conf" diff --git a/playbooks/roles/base/unbound/handlers/main.yaml b/playbooks/roles/base/unbound/handlers/main.yaml new file mode 100644 index 0000000..7f80452 --- /dev/null +++ b/playbooks/roles/base/unbound/handlers/main.yaml @@ -0,0 +1,7 @@ +--- +- name: Restart unbound + ansible.builtin.systemd: + name: "unbound" + enabled: true + state: "restarted" + daemon_reload: true diff --git a/playbooks/roles/base/unbound/tasks/Debian.yaml b/playbooks/roles/base/unbound/tasks/Debian.yaml new file mode 100644 index 0000000..d21e162 --- /dev/null +++ b/playbooks/roles/base/unbound/tasks/Debian.yaml @@ -0,0 +1,16 @@ +# We require the defaults file be in place before installing the +# package to work around this bug: +# https://bugs.launchpad.net/ubuntu/+source/unbound/+bug/988513 +# where we could end up briefly forwarding to a provider's broken +# DNS. + +# This file differs from that in the package only by setting +# RESOLVCONF_FORWARDERS to false. 
+- name: Install unbound defaults file + copy: + src: unbound.default + dest: /etc/default/unbound + mode: 0444 + +- set_fact: + unbound_confd_path: "/etc/unbound/unbound.conf.d" diff --git a/playbooks/roles/base/unbound/tasks/dhclient.yaml b/playbooks/roles/base/unbound/tasks/dhclient.yaml new file mode 100644 index 0000000..ebb596b --- /dev/null +++ b/playbooks/roles/base/unbound/tasks/dhclient.yaml @@ -0,0 +1,11 @@ +- name: Register dhclient config file + stat: + path: "{{ item }}" + register: _dhclient + +- name: Write dhclient config file + when: _dhclient.stat.exists | bool + copy: + src: dhclient.conf + dest: "{{ item }}" + mode: 0444 diff --git a/playbooks/roles/base/unbound/tasks/main.yaml b/playbooks/roles/base/unbound/tasks/main.yaml new file mode 100644 index 0000000..be2de6b --- /dev/null +++ b/playbooks/roles/base/unbound/tasks/main.yaml @@ -0,0 +1,43 @@ +- name: Include OS-specific tasks + include_tasks: "{{ item }}" + vars: + params: + files: "{{ distro_lookup_path }}" + loop: "{{ query('first_found', params, errors='ignore') }}" + +- name: Install unbound + package: + state: present + name: unbound + when: "ansible_facts.pkg_mgr != 'atomic_container'" + +- name: Write dhclient config files + include_tasks: dhclient.yaml + loop: + - /etc/dhcp/dhclient.conf + - /etc/dhcp/dhclient-eth0.conf + +- name: Write resolv.conf + copy: + src: resolv.conf + dest: /etc/resolv.conf + mode: 0444 + +- name: Write unbound conf part file + ansible.builtin.template: + src: "unbound.confd.conf.j2" + dest: "{{ unbound_confd_path | default('/etc/unbound/conf.d') }}/{{ cfg.name }}.conf" + mode: "0644" + loop: "{{ unbound_conf_parts | default([]) }}" + loop_control: + loop_var: "cfg" + notify: + - Restart unbound + + register: result + +- name: Enable unbound + service: + name: "unbound" + enabled: true + state: "started" diff --git a/playbooks/roles/base/unbound/templates/unbound.confd.conf.j2 b/playbooks/roles/base/unbound/templates/unbound.confd.conf.j2 new file mode 
100644 index 0000000..8a3f57a --- /dev/null +++ b/playbooks/roles/base/unbound/templates/unbound.confd.conf.j2 @@ -0,0 +1,12 @@ +{% if cfg.zone_local is defined and cfg.zone_local %} +server: + local-zone: "{{ cfg.name }}" nodefault +{% endif %} + +{% if cfg.zone_forward is defined and cfg.zone_forward %} +forward-zone: + name: {{ cfg.name }} +{% for k, v in cfg.opts.items() %} + {{ k }}: {{ v }} +{% endfor %} +{% endif %} diff --git a/playbooks/roles/base/users/README.rst b/playbooks/roles/base/users/README.rst new file mode 100644 index 0000000..f08eccd --- /dev/null +++ b/playbooks/roles/base/users/README.rst @@ -0,0 +1,32 @@ +Configure users on a server + +Configure users on a server. Users are given sudo access + +**Role Variables** + +.. zuul:rolevar:: all_users + :default: {} + + Dictionary of all users. Each user needs a ``uid``, ``gid`` and ``key`` + +.. zuul:rolevar:: base_users + :default: [] + + Users to install on all hosts + +.. zuul:rolevar:: extra_users + :default: [] + + Extra users to install on a specific host or group + +.. zuul:rolevar:: disabled_distro_cloud_users + :default: [] + + Distro cloud image default users to remove from hosts. This removal is + slightly more forceful than the removal of normal users. + +.. zuul:rolevar:: disabled_users + :default: [] + + Users who should be removed from all hosts + diff --git a/playbooks/roles/base/users/defaults/main.yaml b/playbooks/roles/base/users/defaults/main.yaml new file mode 100644 index 0000000..4ea1c0c --- /dev/null +++ b/playbooks/roles/base/users/defaults/main.yaml @@ -0,0 +1,4 @@ +all_users: {} +disabled_distro_cloud_users: [] +disabled_users: [] +extra_users: [] diff --git a/playbooks/roles/base/users/files/Debian/adduser.conf b/playbooks/roles/base/users/files/Debian/adduser.conf new file mode 100644 index 0000000..2ad61f0 --- /dev/null +++ b/playbooks/roles/base/users/files/Debian/adduser.conf @@ -0,0 +1,88 @@ +# /etc/adduser.conf: `adduser' configuration. 
+# See adduser(8) and adduser.conf(5) for full documentation. + +# The DSHELL variable specifies the default login shell on your +# system. +DSHELL=/bin/bash + +# The DHOME variable specifies the directory containing users' home +# directories. +DHOME=/home + +# If GROUPHOMES is "yes", then the home directories will be created as +# /home/groupname/user. +GROUPHOMES=no + +# If LETTERHOMES is "yes", then the created home directories will have +# an extra directory - the first letter of the user name. For example: +# /home/u/user. +LETTERHOMES=no + +# The SKEL variable specifies the directory containing "skeletal" user +# files; in other words, files such as a sample .profile that will be +# copied to the new user's home directory when it is created. +SKEL=/etc/skel + +# FIRST_SYSTEM_[GU]ID to LAST_SYSTEM_[GU]ID inclusive is the range for UIDs +# for dynamically allocated administrative and system accounts/groups. +# Please note that system software, such as the users allocated by the base-passwd +# package, may assume that UIDs less than 100 are unallocated. +FIRST_SYSTEM_UID=100 +LAST_SYSTEM_UID=999 + +FIRST_SYSTEM_GID=100 +LAST_SYSTEM_GID=999 + +# FIRST_[GU]ID to LAST_[GU]ID inclusive is the range of UIDs of dynamically +# allocated user accounts/groups. +FIRST_UID=3000 +LAST_UID=9999 + +FIRST_GID=3000 +LAST_GID=9999 + +# The USERGROUPS variable can be either "yes" or "no". If "yes" each +# created user will be given their own group to use as a default. If +# "no", each created user will be placed in the group whose gid is +# USERS_GID (see below). +USERGROUPS=yes + +# If USERGROUPS is "no", then USERS_GID should be the GID of the group +# `users' (or the equivalent group) on your system. +USERS_GID=100 + +# If DIR_MODE is set, directories will be created with the specified +# mode. Otherwise the default mode 0755 will be used. +DIR_MODE=0755 + +# If SETGID_HOME is "yes" home directories for users with their own +# group the setgid bit will be set. 
This was the default for +# versions << 3.13 of adduser. Because it has some bad side effects we +# no longer do this per default. If you want it nevertheless you can +# still set it here. +SETGID_HOME=no + +# If QUOTAUSER is set, a default quota will be set from that user with +# `edquota -p QUOTAUSER newuser' +QUOTAUSER="" + +# If SKEL_IGNORE_REGEX is set, adduser will ignore files matching this +# regular expression when creating a new home directory +SKEL_IGNORE_REGEX="dpkg-(old|new|dist|save)" + +# Set this if you want the --add_extra_groups option to adduser to add +# new users to other groups. +# This is the list of groups that new non-system users will be added to +# Default: +#EXTRA_GROUPS="dialout cdrom floppy audio video plugdev users" + +# If ADD_EXTRA_GROUPS is set to something non-zero, the EXTRA_GROUPS +# option above will be default behavior for adding new, non-system users +#ADD_EXTRA_GROUPS=1 + + +# check user and group names also against this regular expression. +#NAME_REGEX="^[a-z][-a-z0-9_]*\$" + +# use extrausers by default +#USE_EXTRAUSERS=1 diff --git a/playbooks/roles/base/users/files/Debian/login.defs b/playbooks/roles/base/users/files/Debian/login.defs new file mode 100644 index 0000000..3b3248b --- /dev/null +++ b/playbooks/roles/base/users/files/Debian/login.defs @@ -0,0 +1,340 @@ +# +# /etc/login.defs - Configuration control definitions for the login package. +# +# Three items must be defined: MAIL_DIR, ENV_SUPATH, and ENV_PATH. +# If unspecified, some arbitrary (and possibly incorrect) value will +# be assumed. All other items are optional - if not specified then +# the described action or option will be inhibited. +# +# Comment lines (lines beginning with "#") and blank lines are ignored. +# +# Modified for Linux. --marekm + +# REQUIRED for useradd/userdel/usermod +# Directory where mailboxes reside, _or_ name of file, relative to the +# home directory. If you _do_ define MAIL_DIR and MAIL_FILE, +# MAIL_DIR takes precedence. 
+# +# Essentially: +# - MAIL_DIR defines the location of users mail spool files +# (for mbox use) by appending the username to MAIL_DIR as defined +# below. +# - MAIL_FILE defines the location of the users mail spool files as the +# fully-qualified filename obtained by prepending the user home +# directory before $MAIL_FILE +# +# NOTE: This is no more used for setting up users MAIL environment variable +# which is, starting from shadow 4.0.12-1 in Debian, entirely the +# job of the pam_mail PAM modules +# See default PAM configuration files provided for +# login, su, etc. +# +# This is a temporary situation: setting these variables will soon +# move to /etc/default/useradd and the variables will then be +# no more supported +MAIL_DIR /var/mail +#MAIL_FILE .mail + +# +# Enable logging and display of /var/log/faillog login failure info. +# This option conflicts with the pam_tally PAM module. +# +FAILLOG_ENAB yes + +# +# Enable display of unknown usernames when login failures are recorded. +# +# WARNING: Unknown usernames may become world readable. +# See #290803 and #298773 for details about how this could become a security +# concern +LOG_UNKFAIL_ENAB no + +# +# Enable logging of successful logins +# +LOG_OK_LOGINS no + +# +# Enable "syslog" logging of su activity - in addition to sulog file logging. +# SYSLOG_SG_ENAB does the same for newgrp and sg. +# +SYSLOG_SU_ENAB yes +SYSLOG_SG_ENAB yes + +# +# If defined, all su activity is logged to this file. +# +#SULOG_FILE /var/log/sulog + +# +# If defined, file which maps tty line to TERM environment parameter. +# Each line of the file is in a format something like "vt100 tty01". +# +#TTYTYPE_FILE /etc/ttytype + +# +# If defined, login failures will be logged here in a utmp format +# last, when invoked as lastb, will read /var/log/btmp, so... +# +FTMP_FILE /var/log/btmp + +# +# If defined, the command name to display when running "su -". 
For +# example, if this is defined as "su" then a "ps" will display the +# command is "-su". If not defined, then "ps" would display the +# name of the shell actually being run, e.g. something like "-sh". +# +SU_NAME su + +# +# If defined, file which inhibits all the usual chatter during the login +# sequence. If a full pathname, then hushed mode will be enabled if the +# user's name or shell are found in the file. If not a full pathname, then +# hushed mode will be enabled if the file exists in the user's home directory. +# +HUSHLOGIN_FILE .hushlogin +#HUSHLOGIN_FILE /etc/hushlogins + +# +# *REQUIRED* The default PATH settings, for superuser and normal users. +# +# (they are minimal, add the rest in the shell startup files) +ENV_SUPATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin +ENV_PATH PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + +# +# Terminal permissions +# +# TTYGROUP Login tty will be assigned this group ownership. +# TTYPERM Login tty will be set to this permission. +# +# If you have a "write" program which is "setgid" to a special group +# which owns the terminals, define TTYGROUP to the group number and +# TTYPERM to 0620. Otherwise leave TTYGROUP commented out and assign +# TTYPERM to either 622 or 600. +# +# In Debian /usr/bin/bsd-write or similar programs are setgid tty +# However, the default and recommended value for TTYPERM is still 0600 +# to not allow anyone to write to anyone else console or terminal + +# Users can still allow other people to write them by issuing +# the "mesg y" command. + +TTYGROUP tty +TTYPERM 0600 + +# +# Login configuration initializations: +# +# ERASECHAR Terminal ERASE character ('\010' = backspace). +# KILLCHAR Terminal KILL character ('\025' = CTRL/U). +# UMASK Default "umask" value. +# +# The ERASECHAR and KILLCHAR are used only on System V machines. 
+# +# UMASK is the default umask value for pam_umask and is used by +# useradd and newusers to set the mode of the new home directories. +# 022 is the "historical" value in Debian for UMASK +# 027, or even 077, could be considered better for privacy +# There is no One True Answer here : each sysadmin must make up his/her +# mind. +# +# If USERGROUPS_ENAB is set to "yes", that will modify this UMASK default value +# for private user groups, i. e. the uid is the same as gid, and username is +# the same as the primary group name: for these, the user permissions will be +# used as group permissions, e. g. 022 will become 002. +# +# Prefix these values with "0" to get octal, "0x" to get hexadecimal. +# +ERASECHAR 0177 +KILLCHAR 025 +UMASK 022 + +# +# Password aging controls: +# +# PASS_MAX_DAYS Maximum number of days a password may be used. +# PASS_MIN_DAYS Minimum number of days allowed between password changes. +# PASS_WARN_AGE Number of days warning given before a password expires. +# +PASS_MAX_DAYS 99999 +PASS_MIN_DAYS 0 +PASS_WARN_AGE 7 + +# +# Min/max values for automatic uid selection in useradd +# +SYS_UID_MAX 999 +UID_MIN 3000 +UID_MAX 9999 +# System accounts +#SYS_UID_MIN 100 +#SYS_UID_MAX 999 + +# +# Min/max values for automatic gid selection in groupadd +# +SYS_GID_MAX 999 +GID_MIN 3000 +GID_MAX 9999 +# System accounts +#SYS_GID_MIN 100 +#SYS_GID_MAX 999 + +# +# Max number of login retries if password is bad. This will most likely be +# overriden by PAM, since the default pam_unix module has it's own built +# in of 3 retries. However, this is a safe fallback in case you are using +# an authentication module that does not enforce PAM_MAXTRIES. +# +LOGIN_RETRIES 5 + +# +# Max time in seconds for login +# +LOGIN_TIMEOUT 60 + +# +# Which fields may be changed by regular users using chfn - use +# any combination of letters "frwh" (full name, room number, work +# phone, home phone). If not defined, no changes are allowed. 
+# For backward compatibility, "yes" = "rwh" and "no" = "frwh". +# +CHFN_RESTRICT rwh + +# +# Should login be allowed if we can't cd to the home directory? +# Default in no. +# +DEFAULT_HOME yes + +# +# If defined, this command is run when removing a user. +# It should remove any at/cron/print jobs etc. owned by +# the user to be removed (passed as the first argument). +# +#USERDEL_CMD /usr/sbin/userdel_local + +# +# Enable setting of the umask group bits to be the same as owner bits +# (examples: 022 -> 002, 077 -> 007) for non-root users, if the uid is +# the same as gid, and username is the same as the primary group name. +# +# If set to yes, userdel will remove the user´s group if it contains no +# more members, and useradd will create by default a group with the name +# of the user. +# +USERGROUPS_ENAB yes + +# +# Instead of the real user shell, the program specified by this parameter +# will be launched, although its visible name (argv[0]) will be the shell's. +# The program may do whatever it wants (logging, additional authentification, +# banner, ...) before running the actual shell. +# +# FAKE_SHELL /bin/fakeshell + +# +# If defined, either full pathname of a file containing device names or +# a ":" delimited list of device names. Root logins will be allowed only +# upon these devices. +# +# This variable is used by login and su. +# +#CONSOLE /etc/consoles +#CONSOLE console:tty01:tty02:tty03:tty04 + +# +# List of groups to add to the user's supplementary group set +# when logging in on the console (as determined by the CONSOLE +# setting). Default is none. +# +# Use with caution - it is possible for users to gain permanent +# access to these groups, even when not logged in on the console. +# How to do it is left as an exercise for the reader... +# +# This variable is used by login and su. 
+# +#CONSOLE_GROUPS floppy:audio:cdrom + +# +# If set to "yes", new passwords will be encrypted using the MD5-based +# algorithm compatible with the one used by recent releases of FreeBSD. +# It supports passwords of unlimited length and longer salt strings. +# Set to "no" if you need to copy encrypted passwords to other systems +# which don't understand the new algorithm. Default is "no". +# +# This variable is deprecated. You should use ENCRYPT_METHOD. +# +#MD5_CRYPT_ENAB no + +# +# If set to MD5 , MD5-based algorithm will be used for encrypting password +# If set to SHA256, SHA256-based algorithm will be used for encrypting password +# If set to SHA512, SHA512-based algorithm will be used for encrypting password +# If set to DES, DES-based algorithm will be used for encrypting password (default) +# Overrides the MD5_CRYPT_ENAB option +# +# Note: It is recommended to use a value consistent with +# the PAM modules configuration. +# +ENCRYPT_METHOD SHA512 + +# +# Only used if ENCRYPT_METHOD is set to SHA256 or SHA512. +# +# Define the number of SHA rounds. +# With a lot of rounds, it is more difficult to brute forcing the password. +# But note also that it more CPU resources will be needed to authenticate +# users. +# +# If not specified, the libc will choose the default number of rounds (5000). +# The values must be inside the 1000-999999999 range. +# If only one of the MIN or MAX values is set, then this value will be used. +# If MIN > MAX, the highest value will be used. +# +# SHA_CRYPT_MIN_ROUNDS 5000 +# SHA_CRYPT_MAX_ROUNDS 5000 + +################# OBSOLETED BY PAM ############## +# # +# These options are now handled by PAM. Please # +# edit the appropriate file in /etc/pam.d/ to # +# enable the equivelants of them. 
+# +############### + +#MOTD_FILE +#DIALUPS_CHECK_ENAB +#LASTLOG_ENAB +#MAIL_CHECK_ENAB +#OBSCURE_CHECKS_ENAB +#PORTTIME_CHECKS_ENAB +#SU_WHEEL_ONLY +#CRACKLIB_DICTPATH +#PASS_CHANGE_TRIES +#PASS_ALWAYS_WARN +#ENVIRON_FILE +#NOLOGINS_FILE +#ISSUE_FILE +#PASS_MIN_LEN +#PASS_MAX_LEN +#ULIMIT +#ENV_HZ +#CHFN_AUTH +#CHSH_AUTH +#FAIL_DELAY + +################# OBSOLETED ####################### +# # +# These options are no more handled by shadow. # +# # +# Shadow utilities will display a warning if they # +# still appear. # +# # +################################################### + +# CLOSE_SESSIONS +# LOGIN_STRING +# NO_PASSWORD_CONSOLE +# QMAIL_DIR diff --git a/playbooks/roles/base/users/files/RedHat/login.defs b/playbooks/roles/base/users/files/RedHat/login.defs new file mode 100644 index 0000000..b3f6e59 --- /dev/null +++ b/playbooks/roles/base/users/files/RedHat/login.defs @@ -0,0 +1,69 @@ +# +# Please note that the parameters in this configuration file control the +# behavior of the tools from the shadow-utils component. None of these +# tools uses the PAM mechanism, and the utilities that use PAM (such as the +# passwd command) should therefore be configured elsewhere. Refer to +# /etc/pam.d/system-auth for more information. +# + +# *REQUIRED* +# Directory where mailboxes reside, _or_ name of file, relative to the +# home directory. If you _do_ define both, MAIL_DIR takes precedence. +# QMAIL_DIR is for Qmail +# +#QMAIL_DIR Maildir +MAIL_DIR /var/spool/mail +#MAIL_FILE .mail + +# Password aging controls: +# +# PASS_MAX_DAYS Maximum number of days a password may be used. +# PASS_MIN_DAYS Minimum number of days allowed between password changes. +# PASS_MIN_LEN Minimum acceptable password length. +# PASS_WARN_AGE Number of days warning given before a password expires. 
+# +PASS_MAX_DAYS 99999 +PASS_MIN_DAYS 0 +PASS_MIN_LEN 5 +PASS_WARN_AGE 7 + +# +# Min/max values for automatic uid selection in useradd +# +SYS_UID_MIN 201 +SYS_UID_MAX 499 +UID_MIN 3000 +UID_MAX 60000 + +# +# Min/max values for automatic gid selection in groupadd +# +SYS_GID_MIN 201 +SYS_GID_MAX 499 +GID_MIN 3000 +GID_MAX 60000 + +# +# If defined, this command is run when removing a user. +# It should remove any at/cron/print jobs etc. owned by +# the user to be removed (passed as the first argument). +# +#USERDEL_CMD /usr/sbin/userdel_local + +# +# If useradd should create home directories for users by default +# On RH systems, we do. This option is overridden with the -m flag on +# useradd command line. +# +CREATE_HOME yes + +# The permission mask is initialized to this value. If not specified, +# the permission mask will be initialized to 022. +UMASK 077 + +# This enables userdel to remove user groups if no members exist. +# +USERGROUPS_ENAB yes + +# Use SHA512 to encrypt password. +ENCRYPT_METHOD SHA512 diff --git a/playbooks/roles/base/users/files/sudoers b/playbooks/roles/base/users/files/sudoers new file mode 100644 index 0000000..51828c2 --- /dev/null +++ b/playbooks/roles/base/users/files/sudoers @@ -0,0 +1,26 @@ +# /etc/sudoers +# +# This file MUST be edited with the 'visudo' command as root. +# +# See the man page for details on how to write a sudoers file. 
+# + +Defaults env_reset +Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" + +# Host alias specification + +# User alias specification + +# Cmnd alias specification + +# User privilege specification +root ALL=(ALL) ALL + +# Allow members of group sudo to execute any command after they have +# provided their password +# (Note that later entries override this, so you might need to move +# it further down) +%sudo ALL=(ALL) NOPASSWD: ALL +# +#includedir /etc/sudoers.d diff --git a/playbooks/roles/base/users/tasks/main.yaml b/playbooks/roles/base/users/tasks/main.yaml new file mode 100644 index 0000000..a66de2a --- /dev/null +++ b/playbooks/roles/base/users/tasks/main.yaml @@ -0,0 +1,86 @@ +- name: Add sudo group + group: + name: "sudo" + state: present + +# NOTE(mordred): We replace the main file rather than dropping a file in to +# /etc/sudoers.d to deal with divergent base sudoers files from our distros. +# We also want to change some default behavior (we want nopassword sudo, for +# instance). +- name: Setup sudoers file + copy: + dest: /etc/sudoers + src: sudoers + owner: root + group: root + mode: 0440 + +- name: Setup adduser.conf file + copy: + dest: /etc/adduser.conf + src: '{{ ansible_facts.os_family }}/adduser.conf' + owner: root + group: root + mode: 0644 + when: + - "ansible_facts.os_family == 'Debian'" + +- name: Setup login.defs file + copy: + dest: /etc/login.defs + src: '{{ ansible_facts.os_family }}/login.defs' + owner: root + group: root + mode: 0644 +- name: Delete default distro cloud image users + # Do this in a separate task so that we can use force: yes which is + # probably too destructive for normal users, but should be fine for + # these built in cloud image names. 
+ loop: "{{ disabled_distro_cloud_users }}" + user: + name: "{{ item }}" + state: absent + remove: yes + force: yes + +- name: Delete old users + loop: "{{ disabled_users }}" + user: + name: "{{ item }}" + state: absent + remove: yes + +- name: Add groups + loop: "{{ base_users + extra_users }}" + group: + name: "{{ item }}" + state: present + gid: "{{ all_users[item].gid|default(omit) }}" + when: + - item in all_users + - "'gid' in all_users[item]" + +- name: Add users + loop: "{{ base_users + extra_users }}" + user: + name: "{{ item }}" + state: present + uid: "{{ all_users[item].uid }}" + group: "{{ item }}" + comment: "{{ all_users[item].comment }}" + groups: sudo + shell: /bin/bash + when: + - item in all_users + - "'uid' in all_users[item]" + +- name: Add ssh keys to users + loop: "{{ base_users + extra_users }}" + authorized_key: + user: "{{ item }}" + state: present + key: "{{ all_users[item].key }}" + exclusive: yes + when: + - item in all_users + - "'key' in all_users[item]" diff --git a/playbooks/roles/configure-kubectl/README.rst b/playbooks/roles/configure-kubectl/README.rst new file mode 100644 index 0000000..74cb3ea --- /dev/null +++ b/playbooks/roles/configure-kubectl/README.rst @@ -0,0 +1,19 @@ +Configure kube config files + +Configure kubernetes files needed by kubectl. + +**Role Variables** + +.. zuul:rolevar:: kube_config_dir + :default: /root/.kube + +.. zuul:rolevar:: kube_config_owner + :default: root + +.. zuul:rolevar:: kube_config_group + :default: root + +.. zuul:rolevar:: kube_config_file + :default: {{ kube_config_dir }}/config + +.. 
zuul:rolevar:: kube_config_template diff --git a/playbooks/roles/configure-kubectl/defaults/main.yaml b/playbooks/roles/configure-kubectl/defaults/main.yaml new file mode 100644 index 0000000..3bad648 --- /dev/null +++ b/playbooks/roles/configure-kubectl/defaults/main.yaml @@ -0,0 +1,4 @@ +kube_config_dir: /root/.kube +kube_config_owner: root +kube_config_group: root +kube_config_file: '{{ kube_config_dir }}/config' diff --git a/playbooks/roles/configure-kubectl/tasks/main.yaml b/playbooks/roles/configure-kubectl/tasks/main.yaml new file mode 100644 index 0000000..eed7720 --- /dev/null +++ b/playbooks/roles/configure-kubectl/tasks/main.yaml @@ -0,0 +1,15 @@ +- name: Ensure kube config directory + file: + group: '{{ kube_config_group }}' + owner: '{{ kube_config_owner }}' + mode: 0750 + path: '{{ kube_config_dir }}' + state: directory + +- name: Install the kube config file + template: + src: '{{ kube_config_template }}' + dest: '{{ kube_config_file }}' + group: '{{ kube_config_group }}' + owner: '{{ kube_config_owner }}' + mode: 0600 diff --git a/playbooks/roles/configure-openstacksdk/README.rst b/playbooks/roles/configure-openstacksdk/README.rst new file mode 100644 index 0000000..ae511df --- /dev/null +++ b/playbooks/roles/configure-openstacksdk/README.rst @@ -0,0 +1,19 @@ +Configure openstacksdk files + +Configure openstacksdk files needed by nodepool and ansible. + +**Role Variables** + +.. zuul:rolevar:: openstacksdk_config_dir + :default: /etc/openstack + +.. zuul:rolevar:: openstacksdk_config_owner + :default: root + +.. zuul:rolevar:: openstacksdk_config_group + :default: root + +.. zuul:rolevar:: openstacksdk_config_file + :default: {{ openstacksdk_config_dir }}/clouds.yaml + +.. 
zuul:rolevar:: openstacksdk_config_template diff --git a/playbooks/roles/configure-openstacksdk/defaults/main.yaml b/playbooks/roles/configure-openstacksdk/defaults/main.yaml new file mode 100644 index 0000000..7609e63 --- /dev/null +++ b/playbooks/roles/configure-openstacksdk/defaults/main.yaml @@ -0,0 +1,4 @@ +openstacksdk_config_dir: /etc/openstack +openstacksdk_config_owner: root +openstacksdk_config_group: root +openstacksdk_config_file: '{{ openstacksdk_config_dir }}/clouds.yaml' diff --git a/playbooks/roles/configure-openstacksdk/tasks/main.yaml b/playbooks/roles/configure-openstacksdk/tasks/main.yaml new file mode 100644 index 0000000..e3abb1d --- /dev/null +++ b/playbooks/roles/configure-openstacksdk/tasks/main.yaml @@ -0,0 +1,15 @@ +- name: Ensure openstacksdk config directory + file: + group: '{{ openstacksdk_config_group }}' + owner: '{{ openstacksdk_config_owner }}' + mode: 0750 + path: '{{ openstacksdk_config_dir }}' + state: directory + +- name: Install the clouds config file + template: + src: '{{ openstacksdk_config_template }}' + dest: '{{ openstacksdk_config_file }}' + group: '{{ openstacksdk_config_group }}' + owner: '{{ openstacksdk_config_owner }}' + mode: 0640 diff --git a/playbooks/roles/configure_keycloak/README.rst b/playbooks/roles/configure_keycloak/README.rst new file mode 100644 index 0000000..e69de29 diff --git a/playbooks/roles/configure_keycloak/defaults/main.yaml b/playbooks/roles/configure_keycloak/defaults/main.yaml new file mode 100644 index 0000000..e69de29 diff --git a/playbooks/roles/configure_keycloak/tasks/assign_users.yaml b/playbooks/roles/configure_keycloak/tasks/assign_users.yaml new file mode 100644 index 0000000..d3d6c34 --- /dev/null +++ b/playbooks/roles/configure_keycloak/tasks/assign_users.yaml @@ -0,0 +1,20 @@ +- name: Get user ID for username {{ member }} + ansible.builtin.uri: + url: "{{ url }}/admin/realms/{{ realm }}/users?search={{ member }}" + method: GET + headers: + Content-Type: "application/json" + 
Authorization: "bearer {{ tkn }}" + register: usr + +- name: Assign userID {{ usr.json[0].id }} to groupID {{ grp.json[0].id }} + ansible.builtin.uri: + url: "{{ url }}/admin/realms/{{ realm }}/users/{{ usr.json[0].id }}/groups/{{ grp.json[0].id }}" + method: PUT + headers: + Content-Type: "application/json" + Authorization: "bearer {{ tkn }}" + status_code: 204 + when: + - "usr.json[0].id is defined" + - "grp.json[0].id is defined" diff --git a/playbooks/roles/configure_keycloak/tasks/client.yaml b/playbooks/roles/configure_keycloak/tasks/client.yaml new file mode 100644 index 0000000..1fe450f --- /dev/null +++ b/playbooks/roles/configure_keycloak/tasks/client.yaml @@ -0,0 +1,21 @@ +- name: Create or update Keycloak client {{ client.name }} + community.general.keycloak_client: + state: "present" + auth_client_id: "admin-cli" + auth_keycloak_url: "{{ url }}" + token: "{{ token }}" + client_id: "{{ client.client_id }}" + realm: "{{ client.realm }}" + name: "{{ client.name |default(omit) }}" + protocol: "{{ client.protocol |default(omit) }}" + base_url: "{{ client.base_url |default(omit) }}" + root_url: "{{ client.root_url |default(omit) }}" + admin_url: "{{ client.admin_url |default(omit) }}" + description: "{{ client.description |default(omit) }}" + redirect_uris: "{{ client.redirect_uris |default(omit) }}" + web_origins: "{{ client.web_origins |default(omit) }}" + implicit_flow_enabled: "{{ client.implicit_flow_enabled |default(omit) }}" + public_client: "{{ client.public_client |default(omit) }}" + secret: "{{ client.secret |default(omit) }}" + default_client_scopes: "{{ client.default_client_scopes |default(omit) }}" + optional_client_scopes: "{{ client.optional_client_scopes |default(omit) }}" diff --git a/playbooks/roles/configure_keycloak/tasks/client_rolemapping.yaml b/playbooks/roles/configure_keycloak/tasks/client_rolemapping.yaml new file mode 100644 index 0000000..2eaf31c --- /dev/null +++ 
b/playbooks/roles/configure_keycloak/tasks/client_rolemapping.yaml @@ -0,0 +1,10 @@ +- name: Create or update Keycloak client rolemapping for group {{ client_rolemapping.group_name }} + community.general.keycloak_client_rolemapping: + state: "present" + auth_client_id: "admin-cli" + auth_keycloak_url: "{{ url }}" + token: "{{ token }}" + realm: "{{ client_rolemapping.realm }}" + group_name: "{{ client_rolemapping.group_name }}" + client_id: "{{ client_rolemapping.client_id }}" + roles: "{{ client_rolemapping.roles }}" diff --git a/playbooks/roles/configure_keycloak/tasks/client_scope.yaml b/playbooks/roles/configure_keycloak/tasks/client_scope.yaml new file mode 100644 index 0000000..5e74071 --- /dev/null +++ b/playbooks/roles/configure_keycloak/tasks/client_scope.yaml @@ -0,0 +1,11 @@ +- name: Create or update Keycloak client scope {{ client_scope.name }} + community.general.keycloak_clientscope: + state: "present" + auth_client_id: "admin-cli" + auth_keycloak_url: "{{ url }}" + token: "{{ token }}" + realm: "{{ client_scope.realm }}" + name: "{{ client_scope.name }}" + protocol: "{{ client_scope.protocol |default(omit) }}" + description: "{{ client_scope.description |default(omit) }}" + protocol_mappers: "{{ client_scope.protocol_mappers | default(omit) }}" diff --git a/playbooks/roles/configure_keycloak/tasks/group.yaml b/playbooks/roles/configure_keycloak/tasks/group.yaml new file mode 100644 index 0000000..c304312 --- /dev/null +++ b/playbooks/roles/configure_keycloak/tasks/group.yaml @@ -0,0 +1,8 @@ +- name: Create or update Keycloak group {{ group.name }} + community.general.keycloak_group: + state: "present" + auth_client_id: "admin-cli" + auth_keycloak_url: "{{ url }}" + token: "{{ token }}" + realm: "{{ group.realm }}" + name: "{{ group.name }}" diff --git a/playbooks/roles/configure_keycloak/tasks/group_membership.yaml b/playbooks/roles/configure_keycloak/tasks/group_membership.yaml new file mode 100644 index 0000000..2be71e0 --- /dev/null +++ 
b/playbooks/roles/configure_keycloak/tasks/group_membership.yaml @@ -0,0 +1,19 @@ +- name: Get group ID for group_name {{ group_membership.group_name }} + ansible.builtin.uri: + url: "{{ url }}/admin/realms/{{ group_membership.realm }}/groups?search={{ group_membership.group_name }}" + method: GET + headers: + Content-Type: "application/json" + Authorization: "bearer {{ token }}" + register: grp + +- include_tasks: assign_users.yaml + vars: + group_id: "{{ grp.json[0].id }}" + realm: "{{ group_membership.realm }}" + tkn: "{{ token }}" + loop: "{{ group_membership.members }}" + loop_control: + loop_var: "member" + when: "group_membership.members is defined" + diff --git a/playbooks/roles/configure_keycloak/tasks/identity_provider.yaml b/playbooks/roles/configure_keycloak/tasks/identity_provider.yaml new file mode 100644 index 0000000..67a4fd2 --- /dev/null +++ b/playbooks/roles/configure_keycloak/tasks/identity_provider.yaml @@ -0,0 +1,30 @@ +- name: Check if identity provider {{ identity_provider.name }} exists + ansible.builtin.uri: + url: "{{ url }}/admin/realms/{{ identity_provider.realm }}/identity-provider/instances/{{ identity_provider.name }}" + method: GET + headers: + Authorization: "bearer {{ token }}" + status_code: [ 404, 200 ] + register: id_exists + +- name: Add identity provider {{ identity_provider.name }} if it does not exist + ansible.builtin.uri: + url: "{{ url }}/admin/realms/{{ identity_provider.realm }}/identity-provider/instances" + method: POST + headers: + Authorization: "bearer {{ token }}" + body_format: "json" + body: "{{ identity_provider.body }}" + status_code: 201 + when: "id_exists.json.alias is not defined" + +- name: Update identity provider {{ identity_provider.name }} if it does exist + ansible.builtin.uri: + url: "{{ url }}/admin/realms/{{ identity_provider.realm }}/identity-provider/instances/{{ identity_provider.name }}" + method: PUT + headers: + Authorization: "bearer {{ token }}" + body_format: "json" + body: "{{ 
identity_provider.body }}" + status_code : 204 + when: "id_exists.json.alias is defined" diff --git a/playbooks/roles/configure_keycloak/tasks/main.yaml b/playbooks/roles/configure_keycloak/tasks/main.yaml new file mode 100644 index 0000000..0ed7e60 --- /dev/null +++ b/playbooks/roles/configure_keycloak/tasks/main.yaml @@ -0,0 +1,131 @@ +--- +- name: Get master realm access_token + ansible.builtin.uri: + url: "{{ keycloak.admin_url}}" + method: "POST" + body_format: "form-urlencoded" + body: + grant_type: "password" + username: "admin" + password: "{{ keycloak.admin_password }}" + client_id: "admin-cli" + register: "kc_token" + no_log: true + +- name: Extend master realm access_token_lifespan + community.general.keycloak_realm: + state: "present" + auth_client_id: "admin-cli" + auth_keycloak_url: "{{ keycloak.base_url}}" + token: "{{ kc_token.json.access_token }}" + realm: "master" + access_token_lifespan: "{{ keycloak_master_token_lifespan | default(300) }}" + register: extend + +- name: Get master realm access_token on lifespan change + ansible.builtin.uri: + url: "{{ keycloak.admin_url}}" + method: "POST" + body_format: "form-urlencoded" + body: + grant_type: "password" + username: "admin" + password: "{{ keycloak.admin_password }}" + client_id: "admin-cli" + register: "kc_token_new" + no_log: true + when: extend.changed + +- name: Swap renewed token on lifespan change + set_fact: + kc_token: "{{ kc_token_new }}" + when: kc_token_new.json.access_token is defined + +- include_tasks: realm.yaml + vars: + token: "{{ kc_token.json.access_token }}" + url: "{{ keycloak.base_url}}" + loop: "{{ keycloak.realms }}" + loop_control: + loop_var: "realm" + label: "{{ realm.name }}" + when: "keycloak.realms is defined" + +- include_tasks: identity_provider.yaml + vars: + token: "{{ kc_token.json.access_token }}" + url: "{{ keycloak.base_url}}" + loop: "{{ keycloak.identity_providers }}" + loop_control: + loop_var: "identity_provider" + label: "{{ identity_provider.name }}" + 
when: "keycloak.identity_providers is defined" + +- include_tasks: group.yaml + vars: + token: "{{ kc_token.json.access_token }}" + url: "{{ keycloak.base_url}}" + loop: "{{ keycloak.groups }}" + loop_control: + loop_var: "group" + label: "{{ group.name }}" + when: "keycloak.groups is defined" + +- include_tasks: user_federation.yaml + vars: + token: "{{ kc_token.json.access_token }}" + url: "{{ keycloak.base_url}}" + loop: "{{ keycloak.user_federations }}" + loop_control: + loop_var: "user_federation" + label: "{{ user_federation.name }}" + when: "keycloak.user_federations is defined" + +- include_tasks: group_membership.yaml + vars: + token: "{{ kc_token.json.access_token }}" + url: "{{ keycloak.base_url}}" + loop: "{{ keycloak.group_memberships }}" + loop_control: + loop_var: "group_membership" + label: "{{ group_membership.group_name }}" + when: "keycloak.group_memberships is defined" + +- include_tasks: client_scope.yaml + vars: + token: "{{ kc_token.json.access_token }}" + url: "{{ keycloak.base_url}}" + loop: "{{ keycloak.client_scopes }}" + loop_control: + loop_var: "client_scope" + label: "{{ client_scope.name }}" + when: "keycloak.client_scopes is defined" + +- include_tasks: client.yaml + vars: + token: "{{ kc_token.json.access_token }}" + url: "{{ keycloak.base_url}}" + loop: "{{ keycloak.clients }}" + loop_control: + loop_var: "client" + label: "{{ client.name }}" + when: "keycloak.clients is defined" + +- include_tasks: role.yaml + vars: + token: "{{ kc_token.json.access_token }}" + url: "{{ keycloak.base_url}}" + loop: "{{ keycloak.roles }}" + loop_control: + loop_var: "role" + label: "{{ role.name }}" + when: "keycloak.roles is defined" + +- include_tasks: client_rolemapping.yaml + vars: + token: "{{ kc_token.json.access_token }}" + url: "{{ keycloak.base_url}}" + loop: "{{ keycloak.client_rolemappings }}" + loop_control: + loop_var: "client_rolemapping" + label: "{{ client_rolemapping.group_name }}" + when: "keycloak.client_rolemappings is defined" + diff --git a/playbooks/roles/configure_keycloak/tasks/realm.yaml 
b/playbooks/roles/configure_keycloak/tasks/realm.yaml new file mode 100644 index 0000000..1e81d27 --- /dev/null +++ b/playbooks/roles/configure_keycloak/tasks/realm.yaml @@ -0,0 +1,9 @@ +- name: Create or update Keycloak realm {{ realm.name }} + community.general.keycloak_realm: + state: "present" + auth_client_id: "admin-cli" + auth_keycloak_url: "{{ url }}" + token: "{{ token }}" + id: "{{ realm.name }}" + realm: "{{ realm.name }}" + enabled: True diff --git a/playbooks/roles/configure_keycloak/tasks/role.yaml b/playbooks/roles/configure_keycloak/tasks/role.yaml new file mode 100644 index 0000000..10f4f84 --- /dev/null +++ b/playbooks/roles/configure_keycloak/tasks/role.yaml @@ -0,0 +1,10 @@ +- name: Create or update Keycloak client role {{ role.name }} + community.general.keycloak_role: + state: "present" + auth_client_id: "admin-cli" + auth_keycloak_url: "{{ url }}" + token: "{{ token }}" + name: "{{ role.name }}" + realm: "{{ role.realm }}" + client_id: "{{ role.client_id }}" + description: "{{ role.description | default(omit) }}" diff --git a/playbooks/roles/configure_keycloak/tasks/user_federation.yaml b/playbooks/roles/configure_keycloak/tasks/user_federation.yaml new file mode 100644 index 0000000..ddb2e22 --- /dev/null +++ b/playbooks/roles/configure_keycloak/tasks/user_federation.yaml @@ -0,0 +1,41 @@ +- name: Check if user federation {{ user_federation.name }} exists + ansible.builtin.uri: + url: "{{ url }}/admin/realms/{{ user_federation.realm }}/components?name={{ user_federation.name }}" + method: GET + headers: + Authorization: "bearer {{ token }}" + status_code: [ 404, 200 ] + register: component + +- name: Add user federation {{ user_federation.name }} if it does not exist + community.general.keycloak_user_federation: + state: "present" + auth_client_id: "admin-cli" + auth_keycloak_url: "{{ url }}" + token: "{{ token }}" + realm: "{{ user_federation.realm }}" + name: "{{ user_federation.name }}" + provider_id: "{{ user_federation.provider_id }}" + 
provider_type: "{{ user_federation.provider_type }}" + config: "{{ user_federation.config }}" + mappers: "{{ user_federation.mappers | default(omit) }}" + when: "component.json[0].config is not defined" + +- name: Get created user federation {{ user_federation.name }} params + ansible.builtin.uri: + url: "{{ url }}/admin/realms/{{ user_federation.realm }}/components?name={{ user_federation.name }}" + method: GET + headers: + Authorization: "bearer {{ token }}" + status_code: [ 200 ] + register: component_created + when: "component.json[0].config is not defined" + +- name: Initiate a full sync of users {{ user_federation.name }} + ansible.builtin.uri: + url: "{{ url }}/admin/realms/{{ user_federation.realm }}/user-storage/{{ component_created.json[0].id }}/sync?action=triggerFullSync" + method: POST + headers: + Authorization: "bearer {{ token }}" + status_code: [ 200 ] + when: "component_created.json[0].id is defined" diff --git a/playbooks/roles/configure_vault/README.rst b/playbooks/roles/configure_vault/README.rst new file mode 100644 index 0000000..e69de29 diff --git a/playbooks/roles/configure_vault/tasks/approle.yaml b/playbooks/roles/configure_vault/tasks/approle.yaml new file mode 100644 index 0000000..93e8342 --- /dev/null +++ b/playbooks/roles/configure_vault/tasks/approle.yaml @@ -0,0 +1,14 @@ +- name: Write AppRole {{ approle.name }} to vault + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/auth/approle/role/{{ approle.name | replace('/', '_') }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "PUT" + body_format: "json" + body: + secret_id_num_uses: "{{ approle.secret_id_num_uses | default(omit) }}" + secret_id_ttl: "{{ approle.secret_id_ttl | default(omit) }}" + token_ttl: "{{ approle.token_ttl | default(omit) }}" + token_policies: "{{ approle.token_policies | default(omit) }}" + token_num_uses: "{{ approle.token_num_uses | default(omit) }}" + status_code: [200, 201, 202, 204] diff --git a/playbooks/roles/configure_vault/tasks/auth.yaml 
b/playbooks/roles/configure_vault/tasks/auth.yaml new file mode 100644 index 0000000..de94638 --- /dev/null +++ b/playbooks/roles/configure_vault/tasks/auth.yaml @@ -0,0 +1,68 @@ +- name: Read Auth {{ auth.type }} at {{ auth.path }} + check_mode: "no" + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/sys/mounts/auth/{{ auth.path }}/tune" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "GET" + return_content: "yes" + register: current_auth + failed_when: false + +- name: Mount auth {{ auth.type }} at {{ auth.path }} + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/sys/auth/{{ auth.path }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "POST" + body_format: "json" + body: + type: "{{ auth.type }}" + description: "{{ auth.description | default(omit) }}" + config: + default_lease_ttl: "{{ auth.default_lease_ttl | default(omit) }}" + max_lease_ttl: "{{ auth.max_lease_ttl | default(omit) }}" + audit_non_hmac_request_keys: "{{ auth.audit_non_hmac_request_keys | default(omit) }}" + audit_non_hmac_response_keys: "{{ auth.audit_non_hmac_response_keys | default(omit) }}" + listing_visibility: "{{ auth.listing_visibility | default(omit) }}" + passthrough_request_headers: "{{ auth.passthrough_request_headers | default(omit) }}" + allowed_response_headers: "{{ auth.allowed_response_headers | default(omit) }}" + options: "{{ auth.options | default(omit) }}" + + status_code: [200, 201, 202, 204] + when: + - "current_auth is not defined or current_auth.status != 200" + - "vault_auth_create is defined and vault_auth_create|bool" + +- name: Tune auth {{ auth.type }} at {{ auth.path }} + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/sys/mounts/auth/{{ auth.path }}/tune" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "POST" + body_format: "json" + body: + description: "{{ auth.description | default(omit) }}" + config: + default_lease_ttl: "{{ auth.default_lease_ttl | default(omit) }}" + max_lease_ttl: "{{ auth.max_lease_ttl | default(omit) }}" + 
audit_non_hmac_request_keys: "{{ auth.audit_non_hmac_request_keys | default(omit) }}" + audit_non_hmac_response_keys: "{{ auth.audit_non_hmac_response_keys | default(omit) }}" + listing_visibility: "{{ auth.listing_visibility | default(omit) }}" + passthrough_request_headers: "{{ auth.passthrough_request_headers | default(omit) }}" + allowed_response_headers: "{{ auth.allowed_response_headers | default(omit) }}" + options: "{{ auth.options | default(omit) }}" + status_code: [200, 201, 202, 204] + when: + - "current_auth.status == 200" + - "current_auth is defined and current_auth.json is defined" + - "auth.description is defined and current_auth.json.description != auth.description" + # - "current_auth.json.default_lease_ttl != auth.default_lease_ttl" + # - "current_auth.json.max_lease_ttl != auth.max_lease_ttl" + # - "auth.force_no_cache is defined and current_auth.json.force_no_cache != auth.force_no_cache" + # - "auth.auditcurrent_auth.json.audit_non_hmac_request_keys != auth.audit_non_hmac_request_keys" + # - "current_auth.json.audit_non_hmac_response_keys != auth.audit_non_hmac_response_keys" + + # - "current_auth.json.listing_visibility != auth.listing_visibility" + # - "current_auth.json.passthrough_request_headers != auth.passthrough_request_headers" + # - "current_auth.json.allowed_response_headers != auth.allowed_response_headers" diff --git a/playbooks/roles/configure_vault/tasks/k8auth.yaml b/playbooks/roles/configure_vault/tasks/k8auth.yaml new file mode 100644 index 0000000..d66453f --- /dev/null +++ b/playbooks/roles/configure_vault/tasks/k8auth.yaml @@ -0,0 +1,15 @@ +- name: Write K8 Auth {{ auth.path }} to vault + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/auth/{{ auth.path }}/config" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "PUT" + body_format: "json" + body: + kubernetes_host: "{{ auth.kubernetes_host }}" + kubernetes_ca_cert: "{{ auth.kubernetes_ca_cert | default(omit) }}" + token_reviewer_jwt: "{{ auth.token_reviewer_jwt 
| default(omit) }}" + pem_keys: "{{ auth.pem_keys | default(omit) }}" + disable_local_ca_jwt: "{{ auth.disable_local_ca_jwt | default(omit) }}" + + status_code: [200, 201, 202, 204] diff --git a/playbooks/roles/configure_vault/tasks/k8role.yaml b/playbooks/roles/configure_vault/tasks/k8role.yaml new file mode 100644 index 0000000..15946b2 --- /dev/null +++ b/playbooks/roles/configure_vault/tasks/k8role.yaml @@ -0,0 +1,23 @@ +- name: Write K8Role {{ role.name }} to vault + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/auth/{{ role.auth_path }}/role/{{ role.name }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "PUT" + body_format: "json" + body: + bound_service_account_names: "{{ role.bound_service_account_names | list }}" + bound_service_account_namespaces: "{{ role.bound_service_account_namespaces | list }}" + audience: "{{ role.audience | default(omit) }}" + alias_name_source: "{{ role.alias_name_source | default(omit) }}" + token_ttl: "{{ role.token_ttl | default(omit) }}" + token_max_ttl: "{{ role.token_max_ttl | default(omit) }}" + token_policies: "{{ role.policies | list }}" + token_bound_cidrs: "{{ role.token_bound_cidrs | default(omit) }}" + token_explicit_max_ttl: "{{ role.token_explicit_max_ttl | default(omit) }}" + token_no_default_policy: "{{ role.token_no_default_policy | default(omit) }}" + token_num_uses: "{{ role.token_num_uses | default(omit) }}" + token_period: "{{ role.token_period | default(omit) }}" + token_type: "{{ role.token_type | default(omit) }}" + + status_code: [200, 201, 202, 204] diff --git a/playbooks/roles/configure_vault/tasks/kubernetes.yaml b/playbooks/roles/configure_vault/tasks/kubernetes.yaml new file mode 100644 index 0000000..b51e4d8 --- /dev/null +++ b/playbooks/roles/configure_vault/tasks/kubernetes.yaml @@ -0,0 +1,19 @@ +- include_tasks: k8auth.yaml + vars: + vault_addr: "{{ vault.vault_addr }}" + vault_token: "{{ vault.vault_token }}" + loop: "{{ k8.auths }}" + loop_control: + loop_var: "auth" + label: "{{ 
auth.path }}" + when: "k8.auths is defined" + +- include_tasks: k8role.yaml + vars: + vault_addr: "{{ vault.vault_addr }}" + vault_token: "{{ vault.vault_token }}" + loop: "{{ k8.roles }}" + loop_control: + loop_var: "role" + label: "{{ role.name }}" + when: "k8.roles is defined" diff --git a/playbooks/roles/configure_vault/tasks/main.yaml b/playbooks/roles/configure_vault/tasks/main.yaml new file mode 100644 index 0000000..a32d9f8 --- /dev/null +++ b/playbooks/roles/configure_vault/tasks/main.yaml @@ -0,0 +1,87 @@ +- include_tasks: policy.yaml + vars: + vault_addr: "{{ vault.vault_addr | default(omit) }}" + vault_token: "{{ vault.vault_token | default(omit)}}" + loop: "{{ vault.policies }}" + loop_control: + loop_var: "policy" + label: "{{ policy.name }}" + when: "vault.policies is defined" + +- include_tasks: secret_engine.yaml + vars: + vault_addr: "{{ vault.vault_addr | default(omit) }}" + vault_token: "{{ vault.vault_token | default(omit)}}" + loop: "{{ vault.secret_engines }}" + loop_control: + loop_var: "engine" + label: "{{ engine.path }}" + when: "vault.secret_engines is defined" + +- include_tasks: auth.yaml + vars: + vault_addr: "{{ vault.vault_addr | default(omit) }}" + vault_token: "{{ vault.vault_token | default(omit)}}" + loop: "{{ vault.auths }}" + loop_control: + loop_var: "auth" + label: "{{ auth.path }}" + when: "vault.auths is defined" + +- include_tasks: approle.yaml + vars: + vault_addr: "{{ vault.vault_addr | default(omit) }}" + vault_token: "{{ vault.vault_token | default(omit)}}" + loop: "{{ vault.approle.roles }}" + loop_control: + loop_var: "approle" + label: "{{ approle.name }}" + when: + - "vault.approle is defined" + - "vault.approle.roles is defined" + +- include_tasks: kubernetes.yaml + vars: + vault_addr: "{{ vault.vault_addr | default(omit) }}" + vault_token: "{{ vault.vault_token | default(omit)}}" + k8: "{{ vault.kubernetes }}" + when: "vault.kubernetes is defined" + +- include_tasks: pwd_policy.yaml + vars: + vault_addr: "{{ vault.vault_addr | default(omit) }}" + 
vault_token: "{{ vault.vault_token | default(omit)}}" + loop: "{{ vault.pwd_policies }}" + loop_control: + loop_var: "pwd_policy" + label: "{{ pwd_policy.name }}" + when: "vault.pwd_policies is defined" + +- include_tasks: os_cloud.yaml + vars: + vault_addr: "{{ vault.vault_addr | default(omit) }}" + vault_token: "{{ vault.vault_token | default(omit)}}" + loop: "{{ vault.os_clouds }}" + loop_control: + loop_var: "cloud" + label: "{{ cloud.name }}" + when: "vault.os_clouds is defined" + +- include_tasks: os_role.yaml + vars: + vault_addr: "{{ vault.vault_addr | default(omit) }}" + vault_token: "{{ vault.vault_token | default(omit)}}" + loop: "{{ vault.os_roles }}" + loop_control: + loop_var: "role" + label: "{{ role.name }}" + when: "vault.os_roles is defined" + +- include_tasks: os_static_role.yaml + vars: + vault_addr: "{{ vault.vault_addr | default(omit) }}" + vault_token: "{{ vault.vault_token | default(omit)}}" + loop: "{{ vault.os_static_roles }}" + loop_control: + loop_var: "static_role" + label: "{{ static_role.name }}" + when: "vault.os_static_roles is defined" diff --git a/playbooks/roles/configure_vault/tasks/main_bootstrap.yaml b/playbooks/roles/configure_vault/tasks/main_bootstrap.yaml new file mode 100644 index 0000000..cfbd15e --- /dev/null +++ b/playbooks/roles/configure_vault/tasks/main_bootstrap.yaml @@ -0,0 +1,48 @@ +- include_tasks: auth.yaml + vars: + vault_addr: "{{ vault.vault_addr | default(omit) }}" + vault_token: "{{ vault.vault_token | default(omit)}}" + vault_auth_create: "true" + loop: "{{ vault.auths }}" + loop_control: + loop_var: "auth" + label: "{{ auth.path }}" + when: "vault.auths is defined" + +- include_tasks: secret_engine.yaml + vars: + vault_addr: "{{ vault.vault_addr | default(omit) }}" + vault_token: "{{ vault.vault_token | default(omit)}}" + loop: "{{ vault.secret_engines }}" + loop_control: + loop_var: "engine" + label: "{{ engine.path }}" + when: "vault.secret_engines is defined" + +- include_tasks: policy.yaml + vars: + 
vault_addr: "{{ vault.vault_addr | default(omit) }}" + vault_token: "{{ vault.vault_token | default(omit)}}" + loop: "{{ vault.policies }}" + loop_control: + loop_var: "policy" + label: "{{ policy.name }}" + when: "vault.policies is defined" + +- include_tasks: approle.yaml + vars: + vault_addr: "{{ vault.vault_addr | default(omit) }}" + vault_token: "{{ vault.vault_token | default(omit)}}" + loop: "{{ vault.approle.roles }}" + loop_control: + loop_var: "approle" + label: "{{ approle.name }}" + when: + - "vault.approle is defined" + - "vault.approle.roles is defined" + +- include_tasks: kubernetes.yaml + vars: + vault_addr: "{{ vault.vault_addr | default(omit) }}" + vault_token: "{{ vault.vault_token | default(omit)}}" + k8: "{{ vault.kubernetes }}" + when: "vault.kubernetes is defined" diff --git a/playbooks/roles/configure_vault/tasks/os_cloud.yaml b/playbooks/roles/configure_vault/tasks/os_cloud.yaml new file mode 100644 index 0000000..103ed4f --- /dev/null +++ b/playbooks/roles/configure_vault/tasks/os_cloud.yaml @@ -0,0 +1,54 @@ +- name: Read OS Cloud {{ cloud.name }} + check_mode: "no" + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/openstack/clouds/{{ cloud.name }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "GET" + return_content: "yes" + register: current_cloud + failed_when: false + +- name: Pause to get input password for {{ cloud.name }} + pause: + prompt: "Please enter the password for cloud - {{ cloud.name }} / {{ cloud.user_domain_name }} - user: {{ cloud.username }}" + echo: no + register: pwd + when: "current_cloud is not defined or current_cloud.status != 200" + +- name: Write OS Cloud {{ cloud.name }} to vault + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/openstack/clouds/{{ cloud.name }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "POST" + body_format: "json" + body: + auth_url: "{{ cloud.auth_url | default(omit) }}" + username: "{{ cloud.username | default(omit) }}" + password: "{{ pwd.user_input }}" + user_domain_name: "{{ 
cloud.user_domain_name | default(omit) }}" + username_template: "{{ cloud.username_template | default(omit) }}" + password_policy: "{{ cloud.password_policy | default(omit) }}" + status_code: [200, 201, 202, 204] + when: + - "pwd.user_input is defined" + - "pwd.user_input | length > 0" + +- name: Update OS Cloud {{ cloud.name }} - no PWD Updated + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/openstack/clouds/{{ cloud.name }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "PUT" + body_format: "json" + body: + auth_url: "{{ cloud.auth_url | default(omit) }}" + username: "{{ cloud.username | default(omit) }}" + user_domain_name: "{{ cloud.user_domain_name | default(omit) }}" + username_template: "{{ cloud.username_template | default(omit) }}" + password_policy: "{{ cloud.password_policy | default(omit) }}" + status_code: [200, 201, 202, 204] + when: + - "current_cloud.status == 200" + - "current_cloud is defined and current_cloud.json is defined" diff --git a/playbooks/roles/configure_vault/tasks/os_role.yaml b/playbooks/roles/configure_vault/tasks/os_role.yaml new file mode 100644 index 0000000..6b76811 --- /dev/null +++ b/playbooks/roles/configure_vault/tasks/os_role.yaml @@ -0,0 +1,70 @@ +- name: Read OS Role {{ role.name }} + check_mode: "no" + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/openstack/roles/{{ role.name }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "GET" + return_content: "yes" + register: current_role + failed_when: false + +- name: Read OS Cloud {{ role.cloud }} to which role should be bound + check_mode: "no" + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/openstack/clouds/{{ role.cloud }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "GET" + return_content: "yes" + register: role_cloud + failed_when: false + +- name: Write OS Role {{ role.name }} to vault + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/openstack/roles/{{ role.name }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: 
"POST" + body_format: "json" + body: + cloud: "{{ role.cloud | default(omit) }}" + project_name: "{{ role.project_name | default(omit) }}" + project_id: "{{ role.project_id | default(omit) }}" + domain_name: "{{ role.domain_name | default(omit) }}" + domain_id: "{{ role.domain_id | default(omit) }}" + ttl: "{{ role.ttl | default(omit) }}" + secret_type: "{{ role.secret_type | default(omit) }}" + user_groups: "{{ role.user_groups | default(omit) }}" + user_roles: "{{ role.user_roles | default(omit) }}" + root: "{{ role.root | default(omit) }}" + extensions: "{{ role.extensions | default(omit) }}" + status_code: [200, 201, 202, 204] + when: + - "current_role is not defined or current_role.status != 200" + - "role_cloud is defined and role_cloud.status == 200" + +- name: Update OS Role {{ role.name }} + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/openstack/roles/{{ role.name }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "PUT" + body_format: "json" + body: + cloud: "{{ role.cloud | default(omit) }}" + project_name: "{{ role.project_name | default(omit) }}" + project_id: "{{ role.project_id | default(omit) }}" + domain_name: "{{ role.domain_name | default(omit) }}" + domain_id: "{{ role.domain_id | default(omit) }}" + ttl: "{{ role.ttl | default(omit) }}" + secret_type: "{{ role.secret_type | default(omit) }}" + user_groups: "{{ role.user_groups | default(omit) }}" + user_roles: "{{ role.user_roles | default(omit) }}" + root: "{{ role.root | default(omit) }}" + extensions: "{{ role.extensions | default(omit) }}" + status_code: [200, 201, 202, 204] + when: + - "current_role.status == 200" + - "current_role is defined and current_role.json is defined" + - "role_cloud is defined and role_cloud.status == 200" diff --git a/playbooks/roles/configure_vault/tasks/os_static_role.yaml b/playbooks/roles/configure_vault/tasks/os_static_role.yaml new file mode 100644 index 0000000..f1d2985 --- /dev/null +++ 
b/playbooks/roles/configure_vault/tasks/os_static_role.yaml @@ -0,0 +1,66 @@ +- name: Read OS Static Role {{ static_role.name }} + check_mode: "no" + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/openstack/static-roles/{{ static_role.name }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "GET" + return_content: "yes" + register: current_static_role + failed_when: false + +- name: Read OS Cloud {{ static_role.cloud }} to which role should be bound + check_mode: "no" + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/openstack/clouds/{{ static_role.cloud }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "GET" + return_content: "yes" + register: static_role_cloud + failed_when: false + +- name: Write OS Static Role {{ static_role.name }} to vault + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/openstack/static-roles/{{ static_role.name }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "POST" + body_format: "json" + body: + cloud: "{{ static_role.cloud | default(omit) }}" + project_name: "{{ static_role.project_name | default(omit) }}" + project_id: "{{ static_role.project_id | default(omit) }}" + domain_name: "{{ static_role.domain_name | default(omit) }}" + domain_id: "{{ static_role.domain_id | default(omit) }}" + rotation_duration: "{{ static_role.rotation_duration | default(omit) }}" + secret_type: "{{ static_role.secret_type | default(omit) }}" + username: "{{ static_role.username | default(omit) }}" + extensions: "{{ static_role.extensions | default(omit) }}" + status_code: [200, 201, 202, 204] + when: + - "current_static_role is not defined or current_static_role.status != 200" + - "static_role_cloud is defined and static_role_cloud.status == 200" + +- name: Update OS Static Role {{ static_role.name }} + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/openstack/static-roles/{{ static_role.name }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "PUT" + body_format: "json" + body: + cloud: "{{ 
static_role.cloud | default(omit) }}" + project_name: "{{ static_role.project_name | default(omit) }}" + project_id: "{{ static_role.project_id | default(omit) }}" + domain_name: "{{ static_role.domain_name | default(omit) }}" + domain_id: "{{ static_role.domain_id | default(omit) }}" + rotation_duration: "{{ static_role.rotation_duration | default(omit) }}" + secret_type: "{{ static_role.secret_type | default(omit) }}" + username: "{{ static_role.username | default(omit) }}" + extensions: "{{ static_role.extensions | default(omit) }}" + status_code: [200, 201, 202, 204] + when: + - "current_static_role.status == 200" + - "current_static_role is defined and current_static_role.json is defined" + - "static_role_cloud is defined and static_role_cloud.status == 200" diff --git a/playbooks/roles/configure_vault/tasks/policy.yaml b/playbooks/roles/configure_vault/tasks/policy.yaml new file mode 100644 index 0000000..c04c73e --- /dev/null +++ b/playbooks/roles/configure_vault/tasks/policy.yaml @@ -0,0 +1,10 @@ +- name: Write policy {{ policy.name }} to vault + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/sys/policies/acl/{{ policy.name }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "PUT" + body_format: "json" + body: + policy: "{{ policy.definition }}" + status_code: [200, 201, 202, 204] diff --git a/playbooks/roles/configure_vault/tasks/pwd_policy.yaml b/playbooks/roles/configure_vault/tasks/pwd_policy.yaml new file mode 100644 index 0000000..9fb564b --- /dev/null +++ b/playbooks/roles/configure_vault/tasks/pwd_policy.yaml @@ -0,0 +1,36 @@ +- name: Read PWD Policy {{ pwd_policy.name }} + check_mode: "no" + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/sys/policies/password/{{ pwd_policy.name }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "GET" + return_content: "yes" + register: current_pwd_policy + failed_when: false + +- name: Write PWD Policy {{ pwd_policy.name }} to vault + ansible.builtin.uri: + url: "{{ vault_addr 
}}/v1/sys/policies/password/{{ pwd_policy.name }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "POST" + body_format: "json" + body: + policy: "{{ pwd_policy.policy }}" + status_code: [200, 201, 202, 204] + when: "current_pwd_policy is not defined or current_pwd_policy.status != 200" + +- name: Update PWD Policy {{ pwd_policy.name }} + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/sys/policies/password/{{ pwd_policy.name }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "PUT" + body_format: "json" + body: + policy: "{{ pwd_policy.policy }}" + status_code: [200, 201, 202, 204] + when: + - "current_pwd_policy.status == 200" + - "current_pwd_policy is defined and current_pwd_policy.json is defined" \ No newline at end of file diff --git a/playbooks/roles/configure_vault/tasks/secret_engine.yaml b/playbooks/roles/configure_vault/tasks/secret_engine.yaml new file mode 100644 index 0000000..9f2688e --- /dev/null +++ b/playbooks/roles/configure_vault/tasks/secret_engine.yaml @@ -0,0 +1,68 @@ +- name: Read Secrets engine {{ engine.type }} at {{ engine.path }} + check_mode: "no" + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/sys/mounts/{{ engine.path }}/tune" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "GET" + return_content: "yes" + register: current_engine + failed_when: false + +- name: Mount Secrets engine {{ engine.type }} at {{ engine.path }} + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/sys/mounts/{{ engine.path }}" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "POST" + body_format: "json" + body: + type: "{{ engine.type }}" + description: "{{ engine.description | default(omit) }}" + config: + default_lease_ttl: "{{ engine.default_lease_ttl | default(omit) }}" + max_lease_ttl: "{{ engine.max_lease_ttl | default(omit) }}" + force_no_cache: "{{ engine.force_no_cache | default(omit) }}" + audit_non_hmac_request_keys: "{{ engine.audit_non_hmac_request_keys | default(omit) }}" + audit_non_hmac_response_keys: 
"{{ engine.audit_non_hmac_response_keys | default(omit) }}" + listing_visibility: "{{ engine.listing_visibility | default(omit) }}" + passthrough_request_headers: "{{ engine.passthrough_request_headers | default(omit) }}" + allowed_response_headers: "{{ engine.allowed_response_headers | default(omit) }}" + options: "{{ engine.options | default(omit) }}" + + status_code: [200, 201, 202, 204] + when: "current_engine is not defined or current_engine.status != 200" + +- name: Tune Secrets engine {{ engine.type }} at {{ engine.path }} + ansible.builtin.uri: + url: "{{ vault_addr }}/v1/sys/mounts/{{ engine.path }}/tune" + headers: + X-Vault-Token: "{{ vault_token }}" + method: "POST" + body_format: "json" + body: + description: "{{ engine.description | default(omit) }}" + config: + default_lease_ttl: "{{ engine.default_lease_ttl | default(omit) }}" + max_lease_ttl: "{{ engine.max_lease_ttl | default(omit) }}" + force_no_cache: "{{ engine.force_no_cache | default(omit) }}" + audit_non_hmac_request_keys: "{{ engine.audit_non_hmac_request_keys | default(omit) }}" + audit_non_hmac_response_keys: "{{ engine.audit_non_hmac_response_keys | default(omit) }}" + listing_visibility: "{{ engine.listing_visibility | default(omit) }}" + passthrough_request_headers: "{{ engine.passthrough_request_headers | default(omit) }}" + allowed_response_headers: "{{ engine.allowed_response_headers | default(omit) }}" + options: "{{ engine.options | default(omit) }}" + status_code: [200, 201, 202, 204] + when: + - "current_engine.status == 200" + - "current_engine is defined and current_engine.json is defined" + - "engine.description is defined and current_engine.json.description != engine.description" + # - "current_engine.json.default_lease_ttl != engine.default_lease_ttl" + # - "current_engine.json.max_lease_ttl != engine.max_lease_ttl" + # - "engine.force_no_cache is defined and current_engine.json.force_no_cache != engine.force_no_cache" + # - 
"engine.auditcurrent_engine.json.audit_non_hmac_request_keys != engine.audit_non_hmac_request_keys" + # - "current_engine.json.audit_non_hmac_response_keys != engine.audit_non_hmac_response_keys" + + # - "current_engine.json.listing_visibility != engine.listing_visibility" + # - "current_engine.json.passthrough_request_headers != engine.passthrough_request_headers" + # - "current_engine.json.allowed_response_headers != engine.allowed_response_headers" diff --git a/playbooks/roles/create-venv/README.rst b/playbooks/roles/create-venv/README.rst new file mode 100644 index 0000000..4f3d950 --- /dev/null +++ b/playbooks/roles/create-venv/README.rst @@ -0,0 +1,19 @@ +Create a venv + +You would think this role is unnecessary and roles could just install +a ``venv`` directly ... except sometimes pip/setuptools get out of +date on a platform and can't understand how to install compatible +things. For example the pip shipped on Bionic will upgrade itself to +a version that doesn't support Python 3.6 because it doesn't +understand the metadata tags the new version marks itself with. We've +seen similar problems with wheels. History has shown that whenever +this problem appears solved, another issue will appear. So for +reasons like this, we have this as a synchronization point for setting +up venvs. + +**Role Variables** + +.. zuul:rolevar:: create_venv_path + :default: unset + + Required argument; the directory to make the ``venv`` diff --git a/playbooks/roles/create-venv/tasks/main.yaml b/playbooks/roles/create-venv/tasks/main.yaml new file mode 100644 index 0000000..a84dcec --- /dev/null +++ b/playbooks/roles/create-venv/tasks/main.yaml @@ -0,0 +1,54 @@ +- name: Check directory is specified + assert: + that: create_venv_path is defined + +- name: Ensure venv dir + file: + path: '{{ create_venv_path }}' + state: directory + +# Xenial's default pip will try to pull in packages that +# aren't compatible with 3.5. 
Cap them +- name: Setup requirements for Xenial + when: ansible_distribution_version is version('16.04', '==') + set_fact: + _venv_requirements: + - pip<21 + - setuptools<51 + +# Bionic's default pip will try to pull in packages that +# aren't compatible with 3.6. Cap them +- name: Setup requirements for Bionic + when: ansible_distribution_version is version('18.04', '==') + set_fact: + _venv_requirements: + - pip<22 + - setuptools<60 + +- name: Setup requirements for later era + when: ansible_distribution_version is version('20.04', '>=') + set_fact: + _venv_requirements: + - pip + - setuptools + +# This is used to timestamp the requirements-venv.txt file. This +# means we will run --upgrade on the venv once a day, but otherwise +# leave it alone. +- name: Get current day + shell: 'date +%Y-%m-%d' + register: _date + +- name: Write requirements + template: + src: requirements-venv.txt + dest: '{{ create_venv_path }}/requirements-venv.txt' + register: _venv_requirements_txt + +- name: Create or upgrade venv + when: _venv_requirements_txt.changed + pip: + requirements: '{{ create_venv_path }}/requirements-venv.txt' + state: latest + virtualenv: '{{ create_venv_path }}' + virtualenv_command: '/usr/bin/python3 -m venv' diff --git a/playbooks/roles/create-venv/templates/requirements-venv.txt b/playbooks/roles/create-venv/templates/requirements-venv.txt new file mode 100644 index 0000000..ebc968a --- /dev/null +++ b/playbooks/roles/create-venv/templates/requirements-venv.txt @@ -0,0 +1,4 @@ +# Update timestamp: {{ _date.stdout }} +{% for r in _venv_requirements %} +{{ r }} +{% endfor %} diff --git a/playbooks/roles/edit-secrets-script/README.rst b/playbooks/roles/edit-secrets-script/README.rst new file mode 100644 index 0000000..16196ae --- /dev/null +++ b/playbooks/roles/edit-secrets-script/README.rst @@ -0,0 +1,3 @@ +This role installs a script called `edit-secrets` to /usr/local/bin +that allows you to safely edit the secrets file without needing to +manage gpg-agent
yourself. diff --git a/playbooks/roles/edit-secrets-script/files/edit-secrets b/playbooks/roles/edit-secrets-script/files/edit-secrets new file mode 100644 index 0000000..5f1a22d --- /dev/null +++ b/playbooks/roles/edit-secrets-script/files/edit-secrets @@ -0,0 +1,2 @@ +#!/bin/sh +gpg-agent --daemon emacs /root/passwords/passwords.gpg diff --git a/playbooks/roles/edit-secrets-script/tasks/main.yaml b/playbooks/roles/edit-secrets-script/tasks/main.yaml new file mode 100644 index 0000000..21800d5 --- /dev/null +++ b/playbooks/roles/edit-secrets-script/tasks/main.yaml @@ -0,0 +1,5 @@ +- name: Copy edit-secrets script + copy: + mode: 0750 + src: edit-secrets + dest: /usr/local/bin/edit-secrets diff --git a/playbooks/roles/fail2ban/README.rst b/playbooks/roles/fail2ban/README.rst new file mode 100644 index 0000000..e69de29 diff --git a/playbooks/roles/fail2ban/defaults/main.yaml b/playbooks/roles/fail2ban/defaults/main.yaml new file mode 100644 index 0000000..e69de29 diff --git a/playbooks/roles/fail2ban/handlers/main.yaml b/playbooks/roles/fail2ban/handlers/main.yaml new file mode 100644 index 0000000..b4054fc --- /dev/null +++ b/playbooks/roles/fail2ban/handlers/main.yaml @@ -0,0 +1,6 @@ +- name: Restart fail2ban + ansible.builtin.systemd: + name: "fail2ban" + enabled: true + state: "restarted" + daemon_reload: true diff --git a/playbooks/roles/fail2ban/tasks/main.yaml b/playbooks/roles/fail2ban/tasks/main.yaml new file mode 100644 index 0000000..2ebf2d7 --- /dev/null +++ b/playbooks/roles/fail2ban/tasks/main.yaml @@ -0,0 +1,55 @@ +--- +- name: Include variables + include_vars: "{{ lookup('first_found', params) }}" + vars: + params: + files: "{{ distro_lookup_path }}" + paths: + - "vars" + +- name: Install required packages + become: true + ansible.builtin.package: + state: present + name: "{{ item }}" + loop: + - "{{ packages }}" + when: "ansible_facts.pkg_mgr != 'atomic_container'" + register: task_result + until: task_result is success + retries: 5 + +- name: Write 
fail2ban local jail conf + become: true + ansible.builtin.template: + src: "jail.local.j2" + dest: "/etc/fail2ban/jail.local" + mode: "0640" + notify: + - Restart fail2ban + +- name: Write service specific filters + become: true + ansible.builtin.copy: + content: "{{ filter.content }}" + dest: "{{ filter.dest }}" + mode: "{{ filter.mode | default('0644') }}" + loop: + "{{ fail2ban_filters | default([]) }}" + loop_control: + loop_var: "filter" + notify: + - Restart fail2ban + +- name: Write service specific jails + become: true + ansible.builtin.copy: + content: "{{ jail.content }}" + dest: "{{ jail.dest }}" + mode: "{{ jail.mode | default('0644') }}" + loop: + "{{ fail2ban_jails | default([]) }}" + loop_control: + loop_var: "jail" + notify: + - Restart fail2ban diff --git a/playbooks/roles/fail2ban/templates/jail.local.j2 b/playbooks/roles/fail2ban/templates/jail.local.j2 new file mode 100644 index 0000000..3bc3747 --- /dev/null +++ b/playbooks/roles/fail2ban/templates/jail.local.j2 @@ -0,0 +1,5 @@ +[DEFAULT] +banaction = firewallcmd-rich-rules[actiontype=] +banaction_allports = firewallcmd-rich-rules[actiontype=] + +ignoreip = 127.0.0.1/8 ::1 192.168.0.0/16 diff --git a/playbooks/roles/fail2ban/vars/Debian.yaml b/playbooks/roles/fail2ban/vars/Debian.yaml new file mode 100644 index 0000000..6cabef5 --- /dev/null +++ b/playbooks/roles/fail2ban/vars/Debian.yaml @@ -0,0 +1,3 @@ +--- +packages: + - fail2ban diff --git a/playbooks/roles/fail2ban/vars/RedHat.yaml b/playbooks/roles/fail2ban/vars/RedHat.yaml new file mode 100644 index 0000000..8328a3c --- /dev/null +++ b/playbooks/roles/fail2ban/vars/RedHat.yaml @@ -0,0 +1,4 @@ +--- +packages: + - fail2ban + diff --git a/playbooks/roles/failover/README.rst b/playbooks/roles/failover/README.rst new file mode 100644 index 0000000..e69de29 diff --git a/playbooks/roles/failover/defaults/main.yaml b/playbooks/roles/failover/defaults/main.yaml new file mode 100644 index 0000000..60a89a5 --- /dev/null +++ 
b/playbooks/roles/failover/defaults/main.yaml @@ -0,0 +1,2 @@ +failover_user: zuul +failover_group: zuul diff --git a/playbooks/roles/failover/tasks/main.yaml b/playbooks/roles/failover/tasks/main.yaml new file mode 100644 index 0000000..241efb0 --- /dev/null +++ b/playbooks/roles/failover/tasks/main.yaml @@ -0,0 +1,21 @@ +- name: Add Zuul user + ansible.builtin.user: + name: "{{ failover_user }}" + state: "present" + uid: "{{ all_users[failover_user].uid }}" + group: "{{ failover_group }}" + comment: "{{ all_users[failover_user].comment | default(omit)}}" + groups: "sudo" + shell: "/bin/bash" + when: + - "failover_user in all_users" + - "'uid' in all_users[failover_user]" + +- name: Add Zuul ssh keys + ansible.builtin.authorized_key: + user: "{{ failover_user }}" + state: "present" + key: "{{ failover_user_ssh_key }}" + exclusive: true + when: + - "failover_user_ssh_key is defined" diff --git a/playbooks/roles/firewalld/README.rst b/playbooks/roles/firewalld/README.rst new file mode 100644 index 0000000..94976fe --- /dev/null +++ b/playbooks/roles/firewalld/README.rst @@ -0,0 +1,23 @@ +Install and configure firewalld + +**Role Variables** + +.. zuul:rolevar:: firewalld_services_enable + :default: [ssh] + + A list of services to allow on the host + +.. zuul:rolevar:: firewalld_services_disable + :default: [] + + A list of services to forbid on the host + +.. zuul:rolevar:: firewalld_ports_enable + :default: [] + + A list of ports to allow on the host + +.. 
zuul:rolevar:: firewalld_ports_disable + :default: [] + + A list of ports to forbid on the host diff --git a/playbooks/roles/firewalld/defaults/main.yaml b/playbooks/roles/firewalld/defaults/main.yaml new file mode 100644 index 0000000..bb7bfe8 --- /dev/null +++ b/playbooks/roles/firewalld/defaults/main.yaml @@ -0,0 +1,5 @@ +# _all_ is to forcibly include ssh service +firewalld_services_enable: [] +firewalld_services_disable: [] +firewalld_ports_enable: [] +firewalld_ports_disable: [] diff --git a/playbooks/roles/firewalld/handlers/main.yaml b/playbooks/roles/firewalld/handlers/main.yaml new file mode 100644 index 0000000..e1925a3 --- /dev/null +++ b/playbooks/roles/firewalld/handlers/main.yaml @@ -0,0 +1,4 @@ +- name: Reload firewalld + systemd: + name: firewalld + state: reloaded diff --git a/playbooks/roles/firewalld/tasks/main.yaml b/playbooks/roles/firewalld/tasks/main.yaml new file mode 100644 index 0000000..bf3347f --- /dev/null +++ b/playbooks/roles/firewalld/tasks/main.yaml @@ -0,0 +1,62 @@ +- name: Include OS-specific variables + include_vars: "{{ lookup('first_found', params) }}" + vars: + params: + files: "{{ distro_lookup_path }}" + paths: + - 'vars' + +- name: Install firewalld + ansible.builtin.package: + name: '{{ package_name }}' + state: present + when: "ansible_facts.pkg_mgr != 'atomic_container'" + +- name: Enable services + ansible.posix.firewalld: + permanent: "yes" + service: "{{ item }}" + state: "enabled" + loop: "{{ firewalld_services_enable }}" + notify: + - Reload firewalld + +- name: Disable services + ansible.posix.firewalld: + permanent: "yes" + service: "{{ item }}" + state: "disabled" + loop: "{{ firewalld_services_disable }}" + notify: + - Reload firewalld + +- name: Enable ports + ansible.posix.firewalld: + permanent: "yes" + port: "{{ item }}" + state: "enabled" + loop: "{{ firewalld_ports_enable }}" + notify: + - Reload firewalld + +- name: Disable ports + ansible.posix.firewalld: + permanent: "yes" + port: "{{ item }}" + state: 
"disabled" + loop: "{{ firewalld_ports_disable }}" + notify: + - Reload firewalld + +- name: Disable iptables + ansible.builtin.service: + name: "iptables" + state: "stopped" + enabled: "false" + ignore_errors: "true" + +- name: Enable firewalld + ansible.builtin.service: + name: "firewalld" + state: "started" + enabled: "true" diff --git a/playbooks/roles/firewalld/vars/Debian.yaml b/playbooks/roles/firewalld/vars/Debian.yaml new file mode 100644 index 0000000..4a8a56a --- /dev/null +++ b/playbooks/roles/firewalld/vars/Debian.yaml @@ -0,0 +1 @@ +package_name: firewalld diff --git a/playbooks/roles/firewalld/vars/RedHat.yaml b/playbooks/roles/firewalld/vars/RedHat.yaml new file mode 100644 index 0000000..4a8a56a --- /dev/null +++ b/playbooks/roles/firewalld/vars/RedHat.yaml @@ -0,0 +1 @@ +package_name: firewalld diff --git a/playbooks/roles/gitea/README.rst b/playbooks/roles/gitea/README.rst new file mode 100644 index 0000000..e69de29 diff --git a/playbooks/roles/gitea/defaults/main.yaml b/playbooks/roles/gitea/defaults/main.yaml new file mode 100644 index 0000000..354197e --- /dev/null +++ b/playbooks/roles/gitea/defaults/main.yaml @@ -0,0 +1,13 @@ +container_runtime: "/usr/bin/{{ container_command }}" +gitea_os_group: git +gitea_os_user: git + +gitea_version: "1.17.3" +gitea_checksum: "sha256:38c4e1228cd051b785c556bcadc378280d76c285b70e8761cd3f5051aed61b5e" + +arch_translation: + amd64: amd64 + x86_64: amd64 + i386: 386 + +gitea_mailer_enable: false diff --git a/playbooks/roles/gitea/handlers/main.yaml b/playbooks/roles/gitea/handlers/main.yaml new file mode 100644 index 0000000..0abd84a --- /dev/null +++ b/playbooks/roles/gitea/handlers/main.yaml @@ -0,0 +1,6 @@ +- name: Restart gitea + ansible.builtin.systemd: + name: "gitea" + enabled: true + state: "restarted" + daemon_reload: true diff --git a/playbooks/roles/gitea/tasks/main.yaml b/playbooks/roles/gitea/tasks/main.yaml new file mode 100644 index 0000000..ca268ce --- /dev/null +++ 
b/playbooks/roles/gitea/tasks/main.yaml @@ -0,0 +1,102 @@ +--- +- name: Include variables + include_vars: "{{ lookup('first_found', params) }}" + vars: + params: + files: "{{ distro_lookup_path }}" + paths: + - "vars" + +- name: Install required packages + become: true + ansible.builtin.package: + state: present + name: "{{ item }}" + loop: + - "{{ packages }}" + when: "ansible_facts.pkg_mgr != 'atomic_container'" + register: task_result + until: task_result is success + retries: 5 + +- name: Create gitea group + become: yes + ansible.builtin.group: + name: "{{ gitea_os_group }}" + state: present + +- name: Create gitea user + become: true + ansible.builtin.user: + name: "{{ gitea_os_user }}" + group: "{{ gitea_os_group }}" + shell: "/bin/bash" + comment: "Git Version Control" + home: "/home/git" + password_lock: true + system: true + state: "present" + +- name: Ensure directories exist + become: true + ansible.builtin.file: + state: "directory" + path: "{{ item.path }}" + mode: "{{ item.mode | default('0755') }}" + owner: "{{ item.owner | default(gitea_os_user) }}" + group: "{{ item.group | default(gitea_os_group) }}" + loop: + - path: "/etc/gitea" + mode: "0775" + - path: "/var/lib/gitea" + mode: "0750" + - path: "/var/lib/gitea/custom/conf" + mode: "0750" + +- name: Download gitea binary + become: true + ansible.builtin.get_url: + url: "https://dl.gitea.io/gitea/{{ gitea_version }}/gitea-{{ gitea_version }}-{{ ansible_system | lower }}-{{ arch_translation[ansible_architecture] }}" + dest: "/usr/local/bin/gitea" + mode: 0755 + checksum: "{{ gitea_checksum }}" + +- name: Write /etc/hosts entry for ldap + become: true + ansible.builtin.lineinfile: + path: "/etc/hosts" + regexp: "{{ gitea_ldap_hostname }}$" + line: "{{ gitea_ldap_ip }} {{ gitea_ldap_hostname }}" + when: + - gitea_ldap_ip is defined + - gitea_ldap_hostname is defined + +- name: Write gitea config env file + become: true + template: + src: "env.j2" + dest: "/etc/gitea/env" + owner: "{{ gitea_os_user 
}}" + group: "{{ gitea_os_group }}" + mode: "0640" + notify: + - Restart gitea + +- name: Write config file + become: true + ansible.builtin.template: + src: "app.ini.j2" + dest: "/var/lib/gitea/custom/conf/app.ini" + owner: "{{ gitea_os_user }}" + group: "{{ gitea_os_group }}" + mode: "0640" + notify: + - Restart gitea + +- name: Write systemd unit file + become: true + ansible.builtin.template: + src: "gitea.service.j2" + dest: "/etc/systemd/system/gitea.service" + notify: + - Restart gitea diff --git a/playbooks/roles/gitea/templates/app.ini.j2 b/playbooks/roles/gitea/templates/app.ini.j2 new file mode 100644 index 0000000..4c9886a --- /dev/null +++ b/playbooks/roles/gitea/templates/app.ini.j2 @@ -0,0 +1,488 @@ +{% if gitea_app_name is defined %} +APP_NAME = {{ gitea_app_name }} +{% endif %} +RUN_USER = ; git +RUN_MODE = ; prod + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +[server] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +PROTOCOL = https +DOMAIN = {{ gitea_domain }} +{% if gitea_root_url is defined %} +ROOT_URL = {{ gitea_root_url }} +{% endif %} +{% if gitea_http_port is defined %} +HTTP_PORT = {{ gitea_http_port }} +{% endif %} + +DISABLE_SSH = false +START_SSH_SERVER = true +SSH_PORT = 2222 + +;; TLS Settings: Either ACME or manual +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; Paths are relative to CUSTOM_PATH +CERT_FILE = /etc/ssl/{{ inventory_hostname }}/gitea/{{ gitea_cert }}-fullchain.crt +KEY_FILE = /etc/ssl/{{ inventory_hostname }}/gitea/{{ gitea_cert }}.pem +ENABLE_GZIP = true + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +[database] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; +;; Database to use. Either "mysql", "postgres", "mssql" or "sqlite3". 
+;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; +;; Database Configuration +;; +{% if gitea_db_type is defined %} +DB_TYPE = {{ gitea_db_type }} +{% endif %} +{% if gitea_db_host is defined %} +HOST = {{ gitea_db_host }} ; can use socket e.g. /var/run/postgresql/ +{% endif %} +{% if gitea_db_name is defined %} +NAME = {{ gitea_db_name }} +{% endif %} +{% if gitea_db_username is defined %} +USER = {{ gitea_db_username }} +{% endif %} +{% if gitea_db_password is defined %} +PASSWD = {{ gitea_db_password }} +{% endif %} +SSL_MODE=require + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +[security] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +INSTALL_LOCK = true +SECRET_KEY ={{ gitea_secret_key }} +INTERNAL_TOKEN={{ gitea_internal_token }} +;; Instead of defining internal token in the configuration, this configuration option can be used to give Gitea a path to a file that contains the internal token (example value: file:/etc/gitea/internal_token) +;INTERNAL_TOKEN_URI = ;e.g. 
/etc/gitea/internal_token +;; +;LOGIN_REMEMBER_DAYS = 7 +;; +DISABLE_GIT_HOOKS = true +DISABLE_WEBHOOKS = false +;;If left empty or no valid values are specified, the default is off (no checking) +;;Classes include "lower,upper,digit,spec" +;PASSWORD_COMPLEXITY = off +SUCCESSFUL_TOKENS_CACHE_SIZE = 20 + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +[oauth2] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; +;; Enables OAuth2 provider +ENABLE = true + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +[log] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; Root path for the log files - defaults to %(GITEA_WORK_DIR)/log +;ROOT_PATH = +;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; Main Logger +;; +;; Either "console", "file", "conn", "smtp" or "database", default is "console" +;; Use comma to separate multiple modes, e.g. "console, file" +MODE = "console, file" +;; +;; Either "Trace", "Debug", "Info", "Warn", "Error", "Critical" or "None", default is "Info" +LEVEL = Debug +;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; Router Logger +;; +;; Switch off the router log +;DISABLE_ROUTER_LOG=false +;; +;; Set the log "modes" for the router log (if file is set the log file will default to router.log) +ROUTER = console +;; +;; The router will log different things at different levels. 
+;; +;; * started messages will be logged at TRACE level +;; * polling/completed routers will be logged at INFO +;; * slow routers will be logged at WARN +;; * failed routers will be logged at WARN +;; +;; The routing level will default to that of the system but individual router level can be set in +;; [log..router] LEVEL +;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; +;; Access Logger (Creates log in NCSA common log format) +;; +ENABLE_ACCESS_LOG = true + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; +;; SSH log (Creates log from ssh git request) +;; +ENABLE_SSH_LOG = true + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +[git] +MAX_GIT_DIFF_LINES = 100 +MAX_GIT_DIFF_FILES = 50 +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +[service] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;EMAIL_DOMAIN_WHITELIST = +DISABLE_REGISTRATION = false +ALLOW_ONLY_INTERNAL_REGISTRATION = false +;ALLOW_ONLY_EXTERNAL_REGISTRATION = false +;REQUIRE_SIGNIN_VIEW = false +;ENABLE_NOTIFY_MAIL = false +ENABLE_NOTIFY_MAIL = true +DEFAULT_KEEP_EMAIL_PRIVATE = true +;; +;; This setting enables gitea to be signed in with HTTP BASIC Authentication using the user's password +;; If you set this to false you will not be able to access the tokens endpoints on the API with your password +;; Please note that setting this to false will not disable OAuth Basic or Basic authentication using a token +;ENABLE_BASIC_AUTHENTICATION = true +SHOW_REGISTRATION_BUTTON = false +DEFAULT_ORG_VISIBILITY = limited +DEFAULT_USER_VISIBILITY = limited + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[service.explore] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +DISABLE_USERS_PAGE = true 
+REQUIRE_SIGNIN_VIEW = false + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; Other Settings +;; +;; Uncomment the [section.header] if you wish to +;; set the below settings. +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[repository] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[repository.editor] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; +;; List of file extensions for which lines should be wrapped in the Monaco editor +;; Separate extensions with a comma. To line wrap files without an extension, just put a comma +;LINE_WRAP_EXTENSIONS = .txt,.md,.markdown,.mdown,.mkd, +;; +;; Valid file modes that have a preview API associated with them, such as api/v1/markdown +;; Separate the values by commas. The preview tab in edit mode won't be displayed if the file extension doesn't match +;PREVIEWABLE_FILE_MODES = markdown + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[repository.local] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[repository.upload] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; +;; Whether repository file uploads are enabled. Defaults to `true` +;ENABLED = true +;; +;; Path for uploads. 
Defaults to `data/tmp/uploads` (content gets deleted on gitea restart) +;TEMP_PATH = data/tmp/uploads +;; +;; Comma-separated list of allowed file extensions (`.zip`), mime types (`text/plain`) or wildcard type (`image/*`, `audio/*`, `video/*`). Empty value or `*/*` allows all types. +;ALLOWED_TYPES = +;FILE_MAX_SIZE = 3 +;MAX_FILES = 5 + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[repository.pull-request] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; +;; List of prefixes used in Pull Request title to mark them as Work In Progress (matched in a case-insensitive manner) +;WORK_IN_PROGRESS_PREFIXES = WIP:,[WIP] +;; +;; Set default merge style for repository creating, valid options: merge, rebase, rebase-merge, squash +DEFAULT_MERGE_STYLE = squash + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[repository.issue] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[repository.release] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[repository.signing] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[repository.mimetype_mapping] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[project] 
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[cors] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +[ui] +EXPLORE_PAGING_NUM = 20 +ISSUE_PAGING_NUM = 20 +SHOW_USER_EMAIL = false + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[ui.admin] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[ui.user] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[ui.meta] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[ui.notification] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[ui.svg] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[ui.csv] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + 
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[markdown] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[ssh.minimum_key_sizes] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[indexer] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[admin] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; +DISABLE_REGULAR_ORG_CREATION = true + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[openid] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[oauth2_client] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +ENABLE_AUTO_REGISTRATION = true + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[webhook] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +{%if gitea_mailer_enable %} +[mailer] +ENABLED = true +FROM = {{ gitea_mailer_from }} +MAILER_TYPE = {{ gitea_mailer_type }} +HOST = {{ gitea_mailer_host }} +IS_TLS_ENABLED = {{ gitea_mailer_is_tls_enabled }} +USER = 
{{ gitea_mailer_user }} +PASSWD = {{ gitea_mailer_passwd }} +{% endif %} + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[cache] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; Last commit cache +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[cache.last_commit] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[session] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +PROVIDER = db + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[api] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;ENABLE_SWAGGER = true + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[i18n] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;LANGS = en-US,zh-CN,zh-HK,zh-TW,de-DE,fr-FR,nl-NL,lv-LV,ru-RU,uk-UA,ja-JP,es-ES,pt-BR,pt-PT,pl-PL,bg-BG,it-IT,fi-FI,tr-TR,cs-CZ,sv-SE,ko-KR,el-GR,fa-IR,hu-HU,id-ID,ml-IN +;NAMES = English,简体中文,繁體中文(香港),繁體中文(台灣),Deutsch,Français,Nederlands,Latviešu,Русский,Українська,日本語,Español,Português do Brasil,Português de Portugal,Polski,Български,Italiano,Suomi,Türkçe,Čeština,Српски,Svenska,한국어,Ελληνικά,فارسی,Magyar nyelv,Bahasa Indonesia,മലയാളം + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[highlight.mapping] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; 
Extension mapping to highlight class +;; e.g. .toml=ini + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[other] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;SHOW_FOOTER_BRANDING = false +;; Show version information about Gitea and Go in the footer +;SHOW_FOOTER_VERSION = true +;; Show template execution time in the footer +;SHOW_FOOTER_TEMPLATE_LOAD_TIME = true + + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[markup] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; Set the maximum number of characters in a mermaid source. (Set to -1 to disable limits) +;MERMAID_MAX_SOURCE_CHARACTERS = 5000 + +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;[markup.sanitizer.1] +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; The following keys can appear once to define a sanitation policy rule. +;; This section can appear multiple times by adding a unique alphanumeric suffix to define multiple rules. 
+;; e.g., [markup.sanitizer.1] -> [markup.sanitizer.2] -> [markup.sanitizer.TeX]
+;ELEMENT = span
+;ALLOW_ATTR = class
+;REGEXP = ^(info|warning|error)$
+;;
+
+[markup.markdown]
+ENABLED = true
+FILE_EXTENSIONS = .md,.markdown
+RENDER_COMMAND = pandoc -f markdown -t html --katex
+
+[markup.restructuredtext]
+ENABLED = true
+; List of file extensions that should be rendered by an external command
+FILE_EXTENSIONS = .rst
+; External command to render all matching extensions
+RENDER_COMMAND = "timeout 30s pandoc +RTS -M512M -RTS -f rst"
+; The input is piped via standard input rather than passed as a file
+IS_INPUT_FILE = false
+
+[markup.html]
+ENABLED = true
+FILE_EXTENSIONS = .html,.htm
+RENDER_COMMAND = cat
+; The input is passed as a file rather than piped via standard input
+IS_INPUT_FILE = true
+
+[markup.sanitizer.html.1]
+ELEMENT = div
+ALLOW_ATTR = class
+
+[markup.sanitizer.html.2]
+ELEMENT = a
+ALLOW_ATTR = class
+
+[metrics]
+ENABLED = true
+
+{% if gitea_packages_enable is defined and gitea_packages_enable %}
+[packages]
+ENABLED=true
+{% endif %}
diff --git a/playbooks/roles/gitea/templates/env.j2 b/playbooks/roles/gitea/templates/env.j2
new file mode 100644
index 0000000..f5bd1b8
--- /dev/null
+++ b/playbooks/roles/gitea/templates/env.j2
@@ -0,0 +1,2 @@
+# Environment variables are currently not read from here. Gitea can be configured entirely
+# through the environment, but that makes the gitea CLI hard to use.
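A `[markup.sanitizer.*]` rule whitelists exactly one attribute on one element, optionally constrained by a `REGEXP` on the attribute value. A minimal Python sketch of that matching logic (an illustrative model with hypothetical helper names, not Gitea's actual sanitizer implementation):

```python
import re

# Mirrors the sanitizer rules above: each entry is one ELEMENT/ALLOW_ATTR
# pair, with an optional REGEXP constraining the attribute value.
RULES = [
    # ;[markup.sanitizer.1]: ELEMENT=span, ALLOW_ATTR=class, REGEXP=^(info|warning|error)$
    {"element": "span", "attr": "class", "regexp": re.compile(r"^(info|warning|error)$")},
    # [markup.sanitizer.html.1]: ELEMENT=div, ALLOW_ATTR=class (no REGEXP: any value passes)
    {"element": "div", "attr": "class", "regexp": None},
]


def attr_allowed(element: str, attr: str, value: str) -> bool:
    """Return True if some rule permits attr=value on element."""
    for rule in RULES:
        if rule["element"] == element and rule["attr"] == attr:
            if rule["regexp"] is None or rule["regexp"].match(value):
                return True
    return False


print(attr_allowed("span", "class", "warning"))   # True: matches the regexp
print(attr_allowed("span", "class", "btn"))       # False: rejected by the regexp
print(attr_allowed("div", "class", "anything"))   # True: rule has no REGEXP
```

Note that the anchored `^(...)$` pattern admits only the three exact class names, which is why arbitrary classes on `span` are stripped while any class on `div` survives.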
diff --git a/playbooks/roles/gitea/templates/gitea.service.j2 b/playbooks/roles/gitea/templates/gitea.service.j2
new file mode 100644
index 0000000..e52a680
--- /dev/null
+++ b/playbooks/roles/gitea/templates/gitea.service.j2
@@ -0,0 +1,66 @@
+[Unit]
+Description=Gitea (Git with a cup of tea)
+After=syslog.target
+After=network.target
+###
+# If using socket activation for main http/s
+###
+#
+#After=gitea.main.socket
+#Requires=gitea.main.socket
+#
+###
+# (You can also provide gitea an http fallback and/or ssh socket too)
+#
+# An example of /etc/systemd/system/gitea.main.socket
+###
+##
+## [Unit]
+## Description=Gitea Web Socket
+## PartOf=gitea.service
+##
+## [Socket]
+## Service=gitea.service
+## ListenStream=
+## NoDelay=true
+##
+## [Install]
+## WantedBy=sockets.target
+##
+###
+
+[Service]
+# Modify these two values and uncomment them if you have
+# repos with lots of files and get an HTTP error 500 because
+# of that
+###
+#LimitMEMLOCK=infinity
+#LimitNOFILE=65535
+RestartSec=2s
+Type=simple
+User={{ gitea_os_user }}
+Group={{ gitea_os_group }}
+WorkingDirectory=/var/lib/gitea/
+# If using Unix socket: tells systemd to create the /run/gitea folder, which will contain the gitea.sock file
+# (manually creating /run/gitea doesn't work, because it would not persist across reboots)
+#RuntimeDirectory=gitea
+#ExecStart=/usr/local/bin/gitea web --config /etc/gitea/app.ini
+# NOTE: by default data will be searched under $WorkingDirectory
+ExecStart=/usr/local/bin/gitea web
+Restart=always
+Environment=USER={{ gitea_os_user }} HOME=/home/{{ gitea_os_user }} GITEA_WORK_DIR=/var/lib/gitea
+# If you install Git to a directory prefix other than the default PATH (which happens
+# for example if you install other versions of Git side-by-side with the
+# distribution version), uncomment the line below and add that prefix to PATH
+# Don't forget to place the git-lfs binary on the PATH below if you want to enable
+# Git LFS support
+#Environment=PATH=/path/to/git/bin:/bin:/sbin:/usr/bin:/usr/sbin
+# If you want to bind Gitea to a port below 1024, uncomment
+# the two values below, or use socket activation to pass Gitea its ports as above
+###
+CapabilityBoundingSet=CAP_NET_BIND_SERVICE
+AmbientCapabilities=CAP_NET_BIND_SERVICE
+###
+
+[Install]
+WantedBy=multi-user.target
diff --git a/playbooks/roles/gitea/vars/Ubuntu.jammy.yaml b/playbooks/roles/gitea/vars/Ubuntu.jammy.yaml
new file mode 100644
index 0000000..8d3f626
--- /dev/null
+++ b/playbooks/roles/gitea/vars/Ubuntu.jammy.yaml
@@ -0,0 +1,5 @@
+---
+packages:
+  - pandoc
+
+container_command: podman
diff --git a/playbooks/roles/hashivault/README.rst b/playbooks/roles/hashivault/README.rst
new file mode 100644
index 0000000..e69de29
diff --git a/playbooks/roles/hashivault/defaults/main.yaml b/playbooks/roles/hashivault/defaults/main.yaml
new file mode 100644
index 0000000..fd5a9cf
--- /dev/null
+++ b/playbooks/roles/hashivault/defaults/main.yaml
@@ -0,0 +1,79 @@
+state: "present"
+
+vault_storage_path: "/opt/vault/data"
+vault_plugin_path: "/etc/vault.d/plugins"
+vault_enable_ui: false
+vault_owner: "vault"
+vault_group: "vault"
+
+vault_tls_cert_file: "/etc/ssl/{{ inventory_hostname }}/vault/vault-fullchain.crt"
+vault_tls_key_file: "/etc/ssl/{{ inventory_hostname }}/vault/vault.pem"
+
+vault_plugins: []
+
+# HashiCorp package signing GPG key
+hashicorp_gpg_key: |
+  -----BEGIN PGP PUBLIC KEY BLOCK-----
+
+  mQINBGO9u+MBEADmE9i8rpt8xhRqxbzlBG06z3qe+e1DI+SyjscyVVRcGDrEfo+J
+  W5UWw0+afey7HFkaKqKqOHVVGSjmh6HO3MskxcpRm/pxRzfni/OcBBuJU2DcGXnG
+  nuRZ+ltqBncOuONi6Wf00McTWviLKHRrP6oWwWww7sYF/RbZp5xGmMJ2vnsNhtp3
+  8LIMOmY2xv9LeKMh++WcxQDpIeRohmSJyknbjJ0MNlhnezTIPajrs1laLh/IVKVz
+  7/Z73UWX+rWI/5g+6yBSEtj368N7iyq+hUvQ/bL00eyg1Gs8nE1xiCmRHdNjMBLX
+  lHi0V9fYgg3KVGo6Hi/Is2gUtmip4ZPnThVmB5fD5LzS7Y5joYVjHpwUtMD0V3s1
+  HiHAUbTH+OY2JqxZDO9iW8Gl0rCLkfaFDBS2EVLPjo/kq9Sn7vfp2WHffWs1fzeB
+  HI6iUl2AjCCotK61nyMR33rNuNcbPbp+17NkDEy80YPDRbABdgb+hQe0o8htEB2t
+  CDA3Ev9t2g9IC3VD/jgncCRnPtKP3vhEhlhMo3fUCnJI7XETgbuGntLRHhmGJpTj
+  ydudopoMWZAU/H9KxJvwlVXiNoBYFvdoxhV7/N+OBQDLMevB8XtPXNQ8ZOEHl22G
+  hbL8I1c2SqjEPCa27OIccXwNY+s0A41BseBr44dmu9GoQVhI7TsetpR+qwARAQAB
+  tFFIYXNoaUNvcnAgU2VjdXJpdHkgKEhhc2hpQ29ycCBQYWNrYWdlIFNpZ25pbmcp
+  IDxzZWN1cml0eStwYWNrYWdpbmdAaGFzaGljb3JwLmNvbT6JAlQEEwEIAD4CGwMF
+  CwkIBwIGFQoJCAsCBBYCAwECHgECF4AWIQR5iuxlTlwVQoyOQu6qFvy8piHnAQUC
+  Y728PQUJCWYB2gAKCRCqFvy8piHnAd16EADeBtTgkdVEvct40TH/9HKkR/Lc/ohM
+  rer6FFHdKmceJ6Ma8/Qm4nCO5C7c4+EPjsUXdhK5w8DSdC5VbKLJDY1EnDlmU5B1
+  wSFkGoYKoB8lUn30E77E33MTu2kfrSuF605vetq269CyBwIJV7oNN6311dW8iQ6z
+  IytTtlJbVr4YZ7Vst40/uR4myumk9bVBGEd6JhFAPmr/um+BZFhRf9/8xtOryOyB
+  GF2d+bc9IoAugpxwv0IowHEqkI4RpK2U9hvxG80sTOcmerOuFbmNyPwnEgtJ6CM1
+  bc8WAmObJiQcRSLbcgF+a7+2wqrUbCqRE7QoS2wjd1HpUVPmSdJN925c2uaua2A4
+  QCbTEg8kV2HiP0HGXypVNhZJt5ouo0YgR6BSbMlsMHniDQaSIP1LgmEz5xD4UAxO
+  Y/GRR3LWojGzVzBb0T98jpDgPtOu/NpKx3jhSpE2U9h/VRDiL/Pf7gvEIxPUTKuV
+  5D8VqAiXovlk4wSH13Q05d9dIAjuinSlxb4DVr8IL0lmx9DyHehticmJVooHDyJl
+  HoA2q2tFnlBBAFbN92662q8Pqi9HbljVRTD1vUjof6ohaoM+5K1C043dmcwZZMTc
+  7gV1rbCuxh69rILpjwM1stqgI1ONUIkurKVGZHM6N2AatNKqtBRdGEroQo1aL4+4
+  u+DKFrMxOqa5b7kCDQRjvbwTARAA0ut7iKLj9sOcp5kRG/5V+T0Ak2k2GSus7w8e
+  kFh468SVCNUgLJpLzc5hBiXACQX6PEnyhLZa8RAG+ehBfPt03GbxW6cK9nx7HRFQ
+  GA79H5B4AP3XdEdT1gIL2eaHdQot0mpF2b07GNfADgj99MhpxMCtTdVbBqHY8YEQ
+  Uq7+E9UCNNs45w5ddq07EDk+o6C3xdJ42fvS2x44uNH6Z6sdApPXLrybeun74C1Z
+  Oo4Ypre4+xkcw2q2WIhy0Qzeuw+9tn4CYjrhw/+fvvPGUAhtYlFGF6bSebmyua8Q
+  MTKhwqHqwJxpjftM3ARdgFkhlH1H+PcmpnVutgTNKGcy+9b/lu/Rjq/47JZ+5VkK
+  ZtYT/zO1oW5zRklHvB6R/OcSlXGdC0mfReIBcNvuNlLhNcBA9frNdOk3hpJgYDzg
+  f8Ykkc+4z8SZ9gA3g0JmDHY1X3SnSadSPyMas3zH5W+16rq9E+MZztR0RWwmpDtg
+  Ff1XGMmvc+FVEB8dRLKFWSt/E1eIhsK2CRnaR8uotKW/A/gosao0E3mnIygcyLB4
+  fnOM3mnTF3CcRumxJvnTEmSDcoKSOpv0xbFgQkRAnVSn/gHkcbVw/ZnvZbXvvseh
+  7dstp2ljCs0queKU+Zo22TCzZqXX/AINs/j9Ll67NyIJev445l3+0TWB0kego5Fi
+  UVuSWkMAEQEAAYkEcgQYAQgAJhYhBHmK7GVOXBVCjI5C7qoW/LymIecBBQJjvbwT
+  AhsCBQkJZgGAAkAJEKoW/LymIecBwXQgBBkBCAAdFiEE6wr14plJaVlvmYc+cG5m
+  g2nAhekFAmO9vBMACgkQcG5mg2nAhenPURAAimI0EBZbqpyHpwpbeYq3Pygg1bdo
+  IlBQUVoutaN1lR7kqGXwYH+BP6G40x79LwVy/fWV8gO7cDX6D1yeKLNbhnJHPBus
+  FJDmzDPbjTlyWlDqJoWMiPqfAOc1A1cHodsUJDUlA01j1rPTho0S9iALX5R50Wa9
+  sIenpfe7RVunDwW5gw6y8me7ncl5trD0LM2HURw6nYnLrxePiTAF1MF90jrAhJDV
+  +krYqd6IFq5RHKveRtCuTvpL7DlgVCtntmbXLbVC/Fbv6w1xY3A7rXko/03nswAi
+  AXHKMP14UutVEcLYDBXbDrvgpb2p2ZUJnujs6cNyx9cOPeuxnke8+ACWvpnWxwjL
+  M5u8OckiqzRRobNxQZ1vLxzdovYTwTlUAG7QjIXVvOk9VNp/ERhh0eviZK+1/ezk
+  Z8nnPjx+elThQ+r16EM7hD0RDXtOR1VZ0R3OL64AlZYDZz1jEA3lrGhvbjSIfBQk
+  T6mxKUsCy3YbElcOyuohmPRgT1iVDIZ/1iPL0Q0HGm4+EsWCdH6fAPB7TlHD8z2D
+  7JCFLihFDWs5lrZyuWMO9nryZiVjJrOLPcStgJYVd/MhRHR4hC6g09bgo25RMJ6f
+  gyzL4vlEB7aSUih7yjgL9s5DKXP2J71dAhIlF8nnM403R2xEeHyivnyeR/9Ifn7M
+  PJvUMUuoG+ZANSMkrw//XA31o//TVk9WsLD1Edxt5XZCoR+fS+Vz8ScLwP1d/vQE
+  OW/EWzeMRG15C0td1lfHvwPKvf2MN+WLenp9TGZ7A1kEHIpjKvY51AIkX2kW5QLu
+  Y3LBb+HGiZ6j7AaU4uYR3kS1+L79v4kyvhhBOgx/8V+b3+2pQIsVOp79ySGvVwpL
+  FJ2QUgO15hnlQJrFLRYa0PISKrSWf35KXAy04mjqCYqIGkLsz2qQCY2lGcD5k05z
+  bBC4TvxwVxv0ftl2C5Bd0ydl/2YM7GfLrmZmTijK067t4OO+2SROT2oYPDsMtZ6S
+  E8vUXvoGpQ8tf5Nkrn2t0zDG3UDtgZY5UVYnZI+xT7WHsCz//8fY3QMvPXAuc33T
+  vVdiSfP0aBnZXj6oGs/4Vl1Dmm62XLr13+SMoepMWg2Vt7C8jqKOmhFmSOWyOmRH
+  UZJR7nKvTpFnL8atSyFDa4o1bk2U3alOscWS8u8xJ/iMcoONEBhItft6olpMVdzP
+  CTrnCAqMjTSPlQU/9EGtp21KQBed2KdAsJBYuPgwaQeyNIvQEOXmINavl58VD72Y
+  2T4TFEY8dUiExAYpSodbwBL2fr8DJxOX68WH6e3fF7HwX8LRBjZq0XUwh0KxgHN+
+  b9gGXBvgWnJr4NSQGGPiSQVNNHt2ZcBAClYhm+9eC5/VwB+Etg4+1wDmggztiqE=
+  =FdUF
+  -----END PGP PUBLIC KEY BLOCK-----
diff --git a/playbooks/roles/hashivault/handlers/main.yaml b/playbooks/roles/hashivault/handlers/main.yaml
new file mode 100644
index 0000000..638a157
--- /dev/null
+++ b/playbooks/roles/hashivault/handlers/main.yaml
@@ -0,0 +1,12 @@
+- name: Reload Vault
+  ansible.builtin.service:
+    name: "vault"
+    enabled: true
+    state: "reloaded"
+
+- name: Restart Vault
+  ansible.builtin.systemd:
+    name: "vault"
+    enabled: true
+    state: "restarted"
+    daemon_reload: true
diff --git a/playbooks/roles/hashivault/tasks/Debian.yaml b/playbooks/roles/hashivault/tasks/Debian.yaml
new file mode 100644
index 0000000..adec99b
--- /dev/null
+++ b/playbooks/roles/hashivault/tasks/Debian.yaml
@@ -0,0 +1,21 @@
+---
+- name: Add PPA GPG key
+  become: yes
+  apt_key:
+    data: "{{ hashicorp_gpg_key }}"
+
+- name: Add hashicorp apt repo
+  become: yes
+  template:
+    dest: /etc/apt/sources.list.d/hashicorp.list
+    group: root
+    mode: 0644
+    owner: root
+    src: sources.list.j2
+
+- name: Install vault
+  become: yes
+  apt:
+    name: vault
+    state: present
+    update_cache: yes
diff --git a/playbooks/roles/hashivault/tasks/configure_plugins.yaml b/playbooks/roles/hashivault/tasks/configure_plugins.yaml
new file mode 100644
index 0000000..ceedd10
--- /dev/null
+++ b/playbooks/roles/hashivault/tasks/configure_plugins.yaml
@@ -0,0 +1,41 @@
+- name: Register the plugin {{ plugin.name }} with sha256 {{ plugin.sha256 }}
+  become: true
+  command: "vault plugin register -sha256={{ plugin.sha256 }} {{ plugin.type }} {{ plugin.name }}"
+  when:
+    - "plugin.name is defined"
+    - "plugin.type is defined"
+    - "plugin.sha256 is defined"
+
+- name: Enable the plugin on defined paths
+  become: true
+  command: "vault secrets enable -path={{ path }} {{ plugin.name }}"
+  loop: "{{ plugin.paths }}"
+  loop_control:
+    loop_var: "path"
+  when:
+    - "plugin.paths is defined"
+    - "plugin.name is defined"
+  ignore_errors: true
+
+- name: Reload the plugin in case of a new version
+  become: true
+  command: "vault plugin reload -plugin {{ plugin.name }}"
+  when:
+    - "plugin.name is defined"
+
+- name: Get loaded plugin info
+  become: true
+  command: "vault plugin info {{ plugin.type }} {{ plugin.name }}"
+  when:
+    - "plugin.type is defined"
+    - "plugin.name is defined"
+  register: plugin_info
+
+- name: Loaded plugin info
+  become: true
+  debug:
+    msg: "{{ plugin_info.stdout_lines }}"
+  when:
+    - "plugin_info.stdout_lines is defined"
+
+
diff --git a/playbooks/roles/hashivault/tasks/main.yaml b/playbooks/roles/hashivault/tasks/main.yaml
new file mode 100644
index 0000000..d3006df
--- /dev/null
+++ b/playbooks/roles/hashivault/tasks/main.yaml
@@ -0,0 +1,104 @@
+---
+- name: Include variables
+  include_vars: "{{ lookup('first_found', params) }}"
+  vars:
+    params:
+      files: "{{ distro_lookup_path }}"
+      paths:
+        - "vars"
+
+- name: Include OS-specific tasks
+  include_tasks: "{{ lookup('first_found', file_list) }}"
+  vars:
+    file_list: "{{ distro_lookup_path }}"
+
+- name: Install required packages
+  become: true
+  ansible.builtin.package:
+    state: present
+    name: "{{ item }}"
+  loop:
+    - "{{ packages }}"
+  when: "ansible_facts.pkg_mgr != 'atomic_container'"
+  register: task_result
+  until: task_result is success
+  retries: 5
+
+- name: Create storage
+  ansible.builtin.file:
+    state: "directory"
+    path: "{{ vault_storage_path }}"
+    owner: "{{ vault_owner }}"
+    group: "{{ vault_group }}"
+    mode: 0755
+
+- name: Create plugins dir
+  ansible.builtin.file:
+    state: "directory"
+    path: "{{ vault_plugin_path }}"
+    owner: "{{ vault_owner }}"
+    group: "{{ vault_group }}"
+    mode: 0755
+
+- name: Install plugins
+  ansible.builtin.unarchive:
+    src: "{{ zj_plugin.url }}"
+    dest: "{{ vault_plugin_path }}"
+    owner: "{{ vault_owner }}"
+    group: "{{ vault_group }}"
+    remote_src: "yes"
+  loop:
+    "{{ vault_plugins }}"
+  loop_control:
+    loop_var: "zj_plugin"
+
+- name: Write config
+  ansible.builtin.template:
+    dest: /etc/vault.d/vault.hcl
+    src: vault.hcl.j2
+    mode: 0644
+    owner: "{{ vault_owner }}"
+    group: "{{ vault_group }}"
+  notify:
+    - Restart Vault
+
+- name: Write SSL Cert file
+  ansible.builtin.copy:
+    dest: "{{ vault_tls_cert_file }}"
+    content: "{{ vault_tls_cert_content }}"
+    owner: "{{ vault_owner }}"
+    group: "{{ vault_group }}"
+    mode: 0644
+  when: "vault_tls_cert_content is defined and vault_tls_cert_content|length>0"
+
+- name: Write SSL Key file
+  ansible.builtin.copy:
+    dest: "{{ vault_tls_key_file }}"
+    content: "{{ vault_tls_key_content }}"
+    owner: "{{ vault_owner }}"
+    group: "{{ vault_group }}"
+    mode: 0600
+  when: "vault_tls_key_content is defined and vault_tls_key_content|length>0"
+
+- name: Correct certs ownership
+  ansible.builtin.file:
+    path: "/etc/ssl/{{ inventory_hostname }}/vault"
+    state: "directory"
+    owner: "{{ vault_owner }}"
+    group: "{{ vault_group }}"
+    recurse: true
+
+- name: Enable vault service
+  ansible.builtin.service:
+    name: "vault"
+    enabled: "true"
+    state: "started"
+
+# - name: Renew transit token
+#   include_tasks: "renew_transit_token.yaml"
+#   vars:
+#     vault_addr: "{{ vault_seal_transit_address }}"
+#     transit_token: "{{ vault_seal_transit_token }}"
+#   when:
+#     - "vault_seal_transit_address is defined and vault_seal_transit_address | length > 0"
+#     - "vault_seal_transit_token is defined and vault_seal_transit_token | length > 0"
diff --git a/playbooks/roles/hashivault/tasks/renew_transit_token.yaml b/playbooks/roles/hashivault/tasks/renew_transit_token.yaml
new file mode 100644
index 0000000..b90459f
--- /dev/null
+++ b/playbooks/roles/hashivault/tasks/renew_transit_token.yaml
@@ -0,0 +1,22 @@
+- name: Get vault token from master vault
+  set_fact:
+    vault_token: "{{ lookup('community.hashi_vault.hashi_vault', 'auth/token/lookup-self url={{ vault_addr }} auth_method=token').id }}"
+  delegate_to: "bridge.eco.tsi-dev.otc-service.com"
+  when:
+    - "vault_addr is defined and vault_addr | length > 0"
+    - "transit_token is defined and transit_token | length > 0"
+
+- name: Renew transit token
+  ansible.builtin.uri:
+    url: "{{ vault_addr }}/v1/auth/token/renew"
+    headers:
+      X-Vault-Token: "{{ vault_token }}"
+    method: "POST"
+    body_format: "json"
+    body:
+      token: "{{ transit_token }}"
+    status_code: [200, 201, 202, 204]
+  delegate_to: "bridge.eco.tsi-dev.otc-service.com"
+  when:
+    - "vault_addr is defined and vault_addr | length > 0"
+    - "transit_token is defined and transit_token | length > 0"
\ No newline at end of file
diff --git a/playbooks/roles/hashivault/templates/sources.list.j2 b/playbooks/roles/hashivault/templates/sources.list.j2
new file mode 100644
index 0000000..8e9db02
--- /dev/null
+++ b/playbooks/roles/hashivault/templates/sources.list.j2
@@ -0,0 +1 @@
+deb https://apt.releases.hashicorp.com {{ ansible_lsb.codename }} main
diff --git a/playbooks/roles/hashivault/templates/vault.hcl.j2 b/playbooks/roles/hashivault/templates/vault.hcl.j2
new file mode 100644
index 0000000..cf764b2
--- /dev/null
+++ b/playbooks/roles/hashivault/templates/vault.hcl.j2
@@ -0,0 +1,54 @@
+# Full configuration options can be found at https://www.vaultproject.io/docs/configuration
+
+ui = {{ vault_enable_ui | ternary('true', 'false') }}
+api_addr = "https://{{ inventory_hostname }}:8200"
+cluster_addr = "https://{{ inventory_hostname }}:8201"
+
+disable_mlock = true
+
+plugin_directory = "{{ vault_plugin_path }}"
+
+storage "raft" {
+  path = "{{ vault_storage_path }}"
+{% if vault_node_id is defined %}
+  node_id = "{{ vault_node_id }}"
+{% else %}
+  node_id = "vault{{ vault_id | default((inventory_hostname | regex_replace('vault(\\d+)\..*$', '\\1') | int)) }}"
+{% endif %}
+
+# Auto-join cluster
+{% for host in play_hosts -%}
+{% if host != inventory_hostname -%}
+  retry_join {
+    leader_api_address = "https://{{ hostvars[host]['ansible_host'] }}:8200"
+  }
+{% endif -%}
+{% endfor -%}
+}
+
+# HTTPS listener
+listener "tcp" {
+  address = "0.0.0.0:8200"
+  tls_cert_file = "{{ vault_tls_cert_file }}"
+  tls_key_file = "{{ vault_tls_key_file }}"
+{% if vault_proxy_protocol_behavior is defined %}
+  proxy_protocol_behavior = "{{ vault_proxy_protocol_behavior }}"
+{% endif %}
+{% if vault_proxy_protocol_authorized_addrs is defined %}
+  proxy_protocol_authorized_addrs = "{{ vault_proxy_protocol_authorized_addrs }}"
+{% endif %}
+{% if vault_x_forwarded_for_authorized_addrs is defined %}
+  x_forwarded_for_authorized_addrs = "{{ vault_x_forwarded_for_authorized_addrs }}"
+{% endif %}
+}
+
+{% if vault_seal_transit_address is defined %}
+seal "transit" {
+  address = "{{ vault_seal_transit_address }}"
+  token = "{{ vault_seal_transit_token }}"
+  disable_renewal = "false"
+  // Key configuration
+  key_name = "unseal_key"
+  mount_path = "transit/"
+}
+{% endif %}
diff --git a/playbooks/roles/hashivault/vars/Debian.yaml b/playbooks/roles/hashivault/vars/Debian.yaml
new file mode 100644
index 0000000..cfeef17
--- /dev/null
+++ b/playbooks/roles/hashivault/vars/Debian.yaml
@@ -0,0 +1,3 @@
+---
+packages:
+  - unzip
diff --git a/playbooks/roles/import-gpg-key/README.rst b/playbooks/roles/import-gpg-key/README.rst
new file mode 100644
index 0000000..a78dc29
--- /dev/null
+++ b/playbooks/roles/import-gpg-key/README.rst
@@ -0,0 +1,14 @@
+import-gpg-key
+
+Import a gpg ASCII armored public key to the local keystore.
+
+**Role Variables**
+
+.. zuul:rolevar:: gpg_key_id
+
+   The ID of the key to import.  If it already exists, the key is not
+   imported.
+
+.. zuul:rolevar:: gpg_key_asc
+
+   The path of the ASCII armored GPG key to import
diff --git a/playbooks/roles/import-gpg-key/tasks/main.yaml b/playbooks/roles/import-gpg-key/tasks/main.yaml
new file mode 100644
index 0000000..d9d19ec
--- /dev/null
+++ b/playbooks/roles/import-gpg-key/tasks/main.yaml
@@ -0,0 +1,30 @@
+- name: Check for input args
+  assert:
+    that: gpg_key_id is defined
+
+- name: Check for existing key
+  command: |
+    gpg --list-keys {{ gpg_key_id }}
+  ignore_errors: true
+  register: _key_exists
+
+- name: Install key
+  when: _key_exists.rc != 0
+  block:
+
+    - name: Look for gpg key
+      lineinfile:
+        path: '{{ gpg_key_asc }}'
+        regexp: '^-----BEGIN PGP PUBLIC KEY BLOCK-----$'
+        state: absent
+      check_mode: yes
+      changed_when: false
+      register: _out
+
+    - name: Check keyfile
+      assert:
+        that: _out.found
+
+    - name: Import key
+      command: |
+        gpg --import {{ gpg_key_asc }}
diff --git a/playbooks/roles/install-ansible-roles/README.rst b/playbooks/roles/install-ansible-roles/README.rst
new file mode 100644
index 0000000..f03fabe
--- /dev/null
+++ b/playbooks/roles/install-ansible-roles/README.rst
@@ -0,0 +1 @@
+Install additional Ansible roles from git repos
diff --git a/playbooks/roles/install-ansible-roles/defaults/main.yaml b/playbooks/roles/install-ansible-roles/defaults/main.yaml
new file mode 100644
index 0000000..8bfa95e
--- /dev/null
+++ b/playbooks/roles/install-ansible-roles/defaults/main.yaml
@@ -0,0 +1,4 @@
+# Roles to install from source
+ansible_roles: []
+ansible_role_src_root: /home/zuul
+ansible_role_dest: /etc/ansible/roles
diff --git a/playbooks/roles/install-ansible-roles/tasks/main.yaml b/playbooks/roles/install-ansible-roles/tasks/main.yaml
new file mode 100644
index 0000000..1662a5e
--- /dev/null
+++ b/playbooks/roles/install-ansible-roles/tasks/main.yaml
@@ -0,0 +1,8 @@
+- name: Install ansible roles to /etc/ansible/roles
+  git:
+    repo: '{{ ansible_role_src_root }}/src/opendev.org/opendev/ansible-role-{{ ansible_role }}'
+    dest: '/etc/ansible/roles/{{ ansible_role }}'
+    force: yes
+  loop: '{{ ansible_roles }}'
+  loop_control:
+    loop_var: ansible_role
diff --git a/playbooks/roles/install-ansible/README.rst b/playbooks/roles/install-ansible/README.rst
new file mode 100644
index 0000000..04351c8
--- /dev/null
+++ b/playbooks/roles/install-ansible/README.rst
@@ -0,0 +1,44 @@
+Install and configure Ansible on a host via pip
+
+This will install ansible into a virtualenv at ``/usr/ansible-venv``
+
+**Role Variables**
+
+.. zuul:rolevar:: install_ansible_requirements
+   :default: [ansible, openstacksdk]
+
+   The packages to install into the virtualenv.  A list in Python
+   ``requirements.txt`` format.
+
+.. zuul:rolevar:: install_ansible_collections
+   :default: undefined
+
+   A list of Ansible collections to install.  In the format
+
+   ..
+      - namespace:
+        name:
+        repo:
+
+.. zuul:rolevar:: install_ansible_ara_enable
+   :default: false
+
+   Whether or not to install the ARA Records Ansible callback plugin
+   into Ansible.  If using the default
+   ``install_ansible_requirements``, the ARA package will be installed too.
+
+.. zuul:rolevar:: install_ansible_ara_config
+
+   A dictionary of configuration keys and their values for ARA's Ansible plugins.
+
+   Default configuration keys:
+
+   - ``api_client: offline`` (can be ``http`` for sending to remote API servers)
+   - ``api_server: http://127.0.0.1:8000`` (has no effect when using offline)
+   - ``api_username: null`` (if required, an API username)
+   - ``api_password: null`` (if required, an API password)
+   - ``api_timeout: 30`` (the timeout on http requests)
+
+   For a list of available configuration options, see the `ARA documentation`_
+
+.. _ARA documentation: https://ara.readthedocs.io/en/latest/ara-plugin-configuration.html
diff --git a/playbooks/roles/install-ansible/defaults/main.yaml b/playbooks/roles/install-ansible/defaults/main.yaml
new file mode 100644
index 0000000..a26df83
--- /dev/null
+++ b/playbooks/roles/install-ansible/defaults/main.yaml
@@ -0,0 +1,11 @@
+# Whether or not to install ARA
+install_ansible_ara_enable: false
+
+# See available configuration options for the callback in the ARA docs:
+# https://ara.readthedocs.io/en/latest/ara-plugin-configuration.html
+install_ansible_ara_config:
+  api_client: offline
+#  api_server: http://127.0.0.1:8000
+#  api_username: null
+#  api_password: null
+#  api_timeout: 30
diff --git a/playbooks/roles/install-ansible/files/disable-ansible b/playbooks/roles/install-ansible/files/disable-ansible
new file mode 100644
index 0000000..b60610c
--- /dev/null
+++ b/playbooks/roles/install-ansible/files/disable-ansible
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+# This is a simple script but ensures we don't mis-type the
+# file name.
+
+DISABLE_FILE=/home/zuul/DISABLE-ANSIBLE
+
+if [[ "$#" -lt 1 ]]; then
+    echo "Usage: disable-ansible COMMENT"
+    echo
+    echo "Please supply a comment to be placed in the disable file for the benefit"
+    echo "of other admins. Include your name. Don't forget to #status log."
+    exit 1
+fi
+
+date -Iseconds >> $DISABLE_FILE
+echo "$*" >> $DISABLE_FILE
+
+echo "Current value of DISABLE-ANSIBLE":
+cat $DISABLE_FILE
diff --git a/playbooks/roles/install-ansible/files/inventory b/playbooks/roles/install-ansible/files/inventory
new file mode 120000
index 0000000..2054721
--- /dev/null
+++ b/playbooks/roles/install-ansible/files/inventory
@@ -0,0 +1 @@
+../../../../inventory
\ No newline at end of file
diff --git a/playbooks/roles/install-ansible/files/inventory_plugins/test-fixtures/groups.yaml b/playbooks/roles/install-ansible/files/inventory_plugins/test-fixtures/groups.yaml
new file mode 120000
index 0000000..df70ccf
--- /dev/null
+++ b/playbooks/roles/install-ansible/files/inventory_plugins/test-fixtures/groups.yaml
@@ -0,0 +1 @@
+../../../../../../inventory/service/groups.yaml
\ No newline at end of file
diff --git a/playbooks/roles/install-ansible/files/inventory_plugins/test-fixtures/results.yaml b/playbooks/roles/install-ansible/files/inventory_plugins/test-fixtures/results.yaml
new file mode 100644
index 0000000..431566a
--- /dev/null
+++ b/playbooks/roles/install-ansible/files/inventory_plugins/test-fixtures/results.yaml
@@ -0,0 +1,26 @@
+# This is a dictionary of hosts, with a list of what
+# groups they should be in
+
+results:
+
+  bridge.eco.tsi-dev.otc-service.com:
+    - prod_bastion
+    - apimon-clouds
+    - apimon
+    - bastion
+    - cloud-launcher
+    - database-launcher
+    - control-plane-clouds
+    - grafana-controller
+    - k8s-controller
+    - otc
+    - alerta
+    - grafana
+    - zuul
+    - nodepool
+    - ssl_certs
+    - carbonapi
+    - matrix
+    - vault-controller
+    - keycloak-controller
+    - vault
diff --git a/playbooks/roles/install-ansible/files/inventory_plugins/test_yamlgroup.py b/playbooks/roles/install-ansible/files/inventory_plugins/test_yamlgroup.py
new file mode 100644
index 0000000..f6b1873
--- /dev/null
+++ b/playbooks/roles/install-ansible/files/inventory_plugins/test_yamlgroup.py
@@ -0,0 +1,98 @@
+# Copyright (C) 2018 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+# implied.
+#
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Make coding more python3-ish
+from __future__ import (absolute_import, division, print_function)
+__metaclass__ = type
+
+import os
+import testtools
+import mock
+import yaml
+
+from ansible.inventory.host import Host
+
+from .yamlgroup import InventoryModule
+
+FIXTURE_DIR = os.path.join(os.path.dirname(__file__),
+                           'test-fixtures')
+
+class TestInventory(testtools.TestCase):
+
+    def test_yaml_groups(self):
+        inventory = mock.MagicMock()
+
+        results_yaml = os.path.join(FIXTURE_DIR, 'results.yaml')
+        with open(results_yaml) as f:
+            results = yaml.load(f, Loader=yaml.FullLoader)
+            results = results['results']
+
+        # Build the inventory list.  This is a list of Host objects
+        # which are the keys in our results.yaml file, keyed by the
+        # hostname (... I dunno, we're just tricking the inventory and
+        # making something it's happy with)
+        inventory.hosts = {}
+        for host in results.keys():
+            inventory.hosts[host] = Host(name=host)
+
+        # Fake out add_group() and add_child() for the inventory
+        # object to store our groups.
+        inventory.groups = {}
+        def add_group(group):
+            inventory.groups[group] = []
+        inventory.add_group = add_group
+        def add_child(group, host):
+            inventory.groups[group].append(host)
+        inventory.add_child = add_child
+
+        # Not really needed for unit test
+        loader = mock.MagicMock()
+
+        # This is all setup by ansible magic plugin/inventory stuff in
+        # real-life, which gets the groups into the config object
+        path = os.path.join(FIXTURE_DIR, 'groups.yaml')
+        with open(path) as f:
+            config_groups = yaml.load(f, Loader=yaml.FullLoader)
+            config_groups = config_groups['groups']
+        im = InventoryModule()
+        im._read_config_data = mock.MagicMock()
+        im._load_name = 'yamlgroup'
+        im.get_option = mock.MagicMock(side_effect=lambda x: config_groups)
+
+        im.parse(inventory, loader, path)
+
+        # Now, for every host we have in our results, we should be
+        # able to see it listed as a child of the groups it wants to
+        # be in
+        for host, groups in results.items():
+            for group in groups:
+                message = (
+                    "The inventory does not have a group <%s>;"
+                    " host <%s> should be in this group" % (group, host))
+                self.assertEqual(group in inventory.groups, True, message)
+
+                message = (
+                    "The group <%s> does not contain host <%s>"
+                    % (group, host))
+                self.assertIn(host, inventory.groups[group], message)
+
+            # Additionally, check this host hasn't managed to get into
+            # any groups it is *not* supposed to be in
+            for inventory_group, inventory_hosts in inventory.groups.items():
+                if host in inventory_hosts:
+                    message = ("The host <%s> should not be in group <%s>"
+                               % (host, inventory_group))
+                    self.assertTrue(inventory_group in groups, message)
diff --git a/playbooks/roles/install-ansible/files/inventory_plugins/yamlgroup.py b/playbooks/roles/install-ansible/files/inventory_plugins/yamlgroup.py
new file mode 100644
index 0000000..1b47315
--- /dev/null
+++ b/playbooks/roles/install-ansible/files/inventory_plugins/yamlgroup.py
@@ -0,0 +1,99 @@
+# Copyright (c) 2018 Red Hat, Inc.
+# GNU General Public License v3.0+ (see COPYING.GPL or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+import fnmatch
+import os
+import re
+
+from ansible.parsing.yaml.objects import AnsibleMapping
+from ansible.plugins.inventory import BaseFileInventoryPlugin
+
+DOCUMENTATION = '''
+    inventory: yamlgroup
+    version_added: "2.8"
+    short_description: Simple group manipulation for existing hosts
+    description:
+        - YAML based inventory that only manipulates group membership for
+          existing hosts.
+    options:
+      yaml_extensions:
+        description: list of 'valid' extensions for files containing YAML
+        type: list
+        default: ['.yaml', '.yml', '.json']
+        env:
+          - name: ANSIBLE_YAML_FILENAME_EXT
+          - name: ANSIBLE_INVENTORY_PLUGIN_EXTS
+        ini:
+          - key: yaml_valid_extensions
+            section: defaults
+          - section: inventory_plugin_yaml
+            key: yaml_valid_extensions
+      groups:
+        description: |
+          dict with group name as key.  If the list item starts with a
+          ^ it will be considered a regex pattern (i.e. passed to
+          re.match), otherwise it is considered a fnmatch pattern.
+        type: dict
+        default: {}
+'''
+EXAMPLES = '''
+plugin: yamlgroup
+groups:
+  amazing:
+    - fullhost.example.com
+    - amazing*
+    - ^regex.*pattern
+'''
+
+
+class InventoryModule(BaseFileInventoryPlugin):
+
+    NAME = 'yamlgroup'
+
+    def verify_file(self, path):
+
+        valid = False
+        if super(InventoryModule, self).verify_file(path):
+            file_name, ext = os.path.splitext(path)
+            if ext in self.get_option('yaml_extensions'):
+                valid = True
+        return valid
+
+    def parse(self, inventory, loader, path, cache=True):
+        ''' parses the inventory file '''
+
+        super(InventoryModule, self).parse(inventory, loader, path)
+
+        self._read_config_data(path)
+
+        groups = self.get_option('groups')
+
+        found_groups = {}
+
+        for group, hosts in groups.items():
+            if not isinstance(hosts, list):
+                hosts = [hosts]
+            for candidate in hosts:
+                # If someone accidentally puts a dict into the list of hosts,
+                # the errors are ... obscure at best and the entire inventory
+                # will fail.  Grab the dict key in those cases rather than
+                # failing.
+                if isinstance(candidate, AnsibleMapping):
+                    candidate = list(candidate.keys())[0]
+
+                # Starts with ^ means it is already a regex.
+                # Otherwise it's a fnmatch compatible string; use its
+                # helper to turn that into a regex so we have a common
+                # match below.
+                if not candidate.startswith('^'):
+                    candidate = fnmatch.translate(candidate)
+
+                for existing in self.inventory.hosts.values():
+                    if re.match(candidate, existing.get_name()):
+                        found_groups.setdefault(group, [])
+                        found_groups[group].append(existing)
+
+        for group, hosts in found_groups.items():
+            self.inventory.add_group(group)
+            for host in hosts:
+                self.inventory.add_child(group, host.get_name())
diff --git a/playbooks/roles/install-ansible/files/lookup_plugins/vault_cloud_config.py b/playbooks/roles/install-ansible/files/lookup_plugins/vault_cloud_config.py
new file mode 100644
index 0000000..c12b41d
--- /dev/null
+++ b/playbooks/roles/install-ansible/files/lookup_plugins/vault_cloud_config.py
@@ -0,0 +1,114 @@
+# GNU General Public License v3.0+ (see COPYING.GPL or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import (absolute_import, division, print_function)
+__metaclass__ = type
+
+DOCUMENTATION = """
+    lookup: vault_cloud_config
+    short_description: Get cloud config
+    extends_documentation_fragment:
+      - community.hashi_vault.connection
+      - community.hashi_vault.connection.plugins
+      - community.hashi_vault.auth
+      - community.hashi_vault.auth.plugins
+    options:
+      user_path:
+        description: Path to the user name
+        required: True
+      project_name:
+        description: Cloud project name to use
+      project_id:
+        description: Cloud project id to use
+      domain_id:
+        description: Cloud domain id to use
+      domain_name:
+        description: Cloud domain name to use
+      _terms:
+        description: |
+          Additional options to be set on the cloud config (root level).
+ type: str +""" + +from ansible.errors import AnsibleError +from ansible.utils.display import Display + +from ansible_collections.community.hashi_vault.plugins.plugin_utils._hashi_vault_lookup_base import HashiVaultLookupBase +from ansible_collections.community.hashi_vault.plugins.module_utils._hashi_vault_common import HashiVaultValueError + +display = Display() + +HAS_HVAC = False +try: + import hvac + HAS_HVAC = True +except ImportError: + HAS_HVAC = False + + +class LookupModule(HashiVaultLookupBase): + def run(self, terms, variables=None, **kwargs): + if not HAS_HVAC: + raise AnsibleError("Please pip install hvac to use the vault_cloud_config lookup.") + + ret = [] + + self.set_options(direct=kwargs, var_options=variables) + self.process_deprecations() + + self.connection_options.process_connection_options() + client_args = self.connection_options.get_hvac_connection_options() + client = self.helper.get_vault_client(**client_args) + + try: + self.authenticator.validate() + self.authenticator.authenticate(client) + except (NotImplementedError, HashiVaultValueError) as e: + raise AnsibleError(e) + + user_path = self.get_option('user_path') + auth_attrs = ['auth_url', 'user_domain_id', 'user_domain_name', + 'username', 'password', 'token'] + cloud_config = {'auth': {}} + user_data = None + try: + data = client.read(user_path) + try: + if not data: + raise AnsibleError( + f"No data at path '{user_path}'.") + + # sentinel field checks (a KeyError here means the secret + # does not have the expected KV v2 layout) + check_dd = data['data']['data'] + check_md = data['data']['metadata'] + # unwrap nested data + auth = data['data']['data'] + for k, v in auth.items(): + # We want only supported keys to remain under auth. 
All + # rest are placed as root props + if k in auth_attrs: + cloud_config['auth'][k] = v + else: + cloud_config[k] = v + except KeyError: + pass + except hvac.exceptions.Forbidden: + raise AnsibleError( + f"Forbidden: Permission Denied to path '{user_path}'.") + + # We allow asking for specific project/domain + for opt in ['domain_id', 'domain_name', 'project_name', 'project_id']: + opt_val = self.get_option(opt) + if opt_val: + cloud_config['auth'][opt] = opt_val + # Add all other options passed as terms + for term in terms: + try: + dt = term.split('=') + val = dt[1] + cloud_config[dt[0]] = val if not val.isnumeric() else int(val) + except IndexError: + pass + + ret.append(cloud_config) + + return ret diff --git a/playbooks/roles/install-ansible/files/roles b/playbooks/roles/install-ansible/files/roles new file mode 120000 index 0000000..4bdbcba --- /dev/null +++ b/playbooks/roles/install-ansible/files/roles @@ -0,0 +1 @@ +../../../../roles \ No newline at end of file diff --git a/playbooks/roles/install-ansible/files/roles.yaml b/playbooks/roles/install-ansible/files/roles.yaml new file mode 120000 index 0000000..469bdeb --- /dev/null +++ b/playbooks/roles/install-ansible/files/roles.yaml @@ -0,0 +1 @@ +../../../../roles.yaml \ No newline at end of file diff --git a/playbooks/roles/install-ansible/tasks/install_ansible_collection.yaml b/playbooks/roles/install-ansible/tasks/install_ansible_collection.yaml new file mode 100644 index 0000000..d2fa1de --- /dev/null +++ b/playbooks/roles/install-ansible/tasks/install_ansible_collection.yaml @@ -0,0 +1,11 @@ +- name: "Ensure {{ item.namespace }} top-level directory" + file: + path: "/root/.ansible/collections/ansible_collections/{{ item.namespace }}/" + state: "directory" + mode: "0755" + +- name: "Link in {{ item.namespace }}/{{ item.name }} collection" + file: + src: "/home/zuul/src/{{ item.git_provider | default('github.com') }}/{{ item.repo }}" + dest: "/root/.ansible/collections/ansible_collections/{{ 
item.namespace }}/{{ item.name }}" + state: "link" diff --git a/playbooks/roles/install-ansible/tasks/install_ansible_stub.yaml b/playbooks/roles/install-ansible/tasks/install_ansible_stub.yaml new file mode 100644 index 0000000..18723ff --- /dev/null +++ b/playbooks/roles/install-ansible/tasks/install_ansible_stub.yaml @@ -0,0 +1,22 @@ +- name: Create build dir + tempfile: + state: directory + suffix: fake-ansible + register: _build_dir + +- name: Install fake setup.py + blockinfile: + create: yes + path: '{{ _build_dir.path }}/setup.py' + block: | + import setuptools + + setuptools.setup(name="ansible", + url="http://fake.com", + maintainer="nobody@nobody.com", + version="2.9.0", + description="Fake ansible") + +- name: Install stub ansible + pip: + name: '{{ _build_dir.path }}' diff --git a/playbooks/roles/install-ansible/tasks/install_ara.yaml b/playbooks/roles/install-ansible/tasks/install_ara.yaml new file mode 100644 index 0000000..08bc34e --- /dev/null +++ b/playbooks/roles/install-ansible/tasks/install_ara.yaml @@ -0,0 +1,38 @@ +- name: Install pymysql for ara + pip: + name: pymysql + state: present + when: '"pymysql" in install_ansible_ara_config["database"]' + +# If install_ansible_ara_version is not defined it should be "latest" +- name: Set ara default version to latest + set_fact: + install_ansible_ara_version: latest + when: install_ansible_ara_version is not defined + +# If a version is not explicitly set we want to make sure to +# completely omit the version argument to pip, as it will be coming +# from the long-form install_ansible_ara_name variable. Additionally, +# if the version is the special value "latest", then we also want to +# omit any version number, but also set the package state to "latest". 
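The version/state handling described in the comment above can be modelled in a few lines of Python. This is only an illustrative sketch of the tasks' combined logic (the function name and return shape are my own), not code shipped by the role:

```python
def ara_pip_args(version=None, name="ara"):
    """Model the ARA install tasks: an undefined version defaults to
    "latest"; "latest" (or empty) omits the version pin, and "latest"
    additionally sets the pip package state so the newest release wins."""
    if version is None:
        version = "latest"  # 'Set ara default version to latest'
    args = {"name": name}
    if version not in ("", "latest"):
        args["version"] = version  # explicit pin passed through to pip
    if version == "latest":
        args["state"] = "latest"  # pip upgrades to the newest release
    return args
```

With no version set this yields `{"name": "ara", "state": "latest"}`, while an explicit pin such as `ara_pip_args("1.6.1")` yields `{"name": "ara", "version": "1.6.1"}` with the state argument omitted.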
+- name: Set ARA version for installation + set_fact: + _install_ansible_ara_version: '{{ install_ansible_ara_version }}' + when: install_ansible_ara_version not in ('', 'latest') + +- name: Set ARA package state for installation + set_fact: + _install_ansible_ara_state: latest + when: install_ansible_ara_version == 'latest' + +- name: Install ARA + pip: + name: '{{ install_ansible_ara_name | default("ara") }}' + version: '{{ _install_ansible_ara_version | default(omit) }}' + state: '{{ _install_ansible_ara_state | default(omit) }}' + +# For configuring the callback plugins location in ansible.cfg +- name: Get ARA's location for callback plugins + command: python3 -m ara.setup.callback_plugins + register: install_ansible_ara_callback_plugins + changed_when: false diff --git a/playbooks/roles/install-ansible/tasks/main.yaml b/playbooks/roles/install-ansible/tasks/main.yaml new file mode 100644 index 0000000..dabfba0 --- /dev/null +++ b/playbooks/roles/install-ansible/tasks/main.yaml @@ -0,0 +1,187 @@ +# The -devel job in particular already defines +# install_ansible_requirements in the job definition to pick +# main/devel branch repos checked out from Zuul +- name: Set default ansible install requirements + when: install_ansible_requirements is not defined + block: + - name: Set defaults + set_fact: + _install_ansible_requirements: + - 'ansible<8' + - 'openstacksdk' + + - name: Add ARA to defaults if enabled + when: install_ansible_ara_enable + set_fact: + _install_ansible_requirements: '{{ _install_ansible_requirements + ["ara[server]"] }}' + + - name: Set variable + # NOTE(ianw) the block when: statement is calculated for each task + # -- keep this last! + set_fact: + install_ansible_requirements: '{{ _install_ansible_requirements }}' + +# NOTE(ianw) 2022-10-26 : ARM64 generally needs this because upstream +# projects don't always ship arm64 wheels. 
But x86 may need it when +# we have a fresh host with a more recent Python too +- name: Ensure required Ansible build packages + apt: + update_cache: yes + name: + - libffi-dev + - libssl-dev + - build-essential + - python3-dev + +- name: Install python-venv package + package: + name: + - python3-venv + state: present + +- name: Create venv + include_role: + name: create-venv + vars: + create_venv_path: '/usr/ansible-venv' + +# The bootstrap job runs this all the time, and we'd like to mostly +# skip trying to update the venv, while still picking up changed pins +# such as 'ansible<8' when the requirements change. + +# With modern Ansible most of the fun stuff is in collections. Clone +# our required collections here. Note this is only for our testing of +# the devel branch; if we're using a release we use the Ansible +# distribution package which bundles all this. +- name: Install Ansible collections + include_tasks: install_ansible_collection.yaml + when: install_ansible_collections is defined + loop: '{{ install_ansible_collections }}' + +- name: Symlink Ansible globally + file: + src: '{{ item.src }}' + dest: '{{ item.dest }}' + state: link + loop: + - { src: '/usr/ansible-venv/bin/ansible-playbook', dest: '/usr/local/bin/ansible-playbook' } + - { src: '/usr/ansible-venv/bin/ansible', dest: '/usr/local/bin/ansible' } + +- name: Ansible version check + command: 'ansible-playbook --version' + register: _ansible_version_check + +- name: Sanity check Ansible version + debug: + msg: '{{ _ansible_version_check.stdout }}' + +- name: Ansible cmd version check + command: 'ansible --version' + register: _ansible_version_check + +- name: Sanity check Ansible version + debug: + msg: '{{ _ansible_version_check.stdout }}' + +# This registered variable is templated into ansible.cfg below +# to setup the callback plugins for ARA +- name: Get ARA's location for callback plugins + when: install_ansible_ara_enable + command: /usr/ansible-venv/bin/python3 -m ara.setup.callback_plugins + register: install_ansible_ara_callback_plugins + 
changed_when: false + +# For use by k8s_raw ansible module +# - name: Install openshift client +# pip: +# name: 'openshift' +# TODO(corvus): re-add this once kubernetes 9.0.0 is released + +- name: Ensure /etc/ansible and /etc/ansible/hosts + file: + state: directory + path: /etc/ansible/hosts + +- name: Ensure /etc/ansible/inventory_plugins + file: + state: directory + path: /etc/ansible/inventory_plugins + +- name: Ensure /var/cache/ansible + file: + state: directory + path: /var/cache/ansible + owner: root + group: root + mode: 0770 + +- name: Ensure ansible log dir is writable + file: + path: /var/log/ansible + state: directory + owner: root + group: root + mode: 0775 + +- name: Copy ansible.cfg in to place + template: + src: ansible.cfg.j2 + dest: /etc/ansible/ansible.cfg + +- name: Remove old inventory files + file: + path: '/etc/ansible/hosts/{{ item }}' + state: absent + loop: + - openstack.yaml + - groups.yaml + +- name: Copy system-config roles into place + copy: + src: roles/ + dest: /etc/ansible/roles + +- name: Copy disable-ansible utility script in place + copy: + src: disable-ansible + dest: /usr/local/bin/disable-ansible + mode: 0755 + owner: root + group: root + +- name: Copy yamlgroup inventory in place + copy: + src: inventory_plugins/yamlgroup.py + dest: /etc/ansible/inventory_plugins/yamlgroup.py + +- name: Setup log rotation + include_role: + name: logrotate + vars: + logrotate_file_name: /var/log/ansible/ansible.log + +- name: Verify ansible install + command: ansible --version diff --git a/playbooks/roles/install-ansible/templates/ansible.cfg.j2 b/playbooks/roles/install-ansible/templates/ansible.cfg.j2 new file mode 100644 index 0000000..da3fc3b --- /dev/null +++ b/playbooks/roles/install-ansible/templates/ansible.cfg.j2 @@ -0,0 +1,43 @@ +[defaults] 
+inventory=/home/zuul/src/github.com/opentelekomcloud-infra/system-config/inventory/base/hosts.yaml,/home/zuul/src/github.com/opentelekomcloud-infra/system-config/inventory/service/groups.yaml,/home/zuul/src/gitlab/ecosystem/system-config/inventory/base/hosts.yaml,/home/zuul/src/gitlab/ecosystem/system-config/inventory/service/groups.yaml,/etc/ansible/hosts/emergency.yaml +library=/usr/share/ansible +log_path=/var/log/ansible/ansible.log +inventory_plugins=/etc/ansible/inventory_plugins +lookup_plugins=/etc/ansible/plugins/lookup +roles_path=/etc/ansible/roles +retry_files_enabled=False +retry_files_save_path= +gathering=smart +fact_caching=jsonfile +fact_caching_connection=/var/cache/ansible/facts +# Squash warning about ansible auto-transforming group names with -'s in them +force_valid_group_names=ignore +callback_enabled=profile_tasks, timer +{% if install_ansible_ara_enable %} +callback_plugins=/etc/ansible/callback_plugins:{{ install_ansible_ara_callback_plugins.stdout }} +{% else %} +callback_plugins=/etc/ansible/callback_plugins +{% endif %} +stdout_callback=debug +pipelining = True + +[inventory] +enable_plugins=yaml,yamlgroup,advanced_host_list,ini +cache=True +cache_plugin=jsonfile +cache_connection=/var/cache/ansible/inventory +any_unparsed_is_failed=True + +[ssh_connection] +retries=3 +pipelining = True + +[callback_profile_tasks] +task_output_limit = 50 + +{% if install_ansible_ara_enable %} +[ara] +{% for k, v in install_ansible_ara_config.items() %} +{{ k }}={{ v }} +{% endfor %} +{% endif %} diff --git a/playbooks/roles/install-ansible/templates/requirements.txt.j2 b/playbooks/roles/install-ansible/templates/requirements.txt.j2 new file mode 100644 index 0000000..0c83457 --- /dev/null +++ b/playbooks/roles/install-ansible/templates/requirements.txt.j2 @@ -0,0 +1,4 @@ +# Update timestamp: {{ _date.stdout }} +{% for r in install_ansible_requirements %} +{{ r }} +{% endfor %} diff --git a/playbooks/roles/install-apt-repo/README.rst 
b/playbooks/roles/install-apt-repo/README.rst new file mode 100644 index 0000000..19c5d76 --- /dev/null +++ b/playbooks/roles/install-apt-repo/README.rst @@ -0,0 +1,15 @@ +Install an APT repo + +**Role Variables** + +.. zuul:rolevar:: repo_name + + The name of the repo (used for filenames). + +.. zuul:rolevar:: repo_key + + The contents of the GPG key, ASCII armored. + +.. zuul:rolevar:: repo_content + + The file content for the sources list. diff --git a/playbooks/roles/install-apt-repo/tasks/main.yaml b/playbooks/roles/install-apt-repo/tasks/main.yaml new file mode 100644 index 0000000..50852fc --- /dev/null +++ b/playbooks/roles/install-apt-repo/tasks/main.yaml @@ -0,0 +1,20 @@ +- name: Add apt repo key + become: yes + apt_key: + data: "{{ repo_key }}" + keyring: "/etc/apt/trusted.gpg.d/{{ repo_name }}.gpg" + +- name: Add apt repo + become: yes + copy: + dest: "/etc/apt/sources.list.d/{{ repo_name }}.list" + group: root + owner: root + mode: 0644 + content: "{{ repo_content }}" + register: apt_repo + +- name: Run the equivalent of "apt-get update" as a separate step + apt: + update_cache: yes + when: apt_repo is changed diff --git a/playbooks/roles/install-docker/README.rst b/playbooks/roles/install-docker/README.rst new file mode 100644 index 0000000..071934d --- /dev/null +++ b/playbooks/roles/install-docker/README.rst @@ -0,0 +1,27 @@ +An ansible role to install docker in the OpenStack infra production environment + +This also installs a log redirector for syslog ```docker-`` tags. For +most containers, they can be setup in the compose file with a section +such as: + +.. code-block:: yaml + + logging: + driver: syslog + options: + tag: docker- + +**Role Variables** + +.. zuul:rolevar:: use_upstream_docker + :default: True + + By default this role adds repositories to install docker from upstream + docker. Set this to False to use the docker that comes with the distro. + +.. 
zuul:rolevar:: docker_update_channel + :default: stable + + Which update channel to use for upstream docker. The two choices are + ``stable``, which is the default and updates quarterly, and ``edge`` + which updates monthly. diff --git a/playbooks/roles/install-docker/defaults/main.yaml b/playbooks/roles/install-docker/defaults/main.yaml new file mode 100644 index 0000000..29cd6ac --- /dev/null +++ b/playbooks/roles/install-docker/defaults/main.yaml @@ -0,0 +1,129 @@ +use_upstream_docker: True +docker_update_channel: stable +debian_gpg_key: | + -----BEGIN PGP PUBLIC KEY BLOCK----- + + mQINBFit2ioBEADhWpZ8/wvZ6hUTiXOwQHXMAlaFHcPH9hAtr4F1y2+OYdbtMuth + lqqwp028AqyY+PRfVMtSYMbjuQuu5byyKR01BbqYhuS3jtqQmljZ/bJvXqnmiVXh + 38UuLa+z077PxyxQhu5BbqntTPQMfiyqEiU+BKbq2WmANUKQf+1AmZY/IruOXbnq + L4C1+gJ8vfmXQt99npCaxEjaNRVYfOS8QcixNzHUYnb6emjlANyEVlZzeqo7XKl7 + UrwV5inawTSzWNvtjEjj4nJL8NsLwscpLPQUhTQ+7BbQXAwAmeHCUTQIvvWXqw0N + cmhh4HgeQscQHYgOJjjDVfoY5MucvglbIgCqfzAHW9jxmRL4qbMZj+b1XoePEtht + ku4bIQN1X5P07fNWzlgaRL5Z4POXDDZTlIQ/El58j9kp4bnWRCJW0lya+f8ocodo + vZZ+Doi+fy4D5ZGrL4XEcIQP/Lv5uFyf+kQtl/94VFYVJOleAv8W92KdgDkhTcTD + G7c0tIkVEKNUq48b3aQ64NOZQW7fVjfoKwEZdOqPE72Pa45jrZzvUFxSpdiNk2tZ + XYukHjlxxEgBdC/J3cMMNRE1F4NCA3ApfV1Y7/hTeOnmDuDYwr9/obA8t016Yljj + q5rdkywPf4JF8mXUW5eCN1vAFHxeg9ZWemhBtQmGxXnw9M+z6hWwc6ahmwARAQAB + tCtEb2NrZXIgUmVsZWFzZSAoQ0UgZGViKSA8ZG9ja2VyQGRvY2tlci5jb20+iQI3 + BBMBCgAhBQJYrefAAhsvBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEI2BgDwO + v82IsskP/iQZo68flDQmNvn8X5XTd6RRaUH33kXYXquT6NkHJciS7E2gTJmqvMqd + tI4mNYHCSEYxI5qrcYV5YqX9P6+Ko+vozo4nseUQLPH/ATQ4qL0Zok+1jkag3Lgk + jonyUf9bwtWxFp05HC3GMHPhhcUSexCxQLQvnFWXD2sWLKivHp2fT8QbRGeZ+d3m + 6fqcd5Fu7pxsqm0EUDK5NL+nPIgYhN+auTrhgzhK1CShfGccM/wfRlei9Utz6p9P + XRKIlWnXtT4qNGZNTN0tR+NLG/6Bqd8OYBaFAUcue/w1VW6JQ2VGYZHnZu9S8LMc + FYBa5Ig9PxwGQOgq6RDKDbV+PqTQT5EFMeR1mrjckk4DQJjbxeMZbiNMG5kGECA8 + g383P3elhn03WGbEEa4MNc3Z4+7c236QI3xWJfNPdUbXRaAwhy/6rTSFbzwKB0Jm + ebwzQfwjQY6f55MiI/RqDCyuPj3r3jyVRkK86pQKBAJwFHyqj9KaKXMZjfVnowLh + 
9svIGfNbGHpucATqREvUHuQbNnqkCx8VVhtYkhDb9fEP2xBu5VvHbR+3nfVhMut5 + G34Ct5RS7Jt6LIfFdtcn8CaSas/l1HbiGeRgc70X/9aYx/V/CEJv0lIe8gP6uDoW + FPIZ7d6vH+Vro6xuWEGiuMaiznap2KhZmpkgfupyFmplh0s6knymuQINBFit2ioB + EADneL9S9m4vhU3blaRjVUUyJ7b/qTjcSylvCH5XUE6R2k+ckEZjfAMZPLpO+/tF + M2JIJMD4SifKuS3xck9KtZGCufGmcwiLQRzeHF7vJUKrLD5RTkNi23ydvWZgPjtx + Q+DTT1Zcn7BrQFY6FgnRoUVIxwtdw1bMY/89rsFgS5wwuMESd3Q2RYgb7EOFOpnu + w6da7WakWf4IhnF5nsNYGDVaIHzpiqCl+uTbf1epCjrOlIzkZ3Z3Yk5CM/TiFzPk + z2lLz89cpD8U+NtCsfagWWfjd2U3jDapgH+7nQnCEWpROtzaKHG6lA3pXdix5zG8 + eRc6/0IbUSWvfjKxLLPfNeCS2pCL3IeEI5nothEEYdQH6szpLog79xB9dVnJyKJb + VfxXnseoYqVrRz2VVbUI5Blwm6B40E3eGVfUQWiux54DspyVMMk41Mx7QJ3iynIa + 1N4ZAqVMAEruyXTRTxc9XW0tYhDMA/1GYvz0EmFpm8LzTHA6sFVtPm/ZlNCX6P1X + zJwrv7DSQKD6GGlBQUX+OeEJ8tTkkf8QTJSPUdh8P8YxDFS5EOGAvhhpMBYD42kQ + pqXjEC+XcycTvGI7impgv9PDY1RCC1zkBjKPa120rNhv/hkVk/YhuGoajoHyy4h7 + ZQopdcMtpN2dgmhEegny9JCSwxfQmQ0zK0g7m6SHiKMwjwARAQABiQQ+BBgBCAAJ + BQJYrdoqAhsCAikJEI2BgDwOv82IwV0gBBkBCAAGBQJYrdoqAAoJEH6gqcPyc/zY + 1WAP/2wJ+R0gE6qsce3rjaIz58PJmc8goKrir5hnElWhPgbq7cYIsW5qiFyLhkdp + YcMmhD9mRiPpQn6Ya2w3e3B8zfIVKipbMBnke/ytZ9M7qHmDCcjoiSmwEXN3wKYI + mD9VHONsl/CG1rU9Isw1jtB5g1YxuBA7M/m36XN6x2u+NtNMDB9P56yc4gfsZVES + KA9v+yY2/l45L8d/WUkUi0YXomn6hyBGI7JrBLq0CX37GEYP6O9rrKipfz73XfO7 + JIGzOKZlljb/D9RX/g7nRbCn+3EtH7xnk+TK/50euEKw8SMUg147sJTcpQmv6UzZ + cM4JgL0HbHVCojV4C/plELwMddALOFeYQzTif6sMRPf+3DSj8frbInjChC3yOLy0 + 6br92KFom17EIj2CAcoeq7UPhi2oouYBwPxh5ytdehJkoo+sN7RIWua6P2WSmon5 + U888cSylXC0+ADFdgLX9K2zrDVYUG1vo8CX0vzxFBaHwN6Px26fhIT1/hYUHQR1z + VfNDcyQmXqkOnZvvoMfz/Q0s9BhFJ/zU6AgQbIZE/hm1spsfgvtsD1frZfygXJ9f + irP+MSAI80xHSf91qSRZOj4Pl3ZJNbq4yYxv0b1pkMqeGdjdCYhLU+LZ4wbQmpCk + SVe2prlLureigXtmZfkqevRz7FrIZiu9ky8wnCAPwC7/zmS18rgP/17bOtL4/iIz + QhxAAoAMWVrGyJivSkjhSGx1uCojsWfsTAm11P7jsruIL61ZzMUVE2aM3Pmj5G+W + 9AcZ58Em+1WsVnAXdUR//bMmhyr8wL/G1YO1V3JEJTRdxsSxdYa4deGBBY/Adpsw + 24jxhOJR+lsJpqIUeb999+R8euDhRHG9eFO7DRu6weatUJ6suupoDTRWtr/4yGqe + 
dKxV3qQhNLSnaAzqW/1nA3iUB4k7kCaKZxhdhDbClf9P37qaRW467BLCVO/coL3y + Vm50dwdrNtKpMBh3ZpbB1uJvgi9mXtyBOMJ3v8RZeDzFiG8HdCtg9RvIt/AIFoHR + H3S+U79NT6i0KPzLImDfs8T7RlpyuMc4Ufs8ggyg9v3Ae6cN3eQyxcK3w0cbBwsh + /nQNfsA6uu+9H7NhbehBMhYnpNZyrHzCmzyXkauwRAqoCbGCNykTRwsur9gS41TQ + M8ssD1jFheOJf3hODnkKU+HKjvMROl1DK7zdmLdNzA1cvtZH/nCC9KPj1z8QC47S + xx+dTZSx4ONAhwbS/LN3PoKtn8LPjY9NP9uDWI+TWYquS2U+KHDrBDlsgozDbs/O + jCxcpDzNmXpWQHEtHU7649OXHP7UeNST1mCUCH5qdank0V1iejF6/CfTFU4MfcrG + YT90qFF93M3v01BbxP+EIY2/9tiIPbrd + =0YYh + -----END PGP PUBLIC KEY BLOCK----- + +ubuntu_gpg_key: | + -----BEGIN PGP PUBLIC KEY BLOCK----- + + mQINBFit2ioBEADhWpZ8/wvZ6hUTiXOwQHXMAlaFHcPH9hAtr4F1y2+OYdbtMuth + lqqwp028AqyY+PRfVMtSYMbjuQuu5byyKR01BbqYhuS3jtqQmljZ/bJvXqnmiVXh + 38UuLa+z077PxyxQhu5BbqntTPQMfiyqEiU+BKbq2WmANUKQf+1AmZY/IruOXbnq + L4C1+gJ8vfmXQt99npCaxEjaNRVYfOS8QcixNzHUYnb6emjlANyEVlZzeqo7XKl7 + UrwV5inawTSzWNvtjEjj4nJL8NsLwscpLPQUhTQ+7BbQXAwAmeHCUTQIvvWXqw0N + cmhh4HgeQscQHYgOJjjDVfoY5MucvglbIgCqfzAHW9jxmRL4qbMZj+b1XoePEtht + ku4bIQN1X5P07fNWzlgaRL5Z4POXDDZTlIQ/El58j9kp4bnWRCJW0lya+f8ocodo + vZZ+Doi+fy4D5ZGrL4XEcIQP/Lv5uFyf+kQtl/94VFYVJOleAv8W92KdgDkhTcTD + G7c0tIkVEKNUq48b3aQ64NOZQW7fVjfoKwEZdOqPE72Pa45jrZzvUFxSpdiNk2tZ + XYukHjlxxEgBdC/J3cMMNRE1F4NCA3ApfV1Y7/hTeOnmDuDYwr9/obA8t016Yljj + q5rdkywPf4JF8mXUW5eCN1vAFHxeg9ZWemhBtQmGxXnw9M+z6hWwc6ahmwARAQAB + tCtEb2NrZXIgUmVsZWFzZSAoQ0UgZGViKSA8ZG9ja2VyQGRvY2tlci5jb20+iQI3 + BBMBCgAhBQJYrefAAhsvBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEI2BgDwO + v82IsskP/iQZo68flDQmNvn8X5XTd6RRaUH33kXYXquT6NkHJciS7E2gTJmqvMqd + tI4mNYHCSEYxI5qrcYV5YqX9P6+Ko+vozo4nseUQLPH/ATQ4qL0Zok+1jkag3Lgk + jonyUf9bwtWxFp05HC3GMHPhhcUSexCxQLQvnFWXD2sWLKivHp2fT8QbRGeZ+d3m + 6fqcd5Fu7pxsqm0EUDK5NL+nPIgYhN+auTrhgzhK1CShfGccM/wfRlei9Utz6p9P + XRKIlWnXtT4qNGZNTN0tR+NLG/6Bqd8OYBaFAUcue/w1VW6JQ2VGYZHnZu9S8LMc + FYBa5Ig9PxwGQOgq6RDKDbV+PqTQT5EFMeR1mrjckk4DQJjbxeMZbiNMG5kGECA8 + g383P3elhn03WGbEEa4MNc3Z4+7c236QI3xWJfNPdUbXRaAwhy/6rTSFbzwKB0Jm + 
ebwzQfwjQY6f55MiI/RqDCyuPj3r3jyVRkK86pQKBAJwFHyqj9KaKXMZjfVnowLh + 9svIGfNbGHpucATqREvUHuQbNnqkCx8VVhtYkhDb9fEP2xBu5VvHbR+3nfVhMut5 + G34Ct5RS7Jt6LIfFdtcn8CaSas/l1HbiGeRgc70X/9aYx/V/CEJv0lIe8gP6uDoW + FPIZ7d6vH+Vro6xuWEGiuMaiznap2KhZmpkgfupyFmplh0s6knymuQINBFit2ioB + EADneL9S9m4vhU3blaRjVUUyJ7b/qTjcSylvCH5XUE6R2k+ckEZjfAMZPLpO+/tF + M2JIJMD4SifKuS3xck9KtZGCufGmcwiLQRzeHF7vJUKrLD5RTkNi23ydvWZgPjtx + Q+DTT1Zcn7BrQFY6FgnRoUVIxwtdw1bMY/89rsFgS5wwuMESd3Q2RYgb7EOFOpnu + w6da7WakWf4IhnF5nsNYGDVaIHzpiqCl+uTbf1epCjrOlIzkZ3Z3Yk5CM/TiFzPk + z2lLz89cpD8U+NtCsfagWWfjd2U3jDapgH+7nQnCEWpROtzaKHG6lA3pXdix5zG8 + eRc6/0IbUSWvfjKxLLPfNeCS2pCL3IeEI5nothEEYdQH6szpLog79xB9dVnJyKJb + VfxXnseoYqVrRz2VVbUI5Blwm6B40E3eGVfUQWiux54DspyVMMk41Mx7QJ3iynIa + 1N4ZAqVMAEruyXTRTxc9XW0tYhDMA/1GYvz0EmFpm8LzTHA6sFVtPm/ZlNCX6P1X + zJwrv7DSQKD6GGlBQUX+OeEJ8tTkkf8QTJSPUdh8P8YxDFS5EOGAvhhpMBYD42kQ + pqXjEC+XcycTvGI7impgv9PDY1RCC1zkBjKPa120rNhv/hkVk/YhuGoajoHyy4h7 + ZQopdcMtpN2dgmhEegny9JCSwxfQmQ0zK0g7m6SHiKMwjwARAQABiQQ+BBgBCAAJ + BQJYrdoqAhsCAikJEI2BgDwOv82IwV0gBBkBCAAGBQJYrdoqAAoJEH6gqcPyc/zY + 1WAP/2wJ+R0gE6qsce3rjaIz58PJmc8goKrir5hnElWhPgbq7cYIsW5qiFyLhkdp + YcMmhD9mRiPpQn6Ya2w3e3B8zfIVKipbMBnke/ytZ9M7qHmDCcjoiSmwEXN3wKYI + mD9VHONsl/CG1rU9Isw1jtB5g1YxuBA7M/m36XN6x2u+NtNMDB9P56yc4gfsZVES + KA9v+yY2/l45L8d/WUkUi0YXomn6hyBGI7JrBLq0CX37GEYP6O9rrKipfz73XfO7 + JIGzOKZlljb/D9RX/g7nRbCn+3EtH7xnk+TK/50euEKw8SMUg147sJTcpQmv6UzZ + cM4JgL0HbHVCojV4C/plELwMddALOFeYQzTif6sMRPf+3DSj8frbInjChC3yOLy0 + 6br92KFom17EIj2CAcoeq7UPhi2oouYBwPxh5ytdehJkoo+sN7RIWua6P2WSmon5 + U888cSylXC0+ADFdgLX9K2zrDVYUG1vo8CX0vzxFBaHwN6Px26fhIT1/hYUHQR1z + VfNDcyQmXqkOnZvvoMfz/Q0s9BhFJ/zU6AgQbIZE/hm1spsfgvtsD1frZfygXJ9f + irP+MSAI80xHSf91qSRZOj4Pl3ZJNbq4yYxv0b1pkMqeGdjdCYhLU+LZ4wbQmpCk + SVe2prlLureigXtmZfkqevRz7FrIZiu9ky8wnCAPwC7/zmS18rgP/17bOtL4/iIz + QhxAAoAMWVrGyJivSkjhSGx1uCojsWfsTAm11P7jsruIL61ZzMUVE2aM3Pmj5G+W + 9AcZ58Em+1WsVnAXdUR//bMmhyr8wL/G1YO1V3JEJTRdxsSxdYa4deGBBY/Adpsw + 
24jxhOJR+lsJpqIUeb999+R8euDhRHG9eFO7DRu6weatUJ6suupoDTRWtr/4yGqe + dKxV3qQhNLSnaAzqW/1nA3iUB4k7kCaKZxhdhDbClf9P37qaRW467BLCVO/coL3y + Vm50dwdrNtKpMBh3ZpbB1uJvgi9mXtyBOMJ3v8RZeDzFiG8HdCtg9RvIt/AIFoHR + H3S+U79NT6i0KPzLImDfs8T7RlpyuMc4Ufs8ggyg9v3Ae6cN3eQyxcK3w0cbBwsh + /nQNfsA6uu+9H7NhbehBMhYnpNZyrHzCmzyXkauwRAqoCbGCNykTRwsur9gS41TQ + M8ssD1jFheOJf3hODnkKU+HKjvMROl1DK7zdmLdNzA1cvtZH/nCC9KPj1z8QC47S + xx+dTZSx4ONAhwbS/LN3PoKtn8LPjY9NP9uDWI+TWYquS2U+KHDrBDlsgozDbs/O + jCxcpDzNmXpWQHEtHU7649OXHP7UeNST1mCUCH5qdank0V1iejF6/CfTFU4MfcrG + YT90qFF93M3v01BbxP+EIY2/9tiIPbrd + =0YYh + -----END PGP PUBLIC KEY BLOCK----- diff --git a/playbooks/roles/install-docker/files/10-docker.conf b/playbooks/roles/install-docker/files/10-docker.conf new file mode 100644 index 0000000..165dde5 --- /dev/null +++ b/playbooks/roles/install-docker/files/10-docker.conf @@ -0,0 +1,7 @@ +# Create a template for the target log file +$template CUSTOM_LOGS,"/var/log/containers/%programname%.log" + +if $programname startswith 'docker-' then { + ?CUSTOM_LOGS + stop +} diff --git a/playbooks/roles/install-docker/handlers/main.yaml b/playbooks/roles/install-docker/handlers/main.yaml new file mode 100644 index 0000000..a80bac5 --- /dev/null +++ b/playbooks/roles/install-docker/handlers/main.yaml @@ -0,0 +1,4 @@ +- name: Restart rsyslog + service: + name: rsyslog + state: restarted diff --git a/playbooks/roles/install-docker/tasks/distro.yaml b/playbooks/roles/install-docker/tasks/distro.yaml new file mode 100644 index 0000000..99fd589 --- /dev/null +++ b/playbooks/roles/install-docker/tasks/distro.yaml @@ -0,0 +1,5 @@ +- name: Install docker + become: yes + package: + name: docker.io + state: present diff --git a/playbooks/roles/install-docker/tasks/main.yaml b/playbooks/roles/install-docker/tasks/main.yaml new file mode 100644 index 0000000..dffbb5a --- /dev/null +++ b/playbooks/roles/install-docker/tasks/main.yaml @@ -0,0 +1,69 @@ +- name: Create docker directory + become: yes + file: + state: directory + 
path: /etc/docker + +- name: Install docker-ce from upstream + include_tasks: upstream.yaml + when: use_upstream_docker|bool + +- name: Install docker-engine from distro + include_tasks: distro.yaml + when: not use_upstream_docker|bool + +- name: reset ssh connection to pick up docker group + meta: reset_connection + +# We install docker-compose from pypi to get features like +# stop_grace_period. + +# On arm64 we need build-essential, python3-dev, libffi-dev, and libssl-dev +# because wheels don't exist for all the things on arm64 + +- name: Install arm64 dev packages + when: ansible_architecture == 'aarch64' + package: + name: + - build-essential + - python3-dev + - libffi-dev + - libssl-dev + state: present + +- name: ensure pip3 is installed + include_role: + name: pip3 + +- name: Install docker-compose + pip: + name: docker-compose + state: present + executable: pip3 + +- name: Install rsyslog redirector for container tags + copy: + src: '10-docker.conf' + dest: /etc/rsyslog.d/ + owner: root + group: root + mode: 0644 + notify: + - Restart rsyslog + +- name: Ensure rsyslog restarted now + meta: flush_handlers + +- name: Create container log directories + file: + state: directory + path: /var/log/containers/ + owner: syslog + group: adm + mode: 0775 + +- name: Install log rotation for docker files + include_role: + name: logrotate + vars: + logrotate_file_name: '/var/log/containers/*.log' diff --git a/playbooks/roles/install-docker/tasks/upstream.yaml b/playbooks/roles/install-docker/tasks/upstream.yaml new file mode 100644 index 0000000..7a59af5 --- /dev/null +++ b/playbooks/roles/install-docker/tasks/upstream.yaml @@ -0,0 +1,32 @@ +- name: Install pre-reqs + package: + name: "{{ item }}" + state: present + with_items: + - apt-transport-https + - ca-certificates + - curl + - software-properties-common + become: yes + +- name: Add docker GPG key + become: yes + apt_key: + data: "{{ debian_gpg_key }}" + +# TODO(mordred) We should add a proxy cache mirror for 
this +- name: Add docker apt repo + become: yes + template: + dest: /etc/apt/sources.list.d/docker.list + group: root + mode: 0644 + owner: root + src: sources.list.j2 + +- name: Install docker + become: yes + apt: + name: docker-ce + state: present + update_cache: yes diff --git a/playbooks/roles/install-docker/templates/sources.list.j2 b/playbooks/roles/install-docker/templates/sources.list.j2 new file mode 100644 index 0000000..c980ee0 --- /dev/null +++ b/playbooks/roles/install-docker/templates/sources.list.j2 @@ -0,0 +1,2 @@ +deb https://download.docker.com/linux/{{ ansible_facts.distribution|lower }} {{ +ansible_lsb.codename }} {{ docker_update_channel }} diff --git a/playbooks/roles/install-helm-chart/README.rst b/playbooks/roles/install-helm-chart/README.rst new file mode 100644 index 0000000..e69de29 diff --git a/playbooks/roles/install-helm-chart/defaults/main.yaml b/playbooks/roles/install-helm-chart/defaults/main.yaml new file mode 100644 index 0000000..8b13789 --- /dev/null +++ b/playbooks/roles/install-helm-chart/defaults/main.yaml @@ -0,0 +1 @@ + diff --git a/playbooks/roles/install-helm-chart/tasks/main.yaml b/playbooks/roles/install-helm-chart/tasks/main.yaml new file mode 100644 index 0000000..0305304 --- /dev/null +++ b/playbooks/roles/install-helm-chart/tasks/main.yaml @@ -0,0 +1,37 @@ +--- +- name: Add HELM chart repo {{ chart.repo_name }} + kubernetes.core.helm_repository: + state: present + repo_url: "{{ chart.repo_url }}" + name: "{{ chart.repo_name }}" + when: + - "chart.repo_url is defined" + - "chart.repo_name is defined" + +- set_fact: + values: "{{ lookup('template', chart.values_template ) | from_yaml }}" + when: "chart.values_template is defined" + +- name: Install HELM chart {{ chart.name }} on cluster {{ chart.context }} + kubernetes.core.helm: + state: present + wait: true + kube_context: "{{ chart.context }}" + name: "{{ chart.name }}" + chart_ref: "{{ chart.ref }}" + chart_version: "{{ chart.version }}" + release_namespace: "{{ 
chart.namespace }}" + create_namespace: true + update_repo_cache: true + values: "{{ values | default(omit) }}" + +- name: Apply post-config manifest on cluster {{ chart.context }} + kubernetes.core.k8s: + context: "{{ chart.context }}" + namespace: "{{ chart.namespace }}" + state: present + definition: "{{ lookup('template', chart.post_config_template ) | from_yaml_all | list }}" + when: "chart.post_config_template is defined" + register: result + until: result is not failed + retries: 5 diff --git a/playbooks/roles/install-helm/README.rst b/playbooks/roles/install-helm/README.rst new file mode 100644 index 0000000..e69de29 diff --git a/playbooks/roles/install-helm/defaults/main.yaml b/playbooks/roles/install-helm/defaults/main.yaml new file mode 100644 index 0000000..ae39254 --- /dev/null +++ b/playbooks/roles/install-helm/defaults/main.yaml @@ -0,0 +1,2 @@ +helm_url: https://get.helm.sh/helm-v3.11.2-linux-amd64.tar.gz +helm_checksum: sha256:781d826daec584f9d50a01f0f7dadfd25a3312217a14aa2fbb85107b014ac8ca diff --git a/playbooks/roles/install-helm/tasks/main.yaml b/playbooks/roles/install-helm/tasks/main.yaml new file mode 100644 index 0000000..b1238dc --- /dev/null +++ b/playbooks/roles/install-helm/tasks/main.yaml @@ -0,0 +1,28 @@ +- name: Make /opt/helm directory + file: + path: /opt/helm + state: directory + +- name: Download Helm tarball + get_url: + url: "{{ helm_url }}" + checksum: "{{ helm_checksum }}" + dest: /opt/helm.tgz + +- name: Extract Helm tarball + unarchive: + src: /opt/helm.tgz + dest: /opt/helm + +- name: Copy files into /usr/local + copy: + remote_src: true + src: "/opt/helm/linux-amd64/{{ item }}" + dest: "/usr/local/bin/{{ item }}" + mode: 0755 + loop: + - helm + +- name: Try to install Helm diff for better idempotency results + command: helm plugin install https://github.com/databus23/helm-diff + ignore_errors: yes diff --git a/playbooks/roles/install-kubectl/README.rst b/playbooks/roles/install-kubectl/README.rst new file mode 100644 index 
0000000..09bb4b2 --- /dev/null +++ b/playbooks/roles/install-kubectl/README.rst @@ -0,0 +1,5 @@ +Install kubectl + +**Role Variables** + +* None diff --git a/playbooks/roles/install-kubectl/defaults/main.yaml b/playbooks/roles/install-kubectl/defaults/main.yaml new file mode 100644 index 0000000..69e24bb --- /dev/null +++ b/playbooks/roles/install-kubectl/defaults/main.yaml @@ -0,0 +1,2 @@ +kubectl_openshift_url: https://github.com/openshift/okd/releases/download/4.7.0-0.okd-2021-03-28-152009/openshift-client-linux-4.7.0-0.okd-2021-03-28-152009.tar.gz +kubectl_openshift_checksum: sha256:9789b42cfceaae56e5506d97510bc5e2254676bf2bd3a4e1cab635fa5b17f963 diff --git a/playbooks/roles/install-kubectl/tasks/main.yaml b/playbooks/roles/install-kubectl/tasks/main.yaml new file mode 100644 index 0000000..0d7989b --- /dev/null +++ b/playbooks/roles/install-kubectl/tasks/main.yaml @@ -0,0 +1,25 @@ +- name: Make /opt/oc directory + file: + path: /opt/oc + state: directory + +- name: Download openshift client tarball + get_url: + url: "{{ kubectl_openshift_url }}" + checksum: "{{ kubectl_openshift_checksum }}" + dest: /opt/oc.tgz + +- name: Extract openshift client tarball + unarchive: + src: /opt/oc.tgz + dest: /opt/oc + +- name: Copy files into /usr/local + copy: + remote_src: true + src: "/opt/oc/{{ item }}" + dest: "/usr/local/bin/{{ item }}" + mode: 0755 + loop: + - oc + - kubectl diff --git a/playbooks/roles/install-osc-container/README.rst b/playbooks/roles/install-osc-container/README.rst new file mode 100644 index 0000000..5d2e77e --- /dev/null +++ b/playbooks/roles/install-osc-container/README.rst @@ -0,0 +1 @@ +An ansible role to install openstackclient container and helper script diff --git a/playbooks/roles/install-osc-container/files/openstack b/playbooks/roles/install-osc-container/files/openstack new file mode 100644 index 0000000..49dc6a1 --- /dev/null +++ b/playbooks/roles/install-osc-container/files/openstack @@ -0,0 +1,20 @@ +#!/bin/bash +# Copyright (c) 2020 
Red Hat, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or +# implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec docker run -it --rm \ + -v/etc/openstack:/etc/openstack \ + quay.io/opentelekomcloud/python-openstackclient \ + openstack "$@" diff --git a/playbooks/roles/install-osc-container/tasks/main.yaml b/playbooks/roles/install-osc-container/tasks/main.yaml new file mode 100644 index 0000000..f821e8c --- /dev/null +++ b/playbooks/roles/install-osc-container/tasks/main.yaml @@ -0,0 +1,8 @@ +- name: Add helper script + become: yes + copy: + dest: /usr/local/bin/openstack + group: root + mode: 0755 + owner: root + src: openstack diff --git a/playbooks/roles/install-podman/README.rst b/playbooks/roles/install-podman/README.rst new file mode 100644 index 0000000..7afdaaa --- /dev/null +++ b/playbooks/roles/install-podman/README.rst @@ -0,0 +1 @@ +An Ansible role to install podman in the OpenDev production environment diff --git a/playbooks/roles/install-podman/defaults/main.yaml b/playbooks/roles/install-podman/defaults/main.yaml new file mode 100644 index 0000000..b255555 --- /dev/null +++ b/playbooks/roles/install-podman/defaults/main.yaml @@ -0,0 +1,29 @@ +projectatomic_gpg_key: | + -----BEGIN PGP PUBLIC KEY BLOCK----- + + xsFNBFlRJjABEADuE3ZLY/2W++bPsxtcaoi7VaNnkvsXuVYbbHalEh/YwKFVsDTo + PQpuw1UlPpmVTwT3ufWfv2v42eZiiWMZaKG9/aWF/TeIdH5+3anfVi+X+tuIW9sv + GKTHZdtDqd7fIhtY6AuNQ/D629TJxLvafZ5MoGeyxjsebt5dOvOrl0SHpwR75uPP + aCXTWrokhH7W2BbJQUB+47k62BMd03EKe8stz9FzUxptROFJJ2bITijJlDXNfSbV + 
bwCiyREIkzXS6ZdWliJAqencOIZ4UbUax+5BT8SRbSLtr/c4YxvARilpSVCkxo8/ + EkPHBGygmgfw0kRPSGtLL7IqfWip9mFObji2geoU3A8gV/i3s9Ccc9GPKApX8r7b + QFs1tIlgUJKPqVwB2FAh+Xrqlsy/+8r95jL2gfRptSw7u8OP4AySj5WVm7cCEQ69 + aLyemCsf+v72bFOUXuYQ22Kr3yqz2O/1IsG/0Usr4riTdG65Aq6gnq4KRHMNgXu8 + 7fC9omoy3sKHvzeAJsw/eC9chYNwO8pv8KRIvpDSGL5L7Ems8mq2C5xMyzSVegTr + AvXu7nJoZWVBFRluh42bZa9QesX9MzzfOQ+G3085aW8BE++lhtX5QOkfRd74E49H + 1I2piAq/aE8P9jUHr60Po1C1Tw9iXeEaULLKut8eTMLkQ/02DXhBfq0I5QARAQAB + zSBMYXVuY2hwYWQgUFBBIGZvciBQcm9qZWN0IEF0b21pY8LBeAQTAQIAIgUCWVEm + MAIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AACgkQi+zxY3rYx50HLw/5Ad6k + EHf2uT4owvzu393S/bUR6VVwCWYMbg14XgphxnoOfrHZWUjbrETTURyd1UexoHt7 + ZDtMCVmzeY0jpvMb1W3WDebFVo+wR4CI15sPjyycsOxWTviD743wxaPCL1s009co + CzWg5AgP88B0D353Y39meC07BBgOJgIfk1OkFdeRjqHfAtucT99NrCuKr/bbBwDn + 0E+wWaJoIbQvBzsPIFzMWWQ6RcnrZtyQv35epo+VBmW3VEIkorv1VoStF0RjvJM+ + cMW/ogZsIEZk0IUREOtrtTKUXVrMw1hZ9IGYZRpbJ2g670UGuNjW/vo3rRCRSDaF + 6Txp5Pn6ZLTgQWsWMw/6M6ooFIEpz3rhYmQSJLNmUN6SgKeWGVmOrQlg4f7YM75o + UEw56GKQWl9FAthO0qH0qF1OMfUKp/Tv2OSV/FNZsokf6alWXOB6Bzj6gYmmGXIv + MfFW5fZ1cuu5/0ULDckxWhVQ1ywLHREEoBQ6oKYONwUjSdWcM+VsKCEFeCqsNwak + qweP8C0fooycfiEZuncc/9ZujgkQ2p7xXTlv3t2SPF9h43xHs3515VS/OTJPGW59 + 98AqllpfqGxggYs5cwi2LO3xwvHyPoTqj3hcl1dRMspZINRsIo4VC8bSrCOqbjDc + CD2WFOo2c4mwTDmJpz0PLK87ev/WZ8K0OEflTfc= + =DzDk + -----END PGP PUBLIC KEY BLOCK----- diff --git a/playbooks/roles/install-podman/tasks/main.yaml b/playbooks/roles/install-podman/tasks/main.yaml new file mode 100644 index 0000000..8318382 --- /dev/null +++ b/playbooks/roles/install-podman/tasks/main.yaml @@ -0,0 +1,20 @@ +- name: Add PPA GPG key + become: yes + apt_key: + data: "{{ projectatomic_gpg_key }}" + +- name: Add projectatomic apt repo + become: yes + template: + dest: /etc/apt/sources.list.d/projectatomic.list + group: root + mode: 0644 + owner: root + src: sources.list.j2 + +- name: Install podman + become: yes + apt: + name: podman + state: present + update_cache: yes diff --git 
a/playbooks/roles/install-podman/templates/sources.list.j2 b/playbooks/roles/install-podman/templates/sources.list.j2 new file mode 100644 index 0000000..cc249ac --- /dev/null +++ b/playbooks/roles/install-podman/templates/sources.list.j2 @@ -0,0 +1 @@ +deb http://ppa.launchpad.net/projectatomic/ppa/ubuntu {{ ansible_lsb.codename }} main diff --git a/playbooks/roles/iptables/README.rst b/playbooks/roles/iptables/README.rst new file mode 100644 index 0000000..b7368cd --- /dev/null +++ b/playbooks/roles/iptables/README.rst @@ -0,0 +1,63 @@ +Install and configure iptables + +**Role Variables** + +.. zuul:rolevar:: iptables_allowed_hosts + :default: [] + + A list of dictionaries, each item in the list is a rule to add for + a host/port combination. The format of the dictionary is: + + .. zuul:rolevar:: hostname + + The hostname to allow. It will automatically be resolved, and + the inventory IP address will be added to the firewall. + + .. zuul:rolevar:: protocol + + One of "tcp" or "udp". + + .. zuul:rolevar:: port + + The port number. + +.. zuul:rolevar:: iptables_allowed_groups + :default: [] + + A list of dictionaries, each item in the list is a rule to add for + a host/port combination. The format of the dictionary is: + + .. zuul:rolevar:: group + + The ansible inventory group to add. Every host in the group will + be added to the firewall. + + .. zuul:rolevar:: protocol + + One of "tcp" or "udp". + + .. zuul:rolevar:: port + + The port number. + +.. zuul:rolevar:: iptables_public_tcp_ports + :default: [] + + A list of public TCP ports to open. + +.. zuul:rolevar:: iptables_public_udp_ports + :default: [] + + A list of public UDP ports to open. + +.. zuul:rolevar:: iptables_rules_v4 + :default: [] + + A list of iptables v4 rules. Each item is a string containing the + iptables command line options for the rule. + +.. zuul:rolevar:: iptables_rules_v6 + :default: [] + + A list of iptables v6 rules. 
Each item is a string containing the + iptables command line options for the rule. diff --git a/playbooks/roles/iptables/defaults/main.yaml b/playbooks/roles/iptables/defaults/main.yaml new file mode 100644 index 0000000..8752607 --- /dev/null +++ b/playbooks/roles/iptables/defaults/main.yaml @@ -0,0 +1,8 @@ +iptables_allowed_hosts: [] +iptables_allowed_groups: [] +iptables_public_ports: [] +iptables_public_tcp_ports: '{{ iptables_public_ports }}' +iptables_public_udp_ports: '{{ iptables_public_ports }}' +iptables_rules: [] +iptables_rules_v4: '{{ iptables_rules }}' +iptables_rules_v6: '{{ iptables_rules }}' diff --git a/playbooks/roles/iptables/handlers/main.yaml b/playbooks/roles/iptables/handlers/main.yaml new file mode 100644 index 0000000..09d10fa --- /dev/null +++ b/playbooks/roles/iptables/handlers/main.yaml @@ -0,0 +1,20 @@ +- name: Reload iptables (Debian) + command: '{{ reload_command }}' + when: + - not ansible_facts.is_chroot + - ansible_facts.os_family == 'Debian' + listen: "Reload iptables" + +- name: Reload iptables (RedHat) + command: 'systemctl reload iptables' + when: + - not ansible_facts.is_chroot + - ansible_facts.os_family == 'RedHat' + listen: "Reload iptables" + +- name: Reload ip6tables (RedHat) + command: 'systemctl reload ip6tables' + when: + - not ansible_facts.is_chroot + - ansible_facts.os_family == 'RedHat' + listen: "Reload iptables" \ No newline at end of file diff --git a/playbooks/roles/iptables/tasks/RedHat.yaml b/playbooks/roles/iptables/tasks/RedHat.yaml new file mode 100644 index 0000000..426e766 --- /dev/null +++ b/playbooks/roles/iptables/tasks/RedHat.yaml @@ -0,0 +1,11 @@ +- name: Disable firewalld + service: + name: firewalld + enabled: no + state: stopped + failed_when: false + +- name: Ensure firewalld is removed + package: + name: firewalld + state: absent diff --git a/playbooks/roles/iptables/tasks/main.yaml b/playbooks/roles/iptables/tasks/main.yaml new file mode 100644 index 0000000..64925e0 --- /dev/null +++ 
b/playbooks/roles/iptables/tasks/main.yaml @@ -0,0 +1,51 @@ +- name: Include OS-specific variables + include_vars: "{{ lookup('first_found', params) }}" + vars: + params: + files: "{{ distro_lookup_path }}" + paths: + - 'vars' + +- name: Install iptables + package: + name: '{{ package_name }}' + state: present + +- name: Ensure iptables rules directory + file: + state: directory + path: '{{ rules_dir }}' + +- name: Install IPv4 rules files + template: + src: rules.v4.j2 + dest: '{{ ipv4_rules }}' + owner: root + group: root + mode: 0640 + setype: '{{ setype | default(omit) }}' + notify: + - Reload iptables + +- name: Install IPv6 rules files + template: + src: rules.v6.j2 + dest: '{{ ipv6_rules }}' + owner: root + group: root + mode: 0640 + setype: '{{ setype | default(omit) }}' + notify: + - Reload iptables + +- name: Include OS specific tasks + include_tasks: "{{ item }}" + vars: + params: + files: "{{ distro_lookup_path }}" + loop: "{{ query('first_found', params, errors='ignore') }}" + +- name: Enable iptables service + service: + name: '{{ service_name }}' + enabled: true diff --git a/playbooks/roles/iptables/templates/rules.v4.j2 b/playbooks/roles/iptables/templates/rules.v4.j2 new file mode 100644 index 0000000..0b3c3f2 --- /dev/null +++ b/playbooks/roles/iptables/templates/rules.v4.j2 @@ -0,0 +1,38 @@ +*filter +:INPUT ACCEPT [0:0] +:FORWARD DROP [0:0] +:OUTPUT ACCEPT [0:0] +:openstack-INPUT - [0:0] +-A INPUT -j openstack-INPUT +-A openstack-INPUT -i lo -j ACCEPT +-A openstack-INPUT -p icmp --icmp-type any -j ACCEPT +#-A openstack-INPUT -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT +-A openstack-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT +# SSH from anywhere +-A openstack-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT +# Public TCP ports +{% for port in iptables_public_tcp_ports -%} +-A openstack-INPUT -m state --state NEW -m tcp -p tcp --dport {{ port }} -j ACCEPT +{% endfor -%} +# Public UDP ports +{% for port in 
iptables_public_udp_ports -%} +-A openstack-INPUT -m udp -p udp --dport {{ port }} -j ACCEPT +{% endfor -%} +# Per-host rules +{% for rule in iptables_rules_v4 -%} +-A openstack-INPUT {{ rule }} +{% endfor -%} +{% for host in iptables_allowed_hosts -%} +{% for addr in host.hostname | dns_a -%} +-A openstack-INPUT {% if host.protocol == 'tcp' %}-m state --state NEW {% endif %} -m {{ host.protocol }} -p {{ host.protocol }} -s {{ addr }} --dport {{ host.port }} -j ACCEPT +{% endfor -%} +{% endfor -%} +{% for group in iptables_allowed_groups -%} +{% for addr in groups.get(group.group) | map('extract', hostvars, 'public_v4') -%} +{% if addr -%} +-A openstack-INPUT {% if group.protocol == 'tcp' %}-m state --state NEW {% endif %} -m {{ group.protocol }} -p {{ group.protocol }} -s {{ addr }} --dport {{ group.port }} -j ACCEPT +{% endif -%} +{% endfor -%} +{% endfor -%} +-A openstack-INPUT -j REJECT --reject-with icmp-host-prohibited +COMMIT diff --git a/playbooks/roles/iptables/templates/rules.v6.j2 b/playbooks/roles/iptables/templates/rules.v6.j2 new file mode 100644 index 0000000..d5a792b --- /dev/null +++ b/playbooks/roles/iptables/templates/rules.v6.j2 @@ -0,0 +1,37 @@ +*filter +:INPUT ACCEPT [0:0] +:FORWARD DROP [0:0] +:OUTPUT ACCEPT [0:0] +:openstack-INPUT - [0:0] +-A INPUT -j openstack-INPUT +-A openstack-INPUT -i lo -j ACCEPT +-A openstack-INPUT -p icmpv6 -j ACCEPT +-A openstack-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT +# SSH from anywhere +-A openstack-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT +# Public TCP ports +{% for port in iptables_public_tcp_ports -%} +-A openstack-INPUT -m state --state NEW -m tcp -p tcp --dport {{ port }} -j ACCEPT +{% endfor -%} +# Public UDP ports +{% for port in iptables_public_udp_ports -%} +-A openstack-INPUT -m udp -p udp --dport {{ port }} -j ACCEPT +{% endfor -%} +# Per-host rules +{% for rule in iptables_rules_v6 -%} +-A openstack-INPUT {{ rule }} +{% endfor -%} +{% for host in 
iptables_allowed_hosts -%} +{% for addr in host.hostname | dns_aaaa -%} +-A openstack-INPUT {% if host.protocol == 'tcp' %}-m state --state NEW {% endif %}-m {{ host.protocol }} -p {{ host.protocol }} -s {{ addr }} --dport {{ host.port }} -j ACCEPT +{% endfor -%} +{% endfor -%} +{% for group in iptables_allowed_groups -%} +{% for addr in groups.get(group.group) | map('extract', hostvars, 'public_v6') -%} +{% if addr -%} +-A openstack-INPUT {% if group.protocol == 'tcp' %}-m state --state NEW {% endif %} -m {{ group.protocol }} -p {{ group.protocol }} -s {{ addr }} --dport {{ group.port }} -j ACCEPT +{% endif -%} +{% endfor -%} +{% endfor -%} +-A openstack-INPUT -j REJECT --reject-with icmp6-adm-prohibited +COMMIT diff --git a/playbooks/roles/iptables/vars/Debian.yaml b/playbooks/roles/iptables/vars/Debian.yaml new file mode 100644 index 0000000..769f18d --- /dev/null +++ b/playbooks/roles/iptables/vars/Debian.yaml @@ -0,0 +1,6 @@ +package_name: iptables-persistent +service_name: netfilter-persistent +rules_dir: /etc/iptables +ipv4_rules: /etc/iptables/rules.v4 +ipv6_rules: /etc/iptables/rules.v6 +reload_command: /usr/sbin/netfilter-persistent start diff --git a/playbooks/roles/iptables/vars/RedHat.yaml b/playbooks/roles/iptables/vars/RedHat.yaml new file mode 100644 index 0000000..465d5b3 --- /dev/null +++ b/playbooks/roles/iptables/vars/RedHat.yaml @@ -0,0 +1,6 @@ +package_name: iptables-services +service_name: iptables +rules_dir: /etc/sysconfig +ipv4_rules: /etc/sysconfig/iptables +ipv6_rules: /etc/sysconfig/ip6tables +setype: 'etc_t' diff --git a/playbooks/roles/iptables/vars/Ubuntu.trusty.yaml b/playbooks/roles/iptables/vars/Ubuntu.trusty.yaml new file mode 100644 index 0000000..e806919 --- /dev/null +++ b/playbooks/roles/iptables/vars/Ubuntu.trusty.yaml @@ -0,0 +1,6 @@ +package_name: iptables-persistent +service_name: iptables-persistent +rules_dir: /etc/iptables +ipv4_rules: /etc/iptables/rules.v4 +ipv6_rules: /etc/iptables/rules.v6 +reload_command: 
/etc/init.d/iptables-persistent reload diff --git a/playbooks/roles/logrotate/README.rst b/playbooks/roles/logrotate/README.rst new file mode 100644 index 0000000..c6bfeab --- /dev/null +++ b/playbooks/roles/logrotate/README.rst @@ -0,0 +1,55 @@ +Add log rotation file + +.. note:: This role does not manage the ``logrotate`` package or + its configuration directory; both are assumed to be installed + and available. + +This role installs a log rotation file in ``/etc/logrotate.d/`` for a +given file. + +For information on the directives see ``logrotate.conf(5)``. This is +not an exhaustive list of directives (contributions are welcome). + +**Role Variables** + +.. zuul:rolevar:: logrotate_file_name + + The log file on disk to rotate + +.. zuul:rolevar:: logrotate_config_file_name + :default: Unique name based on :zuul:rolevar:`logrotate.logrotate_file_name` + + The name of the configuration file in ``/etc/logrotate.d`` + +.. zuul:rolevar:: logrotate_compress + :default: yes + +.. zuul:rolevar:: logrotate_copytruncate + :default: yes + +.. zuul:rolevar:: logrotate_delaycompress + :default: yes + +.. zuul:rolevar:: logrotate_missingok + :default: yes + +.. zuul:rolevar:: logrotate_rotate + :default: 7 + +.. zuul:rolevar:: logrotate_frequency + :default: daily + + One of ``hourly``, ``daily``, ``weekly``, ``monthly``, ``yearly`` + or ``size``. + + If choosing ``size``, :zuul:rolevar:`logrotate.logrotate_size` must + be specified. + +.. zuul:rolevar:: logrotate_size + :default: None + + Size, e.g. ``100K``, ``10M``, ``1G``. Only used when + :zuul:rolevar:`logrotate.logrotate_frequency` is ``size``. + +.. 
zuul:rolevar:: logrotate_notifempty + :default: yes diff --git a/playbooks/roles/logrotate/defaults/main.yaml b/playbooks/roles/logrotate/defaults/main.yaml new file mode 100644 index 0000000..f37e7d0 --- /dev/null +++ b/playbooks/roles/logrotate/defaults/main.yaml @@ -0,0 +1,8 @@ +logrotate_compress: yes +logrotate_copytruncate: yes +logrotate_delaycompress: yes +logrotate_missingok: yes +logrotate_rotate: 7 +logrotate_frequency: daily +logrotate_size: null +logrotate_notifempty: yes \ No newline at end of file diff --git a/playbooks/roles/logrotate/tasks/main.yaml b/playbooks/roles/logrotate/tasks/main.yaml new file mode 100644 index 0000000..fbdfa08 --- /dev/null +++ b/playbooks/roles/logrotate/tasks/main.yaml @@ -0,0 +1,29 @@ +- name: Check for filename + fail: + msg: Must set logrotate_file_name for logfile to rotate + when: logrotate_file_name is not defined + +- assert: + that: + - logrotate_frequency in ('hourly', 'daily', 'weekly', 'monthly', 'yearly', 'size') + fail_msg: Invalid logrotate_frequency + +- assert: + that: + - logrotate_size + fail_msg: Must specify size for rotation + when: logrotate_frequency == 'size' + +# Hash the full path to avoid any conflicts but remain idempotent. 
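The unique-name scheme the logrotate tasks describe (basename of the log file plus the first five hex digits of the SHA-1 of its full path) can be sketched in Python. The function name here is illustrative; the digest logic mirrors Ansible's ``hash('sha1')`` filter, which hashes the string's UTF-8 bytes:

```python
import hashlib
import os


def logrotate_config_name(log_path: str) -> str:
    # Basename of the log file, plus the first five hex digits of the
    # SHA-1 of the full path, with a ".conf" suffix -- mirrors the
    # set_fact in the role's tasks.
    digest = hashlib.sha1(log_path.encode("utf-8")).hexdigest()[:5]
    return "{}.{}.conf".format(os.path.basename(log_path), digest)


print(logrotate_config_name("/var/log/ansible/ansible.log"))
```

Because the digest depends only on the path, rerunning the role produces the same file name, keeping the task idempotent while avoiding collisions between logs with the same basename in different directories.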
+# "/var/log/ansible/ansible.log" becomes "ansible.log.37237.conf" for example +- name: Create a unique config name + set_fact: + logrotate_generated_config_file_name: "{{ logrotate_file_name | basename }}.{{ (logrotate_file_name|hash('sha1'))[0:5] }}.conf" + +- name: 'Install {{ logrotate_file_name }} rotatation config file' + template: + src: logrotate.conf.j2 + dest: '/etc/logrotate.d/{{ logrotate_config_file_name|default(logrotate_generated_config_file_name) }}' + owner: root + group: root + mode: 0644 diff --git a/playbooks/roles/logrotate/templates/logrotate.conf.j2 b/playbooks/roles/logrotate/templates/logrotate.conf.j2 new file mode 100644 index 0000000..a1841c5 --- /dev/null +++ b/playbooks/roles/logrotate/templates/logrotate.conf.j2 @@ -0,0 +1,23 @@ +{{ logrotate_file_name }} { +{% if logrotate_compress %} + compress +{% endif %} +{% if logrotate_copytruncate %} + copytruncate +{% endif %} +{% if logrotate_delaycompress %} + delaycompress +{% endif %} +{% if logrotate_missingok %} + missingok +{% endif %} + rotate {{ logrotate_rotate }} +{% if logrotate_frequency != "size" %} + {{ logrotate_frequency }} +{% else %} + size {{ logrotate_size }} +{% endif %} +{% if logrotate_notifempty %} + notifempty +{% endif %} +} diff --git a/playbooks/roles/pip3/README.rst b/playbooks/roles/pip3/README.rst new file mode 100644 index 0000000..b810029 --- /dev/null +++ b/playbooks/roles/pip3/README.rst @@ -0,0 +1,5 @@ +Install system packages for python3 pip and virtualenv + +**Role Variables** + +* None diff --git a/playbooks/roles/pip3/tasks/default.yaml b/playbooks/roles/pip3/tasks/default.yaml new file mode 100644 index 0000000..e58d05a --- /dev/null +++ b/playbooks/roles/pip3/tasks/default.yaml @@ -0,0 +1,5 @@ +- name: Download get-pip.py + command: wget https://bootstrap.pypa.io/get-pip.py + args: + chdir: /var/lib + creates: /var/lib/get-pip.py diff --git a/playbooks/roles/pip3/tasks/main.yaml b/playbooks/roles/pip3/tasks/main.yaml new file mode 100644 index 
0000000..091e3e6 --- /dev/null +++ b/playbooks/roles/pip3/tasks/main.yaml @@ -0,0 +1,41 @@ +- name: Remove pip and virtualenv packages + package: + name: + - python3-pip + - python3-virtualenv + state: absent + +# NOTE(ianw) : See https://github.com/pypa/get-pip/issues/43; +# requirement of get-pip.py +# Xenial doesn't have python3-distutils as it appears to be part +# of python3 itself. +- name: Ensure distutils + package: + name: + - python3-distutils + state: present + when: ansible_distribution_release != 'xenial' + +- name: Download OS/Python specific get-pip.py + include_tasks: "{{ get_pip_os }}" + with_first_found: + - "{{ ansible_distribution_release }}.yaml" + - "{{ ansible_distribution }}.yaml" + - "{{ ansible_os_family }}.yaml" + - "default.yaml" + loop_control: + loop_var: get_pip_os + +- name: Install pip + command: python3 /var/lib/get-pip.py + args: + creates: /usr/local/bin/pip3 + +- name: Install latest pip and virtualenv + pip: + name: "{{ item }}" + state: latest + executable: pip3 + loop: + - pip + - virtualenv diff --git a/playbooks/roles/pip3/tasks/xenial.yaml b/playbooks/roles/pip3/tasks/xenial.yaml new file mode 100644 index 0000000..b7faf61 --- /dev/null +++ b/playbooks/roles/pip3/tasks/xenial.yaml @@ -0,0 +1,6 @@ +# https://github.com/pypa/get-pip/issues/83 +- name: Download get-pip.py + command: wget https://bootstrap.pypa.io/pip/3.5/get-pip.py + args: + chdir: /var/lib + creates: /var/lib/get-pip.py diff --git a/playbooks/roles/root-keys/README.rst b/playbooks/roles/root-keys/README.rst new file mode 100644 index 0000000..b60f782 --- /dev/null +++ b/playbooks/roles/root-keys/README.rst @@ -0,0 +1,7 @@ +Write out root SSH private key + +**Role Variables** + +.. 
zuul:rolevar:: root_rsa_key + + The root key to place in ``/root/.ssh/id_rsa`` diff --git a/playbooks/roles/root-keys/tasks/main.yaml b/playbooks/roles/root-keys/tasks/main.yaml new file mode 100644 index 0000000..b896a97 --- /dev/null +++ b/playbooks/roles/root-keys/tasks/main.yaml @@ -0,0 +1,11 @@ +- name: Ensure .ssh directory + file: + path: /root/.ssh + mode: 0700 + state: directory + +- name: Write out ssh private key + copy: + content: '{{ root_rsa_key }}' + mode: 0400 + dest: /root/.ssh/id_rsa diff --git a/playbooks/roles/set-hostname b/playbooks/roles/set-hostname new file mode 120000 index 0000000..93e5683 --- /dev/null +++ b/playbooks/roles/set-hostname @@ -0,0 +1 @@ +../../roles/set-hostname/ \ No newline at end of file diff --git a/playbooks/roles/x509_cert/README.rst b/playbooks/roles/x509_cert/README.rst new file mode 100644 index 0000000..e69de29 diff --git a/playbooks/roles/x509_cert/defaults/main.yaml b/playbooks/roles/x509_cert/defaults/main.yaml new file mode 100644 index 0000000..821bc9e --- /dev/null +++ b/playbooks/roles/x509_cert/defaults/main.yaml @@ -0,0 +1,10 @@ +certs_path: "/etc/ssl" +x509_domain_ca: true +x509_ca_cert: "{{ certs_path }}/ca/ca.pem" +x509_ca_key: "{{ certs_path }}/ca/ca.key" + +x509_ca_common_name: "C=DE, ST=NRW, O=Open Telekom Cloud, OU=Eco, CN=caroot" +x509_subject_alt_name: + - "DNS:tsi-dev.otc-service.com" + +x509_certs: {} diff --git a/playbooks/roles/x509_cert/tasks/ca.yaml b/playbooks/roles/x509_cert/tasks/ca.yaml new file mode 100644 index 0000000..17e974b --- /dev/null +++ b/playbooks/roles/x509_cert/tasks/ca.yaml @@ -0,0 +1,38 @@ +--- +# Generate CA X509 certificate +- name: Create directories + become: true + ansible.builtin.file: + dest: "{{ certs_path }}/{{ item }}" + state: "directory" + loop: + - "ca" + - "csr" + +- name: Create CA private key + community.crypto.openssl_privatekey: + path: "{{ x509_ca_key }}" + size: 4096 + +- name: Create the CA CSR + community.crypto.openssl_csr: + path: "{{ certs_path 
}}/ca/ca.csr" + privatekey_path: "{{ x509_ca_key }}" + # constraints and usage required by CA + basic_constraints_critical: true + basic_constraints: + - "CA:TRUE" + - "pathlen:0" + key_usage: + - "digitalSignature" + - "cRLSign" + - "keyCertSign" + common_name: "{{ x509_ca_common_name }}" + subject_alt_name: "{{ x509_subject_alt_name }}" + +- name: Create CA certificate + community.crypto.x509_certificate: + path: "{{ x509_ca_cert }}" + privatekey_path: "{{ x509_ca_key }}" + csr_path: "{{ certs_path }}/ca/ca.csr" + provider: "selfsigned" diff --git a/playbooks/roles/x509_cert/tasks/cert.yaml b/playbooks/roles/x509_cert/tasks/cert.yaml new file mode 100644 index 0000000..1b767ec --- /dev/null +++ b/playbooks/roles/x509_cert/tasks/cert.yaml @@ -0,0 +1,44 @@ +--- +# Generate or return current X509 certificate +- name: Create directories + become: true + ansible.builtin.file: + dest: "{{ certs_path }}/{{ item }}" + state: "directory" + loop: + - "keys" + - "csr" + - "certs" + - "ca" + +- name: Create cert private key {{ x509_common_name }} + + community.crypto.openssl_privatekey: + path: "{{ certs_path }}/keys/{{ x509_common_name }}.pem" + format: "{{ x509_private_key_format | default(omit) }}" + size: 4096 + +- name: Generate csr {{ x509_common_name }} + community.crypto.openssl_csr: + path: "{{ certs_path }}/csr/{{ x509_common_name }}.csr" + privatekey_path: "{{ certs_path }}/keys/{{ x509_common_name }}.pem" + common_name: "{{ x509_common_name }}" + subject_alt_name: "{{ x509_alt_name | default(omit) }}" + +- name: Create certificate {{ x509_common_name }} + community.crypto.x509_certificate: + path: "{{ certs_path }}/certs/{{ x509_common_name }}.pem" + privatekey_path: "{{ certs_path }}/keys/{{ x509_common_name }}.pem" + csr_path: "{{ certs_path }}/csr/{{ x509_common_name }}.csr" + provider: "{{ x509_domain_ca is defined | ternary('ownca','selfsigned') }}" + ownca_path: "{{ x509_ca_cert }}" + ownca_privatekey_path: "{{ x509_ca_key }}" + +- name: Set facts + 
set_fact: + x509_certs: "{{ (x509_certs | default({})) | combine({ + x509_common_name: { + 'ca': (x509_ca_cert), + 'cert': (certs_path + '/certs/' + x509_common_name + '.pem'), + 'key': (certs_path + '/keys/' + x509_common_name + '.pem'), + }}) }}" diff --git a/playbooks/roles/x509_vault/README.rst b/playbooks/roles/x509_vault/README.rst new file mode 100644 index 0000000..e69de29 diff --git a/playbooks/roles/x509_vault/tasks/cert.yaml b/playbooks/roles/x509_vault/tasks/cert.yaml new file mode 100644 index 0000000..7a5c5ec --- /dev/null +++ b/playbooks/roles/x509_vault/tasks/cert.yaml @@ -0,0 +1,55 @@ +- name: Check {{ vault_secret_path }} certificate in Vault + community.hashi_vault.vault_read: + url: "{{ ansible_hashi_vault_addr }}" + token: "{{ ansible_hashi_vault_token }}" + path: "{{ vault_secret_path }}" + register: cert_in_vault + failed_when: false + +- name: Check certificate validity + community.crypto.x509_certificate_info: + content: "{{ cert_in_vault.data.data.data.certificate }}" + valid_at: + point_1: "+30d" + register: cert_info + when: "cert_in_vault.data is defined" + +- name: Reset alt_names_str - with include_role in a loop it may retain a previous value + ansible.builtin.set_fact: + alt_names_str: "" + +- name: Construct alt_names value + ansible.builtin.set_fact: + alt_names_str: "{{ alt_names | join(',') }}" + when: + - "alt_names is defined" + - "alt_names is iterable" + - "alt_names is not match('__omit')" + +- name: Issue certificate + ansible.builtin.uri: + url: "{{ ansible_hashi_vault_addr }}/v1/{{ vault_pki_path }}/issue/{{ vault_pki_role }}" + method: "POST" + headers: + X-Vault-Token: "{{ ansible_hashi_vault_token }}" + body: + common_name: "{{ common_name }}" + alt_names: "{{ alt_names_str | default(omit) }}" + private_key_format: "{{ private_key_format | default(omit) }}" + body_format: "json" + when: "cert_in_vault.data is not defined or not cert_info.valid_at.point_1" + register: "cert" + +- name: Save
certificate in Vault + ansible.builtin.uri: + url: "{{ ansible_hashi_vault_addr }}/v1/{{ vault_secret_path }}" + method: "POST" + headers: + X-Vault-Token: "{{ ansible_hashi_vault_token }}" + body: + data: + certificate: "{{ cert.json.data.certificate }}" + private_key: "{{ cert.json.data.private_key }}" + body_format: "json" + when: + - "cert.json is defined" diff --git a/playbooks/service-bridge.yaml b/playbooks/service-bridge.yaml new file mode 100644 index 0000000..b586069 --- /dev/null +++ b/playbooks/service-bridge.yaml @@ -0,0 +1,40 @@ +- hosts: bridge.eco.tsi-dev.otc-service.com:!disabled + become: true + name: "Bridge: configure the bastion host" + roles: + #- iptables + - edit-secrets-script + - install-docker + tasks: + # Skip as no arm64 support available; only used for gate testing, + # where we can't mix arm64 and x86 nodes, so need a minimally + # working bridge to drive the tests for mirrors/nodepool + # etc. things. + - name: Install openshift/kubectl/helm + when: ansible_architecture != 'aarch64' + block: + - include_role: + name: install-osc-container + - include_role: + name: install-kubectl + - include_role: + name: configure-kubectl + - include_role: + name: install-helm + + - include_role: + name: configure-openstacksdk + vars: + openstacksdk_config_template: clouds/bridge_all_clouds.yaml.j2 + + - name: Get rid of all-clouds.yaml + file: + state: absent + path: '/etc/openstack/all-clouds.yaml' + + - name: Install additional python packages + ansible.builtin.pip: + name: "{{ item }}" + state: present + loop: + - hvac diff --git a/playbooks/service-gitea.yaml b/playbooks/service-gitea.yaml new file mode 100644 index 0000000..439fe30 --- /dev/null +++ b/playbooks/service-gitea.yaml @@ -0,0 +1,8 @@ +- hosts: "gitea:!disabled" + name: "Base: configure gitea" + become: true + roles: + # Group should be responsible for defining open ports + - firewalld + - gitea + - fail2ban diff --git a/playbooks/service-vault.yaml b/playbooks/service-vault.yaml 
new file mode 100644 index 0000000..1f5536d --- /dev/null +++ b/playbooks/service-vault.yaml @@ -0,0 +1,9 @@ +--- +- hosts: "vault:!disabled" + become: true + name: "Vault: configure vault instances" + serial: 1 + roles: + # Group should be responsible for defining open ports + - firewalld + - hashivault diff --git a/playbooks/set-hostnames.yaml b/playbooks/set-hostnames.yaml new file mode 100644 index 0000000..5fb276b --- /dev/null +++ b/playbooks/set-hostnames.yaml @@ -0,0 +1,5 @@ +- hosts: "!disabled" + become: true + gather_facts: false + roles: + - set-hostname diff --git a/playbooks/sync-gitea-data.yaml b/playbooks/sync-gitea-data.yaml new file mode 100644 index 0000000..776a479 --- /dev/null +++ b/playbooks/sync-gitea-data.yaml @@ -0,0 +1,30 @@ +- hosts: "gitea:!disabled" + name: "Base: configure gitea" + become: true + tasks: + - name: Copy simple script to disable gitea sync + ansible.builtin.copy: + src: templates/gitea_sync/disable-gitea-sync + dest: /usr/local/bin/disable-gitea-sync + mode: 0755 + owner: root + group: root + delegate_to: bridge.eco.tsi-dev.otc-service.com + + - name: Check if sync is required + ansible.builtin.stat: + path: /home/zuul/DISABLE-GITEA-SYNC + delegate_to: bridge.eco.tsi-dev.otc-service.com + register: disable_gitea_sync + + - name: Synchronize gitea data directory + ansible.posix.synchronize: + src: /var/lib/gitea/data/ + dest: /var/lib/gitea/data/ + mode: push + archive: yes + compress: yes + delegate_to: gitea1.eco.tsi-dev.otc-service.com + when: + - "inventory_hostname == 'gitea2.eco.tsi-dev.otc-service.com'" + - "not disable_gitea_sync.stat.exists" diff --git a/playbooks/templates/charts/argocd/argocd-values.yaml.j2 b/playbooks/templates/charts/argocd/argocd-values.yaml.j2 new file mode 100644 index 0000000..ca10111 --- /dev/null +++ b/playbooks/templates/charts/argocd/argocd-values.yaml.j2 @@ -0,0 +1,77 @@ +# -- Provide a name in place of `argocd` +nameOverride: argocd + +createAggregateRoles: false 
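The sync-gitea-data playbook above skips replication whenever a marker file exists on the bridge. The actual `templates/gitea_sync/disable-gitea-sync` script is not part of this diff, so the toggle below is only a hypothetical sketch of that pattern; the marker path defaults to `/tmp` here for illustration, while the playbook's stat task checks `/home/zuul/DISABLE-GITEA-SYNC`:

```shell
#!/bin/sh
# Hypothetical sketch of a sync toggle; the playbook's stat task checks
# for the marker file and skips the synchronize task when it exists.
MARKER="${MARKER:-/tmp/DISABLE-GITEA-SYNC}"
case "${1:-disable}" in
    disable) touch "$MARKER"; echo "gitea sync disabled" ;;
    enable)  rm -f "$MARKER"; echo "gitea sync enabled" ;;
    *)       echo "usage: $0 {disable|enable}" >&2; exit 2 ;;
esac
```

Keeping the guard as a plain file on the bridge means an operator can pause replication during maintenance without editing inventory or playbooks.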
+createClusterRoles: true + +crds: + install: true + keep: true + +dex: + enabled: false + +## Server +server: + extensions: + containerSecurityContext: + seccompProfile: + type: Unconfined + + containerSecurityContext: + seccompProfile: + type: Unconfined + + ingress: + enabled: true + annotations: + cert-manager.io/cluster-issuer: letsencrypt-prod + kubernetes.io/tls-acme: "true" + nginx.ingress.kubernetes.io/backend-protocol: HTTPS + nginx.ingress.kubernetes.io/force-ssl-redirect: "true" + nginx.ingress.kubernetes.io/ssl-passthrough: "true" + ingressClassName: "nginx" + + hosts: + - argocd.eco.tsi-dev.otc-service.com + paths: + - / + tls: + - secretName: argocd-cert + hosts: + - argocd.eco.tsi-dev.otc-service.com + https: true + +repoServer: + env: + - name: ARGOCD_GPG_ENABLED + value: "false" + containerSecurityContext: + seccompProfile: + type: Unconfined + +## ApplicationSet controller +applicationSet: + containerSecurityContext: + seccompProfile: + type: Unconfined + +configs: + cm: + url: https://argocd.eco.tsi-dev.otc-service.com + oidc.config: | + name: Keycloak + issuer: https://keycloak.eco.tsi-dev.otc-service.com/realms/eco + clientID: argocd + clientSecret: $oidc.keycloak.clientSecret + requestedScopes: ["openid", "profile", "email", "groups"] + + secret: + createSecret: true + extra: + oidc.keycloak.clientSecret: {{ chart.secrets.keycloak_client_secret }} + + rbac: + policy.default: role:readonly + policy.csv: | + g, /argocd-admin, role:admin diff --git a/playbooks/templates/charts/cert-manager/cert-manager-post-config.yaml.j2 b/playbooks/templates/charts/cert-manager/cert-manager-post-config.yaml.j2 new file mode 100644 index 0000000..8b397c8 --- /dev/null +++ b/playbooks/templates/charts/cert-manager/cert-manager-post-config.yaml.j2 @@ -0,0 +1,39 @@ +apiVersion: cert-manager.io/v1 +kind: ClusterIssuer +metadata: + name: letsencrypt-staging +spec: + acme: + # You must replace this email address with your own. 
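Once a ClusterIssuer like the ones defined in this repository's cert-manager template is applied, workloads request certificates simply by annotating their Ingress. The manifest below is purely illustrative (the hostname, service, and secret names are made up); only the `cert-manager.io/cluster-issuer` annotation and the issuer name come from the template:

```yaml
# Hypothetical Ingress consuming the letsencrypt-staging ClusterIssuer;
# cert-manager watches the annotation and populates demo-tls with a
# certificate for the listed host.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - demo.example.com
      secretName: demo-tls
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo
                port:
                  number: 80
```

Testing against the staging issuer first avoids hitting Let's Encrypt production rate limits; switching the annotation to `letsencrypt-prod` issues a trusted certificate.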
+ # Let's Encrypt will use this to contact you about expiring + # certificates, and issues related to your account. + email: "{{ chart.email }}" + server: https://acme-staging-v02.api.letsencrypt.org/directory + privateKeySecretRef: + # Secret resource that will be used to store the account's private key. + name: letsencrypt-stg-issuer-account-key + # Add a single challenge solver, HTTP01 using nginx + solvers: + - http01: + ingress: + class: nginx +--- +apiVersion: cert-manager.io/v1 +kind: ClusterIssuer +metadata: + name: letsencrypt-prod +spec: + acme: + # You must replace this email address with your own. + # Let's Encrypt will use this to contact you about expiring + # certificates, and issues related to your account. + email: "{{ chart.email }}" + server: https://acme-v02.api.letsencrypt.org/directory + privateKeySecretRef: + # Secret resource that will be used to store the account's private key. + name: letsencrypt-prod-issuer-account-key + # Add a single challenge solver, HTTP01 using nginx + solvers: + - http01: + ingress: + class: nginx diff --git a/playbooks/templates/charts/cert-manager/cert-manager-values.yaml.j2 b/playbooks/templates/charts/cert-manager/cert-manager-values.yaml.j2 new file mode 100644 index 0000000..1b4551c --- /dev/null +++ b/playbooks/templates/charts/cert-manager/cert-manager-values.yaml.j2 @@ -0,0 +1 @@ +installCRDs: true diff --git a/playbooks/templates/charts/ingress-nginx/ingress-nginx-values.yaml.j2 b/playbooks/templates/charts/ingress-nginx/ingress-nginx-values.yaml.j2 new file mode 100644 index 0000000..80d8ad5 --- /dev/null +++ b/playbooks/templates/charts/ingress-nginx/ingress-nginx-values.yaml.j2 @@ -0,0 +1,21 @@ +controller: +{% if chart.is_default is defined and not chart.is_default %} + watchIngressWithoutClass: false +{% else %} + watchIngressWithoutClass: true +{% endif %} + ingressClassResource: + default: "{{ chart.is_default | default('true') }}" +{% if chart.class_name is defined %} + name: "{{ chart.class_name 
}}" + controllerValue: "k8s.io/{{ chart.class_name }}" +{% endif %} + service: + annotations: + kubernetes.io/elb.class: performance + kubernetes.io/elb.id: "{{ chart.elb_id }}" + kubernetes.io/elb.eip: "{{ chart.elb_eip }}" + externalTrafficPolicy: "Local" +{% if chart.config_entries is defined %} + config: {{ chart.config_entries }} +{% endif %} diff --git a/playbooks/templates/charts/loki/loki-values.yaml.j2 b/playbooks/templates/charts/loki/loki-values.yaml.j2 new file mode 100644 index 0000000..db949eb --- /dev/null +++ b/playbooks/templates/charts/loki/loki-values.yaml.j2 @@ -0,0 +1,87 @@ +global: + dnsService: coredns +loki: + commonConfig: + replication_factor: 1 + storage: + # where to store logs + type: s3 + # retention enable + compactor: + retention_enabled: true + compaction_interval: 10m + retention_delete_delay: 2h + # retention period config + limits_config: + retention_period: 744h +read: + replicas: 1 + persistence: + storageClass: csi-disk +write: + replicas: 1 + persistence: + storageClass: csi-disk +backend: + persistence: + storageClass: csi-disk +monitoring: + selfMonitoring: + enabled: false + lokiCanary: + enabled: false + grafanaAgent: + installOperator: false + # grafana dashboards to be added under prometheus/grafana instance + # running on the same k8s cluster + dashboards: + namespace: monitoring + labels: + grafana_dashboard: "1" + release: "prometheus" + # prometheus rules to be added under prometheus instance running + # on the same k8s cluster + rules: + namespace: monitoring + labels: + release: "prometheus" + # servicemonitor instance to be added under prometheus instance + # running on the same k8s cluster - to scrape loki metrics + serviceMonitor: + metricsInstance: + enabled: false + namespace: monitoring + namespaceSelector: + matchNames: + - loki + labels: + release: "prometheus" +test: + enabled: false + +gateway: + ingress: + enabled: true + ingressClassName: nginx + annotations: + cert-manager.io/cluster-issuer: 
"letsencrypt-prod" + hosts: + - host: "{{ chart.fqdn }}" + paths: + - path: / + pathType: Prefix + tls: + - secretName: "{{ chart.tls_secret_name }}" + hosts: + - "{{ chart.fqdn }}" + basicAuth: + enabled: true + username: "{{ chart.loki_username }}" + password: "{{ chart.loki_password }}" + +minio: + enabled: true + rootPassword: "{{ chart.loki_minio_root_password }}" + persistence: + size: 300Gi + storageClass: csi-disk diff --git a/playbooks/templates/charts/opensearch/opensearch-dashboard-otcinfra-values.yaml.j2 b/playbooks/templates/charts/opensearch/opensearch-dashboard-otcinfra-values.yaml.j2 new file mode 100644 index 0000000..8107bc0 --- /dev/null +++ b/playbooks/templates/charts/opensearch/opensearch-dashboard-otcinfra-values.yaml.j2 @@ -0,0 +1,23 @@ +opensearchHosts: "http://opensearch-cluster-master:9200" +ingress: + enabled: true + annotations: + "kubernetes.io/ingress.class" : "nginx" + hosts: + - host: "{{ chart.opensearch_dashboard_fqdn }}" + paths: + - path: / + backend: + serviceName: opensearch-dashboards + servicePort: 5601 + tls: + - secretName: "{{ chart.opensearch_dashboard_tls_name }}" + hosts: + - "{{ chart.opensearch_dashboard_fqdn }}" + +extraEnvs: + - name: OPENSEARCH_USERNAME + value: "{{ chart.opensearch_username }}" + - name: OPENSEARCH_PASSWORD + value: "{{ chart.opensearch_password }}" + diff --git a/playbooks/templates/charts/opensearch/opensearch-otcinfra-values.yaml.j2 b/playbooks/templates/charts/opensearch/opensearch-otcinfra-values.yaml.j2 new file mode 100644 index 0000000..f100797 --- /dev/null +++ b/playbooks/templates/charts/opensearch/opensearch-otcinfra-values.yaml.j2 @@ -0,0 +1,89 @@ +ingress: + enabled: true + annotations: + "kubernetes.io/ingress.class" : "nginx" + path: / + hosts: + - "{{ chart.opensearch_fqdn }}" + tls: + - secretName: "{{ chart.opensearch_tls_name }}" + hosts: + - "{{ chart.opensearch_fqdn }}" +persistence: + storageClass: csi-disk + size: 100Gi + annotations: + everest.io/disk-volume-type: sas 
+secretMounts: + - name: "node-pem" + secretName: "{{ chart.opensearch_node_tls_name }}" + path: "/usr/share/opensearch/config/node.pem" + subPath: "tls.crt" + - name: "node-key-pem" + secretName: "{{ chart.opensearch_node_tls_name }}" + path: "/usr/share/opensearch/config/node-key.pem" + subPath: "tls.key" + - name: "root-cacert" + secretName: "{{ chart.opensearch_node_tls_name }}" + path: "/usr/share/opensearch/config/root-cacert.pem" + subPath: "tls.crt" + - name: "admin-pem" + secretName: "{{ chart.opensearch_admin_tls_name }}" + path: "/usr/share/opensearch/config/admin.pem" + subPath: "tls.crt" + - name: "admin-key-pem" + secretName: "{{ chart.opensearch_admin_tls_name }}" + path: "/usr/share/opensearch/config/admin-key.pem" + subPath: "tls.key" +config: + opensearch.yml: | + cluster.name: opensearch-cluster + + # Bind to all interfaces because we don't know what IP address Docker will assign to us. + network.host: 0.0.0.0 + + plugins: + security: + ssl: + transport: + pemcert_filepath: node.pem + pemkey_filepath: node-key.pem + pemtrustedcas_filepath: root-cacert.pem + enforce_hostname_verification: false + http: + enabled: false + allow_unsafe_democertificates: true + allow_default_init_securityindex: true + nodes_dn: + - CN={{ chart.opensearch_node_fqdn }} + authcz: + admin_dn: + - CN={{ chart.opensearch_admin_fqdn }} + audit.type: internal_opensearch + enable_snapshot_restore_privilege: true + check_snapshot_restore_write_privileges: true + restapi: + roles_enabled: ["all_access", "security_rest_api_access"] + password_validation_regex: '(?=.*[A-Z])(?=.*[^a-zA-Z\d])(?=.*[0-9])(?=.*[a-z]).{8,}' + password_validation_error_message: "Password must be minimum 8 characters long and must contain at least one uppercase letter, one lowercase letter, one digit, and one special character." 
+ system_indices: + enabled: true + indices: + [ + ".opendistro-alerting-config", + ".opendistro-alerting-alert*", + ".opendistro-anomaly-results*", + ".opendistro-anomaly-detector*", + ".opendistro-anomaly-checkpoints", + ".opendistro-anomaly-detection-state", + ".opendistro-reports-*", + ".opendistro-notifications-*", + ".opendistro-notebooks", + ".opendistro-asynchronous-search-response*", + ] + +securityConfig: + configSecret: "{{ chart.opensearch_security_config_secret_name }}" + internalUsersSecret: "{{ chart.opensearch_security_config_secret_name }}" + rolesSecret: "{{ chart.opensearch_security_config_secret_name }}" + rolesMappingSecret: "{{ chart.opensearch_security_config_secret_name }}" diff --git a/playbooks/templates/charts/opensearch/opensearch-stg-dashboard-otcinfra2-values.yaml.j2 b/playbooks/templates/charts/opensearch/opensearch-stg-dashboard-otcinfra2-values.yaml.j2 new file mode 100644 index 0000000..8107bc0 --- /dev/null +++ b/playbooks/templates/charts/opensearch/opensearch-stg-dashboard-otcinfra2-values.yaml.j2 @@ -0,0 +1,23 @@ +opensearchHosts: "http://opensearch-cluster-master:9200" +ingress: + enabled: true + annotations: + "kubernetes.io/ingress.class" : "nginx" + hosts: + - host: "{{ chart.opensearch_dashboard_fqdn }}" + paths: + - path: / + backend: + serviceName: opensearch-dashboards + servicePort: 5601 + tls: + - secretName: "{{ chart.opensearch_dashboard_tls_name }}" + hosts: + - "{{ chart.opensearch_dashboard_fqdn }}" + +extraEnvs: + - name: OPENSEARCH_USERNAME + value: "{{ chart.opensearch_username }}" + - name: OPENSEARCH_PASSWORD + value: "{{ chart.opensearch_password }}" + diff --git a/playbooks/templates/charts/opensearch/opensearch-stg-otcinfra2-values.yaml.j2 b/playbooks/templates/charts/opensearch/opensearch-stg-otcinfra2-values.yaml.j2 new file mode 100644 index 0000000..9fa30fc --- /dev/null +++ b/playbooks/templates/charts/opensearch/opensearch-stg-otcinfra2-values.yaml.j2 @@ -0,0 +1,89 @@ +ingress: + enabled: true + 
annotations: + "kubernetes.io/ingress.class" : "nginx" + path: / + hosts: + - "{{ chart.opensearch_fqdn }}" + tls: + - secretName: "{{ chart.opensearch_tls_name }}" + hosts: + - "{{ chart.opensearch_fqdn }}" +persistence: + storageClass: csi-disk + size: 10Gi + annotations: + everest.io/disk-volume-type: sas +secretMounts: + - name: "node-pem" + secretName: "{{ chart.opensearch_node_tls_name }}" + path: "/usr/share/opensearch/config/node.pem" + subPath: "tls.crt" + - name: "node-key-pem" + secretName: "{{ chart.opensearch_node_tls_name }}" + path: "/usr/share/opensearch/config/node-key.pem" + subPath: "tls.key" + - name: "root-cacert" + secretName: "{{ chart.opensearch_node_tls_name }}" + path: "/usr/share/opensearch/config/root-cacert.pem" + subPath: "tls.crt" + - name: "admin-pem" + secretName: "{{ chart.opensearch_admin_tls_name }}" + path: "/usr/share/opensearch/config/admin.pem" + subPath: "tls.crt" + - name: "admin-key-pem" + secretName: "{{ chart.opensearch_admin_tls_name }}" + path: "/usr/share/opensearch/config/admin-key.pem" + subPath: "tls.key" +config: + opensearch.yml: | + cluster.name: opensearch-cluster + + # Bind to all interfaces because we don't know what IP address Docker will assign to us. 
+ network.host: 0.0.0.0 + + plugins: + security: + ssl: + transport: + pemcert_filepath: node.pem + pemkey_filepath: node-key.pem + pemtrustedcas_filepath: root-cacert.pem + enforce_hostname_verification: false + http: + enabled: false + allow_unsafe_democertificates: true + allow_default_init_securityindex: true + nodes_dn: + - CN={{ chart.opensearch_node_fqdn }} + authcz: + admin_dn: + - CN={{ chart.opensearch_admin_fqdn }} + audit.type: internal_opensearch + enable_snapshot_restore_privilege: true + check_snapshot_restore_write_privileges: true + restapi: + roles_enabled: ["all_access", "security_rest_api_access"] + password_validation_regex: '(?=.*[A-Z])(?=.*[^a-zA-Z\d])(?=.*[0-9])(?=.*[a-z]).{8,}' + password_validation_error_message: "Password must be minimum 8 characters long and must contain at least one uppercase letter, one lowercase letter, one digit, and one special character." + system_indices: + enabled: true + indices: + [ + ".opendistro-alerting-config", + ".opendistro-alerting-alert*", + ".opendistro-anomaly-results*", + ".opendistro-anomaly-detector*", + ".opendistro-anomaly-checkpoints", + ".opendistro-anomaly-detection-state", + ".opendistro-reports-*", + ".opendistro-notifications-*", + ".opendistro-notebooks", + ".opendistro-asynchronous-search-response*", + ] + +securityConfig: + configSecret: "{{ chart.opensearch_security_config_secret_name }}" + internalUsersSecret: "{{ chart.opensearch_security_config_secret_name }}" + rolesSecret: "{{ chart.opensearch_security_config_secret_name }}" + rolesMappingSecret: "{{ chart.opensearch_security_config_secret_name }}" diff --git a/playbooks/templates/charts/prometheus-blackbox/prometheus-blackbox-otcinfra-values.yaml.j2 b/playbooks/templates/charts/prometheus-blackbox/prometheus-blackbox-otcinfra-values.yaml.j2 new file mode 100644 index 0000000..a8d9ea4 --- /dev/null +++ b/playbooks/templates/charts/prometheus-blackbox/prometheus-blackbox-otcinfra-values.yaml.j2 @@ -0,0 +1,26 @@ +config: + modules: + 
http_2xx: + prober: http + timeout: 5s + http: + valid_http_versions: ["HTTP/1.1", "HTTP/2.0"] + follow_redirects: true + preferred_ip_protocol: "ip4" + valid_status_codes: [ 200, 202 ] + http_403: + prober: http + timeout: 5s + http: + valid_http_versions: ["HTTP/1.1", "HTTP/2.0"] + follow_redirects: true + preferred_ip_protocol: "ip4" + valid_status_codes: [ 403 ] + http_2xx_429: + prober: http + timeout: 5s + http: + valid_http_versions: ["HTTP/1.1", "HTTP/2.0"] + follow_redirects: true + preferred_ip_protocol: "ip4" + valid_status_codes: [ 200, 202, 429 ] diff --git a/playbooks/templates/charts/prometheus-blackbox/prometheus-blackbox-otcinfra2-values.yaml.j2 b/playbooks/templates/charts/prometheus-blackbox/prometheus-blackbox-otcinfra2-values.yaml.j2 new file mode 100644 index 0000000..a8d9ea4 --- /dev/null +++ b/playbooks/templates/charts/prometheus-blackbox/prometheus-blackbox-otcinfra2-values.yaml.j2 @@ -0,0 +1,26 @@ +config: + modules: + http_2xx: + prober: http + timeout: 5s + http: + valid_http_versions: ["HTTP/1.1", "HTTP/2.0"] + follow_redirects: true + preferred_ip_protocol: "ip4" + valid_status_codes: [ 200, 202 ] + http_403: + prober: http + timeout: 5s + http: + valid_http_versions: ["HTTP/1.1", "HTTP/2.0"] + follow_redirects: true + preferred_ip_protocol: "ip4" + valid_status_codes: [ 403 ] + http_2xx_429: + prober: http + timeout: 5s + http: + valid_http_versions: ["HTTP/1.1", "HTTP/2.0"] + follow_redirects: true + preferred_ip_protocol: "ip4" + valid_status_codes: [ 200, 202, 429 ] diff --git a/playbooks/templates/charts/prometheus/prometheus-otcinfra-go-neb-post-config.yaml.j2 b/playbooks/templates/charts/prometheus/prometheus-otcinfra-go-neb-post-config.yaml.j2 new file mode 100644 index 0000000..ace83ea --- /dev/null +++ b/playbooks/templates/charts/prometheus/prometheus-otcinfra-go-neb-post-config.yaml.j2 @@ -0,0 +1,84 @@ +apiVersion: v1 +stringData: + config.yaml: | + clients: + - UserID: "{{ chart.go_neb_user_id }}" + AccessToken: "{{ 
chart.go_neb_access_token }}" + HomeserverURL: "https://matrix.org" + Sync: true + AutoJoinRooms: true + DisplayName: "OpenTelekomCloud Bot" + services: + - ID: "alertmanager_service" + Type: "alertmanager" + UserID: "{{ chart.go_neb_user_id }}" + Config: + webhook_url: "/alertmanager_service" + rooms: + "{{ chart.go_neb_room_id }}": +{% raw %} + text_template: "{{range .Alerts -}} [{{ .Status }}] {{index .Labels \"alertname\" }}: {{index .Annotations \"description\"}} {{ end -}}" + html_template: "{{range .Alerts -}} {{ $severity := index .Labels \"severity\" }} {{ if eq .Status \"firing\" }} {{ if eq $severity \"critical\"}} [FIRING - CRITICAL] {{ else if eq $severity \"warning\"}} [FIRING - WARNING] {{ else }} [FIRING - {{ $severity }}] {{ end }} {{ else }} [RESOLVED] {{ end }} {{ index .Labels \"alertname\"}} : {{ index .Annotations \"description\"}} <a href=\"{{ .GeneratorURL }}\">source</a> {{end -}}" + msg_type: "m.text" # Must be either `m.text` or `m.notice` +{% endraw %} +kind: Secret +type: Opaque +metadata: + creationTimestamp: null + name: go-neb + namespace: monitoring +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + creationTimestamp: null + labels: + app: go-neb + name: go-neb + namespace: monitoring +spec: + replicas: 1 + selector: + matchLabels: + app: go-neb + strategy: {} + template: + metadata: + creationTimestamp: null + labels: + app: go-neb + spec: + containers: + - image: "{{ chart.go_neb_image }}" + name: go-neb + resources: {} + env: + - name: "BASE_URL" + value: "http://go-neb.monitoring.svc:4050" + - name: "CONFIG_FILE" + value: "/etc/config/go-neb/config.yaml" + volumeMounts: + - name: config-volume + mountPath: /etc/config/go-neb/ + volumes: + - name: config-volume + secret: + secretName: go-neb +--- +apiVersion: v1 +kind: Service +metadata: + creationTimestamp: null + labels: + app: go-neb + name: go-neb + namespace: monitoring +spec: + ports: + - name: 4050-4050 + port: 4050 + protocol: TCP + targetPort: 4050 + selector: + app: go-neb + type: ClusterIP diff --git a/playbooks/templates/charts/prometheus/prometheus-otcinfra-values.yaml.j2 b/playbooks/templates/charts/prometheus/prometheus-otcinfra-values.yaml.j2 new file mode 100644 index 0000000..7c4f2ad --- /dev/null +++ b/playbooks/templates/charts/prometheus/prometheus-otcinfra-values.yaml.j2 @@ -0,0 +1,143 @@ +alertmanager: + ingress: + enabled: true + ingressClassName: nginx + annotations: + nginx.ingress.kubernetes.io/auth-realm: Authentication Required + nginx.ingress.kubernetes.io/auth-secret: basic-auth + nginx.ingress.kubernetes.io/auth-type: basic + cert-manager.io/cluster-issuer: "letsencrypt-prod" + hosts: + - "{{ chart.alertmanager_fqdn }}" + paths: + - / + tls: + - hosts: + - "{{ chart.alertmanager_fqdn }}" + secretName: alertmanager-mon + alertmanagerSpec: + externalUrl: "https://{{ chart.alertmanager_fqdn }}" + config: + route: + routes: + -
receiver: 'null' + matchers: + - alertname =~ "InfoInhibitor|Watchdog" + - receiver: "matrix-webhook" + matchers: + - severity =~ "critical|warning" + continue: true + receivers: + - name: 'null' + - name: "matrix-webhook" + webhook_configs: + - url: "{{ chart.matrix_webhook_url }}" + send_resolved: true +prometheus: + extraSecret: + name: basic-auth + data: + auth: "{{ chart.prometheus_basic_auth_credentials }}" + ingress: + enabled: true + ingressClassName: nginx + annotations: + nginx.ingress.kubernetes.io/auth-realm: Authentication Required + nginx.ingress.kubernetes.io/auth-secret: basic-auth + nginx.ingress.kubernetes.io/auth-type: basic + cert-manager.io/cluster-issuer: "letsencrypt-prod" + hosts: + - "{{ chart.prometheus_fqdn }}" + paths: + - / + tls: + - hosts: + - "{{ chart.prometheus_fqdn }}" + secretName: prometheus-mon + prometheusSpec: + externalUrl: "https://{{ chart.prometheus_fqdn }}" + retention: "7d" + additionalScrapeConfigs: + - job_name: blackbox_http_2xx + metrics_path: /probe + params: + module: [http_2xx] + static_configs: + - targets: {{ chart.prometheus_endpoints_2xx }} + relabel_configs: + - source_labels: [__address__] + target_label: __param_target + - source_labels: [__param_target] + target_label: instance + - target_label: __address__ + replacement: prometheus-blackbox-exporter:9115 + - job_name: blackbox_http_403 + metrics_path: /probe + params: + module: [http_403] + static_configs: + - targets: {{ chart.prometheus_endpoints_403 }} + relabel_configs: + - source_labels: [__address__] + target_label: __param_target + - source_labels: [__param_target] + target_label: instance + - target_label: __address__ + replacement: prometheus-blackbox-exporter:9115 + - job_name: blackbox_http_429 + metrics_path: /probe + params: + module: [http_2xx_429] + static_configs: + - targets: {{ chart.prometheus_endpoints_429 }} + relabel_configs: + - source_labels: [__address__] + target_label: __param_target + - source_labels: [__param_target] + 
target_label: instance + - target_label: __address__ + replacement: prometheus-blackbox-exporter:9115 +additionalPrometheusRulesMap: + endpoint-mon: + groups: + - name: critical-rules + rules: + - alert: ProbeFailing + expr: up{job=~"blackbox_http.*"} == 0 or probe_success{job=~"blackbox_http.*"} == 0 + for: 2m + labels: + severity: critical + annotations: + summary: Endpoint Down + description: "Endpoint is Down\n {{ '{{ $labels.instance }}' }}" +grafana: + ingress: + enabled: true + ingressClassName: nginx + annotations: + cert-manager.io/cluster-issuer: "letsencrypt-prod" + hosts: + - "{{ chart.grafana_fqdn }}" + paths: + - / + tls: + - hosts: + - "{{ chart.grafana_fqdn }}" + secretName: grafana-mon + adminPassword: "{{ chart.grafana_admin_password }}" +defaultRules: + rules: + kubeScheduler: false + kubeProxy: false + kubeControllerManager: false + etcd: false +kubeScheduler: + enabled: false +kubeEtcd: + enabled: false +kubeControllerManager: + enabled: false +kubeProxy: + enabled: false +coreDns: + enabled: false diff --git a/playbooks/templates/charts/prometheus/prometheus-otcinfra2-go-neb-post-config.yaml.j2 b/playbooks/templates/charts/prometheus/prometheus-otcinfra2-go-neb-post-config.yaml.j2 new file mode 100644 index 0000000..ace83ea --- /dev/null +++ b/playbooks/templates/charts/prometheus/prometheus-otcinfra2-go-neb-post-config.yaml.j2 @@ -0,0 +1,84 @@ +apiVersion: v1 +stringData: + config.yaml: | + clients: + - UserID: "{{ chart.go_neb_user_id }}" + AccessToken: "{{ chart.go_neb_access_token }}" + HomeserverURL: "https://matrix.org" + Sync: true + AutoJoinRooms: true + DisplayName: "OpenTelekomCloud Bot" + services: + - ID: "alertmanager_service" + Type: "alertmanager" + UserID: "{{ chart.go_neb_user_id }}" + Config: + webhook_url: "/alertmanager_service" + rooms: + "{{ chart.go_neb_room_id }}": +{% raw %} + text_template: "{{range .Alerts -}} [{{ .Status }}] {{index .Labels \"alertname\" }}: {{index .Annotations \"description\"}} {{ end -}}" + html_template:
"{{range .Alerts -}} {{ $severity := index .Labels \"severity\" }} {{ if eq .Status \"firing\" }} {{ if eq $severity \"critical\"}} [FIRING - CRITICAL] {{ else if eq $severity \"warning\"}} [FIRING - WARNING] {{ else }} [FIRING - {{ $severity }}] {{ end }} {{ else }} [RESOLVED] {{ end }} {{ index .Labels \"alertname\"}} : {{ index .Annotations \"description\"}} <a href=\"{{ .GeneratorURL }}\">source</a> {{end -}}" + msg_type: "m.text" # Must be either `m.text` or `m.notice` +{% endraw %} +kind: Secret +type: Opaque +metadata: + creationTimestamp: null + name: go-neb + namespace: monitoring +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + creationTimestamp: null + labels: + app: go-neb + name: go-neb + namespace: monitoring +spec: + replicas: 1 + selector: + matchLabels: + app: go-neb + strategy: {} + template: + metadata: + creationTimestamp: null + labels: + app: go-neb + spec: + containers: + - image: "{{ chart.go_neb_image }}" + name: go-neb + resources: {} + env: + - name: "BASE_URL" + value: "http://go-neb.monitoring.svc:4050" + - name: "CONFIG_FILE" + value: "/etc/config/go-neb/config.yaml" + volumeMounts: + - name: config-volume + mountPath: /etc/config/go-neb/ + volumes: + - name: config-volume + secret: + secretName: go-neb +--- +apiVersion: v1 +kind: Service +metadata: + creationTimestamp: null + labels: + app: go-neb + name: go-neb + namespace: monitoring +spec: + ports: + - name: 4050-4050 + port: 4050 + protocol: TCP + targetPort: 4050 + selector: + app: go-neb + type: ClusterIP diff --git a/playbooks/templates/charts/prometheus/prometheus-otcinfra2-values.yaml.j2 b/playbooks/templates/charts/prometheus/prometheus-otcinfra2-values.yaml.j2 new file mode 100644 index 0000000..1180fdb --- /dev/null +++ b/playbooks/templates/charts/prometheus/prometheus-otcinfra2-values.yaml.j2 @@ -0,0 +1,117 @@ +alertmanager: + ingress: + enabled: true + ingressClassName: nginx + annotations: + nginx.ingress.kubernetes.io/auth-realm: Authentication Required + nginx.ingress.kubernetes.io/auth-secret: basic-auth + nginx.ingress.kubernetes.io/auth-type: basic + cert-manager.io/cluster-issuer: "letsencrypt-prod" + hosts: + - "{{ chart.alertmanager_fqdn }}" + paths: + - / + tls: + - hosts: + - "{{ chart.alertmanager_fqdn }}" + secretName: alertmanager-mon + alertmanagerSpec: + externalUrl: "https://{{ chart.alertmanager_fqdn }}" + config: + route: + routes: + -
receiver: 'null' + matchers: + - alertname =~ "InfoInhibitor|Watchdog" + - receiver: "matrix-webhook" + matchers: + - severity =~ "critical|warning" + continue: true + receivers: + - name: 'null' + - name: "matrix-webhook" + webhook_configs: + - url: "{{ chart.matrix_webhook_url }}" + send_resolved: true +prometheus: + extraSecret: + name: basic-auth + data: + auth: "{{ chart.prometheus_basic_auth_credentials }}" + ingress: + enabled: true + ingressClassName: nginx + annotations: + nginx.ingress.kubernetes.io/auth-realm: Authentication Required + nginx.ingress.kubernetes.io/auth-secret: basic-auth + nginx.ingress.kubernetes.io/auth-type: basic + cert-manager.io/cluster-issuer: "letsencrypt-prod" + hosts: + - "{{ chart.prometheus_fqdn }}" + paths: + - / + tls: + - hosts: + - "{{ chart.prometheus_fqdn }}" + secretName: prometheus-mon + prometheusSpec: + externalUrl: "https://{{ chart.prometheus_fqdn }}" + retention: "7d" + additionalScrapeConfigs: + - job_name: blackbox + metrics_path: /probe + params: + module: [http_2xx] + static_configs: + - targets: {{ chart.prometheus_endpoints_2xx }} + relabel_configs: + - source_labels: [__address__] + target_label: __param_target + - source_labels: [__param_target] + target_label: instance + - target_label: __address__ + replacement: prometheus-blackbox-exporter:9115 +additionalPrometheusRulesMap: + endpoint-mon: + groups: + - name: critical-rules + rules: + - alert: ProbeFailing + expr: up{job="blackbox"} == 0 or probe_success{job="blackbox"} == 0 + for: 5m + labels: + severity: critical + annotations: + summary: Endpoint Down + description: "Endpoint is Down\n {{ '{{ $labels.instance }}' }}" +grafana: + ingress: + enabled: true + ingressClassName: nginx + annotations: + cert-manager.io/cluster-issuer: "letsencrypt-prod" + hosts: + - "{{ chart.grafana_fqdn }}" + paths: + - / + tls: + - hosts: + - "{{ chart.grafana_fqdn }}" + secretName: grafana-mon + adminPassword: "{{ chart.grafana_admin_password }}" +defaultRules: + rules: 
+ kubeScheduler: false + kubeProxy: false + kubeControllerManager: false + etcd: false +kubeScheduler: + enabled: false +kubeEtcd: + enabled: false +kubeControllerManager: + enabled: false +kubeProxy: + enabled: false +coreDns: + enabled: false diff --git a/playbooks/templates/charts/promtail/promtail-otcci-values.yaml.j2 b/playbooks/templates/charts/promtail/promtail-otcci-values.yaml.j2 new file mode 100644 index 0000000..cc3be71 --- /dev/null +++ b/playbooks/templates/charts/promtail/promtail-otcci-values.yaml.j2 @@ -0,0 +1,11 @@ +config: + clients: + - url: {{ chart.promtail_loki_url }} + tenant_id: {{ chart.promtail_tenant_id }} + basic_auth: + username: {{ chart.promtail_loki_username }} + password: {{ chart.promtail_loki_password }} + snippets: + extraRelabelConfigs: + - target_label: "cluster" + replacement: "otcci" diff --git a/playbooks/templates/charts/promtail/promtail-otcinfra-values.yaml.j2 b/playbooks/templates/charts/promtail/promtail-otcinfra-values.yaml.j2 new file mode 100644 index 0000000..f278fb2 --- /dev/null +++ b/playbooks/templates/charts/promtail/promtail-otcinfra-values.yaml.j2 @@ -0,0 +1,19 @@ +serviceMonitor: + enabled: true + namespace: monitoring + namespaceSelector: + matchNames: + - promtail + labels: + release: "prometheus" +config: + clients: + - url: {{ chart.promtail_loki_url }} + tenant_id: {{ chart.promtail_tenant_id }} + basic_auth: + username: {{ chart.promtail_loki_username }} + password: {{ chart.promtail_loki_password }} + snippets: + extraRelabelConfigs: + - target_label: "cluster" + replacement: "otcinfra" diff --git a/playbooks/templates/charts/promtail/promtail-otcinfra2-values.yaml.j2 b/playbooks/templates/charts/promtail/promtail-otcinfra2-values.yaml.j2 new file mode 100644 index 0000000..8ea58fc --- /dev/null +++ b/playbooks/templates/charts/promtail/promtail-otcinfra2-values.yaml.j2 @@ -0,0 +1,19 @@ +serviceMonitor: + enabled: true + namespace: monitoring + namespaceSelector: + matchNames: + - promtail + labels: 
+ release: "prometheus" +config: + clients: + - url: {{ chart.promtail_loki_url }} + tenant_id: {{ chart.promtail_tenant_id }} + basic_auth: + username: {{ chart.promtail_loki_username }} + password: {{ chart.promtail_loki_password }} + snippets: + extraRelabelConfigs: + - target_label: "cluster" + replacement: "otcinfra2" diff --git a/playbooks/templates/charts/telegraf/telegraf-otcci-values.yaml.j2 b/playbooks/templates/charts/telegraf/telegraf-otcci-values.yaml.j2 new file mode 100644 index 0000000..903f72d --- /dev/null +++ b/playbooks/templates/charts/telegraf/telegraf-otcci-values.yaml.j2 @@ -0,0 +1,53 @@ +override_config: + toml: |+ + [agent] + collection_jitter = "0s" + debug = false + flush_interval = "10s" + flush_jitter = "0s" + hostname = "$HOSTNAME" + interval = "10s" + logfile = "" + metric_batch_size = 1000 + metric_buffer_limit = 10000 + omit_hostname = false + precision = "" + quiet = false + round_interval = true + + + [[outputs.graphite]] + prefix = "stats.telegraf.otcci" + servers = [ + "192.168.14.159:2013" + ] + templates = [ + "disk host.measurement.device.field", + "host.measurement.tags.field" + ] + timeout = 2 + + [[inputs.mem]] + [[inputs.net]] + [[inputs.system]] + + [[inputs.cpu]] + percpu = true + totalcpu = true + collect_cpu_time = false + report_active = false + + [[inputs.disk]] + ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"] + [inputs.disk.tagdrop] + device = ["mapper-docker*"] + + [[inputs.kubernetes]] + url = "https://$HOSTIP:10250" + bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token" + insecure_skip_verify = true + namepass = ["kubernetes_pod_volume"] + [inputs.kubernetes.tagpass] + pod_name = ["[a-z]*-[0-9]"] + [inputs.kubernetes.tagdrop] + volume_name = ["*config*", "*token*"] diff --git a/playbooks/templates/charts/telegraf/telegraf-otcinfra-values.yaml.j2 b/playbooks/templates/charts/telegraf/telegraf-otcinfra-values.yaml.j2 new file mode 100644 index 0000000..7e1e71e 
--- /dev/null +++ b/playbooks/templates/charts/telegraf/telegraf-otcinfra-values.yaml.j2 @@ -0,0 +1,53 @@ +override_config: + toml: |+ + [agent] + collection_jitter = "0s" + debug = false + flush_interval = "10s" + flush_jitter = "0s" + hostname = "$HOSTNAME" + interval = "10s" + logfile = "" + metric_batch_size = 1000 + metric_buffer_limit = 10000 + omit_hostname = false + precision = "" + quiet = false + round_interval = true + + + [[outputs.graphite]] + prefix = "stats.telegraf.otcinfra" + servers = [ + "192.168.14.159:2013" + ] + templates = [ + "disk host.measurement.device.field", + "host.measurement.tags.field" + ] + timeout = 2 + + [[inputs.mem]] + [[inputs.net]] + [[inputs.system]] + + [[inputs.cpu]] + percpu = true + totalcpu = true + collect_cpu_time = false + report_active = false + + [[inputs.disk]] + ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"] + [inputs.disk.tagdrop] + device = ["mapper-docker*"] + + [[inputs.kubernetes]] + url = "https://$HOSTIP:10250" + bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token" + insecure_skip_verify = true + namepass = ["kubernetes_pod_volume"] + [inputs.kubernetes.tagpass] + pod_name = ["[a-z]*-[0-9]"] + [inputs.kubernetes.tagdrop] + volume_name = ["*config*", "*token*"] diff --git a/playbooks/templates/charts/telegraf/telegraf-otcinfra2-values.yaml.j2 b/playbooks/templates/charts/telegraf/telegraf-otcinfra2-values.yaml.j2 new file mode 100644 index 0000000..99eb513 --- /dev/null +++ b/playbooks/templates/charts/telegraf/telegraf-otcinfra2-values.yaml.j2 @@ -0,0 +1,53 @@ +override_config: + toml: |+ + [agent] + collection_jitter = "0s" + debug = false + flush_interval = "10s" + flush_jitter = "0s" + hostname = "$HOSTNAME" + interval = "10s" + logfile = "" + metric_batch_size = 1000 + metric_buffer_limit = 10000 + omit_hostname = false + precision = "" + quiet = false + round_interval = true + + + [[outputs.graphite]] + prefix = "stats.telegraf.otcinfra2" + servers 
= [ + "192.168.14.159:2013" + ] + templates = [ + "disk host.measurement.device.field", + "host.measurement.tags.field" + ] + timeout = 2 + + [[inputs.mem]] + [[inputs.net]] + [[inputs.system]] + + [[inputs.cpu]] + percpu = true + totalcpu = true + collect_cpu_time = false + report_active = false + + [[inputs.disk]] + ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"] + [inputs.disk.tagdrop] + device = ["mapper-docker*"] + + [[inputs.kubernetes]] + url = "https://$HOSTIP:10250" + bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token" + insecure_skip_verify = true + namepass = ["kubernetes_pod_volume"] + [inputs.kubernetes.tagpass] + pod_name = ["[a-z]*-[0-9]"] + [inputs.kubernetes.tagdrop] + volume_name = ["*config*", "*token*"] diff --git a/playbooks/templates/clouds/bridge_all_clouds.yaml.j2 b/playbooks/templates/clouds/bridge_all_clouds.yaml.j2 new file mode 100644 index 0000000..685b97a --- /dev/null +++ b/playbooks/templates/clouds/bridge_all_clouds.yaml.j2 @@ -0,0 +1,357 @@ +# +# Bridge all clouds +# +# This file is deployed to /etc/openstack/clouds.yaml on the +# bastion host and contains information for all cloud environments. 
+# +clouds: + # DNS + otc-dns: + profile: otc + auth: + username: {{ clouds.otcdns.auth.username }} + password: "{{ clouds.otcdns.auth.password }}" + project_name: {{ clouds.otcdns.auth.project_name }} + user_domain_name: {{ clouds.otcdns.auth.user_domain_name }} + # Tests admin + otc-tests-admin: + auth: + auth_url: {{ clouds.otc_tests_admin.auth.auth_url | default('https://iam.eu-de.otc.t-systems.com/v3') }} + user_domain_name: {{ clouds.otc_tests_admin.auth.user_domain_name }} + domain_name: {{ clouds.otc_tests_admin.auth.domain_name }} + username: {{ clouds.otc_tests_admin.auth.username }} + password: "{{ clouds.otc_tests_admin.auth.password }}" + interface: public + identity_api_version: 3 + identity_endpoint_override: https://iam.eu-de.otc.t-systems.com/v3 + # Infra clouds + otcinfra-domain1-admin: + auth: + auth_url: {{ clouds.otcinfra_domain1_admin.auth.auth_url | default('https://iam.eu-de.otc.t-systems.com/v3') }} + user_domain_name: {{ clouds.otcinfra_domain1_admin.auth.user_domain_name }} + domain_name: {{ clouds.otcinfra_domain1_admin.auth.domain_name }} + username: {{ clouds.otcinfra_domain1_admin.auth.username }} + password: "{{ clouds.otcinfra_domain1_admin.auth.password }}" + interface: public + identity_api_version: 3 + identity_endpoint_override: https://iam.eu-de.otc.t-systems.com/v3 + otcinfra-domain2-admin: + auth: + auth_url: {{ clouds.otcinfra_domain2_admin.auth.auth_url | default('https://iam.eu-de.otc.t-systems.com/v3') }} + user_domain_name: {{ clouds.otcinfra_domain2_admin.auth.user_domain_name }} + domain_name: {{ clouds.otcinfra_domain2_admin.auth.domain_name }} + username: {{ clouds.otcinfra_domain2_admin.auth.username }} + password: "{{ clouds.otcinfra_domain2_admin.auth.password }}" + interface: public + identity_api_version: 3 + identity_endpoint_override: https://iam.eu-de.otc.t-systems.com/v3 + otcinfra-domain3-admin: + auth: + auth_url: {{ clouds.otcinfra_domain3_admin.auth.auth_url | 
default('https://iam.eu-de.otc.t-systems.com/v3') }} + user_domain_name: {{ clouds.otcinfra_domain3_admin.auth.user_domain_name }} + domain_name: {{ clouds.otcinfra_domain3_admin.auth.domain_name }} + username: {{ clouds.otcinfra_domain3_admin.auth.username }} + password: "{{ clouds.otcinfra_domain3_admin.auth.password }}" + interface: public + identity_api_version: 3 + identity_endpoint_override: https://iam.eu-de.otc.t-systems.com/v3 + + # Zuul + otcci-main: + profile: otc + auth: + user_domain_name: {{ clouds.otcci_main.auth.user_domain_name }} + project_name: {{ clouds.otcci_main.auth.project_name }} + username: {{ clouds.otcci_main.auth.username }} + password: "{{ clouds.otcci_main.auth.password }}" + interface: public + identity_api_version: 3 + region_name: eu-de + otcci-pool1: + profile: otc + auth: + user_domain_name: {{ clouds.otcci_pool1.auth.user_domain_name }} + project_name: {{ clouds.otcci_pool1.auth.project_name }} + username: {{ clouds.otcci_pool1.auth.username }} + password: "{{ clouds.otcci_pool1.auth.password }}" + interface: public + identity_api_version: 3 + region_name: eu-de + otcci-pool2: + profile: otc + auth: + user_domain_name: {{ clouds.otcci_pool2.auth.user_domain_name }} + project_name: {{ clouds.otcci_pool2.auth.project_name }} + username: {{ clouds.otcci_pool2.auth.username }} + password: "{{ clouds.otcci_pool2.auth.password }}" + interface: public + identity_api_version: 3 + region_name: eu-de + otcci-pool3: + profile: otc + auth: + user_domain_name: {{ clouds.otcci_pool3.auth.user_domain_name }} + project_name: {{ clouds.otcci_pool3.auth.project_name }} + username: {{ clouds.otcci_pool3.auth.username }} + password: "{{ clouds.otcci_pool3.auth.password }}" + interface: public + identity_api_version: 3 + region_name: eu-de + otcci-logs: + profile: otc + auth: + user_domain_name: {{ clouds.otcci_logs.auth.user_domain_name }} + project_name: {{ clouds.otcci_logs.auth.project_name }} + username: {{ clouds.otcci_logs.auth.username }} + 
password: "{{ clouds.otcci_logs.auth.password }}" + #interface: public + #identity_api_version: 3 + region_name: eu-de + object_store_endpoint_override: "{{ clouds.otcci_logs.object_store_endpoint_override }}" + + # Documentation portals + otcinfra-docs: + profile: otc + auth: + user_domain_name: {{ clouds.otcinfra_docs.auth.user_domain_name }} + project_name: {{ clouds.otcinfra_docs.auth.project_name }} + username: {{ clouds.otcinfra_docs.auth.username }} + password: "{{ clouds.otcinfra_docs.auth.password }}" + region_name: eu-de + object_store_endpoint_override: "{{ clouds.otcinfra_docs.object_store_endpoint_override }}" + otcinfra-docs-int: + profile: otc + auth: + user_domain_name: {{ clouds.otcinfra_docs_int.auth.user_domain_name }} + project_name: {{ clouds.otcinfra_docs_int.auth.project_name }} + username: {{ clouds.otcinfra_docs_int.auth.username }} + password: "{{ clouds.otcinfra_docs_int.auth.password }}" + region_name: eu-de + object_store_endpoint_override: "{{ clouds.otcinfra_docs_int.object_store_endpoint_override }}" + otcinfra-docs-hc: + profile: otc + auth: + user_domain_name: {{ clouds.otcinfra_docs_hc.auth.user_domain_name }} + project_name: {{ clouds.otcinfra_docs_hc.auth.project_name }} + username: {{ clouds.otcinfra_docs_hc.auth.username }} + password: "{{ clouds.otcinfra_docs_hc.auth.password }}" + region_name: eu-de + object_store_endpoint_override: "{{ clouds.otcinfra_docs_hc.object_store_endpoint_override }}" + # DB + otcinfra-domain2-database: + profile: otc + auth: + user_domain_name: {{ clouds.otcinfra_domain2.auth.user_domain_name }} + project_name: eu-de_database + username: {{ clouds.otcinfra_domain2.auth.username }} + password: "{{ clouds.otcinfra_domain2.auth.password }}" + interface: public + identity_api_version: 3 + region_name: eu-de + + otcinfra-domain2-infra-de: + profile: otc + auth: + user_domain_name: {{ clouds.otcinfra_domain2.auth.user_domain_name }} + project_name: eu-de_eco_infra + username: {{ 
clouds.otcinfra_domain2.auth.username }} + password: "{{ clouds.otcinfra_domain2.auth.password }}" + interface: public + identity_api_version: 3 + region_name: eu-de + otcinfra-domain2-infra2-de: + profile: otc + auth: + user_domain_name: {{ clouds.otcinfra_domain2.auth.user_domain_name }} + project_name: eu-de_eco_infra2 + username: {{ clouds.otcinfra_domain2.auth.username }} + password: "{{ clouds.otcinfra_domain2.auth.password }}" + interface: public + identity_api_version: 3 + region_name: eu-de + otcinfra-domain3-infra-nl: + profile: otc + auth: + user_domain_name: {{ clouds.otcinfra_domain3.auth.user_domain_name }} + project_name: eu-nl_eco_infra + username: {{ clouds.otcinfra_domain3.auth.username }} + password: "{{ clouds.otcinfra_domain3.auth.password }}" + interface: public + identity_api_version: 3 + region_name: eu-nl + otcinfra-domain3-infra-de: + profile: otc + auth: + user_domain_name: {{ clouds.otcinfra_domain3.auth.user_domain_name }} + project_name: eu-de_eco_infra + username: {{ clouds.otcinfra_domain3.auth.username }} + password: "{{ clouds.otcinfra_domain3.auth.password }}" + interface: public + identity_api_version: 3 + region_name: eu-de + + # OTC Swift + otc-swift: + profile: otc + auth: + username: {{ clouds.otc_swift.auth.username }} + password: "{{ clouds.otc_swift.auth.password }}" + project_name: {{ clouds.otc_swift.auth.project_name }} + user_domain_name: {{ clouds.otc_swift.auth.user_domain_name }} + + # APImon projects + otcapimon-pool1: + profile: otc + auth: + username: {{ clouds.otcapimon_pool1.auth.username }} + password: "{{ clouds.otcapimon_pool1.auth.password }}" + project_name: {{ clouds.otcapimon_pool1.auth.project_name }} + user_domain_name: {{ clouds.otcapimon_pool1.auth.user_domain_name }} + otcapimon-pool2: + profile: otc + auth: + username: {{ clouds.otcapimon_pool2.auth.username }} + password: "{{ clouds.otcapimon_pool2.auth.password }}" + project_name: {{ clouds.otcapimon_pool2.auth.project_name }} + user_domain_name: 
{{ clouds.otcapimon_pool2.auth.user_domain_name }} + # APImon probe projects + otccloudmon-de: + profile: otc + auth: + username: {{ cloud_448_de_cloudmon.auth.username }} + password: "{{ cloud_448_de_cloudmon.auth.password }}" + project_name: {{ cloud_448_de_cloudmon.auth.project_name }} + # Replace once vault plugin is fixed to return user_domain_name + # user_domain_name: { { cloud_448_de_cloudmon.auth.user_domain_name } } + user_domain_id: {{ cloud_448_de_cloudmon.auth.project_domain_id }} + otccloudmon-nl: + profile: otc + auth: + username: {{ cloud_448_nl_cloudmon.auth.username }} + password: "{{ cloud_448_nl_cloudmon.auth.password }}" + project_name: {{ cloud_448_nl_cloudmon.auth.project_name }} + user_domain_id: {{ cloud_448_nl_cloudmon.auth.project_domain_id }} + region_name: eu-nl + + # APImon probe projects + otcapimon-probes11: + profile: otc + auth: + username: {{ apimon_all_clouds.otcapimon_probes11.auth.username }} + password: "{{ apimon_all_clouds.otcapimon_probes11.auth.password }}" + project_name: {{ apimon_all_clouds.otcapimon_probes11.auth.project_name }} + user_domain_name: {{ apimon_all_clouds.otcapimon_probes11.auth.user_domain_name }} + otcapimon-probes12: + profile: otc + auth: + username: {{ apimon_all_clouds.otcapimon_probes12.auth.username }} + password: "{{ apimon_all_clouds.otcapimon_probes12.auth.password }}" + project_name: {{ apimon_all_clouds.otcapimon_probes12.auth.project_name }} + user_domain_name: {{ apimon_all_clouds.otcapimon_probes12.auth.user_domain_name }} + otcapimon-probes13: + profile: otc + auth: + username: {{ apimon_all_clouds.otcapimon_probes13.auth.username }} + password: "{{ apimon_all_clouds.otcapimon_probes13.auth.password }}" + project_name: {{ apimon_all_clouds.otcapimon_probes13.auth.project_name }} + user_domain_name: {{ apimon_all_clouds.otcapimon_probes13.auth.user_domain_name }} + otcapimon-probes14: + profile: otc + auth: + username: {{ apimon_all_clouds.otcapimon_probes14.auth.username }} + password: 
"{{ apimon_all_clouds.otcapimon_probes14.auth.password }}" + project_name: {{ apimon_all_clouds.otcapimon_probes14.auth.project_name }} + user_domain_name: {{ apimon_all_clouds.otcapimon_probes14.auth.user_domain_name }} + otcapimon-probes15: + profile: otc + auth: + username: {{ apimon_all_clouds.otcapimon_probes15.auth.username }} + password: "{{ apimon_all_clouds.otcapimon_probes15.auth.password }}" + project_name: {{ apimon_all_clouds.otcapimon_probes15.auth.project_name }} + user_domain_name: {{ apimon_all_clouds.otcapimon_probes15.auth.user_domain_name }} + region_name: eu-nl + otcapimon-probes16: + profile: otc + auth: + username: {{ apimon_all_clouds.otcapimon_probes16.auth.username }} + password: "{{ apimon_all_clouds.otcapimon_probes16.auth.password }}" + project_name: {{ apimon_all_clouds.otcapimon_probes16.auth.project_name }} + user_domain_name: {{ apimon_all_clouds.otcapimon_probes16.auth.user_domain_name }} + region_name: eu-nl + otcapimon-probes17: + profile: otc + auth: + username: {{ apimon_all_clouds.otcapimon_probes17.auth.username }} + password: "{{ apimon_all_clouds.otcapimon_probes17.auth.password }}" + project_name: {{ apimon_all_clouds.otcapimon_probes17.auth.project_name }} + user_domain_name: {{ apimon_all_clouds.otcapimon_probes17.auth.user_domain_name }} + region_name: eu-nl + otcapimon-probes18: + profile: otc + auth: + username: {{ apimon_all_clouds.otcapimon_probes18.auth.username }} + password: "{{ apimon_all_clouds.otcapimon_probes18.auth.password }}" + project_name: {{ apimon_all_clouds.otcapimon_probes18.auth.project_name }} + user_domain_name: {{ apimon_all_clouds.otcapimon_probes18.auth.user_domain_name }} + region_name: eu-nl + + otcapimon-preprod: + profile: otc + auth: + auth_url: {{ apimon_all_clouds.otcapimon_preprod.auth.auth_url }} + username: {{ apimon_all_clouds.otcapimon_preprod.auth.username }} + password: "{{ apimon_all_clouds.otcapimon_preprod.auth.password }}" + project_name: {{ 
apimon_all_clouds.otcapimon_preprod.auth.project_name }} + user_domain_name: {{ apimon_all_clouds.otcapimon_preprod.auth.user_domain_name }} + otcapimon-hybrid-eum: + auth: + auth_url: {{ apimon_all_clouds.otcapimon_hybrid_eum.auth.auth_url }} + username: {{ apimon_all_clouds.otcapimon_hybrid_eum.auth.username }} + password: "{{ apimon_all_clouds.otcapimon_hybrid_eum.auth.password }}" + project_name: {{ apimon_all_clouds.otcapimon_hybrid_eum.auth.project_name }} + user_domain_name: {{ apimon_all_clouds.otcapimon_hybrid_eum.auth.user_domain_name }} + interface: public + vendor_hook: "otcextensions.sdk:load" + volume_api_version: "2" + otcapimon-hybrid-sbb: + auth: + auth_url: {{ apimon_all_clouds.otcapimon_hybrid_sbb.auth.auth_url }} + username: {{ apimon_all_clouds.otcapimon_hybrid_sbb.auth.username }} + password: "{{ apimon_all_clouds.otcapimon_hybrid_sbb.auth.password }}" + project_name: {{ apimon_all_clouds.otcapimon_hybrid_sbb.auth.project_name }} + user_domain_name: {{ apimon_all_clouds.otcapimon_hybrid_sbb.auth.user_domain_name }} + interface: public + vendor_hook: "otcextensions.sdk:load" + otcapimon-hybrid-swiss: + auth: + auth_url: {{ apimon_all_clouds.otcapimon_hybrid_swiss.auth.auth_url }} + username: {{ apimon_all_clouds.otcapimon_hybrid_swiss.auth.username }} + password: "{{ apimon_all_clouds.otcapimon_hybrid_swiss.auth.password }}" + project_name: {{ apimon_all_clouds.otcapimon_hybrid_swiss.auth.project_name }} + user_domain_name: {{ apimon_all_clouds.otcapimon_hybrid_swiss.auth.user_domain_name }} + interface: public + vendor_hook: "otcextensions.sdk:load" + otcapimon-csm1: + profile: otc + auth: + username: {{ apimon_all_clouds.otcapimon_csm1.auth.username }} + password: "{{ apimon_all_clouds.otcapimon_csm1.auth.password }}" + project_name: {{ apimon_all_clouds.otcapimon_csm1.auth.project_name }} + user_domain_name: {{ apimon_all_clouds.otcapimon_csm1.auth.user_domain_name }} + interface: public + object_store_endpoint_override: "{{ 
apimon_all_clouds.otcapimon_csm1.object_store_endpoint_override }}" + otcapimon-logs: + profile: otc + auth: + username: {{ apimon_all_clouds.otcapimon_logs.auth.username }} + password: "{{ apimon_all_clouds.otcapimon_logs.auth.password }}" + project_name: {{ apimon_all_clouds.otcapimon_logs.auth.project_name }} + user_domain_name: {{ apimon_all_clouds.otcapimon_logs.auth.user_domain_name }} + otcapimon-logs-stg: + profile: otc + auth: + username: {{ apimon_all_clouds.otcapimon_logs_stg.auth.username }} + password: "{{ apimon_all_clouds.otcapimon_logs_stg.auth.password }}" + project_name: {{ apimon_all_clouds.otcapimon_logs_stg.auth.project_name }} + user_domain_name: {{ apimon_all_clouds.otcapimon_logs_stg.auth.user_domain_name }} + object_store_endpoint_override: "{{ apimon_all_clouds.otcapimon_logs_stg.object_store_endpoint_override }}" diff --git a/playbooks/templates/clouds/bridge_kube_config.yaml.j2 b/playbooks/templates/clouds/bridge_kube_config.yaml.j2 new file mode 100644 index 0000000..5bde58f --- /dev/null +++ b/playbooks/templates/clouds/bridge_kube_config.yaml.j2 @@ -0,0 +1,43 @@ +apiVersion: v1 +kind: Config +current-context: otcci +preferences: {} +clusters: + - name: otcci + cluster: + server: {{ otcci_k8s.server }} + insecure-skip-tls-verify: true + - name: otcinfra + cluster: + server: {{ otcinfra1_k8s.server }} + insecure-skip-tls-verify: true + - name: otcinfra2 + cluster: + server: {{ otcinfra2_k8s.server }} + insecure-skip-tls-verify: true +contexts: + - name: otcci + context: + cluster: otcci + user: otcci-admin + - name: otcinfra + context: + cluster: otcinfra + user: otcinfra-admin + - name: otcinfra2 + context: + cluster: otcinfra2 + user: otcinfra2-admin +users: + - name: otcci-admin + user: + client-certificate-data: {{ otcci_k8s.secrets['client.crt'] | b64encode }} + client-key-data: {{ otcci_k8s.secrets['client.key'] | b64encode }} + - name: otcinfra-admin + user: + client-certificate-data: {{ otcinfra1_k8s.secrets['client.crt'] | 
b64encode }} + client-key-data: {{ otcinfra1_k8s.secrets['client.key'] | b64encode }} + - name: otcinfra2-admin + user: + client-certificate-data: {{ otcinfra2_k8s.secrets['client.crt'] | b64encode }} + client-key-data: {{ otcinfra2_k8s.secrets['client.key'] | b64encode }} diff --git a/playbooks/templates/clouds/nodepool_clouds.hcl.j2 b/playbooks/templates/clouds/nodepool_clouds.hcl.j2 new file mode 100644 index 0000000..f7df9bc --- /dev/null +++ b/playbooks/templates/clouds/nodepool_clouds.hcl.j2 @@ -0,0 +1,31 @@ +# +# Nodepool openstacksdk configuration +# +# This file is deployed to nodepool launcher and builder hosts +# and is used there to authenticate nodepool operations to clouds. +# This file only contains projects we are launching test nodes in, and +# the naming should correspond to that used in nodepool configuration +# files. +# +# Generated automatically, please do not edit directly! + +cache: + expiration: + server: 5 + port: 5 + floating-ip: 5 +clouds: +{% for cloud in zuul.nodepool_clouds %} + {{ cloud.name }}: + auth: +[%- with secret "{{ cloud.vault_path }}" %] +[%- with secret (printf "secret/%s" .Data.data.user_secret_name) %] + auth_url: "[% .Data.data.auth_url %]" + user_domain_name: "[% .Data.data.user_domain_name %]" + username: "[% .Data.data.username %]" + password: "[% .Data.data.password %]" +[%- end %] + project_name: "[% .Data.data.project_name %]" +[%- end %] + private: true +{% endfor %} diff --git a/playbooks/templates/clouds/nodepool_clouds.yaml.j2 b/playbooks/templates/clouds/nodepool_clouds.yaml.j2 new file mode 100644 index 0000000..ff862e3 --- /dev/null +++ b/playbooks/templates/clouds/nodepool_clouds.yaml.j2 @@ -0,0 +1,40 @@ +# +# Nodepool openstacksdk configuration +# +# This file is deployed to nodepool launcher and builder hosts +# and is used there to authenticate nodepool operations to clouds.
+# This file only contains projects we are launching test nodes in, and +# the naming should correspond to that used in nodepool configuration +# files. +# + +cache: + expiration: + server: 5 + port: 5 + floating-ip: 5 +clouds: + otcci-pool1: + auth: + auth_url: "https://iam.eu-de.otc.t-systems.com/v3" + username: "{{ nodepool_pool1_username }}" + password: "{{ nodepool_pool1_password }}" + project_name: "{{ nodepool_pool1_project }}" + user_domain_name: "{{ nodepool_pool1_user_domain_name }}" + private: true + otcci-pool2: + auth: + auth_url: "https://iam.eu-de.otc.t-systems.com/v3" + username: "{{ nodepool_pool2_username }}" + password: "{{ nodepool_pool2_password }}" + project_name: "{{ nodepool_pool2_project }}" + user_domain_name: "{{ nodepool_pool2_user_domain_name }}" + private: true + otcci-pool3: + auth: + auth_url: "https://iam.eu-de.otc.t-systems.com/v3" + username: "{{ nodepool_pool3_username }}" + password: "{{ nodepool_pool3_password }}" + project_name: "{{ nodepool_pool3_project }}" + user_domain_name: "{{ nodepool_pool3_user_domain_name }}" + private: true diff --git a/playbooks/templates/clouds/nodepool_kube_config.hcl.j2 b/playbooks/templates/clouds/nodepool_kube_config.hcl.j2 new file mode 100644 index 0000000..90e3b97 --- /dev/null +++ b/playbooks/templates/clouds/nodepool_kube_config.hcl.j2 @@ -0,0 +1,31 @@ +apiVersion: v1 +kind: Config +current-context: otcci +preferences: {} + +clusters: +{% for k8 in zuul.nodepool_k8s %} + - name: {{ k8.name }} + cluster: + server: {{ k8.server }} + insecure-skip-tls-verify: true +{% endfor %} + +contexts: +{% for k8 in zuul.nodepool_k8s %} + - name: {{ k8.name }} + context: + cluster: {{ k8.name }} + user: {{ k8.name }}-admin +{% endfor %} + +users: +{% for k8 in zuul.nodepool_k8s %} + - name: {{ k8.name }}-admin + user: +[%- with secret "{{ k8.vault_path }}" %] + client-certificate-data: "[% base64Encode .Data.data.client_crt %]" + client-key-data: "[% base64Encode .Data.data.client_key %]" +[%- end %] + +{% 
endfor %} diff --git a/playbooks/templates/clouds/nodepool_kube_config.yaml.j2 b/playbooks/templates/clouds/nodepool_kube_config.yaml.j2 new file mode 100644 index 0000000..e031075 --- /dev/null +++ b/playbooks/templates/clouds/nodepool_kube_config.yaml.j2 @@ -0,0 +1,19 @@ +apiVersion: v1 +kind: Config +current-context: otcci +preferences: {} +clusters: + - name: otcci + cluster: + server: {{ otcci_k8s.server }} + insecure-skip-tls-verify: true +contexts: + - name: otcci + context: + cluster: otcci + user: otcci-admin +users: + - name: otcci-admin + user: + client-certificate-data: {{ otcci_k8s.secrets['client.crt'] | b64encode }} + client-key-data: {{ otcci_k8s.secrets['client.key'] | b64encode }} diff --git a/playbooks/x509-certs.yaml b/playbooks/x509-certs.yaml new file mode 100644 index 0000000..3e1b9fe --- /dev/null +++ b/playbooks/x509-certs.yaml @@ -0,0 +1,13 @@ +- hosts: bridge.eco.tsi-dev.otc-service.com:!disabled + become: true + tasks: + - include_role: + name: "x509_cert" + tasks_from: "ca.yaml" + + - include_role: + name: "x509_cert" + tasks_from: "cert.yaml" + vars: + x509_common_name: "{{ item }}" + loop: "{{ x509_certificates }}" diff --git a/playbooks/zuul/roles/add-bastion-host/README.rst b/playbooks/zuul/roles/add-bastion-host/README.rst new file mode 100644 index 0000000..bab34c1 --- /dev/null +++ b/playbooks/zuul/roles/add-bastion-host/README.rst @@ -0,0 +1,4 @@ +Add the bastion host to the inventory dynamically + +For roles that run on the bastion host, it should be added to the +inventory dynamically by the production jobs. 
diff --git a/playbooks/zuul/roles/add-bastion-host/tasks/main.yaml b/playbooks/zuul/roles/add-bastion-host/tasks/main.yaml new file mode 100644 index 0000000..7bd7dc5 --- /dev/null +++ b/playbooks/zuul/roles/add-bastion-host/tasks/main.yaml @@ -0,0 +1,13 @@ +- name: Add bastion host to inventory for production playbook + ansible.builtin.add_host: + name: 'bridge01.eco.tsi-dev.otc-service.com' + groups: 'prod_bastion' + ansible_python_interpreter: python3 + ansible_user: zuul + # Without setting ansible_host directly, mirror-workspace-git-repos + # gets sad because if delegate_to localhost and with add_host that + # ends up with ansible_host being localhost. + ansible_host: 'bridge01.eco.tsi-dev.otc-service.com' + ansible_port: 22 + # Port 19885 is firewalled + zuul_console_disabled: true diff --git a/playbooks/zuul/roles/encrypt-logs/defaults/main.yaml b/playbooks/zuul/roles/encrypt-logs/defaults/main.yaml new file mode 100644 index 0000000..ff3817c --- /dev/null +++ b/playbooks/zuul/roles/encrypt-logs/defaults/main.yaml @@ -0,0 +1,47 @@ +# Anyone who wants to be able to read encrypted published logs should +# have an entry in this variable in the format +# +# - name: +# key_id: +# gpg_asc: +# +encrypt_logs_keys: + - name: 'gtema' + key_id: 'EE9CF90803E191FCA74A6CE421E08923422F2D65' + gpg_asc: | + -----BEGIN PGP PUBLIC KEY BLOCK----- + + mQENBF3DzCsBCAC7E0BwWlmYCTE+KBnKnMfK1C/GgDitqn8pg3JFp95q3HHK3WyD + 2hABJ7a+r5oiNBjknY1X85lL17/xsoU6CuB+8ydZPNJdJOEtHSJJbgzVzev1Q0e+ + AcFudA7aug5Yy1nLQ7dLDJqS1MBS1ACewvNg7fBnbxW5eukJRXIgceE+qM0UzWii + eL5bt3gsOWGgCXq1frF2f9W+4Ge14Pv/dcA4SkMHwbL83uXZoFnQ5LHT+iDbMhMK + pBL7lRjNsIBT4sfpcr9XbnCuvpSWHjO9pv67rlUFJ7X2Tavq9wgCra8TYk/EF7qy + 5Xx5f/iQX7giRN94Rwk0/wV92slN8WIaY6FBABEBAAG0K0FydGVtIEdvbmNoYXJv + diA8YXJ0ZW0uZ29uY2hhcm92QGdtYWlsLmNvbT6JAU0EEwEIADgWIQTunPkIA+GR + /KdKbOQh4IkjQi8tZQUCXcPMKwIbAwULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAK + CRAh4IkjQi8tZTEhB/Ys87/vGqp5b3yRKLqLNhbl4n2nu+8BICq+o7OnQhQHRyxG + 
m49lTfikyO7NSI96ms/TsIWm6+vpTyrUpt/cJlDAOC+YhZsUw11rDxH+x56oFeE5 + 5jpAIP85GD02U9+q7UHFDKgZipUvQX/EoJdkj7y5u6GnO1GuU8STRPsGDpviWS+N + JoVVjLnwfV46y09IBGWzcOSXAaEJ07XAF7l/27zFVgRmoD1zKeIaVgfZ4wn8gavU + gEaXor6brfS9HLAnj8hNGOLV8A/twPd2G6MNz7mQbMI8RJNvGqfZftxwMU0vHev5 + Z15EO5/oCDHlLZDOQrPmKDcKihIKyd65K8/WCHi5AQ0EXcPMKwEIAMMMUXoIikgf + qYGM8EAqGtu7cHRF7b9yebnDQRjyqFkH6EdSTWObeDRwJ7pLMdsgsOqBV/iPNJKN + Z54ALlwvtCxI6saAQqm5/hXcmpSk/36BRTfJOMmaC6VJRxN7uCh1ETr3bCtlX1c3 + yT9WV6Rwg6/A0TisXkKXQOtJlk3yIFVYdiQ/6BP9KN/8nfNgKrD0AQwQhdIYmQoj + Mrm7oQ4Ac4JFeEzAKU5v5HHP87LBriQcpjwIdcEEvGzOL22mhCLIbxyAO0ZUKUjR + gzredTcyAlAk6OXB7dOGYN1B5kPpE9wh6Aa7QiMMk9uXE0egGDQV4guvenePopf5 + Qyoa06Ixtx0AEQEAAYkBNgQYAQgAIBYhBO6c+QgD4ZH8p0ps5CHgiSNCLy1lBQJd + w8wrAhsMAAoJECHgiSNCLy1laykIAIjTdRAoLGSgkU9ME8pVBZm8YUwOFa+xhhD7 + ahiQfcfhQbWtjbM+D18h53VWsLsQ04710qygaLDmQqkvxFYidOum6PB2H54o8EVE + MBolzhGo1ngAcPauE9TJF6gDubJ1XhGgffLC0dHjsPr03U5HKd9YZgVrl5kKGFeD + 13mgPRvwln6gIcyq1jC4q2NCrKKso8U3J6qQyN941Jev8RooCLaHCEwRcxz6+6y0 + i0SjZ+KHhXL6BnpPVoPrfrf5gQ/eMapBmpC8ovE1MfxcXswgNpDnaEj2ByKyemPG + NTcQSDmiSxQ++v8RpJHiLvbi7NAdCZ6petx76r68gFWB1TKsOTg= + =cm/6 + -----END PGP PUBLIC KEY BLOCK----- + +# This is the default list of keys from ``encrypt_log_keys`` that wish +# to always be a recipient of encrypted logs. Others can add +# themselves to particular prod jobs of interest individually. 
+encrypt_logs_recipients: + - gtema diff --git a/playbooks/zuul/roles/encrypt-logs/tasks/main.yaml b/playbooks/zuul/roles/encrypt-logs/tasks/main.yaml new file mode 100644 index 0000000..2f43497 --- /dev/null +++ b/playbooks/zuul/roles/encrypt-logs/tasks/main.yaml @@ -0,0 +1,27 @@ +- name: Encrypt file + include_role: + name: encrypt-file + vars: + encrypt_file: '{{ encrypt_logs_files }}' + encrypt_file_keys: '{{ encrypt_logs_keys }}' + encrypt_file_recipients: '{{ encrypt_logs_recipients + encrypt_logs_job_recipients|default([]) }}' + +- name: Write download script + template: + src: download-logs.sh.j2 + dest: '{{ encrypt_logs_download_script_path }}/download-logs.sh' + mode: 0755 + vars: + encrypt_logs_download_api: 'https://zuul.otc-service.com/api/tenant/{{ zuul.tenant }}' + +- name: Return artifact + zuul_return: + data: + zuul: + artifacts: + # This is parsed by the log download script above, so any + # changes to format must be accounted for there too. + - name: Encrypted logs + url: '{{ encrypt_logs_artifact_path }}' + metadata: + logfiles: "{{ encrypt_logs_files | map('basename') | map('regex_replace', '^(.*)$', '\\1.gpg') | list }}" diff --git a/playbooks/zuul/roles/encrypt-logs/templates/download-logs.sh.j2 b/playbooks/zuul/roles/encrypt-logs/templates/download-logs.sh.j2 new file mode 100644 index 0000000..eaba4a1 --- /dev/null +++ b/playbooks/zuul/roles/encrypt-logs/templates/download-logs.sh.j2 @@ -0,0 +1,89 @@ +#!/bin/bash + +set -e + +ZUUL_API=${ZUUL_API:-"{{ encrypt_logs_download_api }}"} +ZUUL_BUILD_UUID=${ZUUL_BUILD_UUID:-"{{ zuul.build }}"} +{% raw %} +ZUUL_API_URL=${ZUUL_API}/build/${ZUUL_BUILD_UUID} + +(( ${BASH_VERSION%%.*} >= 4 )) || { echo >&2 "bash >=4 required to download."; exit 1; } +command -v python3 >/dev/null 2>&1 || { echo >&2 "Python3 is required to download."; exit 1; } +command -v curl >/dev/null 2>&1 || { echo >&2 "curl is required to download."; exit 1; } + +function log { + echo "$(date -Iseconds) | $@" +} + +function 
get_urls { + /usr/bin/env python3 - < /dev/null + +log "Getting logs from ${ZUUL_BUILD_UUID}" +for (( i=0; i<$len; i++ )); do + file="${files[i]}" + printf -v _out " %-80s [ %04d/%04d ]" "${file}" "${i}" $(( len -1 )) + log "$_out" + save_file $file +done + +for f in ${DOWNLOAD_DIR}/*.gpg; do + log "Decrypting $(basename $f)" + gpg --output ${f/.gpg/} --decrypt ${f} + rm ${f} +done + +popd >/dev/null + +log "Download to ${DOWNLOAD_DIR} complete!" +{% endraw %} diff --git a/playbooks/zuul/run-base-post.yaml b/playbooks/zuul/run-base-post.yaml new file mode 100644 index 0000000..1019ed0 --- /dev/null +++ b/playbooks/zuul/run-base-post.yaml @@ -0,0 +1,52 @@ +- hosts: localhost + tasks: + - name: Make log directories for testing hosts + ansible.builtin.file: + path: "{{ zuul.executor.log_root }}/{{ item }}/logs" + state: directory + recurse: true + loop: "{{ query('inventory_hostnames', 'all') }}" + +- hosts: all + tasks: + - ansible.builtin.include_role: + name: collect-container-logs + # The zuul user isn't part of the docker group on our fake + # production systems. Work around this by operating as root + # when collecting logs. This collects podman containers + # running as root; we may need to think about some flags for + # this role for collecting logs from containers under other + # users. 
+ apply: + become: true + vars: + container_command: "{{ item }}" + loop: + - docker + - podman + + - ansible.builtin.include_role: + name: stage-output + +- hosts: prod_bastion[0] + tasks: + - name: Set log directory + ansible.builtin.set_fact: + log_dir: "{{ zuul.executor.log_root }}/{{ inventory_hostname }}" + + - name: Collect tox output + ansible.builtin.include_role: + name: fetch-tox-output + vars: + tox_envlist: testinfra + zuul_work_dir: src/github.com/opentelekomcloud-infra/system-config + + - name: Collect ansible configuration + ansible.posix.synchronize: + dest: "{{ log_dir }}/etc" + mode: pull + src: "/etc/ansible" + verify_host: true + rsync_opts: + - "--exclude=__pycache__" + ignore_errors: true diff --git a/playbooks/zuul/run-base-pre.yaml b/playbooks/zuul/run-base-pre.yaml new file mode 100644 index 0000000..810dce3 --- /dev/null +++ b/playbooks/zuul/run-base-pre.yaml @@ -0,0 +1,7 @@ +- hosts: all + roles: + - ensure-tox + - multi-node-known-hosts + - copy-build-sshkey + - set-hostname + - multi-node-hosts-file diff --git a/playbooks/zuul/run-base.yaml b/playbooks/zuul/run-base.yaml new file mode 100644 index 0000000..d2eb24d --- /dev/null +++ b/playbooks/zuul/run-base.yaml @@ -0,0 +1,144 @@ +- import_playbook: ../bootstrap-bridge.yaml + vars: + root_rsa_key: "{{ lookup('file', zuul.executor.work_root + '/' + zuul.build + '_id_rsa', rstrip=False) }}" + ansible_cron_disable_job: true + cloud_launcher_disable_job: true + +- hosts: prod_bastion[0] + become: true + tasks: + - name: Write inventory on bridge + include_role: + name: write-inventory + vars: + write_inventory_dest: /home/zuul/src/github.com/opentelekomcloud-infra/system-config/inventory/base/gate-hosts.yaml + write_inventory_exclude_hostvars: + - ansible_user + - ansible_python_interpreter + write_inventory_additional_hostvars: + public_v4: nodepool.private_ipv4 + public_v6: nodepool.public_ipv6 + - name: Add groups config for test nodes + template: + src: "templates/gate-groups.yaml.j2" 
+ dest: "/etc/ansible/hosts/gate-groups.yaml" + - name: Update ansible.cfg to use job inventory + ini_file: + path: /etc/ansible/ansible.cfg + section: defaults + option: inventory + value: /home/zuul/src/github.com/opentelekomcloud-infra/system-config/inventory/base/gate-hosts.yaml,/home/zuul/src/github.com/opentelekomcloud-infra/system-config/inventory/service/groups.yaml,/etc/ansible/hosts/gate-groups.yaml + - name: Make host_vars directory + file: + path: "/etc/ansible/hosts/host_vars" + state: directory + - name: Make group_vars directory + file: + path: "/etc/ansible/hosts/group_vars" + state: directory + - name: Write hostvars files + vars: + bastion_ipv4: "{{ nodepool['private_ipv4'] }}" + bastion_ipv6: "{{ nodepool['private_ipv6'] | default('') }}" + bastion_public_key: "{{ lookup('file', zuul.executor.work_root + '/' + zuul.build + '_id_rsa.pub') }}" + firewalld_test_ports_enable: + # Zuul web console + - 19885/tcp + # selenium + # - 4444/tcp + template: + src: "templates/{{ item }}.j2" + dest: "/etc/ansible/hosts/{{ item }}" + loop: + - group_vars/all.yaml + - group_vars/bastion.yaml + - group_vars/control-plane-clouds.yaml + - group_vars/ssl_certs.yaml + - group_vars/apimon.yaml + - group_vars/apimon-clouds.yaml + - group_vars/apimon-inst1.yaml + - group_vars/statsd.yaml + - group_vars/graphite.yaml + - group_vars/memcached.yaml + - group_vars/alerta.yaml + - group_vars/gitea.yaml + - group_vars/keycloak.yaml + - group_vars/grafana.yaml + - group_vars/proxy.yaml + - group_vars/k8s-controller.yaml + - host_vars/bridge.eco.tsi-dev.otc-service.com.yaml + - host_vars/epmon.centos-stream.yaml + - host_vars/epmon.focal.yaml + - host_vars/hc1.eco.tsi-dev.otc-service.com.yaml + - host_vars/le1.yaml + - host_vars/proxy1.centos-stream.yaml + - host_vars/zk.centos-stream.yaml + - name: Display group membership + command: ansible localhost -m debug -a 'var=groups' + - name: Run base.yaml + shell: "set -o pipefail && ansible-playbook -f 50 -v 
/home/zuul/src/github.com/opentelekomcloud-infra/system-config/playbooks/base.yaml 2>&1 | tee /var/log/ansible/base.yaml.log" + args: + executable: /bin/bash + - name: Run bridge service playbook + shell: "set -o pipefail && ansible-playbook -v /home/zuul/src/github.com/opentelekomcloud-infra/system-config/playbooks/service-bridge.yaml 2>&1 | tee /var/log/ansible/service-bridge.yaml.log" + args: + executable: /bin/bash + - name: Run playbook + when: run_playbooks is defined + loop: "{{ run_playbooks }}" + shell: "set -o pipefail && ansible-playbook -f 50 -v /home/zuul/src/github.com/opentelekomcloud-infra/system-config/{{ item }} 2>&1 | tee /var/log/ansible/{{ item | basename }}.log" + args: + executable: /bin/bash + - name: Run test playbook + when: run_test_playbook is defined + shell: "set -o pipefail && ANSIBLE_ROLES_PATH=/home/zuul/src/github.com/opentelekomcloud-infra/system-config/playbooks/roles ansible-playbook -v /home/zuul/src/github.com/opentelekomcloud-infra/system-config/{{ run_test_playbook }} 2>&1 | tee /var/log/ansible/{{ run_test_playbook | basename }}.log" + args: + executable: /bin/bash + + - name: Generate testinfra extra data fixture + set_fact: + testinfra_extra_data: + zuul_job: '{{ zuul.job }}' + zuul: '{{ zuul }}' + + - name: Write out testinfra extra data fixture + copy: + content: '{{ testinfra_extra_data | to_nice_yaml(indent=2) }}' + dest: '/home/zuul/testinfra_extra_data_fixture.yaml' + + - name: Make screenshots directory + file: + path: '/var/log/screenshots' + state: directory + + - name: Return screenshots artifact + zuul_return: + data: + zuul: + artifacts: + - name: Screenshots + url: "{{ groups['prod_bastion'][0] }}/screenshots" + + - name: Allow PBR's git calls to operate in system-config, despite not owning it + command: git config --global safe.directory /home/zuul/src/github.com/opentelekomcloud-infra/system-config + + - name: Run and collect testinfra + block: + - name: Run testinfra to validate configuration + 
include_role: + name: tox + vars: + tox_envlist: testinfra + # This allows us to run from external projects (like testinfra + # itself) + tox_environment: + TESTINFRA_EXTRA_DATA: '/home/zuul/testinfra_extra_data_fixture.yaml' + zuul_work_dir: src/github.com/opentelekomcloud-infra/system-config + always: + - name: Return testinfra report artifact + zuul_return: + data: + zuul: + artifacts: + - name: testinfra results + url: "{{ groups['prod_bastion'][0] }}/test-results.html" diff --git a/playbooks/zuul/run-production-bootstrap-bridge.yaml b/playbooks/zuul/run-production-bootstrap-bridge.yaml new file mode 100644 index 0000000..e833fc4 --- /dev/null +++ b/playbooks/zuul/run-production-bootstrap-bridge.yaml @@ -0,0 +1,5 @@ +- hosts: localhost + roles: + - add-bastion-host + +- import_playbook: ../bootstrap-bridge.yaml diff --git a/playbooks/zuul/run-production-playbook-post.yaml b/playbooks/zuul/run-production-playbook-post.yaml new file mode 100644 index 0000000..0dfe79a --- /dev/null +++ b/playbooks/zuul/run-production-playbook-post.yaml @@ -0,0 +1,112 @@ +--- +- hosts: localhost + roles: + - add-bastion-host + +- hosts: prod_bastion[0] + tasks: + - name: Encrypt log + when: infra_prod_playbook_encrypt_log|default(False) + block: + + - name: Create temporary staging area for encrypted logs + ansible.builtin.tempfile: + state: directory + register: _encrypt_tempdir + + - name: Copy log to tempdir as Zuul user + ansible.builtin.copy: + src: '/var/log/ansible/{{ playbook_name }}.log' + dest: '{{ _encrypt_tempdir.path }}' + owner: zuul + group: zuul + mode: '0644' + remote_src: true + become: true + + - name: Encrypt logs + include_role: + name: encrypt-logs + vars: + encrypt_logs_files: + - '{{ _encrypt_tempdir.path }}/{{ playbook_name }}.log' + # Artifact URL should just point to root directory, so blank + encrypt_logs_artifact_path: '' + encrypt_logs_download_script_path: '{{ _encrypt_tempdir.path }}' + + - name: Return logs + ansible.posix.synchronize: + src: '{{ 
item[0] }}' + dest: '{{ item[1] }}' + mode: pull + verify_host: true + loop: + - ['{{ _encrypt_tempdir.path }}/{{ playbook_name }}.log.gpg', '{{ zuul.executor.log_root }}/{{ playbook_name }}.log.gpg'] + - ['{{ _encrypt_tempdir.path }}/download-logs.sh' , '{{ zuul.executor.log_root }}/download-gpg-logs.sh'] + + always: + + - name: Remove temporary staging + file: + path: '{{ _encrypt_tempdir.path }}' + state: absent + when: _encrypt_tempdir is defined + + # Not using normal zuul job roles as the bastion host is not a + # test node with all the normal bits in place. + - name: Collect log output + ansible.posix.synchronize: + dest: "{{ zuul.executor.log_root }}/{{ playbook_name }}.log" + mode: pull + src: "/var/log/ansible/{{ playbook_name }}.log" + verify_host: true + when: infra_prod_playbook_collect_log + + - name: Return playbook log artifact to Zuul + when: infra_prod_playbook_collect_log + zuul_return: + data: + zuul: + artifacts: + - name: "Playbook Log" + url: "{{ playbook_name }}.log" + metadata: + type: text + + # Save files locally on bridge + - name: Get original timestamp from file header + ansible.builtin.shell: | + head -1 /var/log/ansible/{{ playbook_name }}.log | sed -n 's/^Running \(.*\):.*$/\1/p' + args: + executable: /bin/bash + register: _log_timestamp + + - name: Turn timestamp into a string + ansible.builtin.set_fact: + _log_timestamp: '{{ _log_timestamp.stdout | trim }}' + + - name: Rename playbook log on bridge + when: not infra_prod_playbook_collect_log + become: yes + ansible.builtin.copy: + remote_src: yes + src: "/var/log/ansible/{{ playbook_name }}.log" + dest: "/var/log/ansible/{{ playbook_name }}.log.{{ _log_timestamp }}" + + # Reset the access/modification time to the timestamp in the filename; this + # makes lining things up more logical + - name: Reset file time + ansible.builtin.file: + path: '/var/log/ansible/{{ playbook_name }}.log.{{ _log_timestamp }}' + state: touch + modification_time: '{{ _log_timestamp }}' + 
modification_time_format: '%Y-%m-%dT%H:%M:%S' + access_time: '{{ _log_timestamp }}' + access_time_format: '%Y-%m-%dT%H:%M:%S' + become: yes + + - name: Cleanup old playbook logs on bridge + when: not infra_prod_playbook_collect_log + become: true + ansible.builtin.shell: | + find /var/log/ansible -name '{{ playbook_name }}.log.*' -type f -mtime +30 -delete diff --git a/playbooks/zuul/run-production-playbook.yaml b/playbooks/zuul/run-production-playbook.yaml new file mode 100644 index 0000000..81b8e79 --- /dev/null +++ b/playbooks/zuul/run-production-playbook.yaml @@ -0,0 +1,24 @@ +--- +- hosts: localhost + roles: + - add-bastion-host + +- hosts: prod_bastion[0] + tasks: + - name: Run the production playbook and capture logs + block: + - name: Get a current timestamp + ansible.builtin.set_fact: + _log_timestamp: "{{ lookup('pipe', 'date +%Y-%m-%dT%H:%M:%S') }}" + + - name: Construct execution command + ansible.builtin.set_fact: + ansible_command: "ansible-playbook -v -f {{ infra_prod_ansible_forks }} /home/zuul/src/github.com/opentelekomcloud-infra/system-config/playbooks/{{ playbook_name }} -e '{{ ((extra_job_vars | default({}))) | to_json }}'" + + - name: Log a playbook start header + become: true + ansible.builtin.shell: 'echo "Running {{ _log_timestamp }}: {{ ansible_command }}" > /var/log/ansible/{{ playbook_name }}.log' + + - name: Run specified playbook on bridge and redirect output + become: true + ansible.builtin.shell: "{{ ansible_command }} >> /var/log/ansible/{{ playbook_name }}.log" diff --git a/playbooks/zuul/templates/gate-groups.yaml.j2 b/playbooks/zuul/templates/gate-groups.yaml.j2 new file mode 100644 index 0000000..76ecdd4 --- /dev/null +++ b/playbooks/zuul/templates/gate-groups.yaml.j2 @@ -0,0 +1,68 @@ +# This is just to ensure nodes only defined in system-config-run-base +# for gate jobs are put in the right groups for testing +plugin: yamlgroup +groups: + certcheck: + - bridge.eco.tsi-dev.otc-service.com + statsd: + - statsd.centos-stream + 
apimon: + - epmon.centos-stream + - epmon.focal + - scheduler.centos-stream + - scheduler.focal + - executor.centos-stream + - executor.focal + apimon-clouds: + - bridge.eco.tsi-dev.otc-service.com + - epmon.centos-stream + - epmon.focal + - scheduler.centos-stream + - scheduler.focal + apimon-epmon: + - epmon.centos-stream + - epmon.focal + apimon-scheduler: + - scheduler.centos-stream + - scheduler.focal + apimon-executor: + - executor.centos-stream + - executor.focal + apimon-inst1: + - epmon.centos-stream + - epmon.focal + - scheduler.centos-stream + - scheduler.focal + - executor.centos-stream + - executor.focal + graphite: + - graphite1.apimon.eco.tsi-dev.otc-service.com + - graphite2.apimon.eco.tsi-dev.otc-service.com + graphite-apimon: + - graphite1.apimon.eco.tsi-dev.otc-service.com + - graphite2.apimon.eco.tsi-dev.otc-service.com + graphite-web: + - web3.eco.tsi-dev.otc-service.com + ssl_certs: + - bridge.eco.tsi-dev.otc-service.com + - graphite1.apimon.eco.tsi-dev.otc-service.com + - graphite2.apimon.eco.tsi-dev.otc-service.com + - le1 + - proxy1.centos-stream + - web1.eco.tsi-dev.otc-service.com + memcached: + - memcached.focal + alerta: + - alerta.focal + gitea: + - gitea.focal + keycloak: + - keycloak.focal + grafana: + - grafana.focal + proxy: + - le1 + vault: + - vault1.eco.tsi-dev.otc-service.com + zookeeper: + - zk.centos-stream diff --git a/playbooks/zuul/templates/group_vars/alerta.yaml.j2 b/playbooks/zuul/templates/group_vars/alerta.yaml.j2 new file mode 100644 index 0000000..9f27dd0 --- /dev/null +++ b/playbooks/zuul/templates/group_vars/alerta.yaml.j2 @@ -0,0 +1,24 @@ +alerta_instances_secrets: + monitoring: + alerta_db_url: "postgresql://fake" + alerta_zulip_api_key: "apikey" + alerta_admin_key: "adminkey" + alerta_secret_key: "adminkey" + alerta_api_key: "apikey" + alerta_admin_users: "admin" + alerta_bind_password: "password" + alerta_add_host: "ldap.internal:172.0.0.1" + alerta_auth_provider: ldap + alerta_auth_options_str: + LDAP_URL: 
"ldaps://ldap.internal:636" + LDAP_DEFAULT_DOMAIN: "example.com" + alerta_auth_options_non_str: + LDAP_ALLOW_SELF_SIGNED_CERT: False + alerta_allowed_environments: ['Prod', 'Dev'] + alerta_plugins_options: + ZULIP_SITE: "https://zulip.internal" + ZULIP_EMAIL: "apimon@zulip.example.com" + +alerta_instance: monitoring +# disable k8 in tests +alerta_k8s_instances: [] diff --git a/playbooks/zuul/templates/group_vars/all.yaml.j2 b/playbooks/zuul/templates/group_vars/all.yaml.j2 new file mode 100644 index 0000000..8e19530 --- /dev/null +++ b/playbooks/zuul/templates/group_vars/all.yaml.j2 @@ -0,0 +1,11 @@ +# If the bastion ipv4 or v6 addresses aren't available (because this +# job runs in an environment without them) just fall back to the +# defaults for the real bridge. +{% if bastion_ipv4 %} +bastion_ipv4: {{ bastion_ipv4 }} +{% endif %} +{% if bastion_ipv6 %} +bastion_ipv6: {{ bastion_ipv6 }} +{% endif %} +bastion_public_key: {{ bastion_public_key }} +firewalld_test_ports_enable: {{ firewalld_test_ports_enable }} diff --git a/playbooks/zuul/templates/group_vars/apimon-clouds.yaml.j2 b/playbooks/zuul/templates/group_vars/apimon-clouds.yaml.j2 new file mode 100644 index 0000000..ca61beb --- /dev/null +++ b/playbooks/zuul/templates/group_vars/apimon-clouds.yaml.j2 @@ -0,0 +1,165 @@ +apimon_all_clouds: + otcapimon_probes1: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + otcapimon_probes2: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + otcapimon_probes3: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + otcapimon_probes4: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + otcapimon_probes5: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + 
user_domain_name: udn + otcapimon_probes6: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + otcapimon_probes11: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + otcapimon_probes12: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + otcapimon_probes13: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + otcapimon_probes14: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + otcapimon_probes15: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + otcapimon_probes16: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + otcapimon_probes17: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + otcapimon_probes18: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + + otcapimon_preprod: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + otcapimon_hybrid_eum: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + otcapimon_hybrid_sbb: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + vendor_hook: "otcextensions.sdk:load" + otcapimon_hybrid_swiss: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + vendor_hook: "otcextensions.sdk:load" + otcapimon_stg_probes1: + profile: fake + auth: + auth_url: https://test.com + 
username: username + password: pwd + project_name: pn + user_domain_name: udn + object_store_endpoint_override: fake + otcapimon_csm1: + profile: fake + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + object_store_endpoint_override: fake + otcapimon_logs: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + vendor_hook: "otcextensions.sdk:load" + otcapimon_logs_stg: + auth: + auth_url: https://test.com + username: username + password: pwd + project_name: pn + user_domain_name: udn + vendor_hook: "otcextensions.sdk:load" + object_store_endpoint_override: fake diff --git a/playbooks/zuul/templates/group_vars/apimon-inst1.yaml.j2 b/playbooks/zuul/templates/group_vars/apimon-inst1.yaml.j2 new file mode 100644 index 0000000..787b7f8 --- /dev/null +++ b/playbooks/zuul/templates/group_vars/apimon-inst1.yaml.j2 @@ -0,0 +1 @@ +apimon_instance: production_stg diff --git a/playbooks/zuul/templates/group_vars/apimon.yaml.j2 b/playbooks/zuul/templates/group_vars/apimon.yaml.j2 new file mode 100644 index 0000000..09eafcc --- /dev/null +++ b/playbooks/zuul/templates/group_vars/apimon.yaml.j2 @@ -0,0 +1,11 @@ +apimon_instances_secrets: + production_stg: + alerta_token: alerta_token + alerta_endpoint: test_alerta_endpoint + image: "quay.io/opentelekomcloud/apimon:latest" + db_url: "postgresql://dummy:dummy@{{ ansible_host }}:5432/dummy" + production_2: + alerta_token: alerta_token + alerta_endpoint: test_alerta_endpoint + image: "quay.io/opentelekomcloud/apimon:latest" + db_url: "postgresql://dummy:dummy@{{ ansible_host }}:5432/dummy" diff --git a/playbooks/zuul/templates/group_vars/bastion.yaml.j2 b/playbooks/zuul/templates/group_vars/bastion.yaml.j2 new file mode 100644 index 0000000..ed384b1 --- /dev/null +++ b/playbooks/zuul/templates/group_vars/bastion.yaml.j2 @@ -0,0 +1 @@ +extra_users: [] diff --git 
a/playbooks/zuul/templates/group_vars/control-plane-clouds.yaml.j2 b/playbooks/zuul/templates/group_vars/control-plane-clouds.yaml.j2 new file mode 100644 index 0000000..3cac190 --- /dev/null +++ b/playbooks/zuul/templates/group_vars/control-plane-clouds.yaml.j2 @@ -0,0 +1,157 @@ +# Necessary for fake clouds.yaml to be written +clouds: + otcdns: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + otc_tests_admin: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + domain_name: dn + + otcinfra_domain1_admin: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + domain_name: dn + otcinfra_domain2_admin: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + domain_name: dn + otcinfra_domain3_admin: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + domain_name: dn + + otcinfra_domain2: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + domain_name: dn + + otcinfra_domain3: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + domain_name: dn + + otcci_main: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + domain_name: dn + otcci_pool1: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + domain_name: dn + otcci_pool2: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + domain_name: dn + otcci_pool3: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + domain_name: dn + otcci_logs: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + domain_name: dn + object_store_endpoint_override: object_store_endpoint_override + + otc_vault_448_de_cloudmon: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: 
udn + domain_name: dn + otc_vault_448_nl_cloudmon: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + domain_name: dn + + otcinfra_docs: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + domain_name: dn + object_store_endpoint_override: object_store_endpoint_override + + otcinfra_docs_int: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + domain_name: dn + object_store_endpoint_override: object_store_endpoint_override + otcinfra_docs_hc: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + domain_name: dn + object_store_endpoint_override: object_store_endpoint_override + + otc_swift: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + otcapimon_pool1: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + otcapimon_pool2: + auth: + username: username + password: pwd + project_name: pn + user_domain_name: udn + +cloud_users: + user1: + cloud: fake + domain: fake + name: fake + password: fake diff --git a/playbooks/zuul/templates/group_vars/gitea.yaml.j2 b/playbooks/zuul/templates/group_vars/gitea.yaml.j2 new file mode 100644 index 0000000..9f70837 --- /dev/null +++ b/playbooks/zuul/templates/group_vars/gitea.yaml.j2 @@ -0,0 +1,5 @@ +gitea_internal_token: "dummy-token" +gitea_secret_key: "dummy-secret-key" +gitea_db_type: "sqlite3" +gitea_cert: "gitea" + diff --git a/playbooks/zuul/templates/group_vars/grafana.yaml.j2 b/playbooks/zuul/templates/group_vars/grafana.yaml.j2 new file mode 100644 index 0000000..b587fe8 --- /dev/null +++ b/playbooks/zuul/templates/group_vars/grafana.yaml.j2 @@ -0,0 +1,27 @@ +firewalld_extra_ports_enable: ['3000/tcp', '8081/tcp'] +grafana_instances_secrets: + dashboard: + grafana_security_admin_password: foobar + grafana_auth_github_enable: true + grafana_auth_ldap_ca_path: "/fake/dir/ldap.toml" + 
grafana_auth_github_client_id: grafana + grafana_auth_github_client_secret: secret + grafana_auth_github_allowed_orgs: opentelekomcloud-infra + grafana_auth_ldap_certificate: | + dummy-ca-cert + grafana_auth_ldap_host: "ldap.example.com" + grafana_auth_ldap_hosts_entry: "ldap.example.com:1.2.3.4" + grafana_auth_ldap_bind_dn: "cn=proxy,ou=profile,dc=example,dc=com" + grafana_auth_ldap_bind_password: 'foobar' + grafana_auth_ldap_search_filter: "(uid=%s)" + grafana_auth_ldap_search_base_dns: "[\"ou=people,dc=example,dc=com\"]" + grafana_auth_ldap_group_search_filter: "(&(objectClass=posixGroup)(memberUid=%s))" + grafana_auth_ldap_group_search_filter_user_attribute: "uid" + grafana_auth_ldap_group_dn_super_admin: "cn=grafana-super-admins,ou=group,dc=example,dc=com" + grafana_auth_ldap_group_dn_admin: "cn=grafana-admins,ou=group,dc=example,dc=com" + grafana_auth_ldap_group_dn_editor: "cn=grafana-editors,ou=group,dc=example,dc=com" + grafana_auth_ldap_group_search_base_dns: "[ \"cn=grafana-super-admins,ou=group,dc=example,dc=com\", \"cn=grafana-admins,ou=group,dc=example,dc=com\", \"cn=grafana-editors,ou=group,dc=example,dc=com\" ]" + +grafana_instance: dashboard +# disable k8 in tests +grafana_k8s_instances: [] diff --git a/playbooks/zuul/templates/group_vars/graphite.yaml.j2 b/playbooks/zuul/templates/group_vars/graphite.yaml.j2 new file mode 100644 index 0000000..82c50fa --- /dev/null +++ b/playbooks/zuul/templates/group_vars/graphite.yaml.j2 @@ -0,0 +1,7 @@ +graphite_public_key: | + ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDW9xL5obur1bEGvMdAEQV+h8uT1fi1SiKz20FPSiYGmL6YeKHN7xrtKr4OQD2GUhyv2jVPculGoaqS+WfQUVL/YfigERGWxErX4pyEW5wU/9TFullPmduGsw03reFvxoJHNm7Uq/Os/Ulthh5J753CuPSAvf1mEPk63/rhyeFcacnzD1tUbzSZrQbC56kXppQyQQFf53MOMTMYrzGOZeBVT7G9OesxTnQrWjrDe5F3VdP5rywMFGG3xvBlvZnvlZiZIkcYtDDm65GsJqCpapBpGXKw0kqXygij3pjRNwMIowR/euwVy9g8EnFV4zGLNNNPq9MeE3/Nujl/W4VNw2UPqgFYrIPi21Moed3PE+CLqUY1mzFVO1ZyPtZoWRt1A1N22fXBgRPI8+42agPU90yKskysWPI27jmHjWuv4XgtuOjyEUJzGg0lOmvlgHeLVtMweyn51K+A7+FsvB+PvEpVlEsm1WTAhTSd/sC536xYA+xfXNtOXjJOe4HGnYZdzD0= fake_keypair + +graphite_private_key: | + -----BEGIN OPENSSH PRIVATE KEY----- + totally_fake + -----END OPENSSH PRIVATE KEY----- diff --git a/playbooks/zuul/templates/group_vars/k8s-controller.yaml.j2 b/playbooks/zuul/templates/group_vars/k8s-controller.yaml.j2 new file mode 100644 index 0000000..bf53b8b --- /dev/null +++ b/playbooks/zuul/templates/group_vars/k8s-controller.yaml.j2 @@ -0,0 +1,6 @@ +# Empty to disable k8 deployments in tests until we have a solution +# +graphite_web_instances: [] +graphite_web_k8s_instances: [] +carbonapi_k8s_instances: [] +zookeeper_k8s_instances: [] diff --git a/playbooks/zuul/templates/group_vars/keycloak.yaml.j2 b/playbooks/zuul/templates/group_vars/keycloak.yaml.j2 new file mode 100644 index 0000000..6e582ae --- /dev/null +++ b/playbooks/zuul/templates/group_vars/keycloak.yaml.j2 @@ -0,0 +1,3 @@ +keycloak_admin_password: "dummy" +keycloak_enable_https: false +keycloak_cert: "keycloak" diff --git a/playbooks/zuul/templates/group_vars/memcached.yaml.j2 b/playbooks/zuul/templates/group_vars/memcached.yaml.j2 new file mode 100644 index 0000000..56ae4d0 --- /dev/null +++ b/playbooks/zuul/templates/group_vars/memcached.yaml.j2 @@ -0,0 +1 @@ +firewalld_extra_ports_enable: ['11211/tcp'] diff --git a/playbooks/zuul/templates/group_vars/nodepool.yaml.j2 b/playbooks/zuul/templates/group_vars/nodepool.yaml.j2 new file mode 100644 index 0000000..addb110 --- /dev/null +++ 
b/playbooks/zuul/templates/group_vars/nodepool.yaml.j2 @@ -0,0 +1,12 @@ +nodepool_pool1_username: user +nodepool_pool1_password: password +nodepool_pool1_project: project +nodepool_pool1_user_domain_name: udn +nodepool_pool2_username: user +nodepool_pool2_password: password +nodepool_pool2_project: project +nodepool_pool2_user_domain_name: udn +nodepool_pool3_username: user +nodepool_pool3_password: password +nodepool_pool3_project: project +nodepool_pool3_user_domain_name: udn diff --git a/playbooks/zuul/templates/group_vars/proxy.yaml.j2 b/playbooks/zuul/templates/group_vars/proxy.yaml.j2 new file mode 100644 index 0000000..a47307b --- /dev/null +++ b/playbooks/zuul/templates/group_vars/proxy.yaml.j2 @@ -0,0 +1 @@ +searchuser_auth: "dummy" diff --git a/playbooks/zuul/templates/group_vars/ssl_certs.yaml.j2 b/playbooks/zuul/templates/group_vars/ssl_certs.yaml.j2 new file mode 100644 index 0000000..9f97c81 --- /dev/null +++ b/playbooks/zuul/templates/group_vars/ssl_certs.yaml.j2 @@ -0,0 +1,2 @@ +acme_directory: "https://acme-staging-v02.api.letsencrypt.org/directory" +ssl_cert_selfsign: true diff --git a/playbooks/zuul/templates/group_vars/statsd.yaml.j2 b/playbooks/zuul/templates/group_vars/statsd.yaml.j2 new file mode 100644 index 0000000..8d5e7b3 --- /dev/null +++ b/playbooks/zuul/templates/group_vars/statsd.yaml.j2 @@ -0,0 +1,15 @@ +statsd_image: "quay.io/opentelekomcloud/statsd:v0.9.0" +statsd_graphite_host: "127.0.0.1" +statsd_graphite_port: 2003 +statsd_graphite_port_pickle: 2004 +statsd_graphite_protocol: "pickle" +statsd_legacy_namespace: false +statsd_server: "./servers/udp" +statsd_delete_timers: false +statsd_delete_gauges: false +statsd_delete_counters: false +statsd_delete_sets: false +statsd_config_location: /etc/statsd + +statsd_service_name: "statsd" +statsd_service_unit: "statsd.service" diff --git a/playbooks/zuul/templates/host_vars/bridge.eco.tsi-dev.otc-service.com.yaml.j2 
b/playbooks/zuul/templates/host_vars/bridge.eco.tsi-dev.otc-service.com.yaml.j2 new file mode 100644 index 0000000..1b203da --- /dev/null +++ b/playbooks/zuul/templates/host_vars/bridge.eco.tsi-dev.otc-service.com.yaml.j2 @@ -0,0 +1,30 @@ +# Necessary for fake clouds.yaml to be written +#clouds: +ansible_cron_disable_job: true +cloud_launcher_disable_job: true +extra_users: [] + +otcci_k8s: + server: https://abc + secrets: + client.crt: fake_cert_key + client.key: fake_key_data + +otcinfra1_k8s: + server: https://fake_server + secrets: + client.crt: fake_cert_key + client.key: fake_key_data + +otcinfra2_k8s: + server: https://fake_server + secrets: + client.crt: fake_cert_key + client.key: fake_key_data + +otcinfra_stg_k8s: + server: https://fake_server + secrets: + client.crt: fake_cert_key + client.key: fake_key_data + diff --git a/playbooks/zuul/templates/host_vars/epmon.centos-stream.yaml.j2 b/playbooks/zuul/templates/host_vars/epmon.centos-stream.yaml.j2 new file mode 100644 index 0000000..e378fc4 --- /dev/null +++ b/playbooks/zuul/templates/host_vars/epmon.centos-stream.yaml.j2 @@ -0,0 +1,2 @@ +apimon_statsd_host: 1.2.3.4 +apimon_zone: zone_centos-stream diff --git a/playbooks/zuul/templates/host_vars/epmon.focal.yaml.j2 b/playbooks/zuul/templates/host_vars/epmon.focal.yaml.j2 new file mode 100644 index 0000000..476f52b --- /dev/null +++ b/playbooks/zuul/templates/host_vars/epmon.focal.yaml.j2 @@ -0,0 +1,2 @@ +apimon_statsd_host: 1.2.3.4 +apimon_zone: zone_focal diff --git a/playbooks/zuul/templates/host_vars/hc1.eco.tsi-dev.otc-service.com.yaml.j2 b/playbooks/zuul/templates/host_vars/hc1.eco.tsi-dev.otc-service.com.yaml.j2 new file mode 100644 index 0000000..dcadfde --- /dev/null +++ b/playbooks/zuul/templates/host_vars/hc1.eco.tsi-dev.otc-service.com.yaml.j2 @@ -0,0 +1,9 @@ +# We can't modify zuul user (failover role) when connecting as zuul user +failover_user: zuul2 +all_users: + zuul2: + comment: Zuul CICD + key: | + ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQCqO/dXXqmBr1RP8+En5iuLDkPtk7S1jbqjD6QppHo3eKe0WDXeENydPQrXrYf1wJcRa9a8Mdxx2tSxVNqyNVLmlyzPzPc9K2TM6shtHoc3Jzd1HlmfB9MJU2amKuqePwAptCgsxxLBvK+mvh0kXmKnkfMSItCpjOyj6udwwFChJFU/2LB3X9FqLCQB7n3FYKwvbrFDtcIa1COo2h8TychwqWAPKj0Fh7M+mjaF41vcBcmz+uaNk5czC0c7b03TVjKTpYFEmZNtoc0taLP6Ya2exYdHo2uiPYmFiPdVFuv6AMpRnO9CRZzQv+1tlcEPVfsp8gHJVOI47NTx5c5PRTMl system-config + uid: 2031 + gid: 2031 diff --git a/playbooks/zuul/templates/host_vars/le1.yaml.j2 b/playbooks/zuul/templates/host_vars/le1.yaml.j2 new file mode 100644 index 0000000..169d5a1 --- /dev/null +++ b/playbooks/zuul/templates/host_vars/le1.yaml.j2 @@ -0,0 +1,3 @@ +ssl_certs: + fake-domain: + - test1.test.com diff --git a/playbooks/zuul/templates/host_vars/proxy1.centos-stream.yaml.j2 b/playbooks/zuul/templates/host_vars/proxy1.centos-stream.yaml.j2 new file mode 100644 index 0000000..02aeb13 --- /dev/null +++ b/playbooks/zuul/templates/host_vars/proxy1.centos-stream.yaml.j2 @@ -0,0 +1,16 @@ +ssl_certs: + dummy1-test: + - dummy1.test.com + +proxy_backends: + - name: "dummy1" + domain_names: + - dummy1.test.com + servers: + - name: "t1" + address: "1.2.3.4:80" + opts: "check" + - name: "t2" + address: "2.3.4.5:8010" + +statsd_host: localhost diff --git a/playbooks/zuul/templates/host_vars/zk.centos-stream.yaml.j2 b/playbooks/zuul/templates/host_vars/zk.centos-stream.yaml.j2 new file mode 100644 index 0000000..264f456 --- /dev/null +++ b/playbooks/zuul/templates/host_vars/zk.centos-stream.yaml.j2 @@ -0,0 +1 @@ +zookeeper_instance_group: zookeeper diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 0000000..f56962d --- /dev/null +++ b/requirements.txt @@ -0,0 +1 @@ +ansible-core diff --git a/roles/set-hostname/README.rst b/roles/set-hostname/README.rst new file mode 100644 index 0000000..957763e --- /dev/null +++ b/roles/set-hostname/README.rst @@ -0,0 +1,7 @@ +Set hostname + +Statically set the hostname, hosts and mailname + +**Role Variables** + +* None diff --git 
a/roles/set-hostname/tasks/main.yml b/roles/set-hostname/tasks/main.yml
new file mode 100644
index 0000000..92dceff
--- /dev/null
+++ b/roles/set-hostname/tasks/main.yml
@@ -0,0 +1,25 @@
+# Setting hostname with systemd apparently
+# requires dbus. We have this on our cloud-provided
+# nodes, but not on the minimal ones we get from
+# nodepool.
+- name: ensure dbus for working hostnamectl
+  become: true
+  ansible.builtin.package:
+    name: dbus
+    state: present
+
+# Set hostname and /etc/hosts
+# Inspired by:
+# https://github.com/ansible/ansible/pull/8482
+# https://gist.github.com/rothgar/8793800
+- name: Set /etc/hostname
+  become: true
+  ansible.builtin.hostname: name="{{ inventory_hostname.split('.', 1)[0] }}"
+
+- name: Set /etc/hosts
+  become: true
+  ansible.builtin.template: src=hosts.j2 dest=/etc/hosts mode=0644
+
+- name: Set /etc/mailname
+  become: true
+  ansible.builtin.template: src=mailname.j2 dest=/etc/mailname mode=0644
diff --git a/roles/set-hostname/templates/hosts.j2 b/roles/set-hostname/templates/hosts.j2
new file mode 100644
index 0000000..1c39377
--- /dev/null
+++ b/roles/set-hostname/templates/hosts.j2
@@ -0,0 +1,2 @@
+127.0.0.1 localhost
+127.0.1.1 {{ inventory_hostname }} {{ inventory_hostname.split('.', 1)[0] }}
diff --git a/roles/set-hostname/templates/mailname.j2 b/roles/set-hostname/templates/mailname.j2
new file mode 100644
index 0000000..b7d75c0
--- /dev/null
+++ b/roles/set-hostname/templates/mailname.j2
@@ -0,0 +1 @@
+{{ inventory_hostname.split('.', 1)[0] }}
diff --git a/setup.cfg b/setup.cfg
new file mode 100644
index 0000000..e24f480
--- /dev/null
+++ b/setup.cfg
@@ -0,0 +1,17 @@
+[metadata]
+name = opentelekomcloud-scs-infra-config
+summary = Open Telekom Cloud SCS Infrastructure Config
+description_file =
+    README.rst
+author = SCS Contributors
+classifier =
+    Environment :: OpenStack
+    Intended Audience :: Information Technology
+    Intended Audience :: System Administrators
+    License :: OSI Approved :: Apache Software License
+    Operating System :: POSIX :: Linux
+    Programming Language :: Python
+
+[build_sphinx]
+all_files = 1
+warning-is-error = 1
diff --git a/setup.py b/setup.py
new file mode 100644
index 0000000..58fb626
--- /dev/null
+++ b/setup.py
@@ -0,0 +1,22 @@
+# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+# implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
+import setuptools
+
+setuptools.setup(
+    setup_requires=['pbr>=2.0.0'],
+    pbr=True,
+    py_modules=[])
diff --git a/test_inventory/group_vars/apimon-clouds.yaml b/test_inventory/group_vars/apimon-clouds.yaml
new file mode 100644
index 0000000..d53d915
--- /dev/null
+++ b/test_inventory/group_vars/apimon-clouds.yaml
@@ -0,0 +1,15 @@
+apimon_all_clouds:
+  otcapimon_probes1:
+    profile: test-profile1
+    auth:
+      username: un
+      password: pwd
+      project_name: pn
+      user_domain_name: udm
+  otcapimon_probes2:
+    profile: test-profile2
+    auth:
+      username: un
+      password: pwd
+      project_name: pn
+      user_domain_name: udm2
diff --git a/test_inventory/group_vars/apimon-inst1.yaml b/test_inventory/group_vars/apimon-inst1.yaml
new file mode 100644
index 0000000..21ecc78
--- /dev/null
+++ b/test_inventory/group_vars/apimon-inst1.yaml
@@ -0,0 +1,4 @@
+apimon_epmon_secure_file_location: /etc/apimon/apimon-secure.yaml
+apimon_epmon_clouds:
+  - target_cloud:
+      service_override: []
diff --git a/test_inventory/group_vars/apimon.yaml b/test_inventory/group_vars/apimon.yaml
new file mode 100644 index 0000000..f724ec4 --- /dev/null +++ b/test_inventory/group_vars/apimon.yaml @@ -0,0 +1,2 @@ +apimon_epmon_secure_file_location: /etc/apimon/apimon-secure.yaml + diff --git a/test_inventory/host_vars/t1.yaml b/test_inventory/host_vars/t1.yaml new file mode 100644 index 0000000..3b4474a --- /dev/null +++ b/test_inventory/host_vars/t1.yaml @@ -0,0 +1,7 @@ +ansible_host: localhost +ansible_connection: local +apimon_zone: zone_t1 +apimon_clouds: + - name: target_cloud + cloud: otcapimon_probes1 + diff --git a/test_inventory/host_vars/t2.yaml b/test_inventory/host_vars/t2.yaml new file mode 100644 index 0000000..cf4570b --- /dev/null +++ b/test_inventory/host_vars/t2.yaml @@ -0,0 +1,7 @@ +ansible_host: localhost +ansible_connection: local +apimon_zone: zone_t2 +apimon_clouds: + - name: target_cloud + cloud: otcapimon_probes2 + diff --git a/test_inventory/hosts.yaml b/test_inventory/hosts.yaml new file mode 100644 index 0000000..60d4930 --- /dev/null +++ b/test_inventory/hosts.yaml @@ -0,0 +1,21 @@ +hosts: + all: + t1: + ansible_host: localhost + ansible_connection: local + t2: + ansible_host: localhost + ansible_connection: local + children: + apimon-clouds: + hosts: + t1: + t2: + apimon-inst1: + hosts: + t1: + t2: + apimon-epmon: + hosts: + t1: + t2: diff --git a/testinfra/conftest.py b/testinfra/conftest.py new file mode 100644 index 0000000..5270e60 --- /dev/null +++ b/testinfra/conftest.py @@ -0,0 +1,20 @@ +import os +import pytest +import yaml + +@pytest.fixture +def zuul_data(): + + data = {} + + with open('/home/zuul/src/github.com/opentelekomcloud-infra/system-config/inventory/base/gate-hosts.yaml') as f: + inventory = yaml.safe_load(f) + data['inventory'] = inventory + + zuul_extra_data_file = os.environ.get('TESTINFRA_EXTRA_DATA') + if zuul_extra_data_file and os.path.exists(zuul_extra_data_file): + with open(zuul_extra_data_file, 'r') as f: + extra = yaml.safe_load(f) + data['extra'] = extra + + return data diff --git a/testinfra/test_acme.py
b/testinfra/test_acme.py new file mode 100644 index 0000000..d0efb5a --- /dev/null +++ b/testinfra/test_acme.py @@ -0,0 +1,21 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +testinfra_hosts = ['le1'] + + +def test_cert_exists(host): + for f in ['csr', 'pem', 'crt']: + crt_file = host.file('/etc/ssl/le1/fake-domain.%s' % f) + assert crt_file.exists + + haproxy_cert = host.file('/etc/ssl/le1/haproxy/fake-domain.pem') + assert haproxy_cert.exists diff --git a/testinfra/test_base.py b/testinfra/test_base.py new file mode 100644 index 0000000..4331819 --- /dev/null +++ b/testinfra/test_base.py @@ -0,0 +1,120 @@ +# Copyright 2018 Red Hat, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +import util + +testinfra_hosts = ['all'] + + +def test_firewalld(host): + + firewalld = host.service('firewalld') + assert firewalld.is_running + assert firewalld.is_enabled + ports = util.verify_firewalld_ports(host) + services = util.verify_firewalld_services(host) + + # Make sure that the zuul console stream rule is still present + zuul = '19885/tcp' + assert zuul in ports + + +def test_ntp(host): + package = host.package("ntp") + if host.system_info.distribution in ['fedora', 'centos']: + package = host.package('chrony') + assert package.is_installed + + service = host.service('chronyd') + assert service.is_running + assert service.is_enabled + + else: + assert not package.is_installed + + service = host.service('systemd-timesyncd') + assert service.is_running + + # Focal updates the status string to just say NTP + if host.system_info.codename == 'bionic': + stdout_string = 'systemd-timesyncd.service active' + else: + stdout_string = 'NTP service: active' + cmd = host.run("timedatectl status") + assert stdout_string in cmd.stdout + + +def test_timezone(host): + tz = host.check_output('date +%Z') + assert tz == "UTC" + + +def test_unbound(host): + output = host.check_output('host opendev.org') + assert 'has address' in output + + +def test_unattended_upgrades(host): + if host.system_info.distribution in ['ubuntu', 'debian']: + package = host.package("unattended-upgrades") + assert package.is_installed + + package = host.package("mailutils") + assert package.is_installed + + cfg_file = host.file("/etc/apt/apt.conf.d/10periodic") + assert cfg_file.exists + assert cfg_file.contains('^APT::Periodic::Enable "1"') + assert cfg_file.contains('^APT::Periodic::Update-Package-Lists "1"') + assert cfg_file.contains('^APT::Periodic::Download-Upgradeable-Packages "1"') + assert cfg_file.contains('^APT::Periodic::AutocleanInterval "5"') + assert cfg_file.contains('^APT::Periodic::Unattended-Upgrade "1"') + assert cfg_file.contains('^APT::Periodic::RandomSleep "1800"') + + 
cfg_file = host.file("/etc/apt/apt.conf.d/50unattended-upgrades") + assert cfg_file.contains('^Unattended-Upgrade::Mail "root"') + + else: + package = host.package("dnf-automatic") + assert package.is_installed + + service = host.service("crond") + assert service.is_enabled + assert service.is_running + + cfg_file = host.file("/etc/dnf/automatic.conf") + assert cfg_file.exists + assert cfg_file.contains('apply_updates = yes') + + +def test_logrotate(host): + '''Check for log rotation configuration files + + The magic number here is [0:5] of the sha1 hash of the full + path to the rotated logfile; the role adds this for uniqueness. + ''' + ansible_vars = host.ansible.get_variables() + if ansible_vars['inventory_hostname'] == 'bridge.eco.tsi-dev.otc-service.com': + cfg_file = host.file("/etc/logrotate.d/ansible.log.37237.conf") + assert cfg_file.exists + assert cfg_file.contains('/var/log/ansible/ansible.log') + + +def test_no_recommends(host): + if host.system_info.distribution in ['ubuntu', 'debian']: + cfg_file = host.file("/etc/apt/apt.conf.d/95disable-recommends") + assert cfg_file.exists + + assert cfg_file.contains('^APT::Install-Recommends "0"') + assert cfg_file.contains('^APT::Install-Suggests "0"') diff --git a/testinfra/test_bridge.py b/testinfra/test_bridge.py new file mode 100644 index 0000000..ae15969 --- /dev/null +++ b/testinfra/test_bridge.py @@ -0,0 +1,81 @@ +# Copyright 2018 Red Hat, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
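The `cfg_file.contains(...)` assertions in the base tests above use line-anchored patterns (`^APT::…`), because testinfra searches the file contents with a regex. The same matching can be sketched offline with only the standard library; the sample file contents here are illustrative, not read from any real host:

```python
import re

# Hypothetical rendering of /etc/apt/apt.conf.d/10periodic; real values
# come from the base playbook, these lines exist only for the demonstration.
PERIODIC = '''APT::Periodic::Enable "1";
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
'''


def contains(text, pattern):
    # Emulate a line-anchored search: re.MULTILINE makes '^' match at the
    # beginning of every line, mirroring the testinfra assertions.
    return re.search(pattern, text, re.MULTILINE) is not None


print(contains(PERIODIC, r'^APT::Periodic::Enable "1"'))          # True
print(contains(PERIODIC, r'^APT::Periodic::RandomSleep "1800"'))  # False
```

Without `re.MULTILINE` the `^` anchor would only match at the start of the whole string, so all but the first assertion would fail.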
+import platform +import pytest +import yaml + +testinfra_hosts = ['bridge.eco.tsi-dev.otc-service.com'] + + +def test_zuul_data(host, zuul_data): + # Test the zuul_data fixture that picks up things set by Zuul + assert 'inventory' in zuul_data + assert 'extra' in zuul_data + assert 'zuul' in zuul_data['extra'] + + +def test_clouds_yaml(host): + clouds_yaml = host.file('/etc/openstack/clouds.yaml') + assert clouds_yaml.exists + + assert b'password' in clouds_yaml.content + yaml.safe_load(clouds_yaml.content) + + +def test_openstacksdk_config(host): + f = host.file('/etc/openstack') + assert f.exists + assert f.is_directory + assert f.user == 'root' + assert f.group == 'root' + assert f.mode == 0o750 + del f + + +def test_root_authorized_keys(host): + authorized_keys = host.file('/root/.ssh/authorized_keys') + assert authorized_keys.exists + + content = authorized_keys.content.decode('utf8') + lines = content.split('\n') + assert len(lines) >= 2 + + +def test_kube_config(host): + if platform.machine() != 'x86_64': + pytest.skip() + kubeconfig = host.file('/root/.kube/config') + assert kubeconfig.exists + + assert b'ZmFrZV9rZXlfZGF0YQ==' in kubeconfig.content + + +def test_kubectl(host): + if platform.machine() != 'x86_64': + pytest.skip() + kube = host.run('kubectl help') + assert kube.rc == 0 + + +def test_zuul_authorized_keys(host): + authorized_keys = host.file('/home/zuul/.ssh/authorized_keys') + assert authorized_keys.exists + + content = authorized_keys.content.decode('utf8') + lines = content.split('\n') + # Remove empty lines + keys = list(filter(None, lines)) + assert len(keys) >= 2 + for key in keys: + assert 'ssh-rsa' in key diff --git a/testinfra/test_gitea.py b/testinfra/test_gitea.py new file mode 100644 index 0000000..7e15f31 --- /dev/null +++ b/testinfra/test_gitea.py @@ -0,0 +1,21 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +testinfra_hosts = ['gitea.focal'] + +def test_gitea_listening(host): + sock = host.socket("tcp://0.0.0.0:2222") + assert sock.is_listening + +def test_gitea_systemd(host): + service = host.service('gitea') + assert service.is_enabled diff --git a/testinfra/test_vault.py b/testinfra/test_vault.py new file mode 100644 index 0000000..633fc62 --- /dev/null +++ b/testinfra/test_vault.py @@ -0,0 +1,24 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +testinfra_hosts = ['vault1.eco.tsi-dev.otc-service.com'] + + +def test_vault_container_listening(host): + sock = host.socket("tcp://0.0.0.0:8200") + assert sock.is_listening + + +def test_vault_systemd(host): + service = host.service('vault') + assert service.is_enabled + assert service.is_running diff --git a/testinfra/util.py b/testinfra/util.py new file mode 100644 index 0000000..42f7709 --- /dev/null +++ b/testinfra/util.py @@ -0,0 +1,50 @@ +# Copyright 2018 Red Hat, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import socket + +def get_ips(value, family=None): + ret = set() + try: + addr_info = socket.getaddrinfo(value, None, family) + except socket.gaierror: + return ret + for addr in addr_info: + ret.add(addr[4][0]) + return ret + + +def verify_firewalld_ports(host): + ports = host.run('firewall-cmd --list-ports --zone public') + ports = [x.strip() for x in ports.stdout.split(' ')] + + needed_ports = [] + + for port in needed_ports: + assert port in ports + + return ports + + +def verify_firewalld_services(host): + services = host.run('firewall-cmd --list-services --zone public') + services = [x.strip() for x in services.stdout.split(' ')] + + needed_services = [ + 'ssh' + ] + for service in needed_services: + assert service in services + + return services diff --git a/tools/ansible-runtime.py b/tools/ansible-runtime.py new file mode 100644 index 0000000..7b1bccd --- /dev/null +++ b/tools/ansible-runtime.py @@ -0,0 +1,48 @@ +#!/usr/bin/python3 + +# Copyright 2018 Red Hat, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
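`get_ips` in `testinfra/util.py` above collects every address `getaddrinfo` returns and swallows resolution failures. A standalone sketch of the same logic (standard library only; the default family here is `0`, getaddrinfo's "any family" value, rather than `None` as in the repo helper):

```python
import socket


def get_ips(value, family=0):
    # Collect unique addresses for a name; treat failed resolution as an
    # empty result instead of raising, as the testinfra helper does.
    ret = set()
    try:
        addr_info = socket.getaddrinfo(value, None, family)
    except socket.gaierror:
        return ret
    for addr in addr_info:
        ret.add(addr[4][0])
    return ret


# localhost resolves via /etc/hosts, so this works without a network.
print('127.0.0.1' in get_ips('localhost', socket.AF_INET))  # True
```

Returning a set deduplicates the multiple `(family, type, proto)` tuples getaddrinfo yields for a single address.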
+ +# This script parses the logfiles on bridge.o.o to give an overview of +# how long the last "run_all.sh" iterations took and give a clue to +# what might have changed in between runs. + +from datetime import datetime +import os + +# TODO: reverse walk rotated logs for longer history +with open('/var/log/ansible/run_all_cron.log') as f: + begin = None + for line in f: + if "--- begin run @" in line: + # 2018-09-05T01:10:36+00:00 + begin = datetime.strptime(line[16:-5], '%Y-%m-%dT%H:%M:%S+00:00') + continue + if "--- end run @" in line: + end = datetime.strptime(line[14:-5], '%Y-%m-%dT%H:%M:%S+00:00') + if not begin: + print("end @ %s had no beginning?" % end) + continue + runtime = end - begin + # NOTE(ianw): try to get what would have been the HEAD at + # the time the run started. "--first-parent" I hope means + # that we show merge commits of when the change actually + # was in the tree, not when it was originally proposed. + git_head_commit = os.popen('git -C /opt/system-config/ rev-list --first-parent -1 --before="%s" master' % begin).read().strip() + git_head = os.popen('git -C /opt/system-config log --abbrev-commit --pretty=oneline --max-count=1 %s' % git_head_commit).read().strip() + print("%s - %s - %s" % (runtime, begin, git_head)) + begin = None + +if begin: + print("Incomplete run started @ %s" % begin) diff --git a/tools/apply-test.sh b/tools/apply-test.sh new file mode 100755 index 0000000..5f541b6 --- /dev/null +++ b/tools/apply-test.sh @@ -0,0 +1,88 @@ +#!/bin/bash -ex + +# Copyright 2014 Hewlett-Packard Development Company, L.P. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License.
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +. ./tools/prep-apply.sh + +if [[ ! -d applytest ]] ; then + mkdir ~/applytest +fi + +trap "sudo mv ~/applytest applytest" EXIT + +# Split the class defs. +csplit -sf ~/applytest/puppetapplytest $PUPPET_MANIFEST '/^}$/' {*} +# Remove } header left by csplit +sed -i -e '/^\}$/d' ~/applytest/puppetapplytest* +# Comment out anything that doesn't begin with a space. +# This gives us the node {} internal contents. +sed -i -e 's/^[^][:space:]$]/#&/g' ~/applytest/puppetapplytest* +sed -i -e 's@hiera(.\([^.]*\).,\([^)]*\))@\2@' ~/applytest/puppetapplytest* +sed -i -e "s@hiera(.\([^.]*\).)@'\1NoDefault'@" ~/applytest/puppetapplytest* + +if [[ `lsb_release -i -s` == 'CentOS' ]]; then + if [[ `lsb_release -r -s` =~ '7' ]]; then + CODENAME='centos7' + fi +elif [[ `lsb_release -i -s` == 'Debian' ]]; then + CODENAME=`lsb_release -c -s` +elif [[ `lsb_release -i -s` == 'Ubuntu' ]]; then + CODENAME=`lsb_release -c -s` +elif [[ `lsb_release -i -s` == 'Fedora' ]]; then + REL=`lsb_release -r -s` + CODENAME="fedora$REL" +fi + +FOUND=0 +for f in `find ~/applytest -name 'puppetapplytest*' -print` ; do + if grep -q "Node-OS: $CODENAME" $f; then + if grep -q "Puppet-Version: !${PUPPET_VERSION}" $f; then + echo "Skipping $f due to unsupported puppet version" + continue + else + cp $f $f.final + FOUND=1 + fi + fi +done + +if [[ $FOUND == "0" ]]; then + echo "No hosts found for node type $CODENAME" + exit 1 +fi + +cat > ~/applytest/primer.pp << EOF +class helloworld { + notify { 'hello, world!': } +} +EOF + +sudo mkdir -p /var/run/puppet +echo "Running apply test primer to avoid 
setup races when run in parallel." +./tools/test_puppet_apply.sh ~/applytest/primer.pp + +THREADS=$(nproc) +if grep -qi centos /etc/os-release ; then + # Single thread on centos to workaround a race with rsync on centos + # when copying puppet modules for multiple puppet applies at the same + # time. + THREADS=1 +fi + +echo "Running apply test on these hosts:" +find ~/applytest -name 'puppetapplytest*.final' -print0 +find ~/applytest -name 'puppetapplytest*.final' -print0 | \ + xargs -0 -P $THREADS -n 1 -I filearg \ + ./tools/test_puppet_apply.sh filearg diff --git a/tools/build-swift-rings.sh b/tools/build-swift-rings.sh new file mode 100755 index 0000000..93434c6 --- /dev/null +++ b/tools/build-swift-rings.sh @@ -0,0 +1,46 @@ +#!/usr/bin/env bash +# +add_disk() { + # $1 - zone + # $2 - ip last octet + # $3 - replication ip last octet + # $4 - disk name + swift-ring-builder data/account.builder add r1z$1-192.168.82.$2:6202R192.168.83.$3:6202/$4 100 + swift-ring-builder data/container.builder add r1z$1-192.168.82.$2:6201R192.168.83.$3:6201/$4 100 + swift-ring-builder data/object.builder add r1z$1-192.168.82.$2:6200R192.168.83.$3:6200/$4 100 +} + +POLICIES="object container account" + +#for p in $POLICIES; do +# swift-ring-builder $p.builder create 14 3 24 +#done + +# Zone 1 +add_disk 1 101 101 vdd1 + +# Zone 2 +add_disk 2 102 102 vdd1 +# +# Zone 3 +add_disk 3 103 103 vdd1 + +# Zone 4 +add_disk 4 104 104 vdd1 + +# Zone 5 +add_disk 5 105 105 vdd1 + +# Zone 6 +add_disk 6 106 106 vdd1 + +# Zone 7 +add_disk 7 107 107 vdd1 + +# Zone 8 +add_disk 8 108 108 vdd1 + + +for p in $POLICIES; do + swift-ring-builder data/$p.builder rebalance +done diff --git a/tools/check_clouds_yaml.py b/tools/check_clouds_yaml.py new file mode 100644 index 0000000..7ab142f --- /dev/null +++ b/tools/check_clouds_yaml.py @@ -0,0 +1,57 @@ +#! 
/usr/bin/env python + +# Copyright 2018 Red Hat +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import openstack +import re +import sys +import tempfile + +FILES_TO_CHECK = ( + 'playbooks/templates/clouds/nodepool_clouds.yaml.j2', + 'playbooks/templates/clouds/bridge_all_clouds.yaml.j2', +) + + +def check_files(): + + with tempfile.TemporaryDirectory() as tempdir: + for file in FILES_TO_CHECK: + # These are actually jinja files that have templating in + # them, we just rewrite them with a string in there for + # the parser to read, as the <>'s can confuse yaml + # depending on how they're quoted in the file + temp = open(os.path.join(tempdir, + os.path.basename(file)), 'w') + in_file = open(file, 'r') + for line in in_file: + line = re.sub(r'{{.*}}', 'loremipsum', line) + temp.write(line) + temp.close() + + try: + print("Checking parsing of %s" % file) + c = openstack.config.OpenStackConfig(config_files=[temp.name]) + except Exception as e: + print("Error parsing : %s" % file) + print(e) + sys.exit(1) + +def main(): + check_files() + +if __name__ == "__main__": + sys.exit(main()) diff --git a/tools/cloud-to-env.py b/tools/cloud-to-env.py new file mode 100644 index 0000000..069ed29 --- /dev/null +++ b/tools/cloud-to-env.py @@ -0,0 +1,30 @@ +import argparse +import sys + +import openstack + + +def main(): + parser = argparse.ArgumentParser() + parser.add_argument( + "--cloud", dest="cloud", required=True, + help="cloud name") + parser.add_argument( + 
"--region", dest="region", required=True, + help="cloud region") + + options = parser.parse_args() + + cloud_region = openstack.config.OpenStackConfig().get_one( + cloud=options.cloud, region_name=options.region) + + print("export OS_REGION_NAME='{region_name}'".format( + region_name=cloud_region.region_name)) + for k, v in cloud_region.auth.items(): + print("export OS_{key}='{value}'".format( + key=k.upper(), + value=v)) + return 0 + +if __name__ == '__main__': + sys.exit(main()) diff --git a/tools/fake-ansible/library/zuul_return.py b/tools/fake-ansible/library/zuul_return.py new file mode 100644 index 0000000..e1542c5 --- /dev/null +++ b/tools/fake-ansible/library/zuul_return.py @@ -0,0 +1,11 @@ +# This is a fake zuul_return to make ansible-lint happy +from ansible.module_utils.basic import AnsibleModule + +def main(): + module = AnsibleModule( + argument_spec=dict( + data=dict(default=None), + path=dict(default=None, type=str), + file=dict(default=None, type=str), + ) + ) diff --git a/tools/generate-diagrams.py b/tools/generate-diagrams.py new file mode 100644 index 0000000..38bba9d --- /dev/null +++ b/tools/generate-diagrams.py @@ -0,0 +1,395 @@ +#!/usr/bin/env python3 + +import argparse +import logging + +import graphviz + +from ansible.parsing.dataloader import DataLoader +from ansible.inventory.manager import InventoryManager as _InventoryManager +from ansible.plugins.loader import inventory_loader +from ansible.vars.manager import VariableManager + + +graph_attr = { + "fontsize": "10", + "bgcolor": "transparent", + "pad": "0", + "splines": "curved" +} +node_attr = { + #'fontsize': '10', + 'shape': 'box', + #'height': '1.3', + #'width': '1', + 'imagescale': 'true' +} +graphviz_graph_attr = { + 'bgcolor': 'transparent', + 'fontcolor': '#2D3436', + 'fontname': 'Sans-Serif', + 'fontsize': '10', + # 'pad': '0', + 'rankdir': 'LR', + # 'ranksep': '0.75', + #'splines': 'curved', + 'compound': 'true' # important for ltail/lhead +} + +graphviz_cluster_attrs = { + 
'bgcolor': '#E5F5FD', + 'shape': 'box', + 'style': 'rounded' +} + +graphviz_icon_node_attrs = { + 'imagescale': 'true', + 'fixedsize': 'true', + 'fontsize': '10', + 'width': '1', + 'height': '1.4', + 'shape':'none', + 'labelloc': 'b', +} + + +# Override Ansible inventory manager to inject our plugin +class InventoryManager(_InventoryManager): + def _fetch_inventory_plugins(self): + plugins = super()._fetch_inventory_plugins() + inventory_loader.add_directory('playbooks/roles/install-ansible/files/inventory_plugins') + for plugin_name in ['yamlgroup']: + plugin = inventory_loader.get(plugin_name) + if plugin: + plugins.append(plugin) + return plugins + + +def zuul(path, inventory, variable_manager): + """General Zuul software diagram""" + g = graphviz.Digraph( + 'Zuul CI/CD', + graph_attr=graphviz_graph_attr, + node_attr={'fixedsize': 'false'} + ) + user = g.node( + 'user', 'Clients', + image='../_images/users.png', + **graphviz_icon_node_attrs + ) + # NOTE: adding elb and user<=>git communication make graph overloaded and + # and badly placed + #elb = g.node( + # 'elb', 'Elastic Load Balancer', + # image='../_images/elb-network-load-balancer.png', + # **graphviz_icon_node_attrs + #) + git = g.node( + 'git', 'Git Provider', + image='../_images/git.png', + **graphviz_icon_node_attrs + ) + #g.edge('user', 'elb') + #g.edge('git', 'elb') + #g.edge('user', 'git') + + # NOTE: cluster name must start with "cluster_" for graphviz + with g.subgraph( + name='cluster_zuul', + graph_attr=graphviz_cluster_attrs, + node_attr={ + 'fontsize': '8' + } + ) as zuul: + zuul.attr(label='Zuul CI/CD') + + zuul.node('zuul-web', 'Zuul Web') + zuul.node('zuul-merger', 'Zuul Merger') + zuul.node('zuul-executor', 'Zuul Executor') + zuul.node('zuul-scheduler', 'Zuul Scheduler') + zuul.node('nodepool-launcher', 'Nodepool Launcher') + zuul.node('nodepool-builder', 'Nodepool Builder') + + g.node( + 'zookeeper', label='Zookeeper', + image='../_images/zookeeper.png', + **graphviz_icon_node_attrs) + 
+ g.edge('zuul-web', 'zookeeper') + g.edge('zuul-merger', 'zookeeper') + g.edge('zuul-executor', 'zookeeper') + g.edge('zuul-scheduler', 'zookeeper') + g.edge('nodepool-launcher', 'zookeeper') + g.edge('nodepool-builder', 'zookeeper') + db = g.node( + 'db', 'SQL Database', + image='../_images/postgresql.png', + **graphviz_icon_node_attrs) + cloud = g.node( + 'cloud', 'Clouds resources', + image='../_images/openstack.png', + **graphviz_icon_node_attrs) + + g.edge('user', 'zuul-web') + g.edge('zuul-merger', 'git') + g.edge('zuul-executor', 'git') + g.edge('zuul-web', 'db') + g.edge('nodepool-launcher', 'cloud') + g.edge('nodepool-builder', 'cloud') + g.edge('zuul-executor', 'cloud') + + g.render(f'{path}/zuul', format='svg', view=False) + + zuul_sec(path, inventory, variable_manager) + zuul_dpl(path, inventory, variable_manager) + + +def zuul_sec(path, inventory, variable_manager): + """Zuul security deployment diagram""" + edge_attrs = {'fontsize': '8'} + edge_attrs_zk = {'color': 'red', 'label': 'TLS', 'fontsize': '8'} + edge_attrs_ssh = {'color': 'blue', 'label': 'SSH', 'fontsize': '8'} + edge_attrs_https = {'color': 'green', 'label': 'HTTPS', 'fontsize': '8'} + + g = graphviz.Digraph( + 'Zuul CI/CD Security Design', + graph_attr=graphviz_graph_attr, + node_attr={'fixedsize': 'false'} + ) + git = g.node( + 'git', 'Git Provider', + image='../_images/git.png', + **graphviz_icon_node_attrs + ) + db = g.node( + 'db', 'SQL Database', + image='../_images/postgresql.png', + **graphviz_icon_node_attrs) + cloud = g.node( + 'cloud', 'Clouds resources', + image='../_images/openstack.png', + **graphviz_icon_node_attrs) + + with g.subgraph( + name='cluster_k8', + graph_attr=graphviz_cluster_attrs, + node_attr={ + 'fontsize': '8' + } + ) as k8: + k8.attr(label='Kubernetes Cluster') + + with k8.subgraph( + name='cluster_zuul', + #graph_attr=graphviz_cluster_attrs + node_attr={ + 'fontsize': '8' + } + ) as zuul: + zuul.attr(label='Zuul Namespace') + + zuul.node('zuul-web', 'Zuul 
Web') + zuul.node('zuul-merger', 'Zuul Merger') + zuul.node('zuul-executor', 'Zuul Executor') + zuul.node('zuul-scheduler', 'Zuul Scheduler') + zuul.node('nodepool-launcher', 'Nodepool Launcher') + zuul.node('nodepool-builder', 'Nodepool Builder') + + with k8.subgraph( + name='cluster_zk', + node_attr={ + 'fontsize': '8' + } + ) as zk: + zk.attr(label='Zuul Namespace') + + zk.node( + 'zookeeper', label='Zookeeper', + image='../_images/zookeeper.png', + **graphviz_icon_node_attrs) + + g.edge('zuul-web', 'zookeeper', **edge_attrs_zk) + g.edge('zuul-merger', 'zookeeper', **edge_attrs_zk) + g.edge('zuul-executor', 'zookeeper', **edge_attrs_zk) + g.edge('zuul-scheduler', 'zookeeper', **edge_attrs_zk) + g.edge('nodepool-launcher', 'zookeeper', **edge_attrs_zk) + g.edge('nodepool-builder', 'zookeeper', **edge_attrs_zk) + + g.edge('zuul-merger', 'git', **edge_attrs_ssh) + g.edge('zuul-executor', 'git', **edge_attrs_ssh) + g.edge('zuul-web', 'db', label='TLS', **edge_attrs) + g.edge('nodepool-launcher', 'cloud', **edge_attrs_https) + g.edge('nodepool-builder', 'cloud', **edge_attrs_https) + g.edge('zuul-executor', 'cloud', **edge_attrs_ssh) + + g.render(f'{path}/zuul_sec', format='svg', view=False) + + +def zuul_dpl(path, inventory, variable_manager): + """ Zuul deployment diagram""" + edge_attrs_zk = {'color': 'red', 'label': 'TLS', 'fontsize': '8'} + edge_attrs_vault = {'color': 'blue', 'label': 'TLS', 'fontsize': '8'} + + g = graphviz.Digraph( + 'Zuul CI/CD Deployment Design', + graph_attr=graphviz_graph_attr, + node_attr={'fixedsize': 'false'} + ) + + g.node( + 'vault', 'Vault', + image='../_images/vault.png', + **graphviz_icon_node_attrs) + + with g.subgraph( + name='cluster_k8', + graph_attr=graphviz_cluster_attrs, + node_attr={ + 'fontsize': '8' + } + ) as k8: + k8.attr(label='Kubernetes Cluster') + + with k8.subgraph( + name='cluster_zuul', + #graph_attr=graphviz_cluster_attrs + node_attr={ + 'fontsize': '8' + } + ) as zuul: + zuul.attr(label='Zuul Namespace') + + 
zuul.node('zuul-web', 'Zuul Web') + zuul.node('zuul-merger', 'Zuul Merger') + zuul.node('zuul-executor', 'Zuul Executor') + zuul.node('zuul-scheduler', 'Zuul Scheduler') + zuul.node('nodepool-launcher', 'Nodepool Launcher') + zuul.node('nodepool-builder', 'Nodepool Builder') + + g.edge('zuul-web', 'vault', **edge_attrs_vault) + g.edge('zuul-merger', 'vault', **edge_attrs_vault) + g.edge('zuul-executor', 'vault', **edge_attrs_vault) + g.edge('zuul-scheduler', 'vault', **edge_attrs_vault) + g.edge('nodepool-launcher', 'vault', **edge_attrs_vault) + g.edge('nodepool-builder', 'vault', **edge_attrs_vault) + + with k8.subgraph( + name='cluster_zk', + node_attr={ + 'fontsize': '8' + } + ) as zk: + zk.attr(label='Zuul Namespace') + + zk.node( + 'zookeeper', label='Zookeeper', + image='../_images/zookeeper.png', + **graphviz_icon_node_attrs) + g.edge('zookeeper', 'vault', **edge_attrs_vault) + + g.edge('zuul-web', 'zookeeper', **edge_attrs_zk) + g.edge('zuul-merger', 'zookeeper', **edge_attrs_zk) + g.edge('zuul-executor', 'zookeeper', **edge_attrs_zk) + g.edge('zuul-scheduler', 'zookeeper', **edge_attrs_zk) + g.edge('nodepool-launcher', 'zookeeper', **edge_attrs_zk) + g.edge('nodepool-builder', 'zookeeper', **edge_attrs_zk) + + g.render(f'{path}/zuul_dpl', format='svg', view=False) + + +def proxy(path, inventory, variable_manager): + dot = graphviz.Digraph( + 'Reverse Proxy', + format='svg', + graph_attr=graphviz_graph_attr, + node_attr={'fixedsize': 'false'} + ) + user = dot.node( + 'user', 'Clients', + image='../_images/users.png', + **graphviz_icon_node_attrs + ) + lb = dot.node( + 'lb', 'Load Balancer', + tooltip='Load Balancer in OTC', + **node_attr) + gw = dot.node( + 'gw', 'Network Gateway', + tooltip='Network Gateway in vCloud', + **node_attr) + dot.edge('user', 'lb') + dot.edge('user', 'gw') + + proxies = [] + with dot.subgraph( + name="cluster_proxy", + graph_attr=graphviz_cluster_attrs + ) as prox: + prox.attr(label='Reverse Proxy') + for host in 
inventory.groups['proxy'].get_hosts(): + host_vars = variable_manager.get_vars( + host=host) + host_name = host_vars['inventory_hostname_short'] + host = prox.node( + host_name, + label=host_name, + tooltip=host_vars['inventory_hostname'], + image='../_images/haproxy.png', + **graphviz_icon_node_attrs + ) + proxies.append(host_name) + provider = host_vars.get('location', {}).get('provider', {}) + if provider == 'otc': + dot.edge('lb', host_name) + elif provider == 'vcloud': + dot.edge('gw', host_name) + + with dot.subgraph( + name="cluster_apps", + graph_attr=graphviz_cluster_attrs + ) as apps: + apps.attr(label='Applications') + edge_from = proxies[len(proxies) // 2] + _apps = [x['name'] for x in host_vars['proxy_backends']] + _apps.sort() + for _app in _apps: + app = apps.node(_app) + dot.edge( + edge_from, _app, + ltail='cluster_proxy') + + dot.render(f'{path}/reverse_proxy', view=False) + + +def main(): + logging.basicConfig(level=logging.DEBUG) + # create parser + parser = argparse.ArgumentParser() + + # add arguments to the parser + parser.add_argument( + "--path", + default='./', + help='Path to generate diagrams in' + ) + # parse the arguments + args = parser.parse_args() + + loader = DataLoader() + inventory = InventoryManager( + loader=loader, + sources=[ + 'inventory/base/hosts.yaml', + 'inventory/service/groups.yaml' + ]) + variable_manager = VariableManager( + loader=loader, + inventory=inventory) + + path = args.path + proxy(path, inventory, variable_manager) + zuul(path, inventory, variable_manager) + + +if __name__ == '__main__': + main() diff --git a/tools/install_modules_acceptance.sh b/tools/install_modules_acceptance.sh new file mode 100755 index 0000000..4c837b6 --- /dev/null +++ b/tools/install_modules_acceptance.sh @@ -0,0 +1,81 @@ +#!/bin/bash -ex + +# Copyright 2015 Hewlett-Packard Development Company, L.P. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +# This script installs the puppet modules required by infra to run acceptance +# tests. It can run in two different contexts. In the first case, it is running +# as part of a zuul-driven check/gate queue where it needs to opportunistically +# install patches to repositories that are not landed yet. In the second case, +# it is running from a base virtual machine by the beaker tooling where it needs +# to install master of all openstack-infra repos and the tagged versions of all +# library modules. + +# This script uses system-config/modules.env as the source of truth for modules +# to install. It detects the presence of /home/zuul to decide if we are running +# in a zuul environment or not. + +ROOT=$(readlink -fn $(dirname $0)/..) + +# These arrays are initialized here and populated in modules.env + +# Array of modules to be installed key:value is module:version. +declare -A MODULES + +# Array of modules to be installed from source and without dependency resolution.
+# key:value is source location, revision to checkout +declare -A SOURCE_MODULES + +# Array of modules to be installed from source and without dependency resolution from openstack git +# key:value is source location, revision to checkout +declare -A INTEGRATION_MODULES + +install_external() { + PUPPET_INTEGRATION_TEST=1 ${ROOT}/install_modules.sh +} + +install_openstack() { + local modulepath + if [ "$PUPPET_VERSION" == "3" ] ; then + modulepath='/etc/puppet/modules' + else + modulepath='/etc/puppetlabs/code/modules' + fi + + sudo -E git clone /home/zuul/src/opendev.org/openstack/project-config /etc/project-config + + project_names="" + source ${ROOT}/modules.env + for MOD in ${!INTEGRATION_MODULES[*]}; do + project_scope=$(basename $(dirname $MOD)) + repo_name=$(basename $MOD) + short_name=$(echo $repo_name | cut -f2- -d-) + sudo -E git clone /home/zuul/src/opendev.org/$project_scope/$repo_name $modulepath/$short_name + done +} + +install_all() { + PUPPET_INTEGRATION_TEST=0 ${ROOT}/install_modules.sh + +} + +if [ -d /home/zuul/src/opendev.org ] ; then + install_external + install_openstack +else + install_all +fi + +# Information on what has been installed +puppet module list diff --git a/tools/module_versions.sh b/tools/module_versions.sh new file mode 100644 index 0000000..617a41e --- /dev/null +++ b/tools/module_versions.sh @@ -0,0 +1,32 @@ +#!/bin/bash + +# Copyright 2014 Hewlett-Packard Development Company, L.P. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License.
+ + + +for mod in $(ls /etc/puppet/modules/); do + echo -n "${mod}: " + cd /etc/puppet/modules/$mod + branch=$(git rev-parse --abbrev-ref HEAD) + if [[ $branch == "HEAD" ]]; then + tag=$(git name-rev --name-only --tags $(git rev-parse HEAD)) + version=$tag + else + version=$branch + fi + echo $version + cd - >/dev/null + +done diff --git a/tools/run-bashate.sh b/tools/run-bashate.sh new file mode 100755 index 0000000..ee166fd --- /dev/null +++ b/tools/run-bashate.sh @@ -0,0 +1,4 @@ +#!/bin/bash + +ROOT=$(readlink -fn $(dirname $0)/.. ) +find $ROOT -not -wholename \*.tox/\* -and \( -name \*.sh -or -name \*rc -or -name functions\* \) -print0 | xargs -0 bashate -i E006 -v diff --git a/tox.ini b/tox.ini new file mode 100644 index 0000000..d55e047 --- /dev/null +++ b/tox.ini @@ -0,0 +1,63 @@ +[tox] +minversion = 1.6 +envlist = linters +skipsdist = True + +[testenv] +basepython = python3 +usedevelop = True +install_command = pip install {opts} {packages} + +[testenv:linters] +deps = + hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0 + bashate>=0.2 # Apache-2.0 + PyYAML>=3.10.0 # MIT + ansible + openstacksdk + testtools + mock + flake8 +whitelist_externals = bash +setenv = + ANSIBLE_LIBRARY= {toxinidir}/tools/fake-ansible/library +commands = + flake8 + {toxinidir}/tools/run-bashate.sh + python3 {toxinidir}/tools/check_clouds_yaml.py + # The following command validates that inventory/base/hosts.yaml + # parses, but doesn't do anything. 
+ bash -c "ANSIBLE_INVENTORY_PLUGINS=./playbooks/roles/install-ansible/files/inventory_plugins ansible -i ./inventory/base/hosts.yaml not_a_host -a 'true'" + python3 -m unittest playbooks/roles/install-ansible/files/inventory_plugins/test_yamlgroup.py + +[testenv:docs] +deps = -r{toxinidir}/doc/requirements.txt +whitelist_externals = cp +commands = + python3 {toxinidir}/tools/generate-diagrams.py --path doc/source/_svg + sphinx-build -W -E -b html doc/source doc/build/html -vv + # this copy is crucially important for svg to work + cp -av doc/source/_images doc/build/html/ + +[testenv:testinfra] +deps = + ansible-core + pytest-html # MPL-2.0 + pytest-testinfra>=6.0.0 + python-memcached + selenium + +# This environment assumes a gate-hosts.yaml file has been written. +passenv = + TESTINFRA_EXTRA_DATA +commands = py.test \ + --junit-xml junit.xml -o junit_family=xunit1 \ + --html=test-results.html --self-contained-html \ + --connection=ansible \ + --ansible-inventory=/home/zuul/src/gitea.eco.tsi-dev.otc-service.com/scs/system-config/inventory/base/gate-hosts.yaml -v testinfra {posargs} + +[flake8] +show-source = True +exclude = .tox,.eggs +ignore = E125,H +select = H231 diff --git a/zuul.d/docker-images/base.yaml b/zuul.d/docker-images/base.yaml new file mode 100644 index 0000000..a3754a0 --- /dev/null +++ b/zuul.d/docker-images/base.yaml @@ -0,0 +1,14 @@ +# Base image building jobs +- job: + name: system-config-build-image + parent: otc-build-container-image + abstract: true + vars: + zuul_work_dir: /home/zuul/src/github.com/opentelekomcloud-infra/system-config + +- job: + name: system-config-upload-image + parent: otcinfra-upload-container-images + abstract: true + vars: + zuul_work_dir: /home/zuul/src/github.com/opentelekomcloud-infra/system-config diff --git a/zuul.d/docker-images/graphite-statsd.yaml b/zuul.d/docker-images/graphite-statsd.yaml new file mode 100644 index 0000000..90870fc --- /dev/null +++ b/zuul.d/docker-images/graphite-statsd.yaml @@ -0,0 +1,20 
@@ +# graphite-statsd jobs +- job: + name: system-config-build-image-graphite-statsd + description: Build a graphite-statsd image. + parent: system-config-build-image + vars: &graphite-statsd_vars + container_images: + - context: docker/graphite-statsd + registry: quay.io + repository: opentelekomcloud/graphite-statsd + tags: ['1.1.10-4', 'latest'] + files: &graphite-statsd_files + - docker/graphite-statsd/ + +- job: + name: system-config-upload-image-graphite-statsd + description: Build and upload a graphite-statsd image. + parent: system-config-upload-image + vars: *graphite-statsd_vars + files: *graphite-statsd_files diff --git a/zuul.d/docker-images/haproxy-statsd.yaml b/zuul.d/docker-images/haproxy-statsd.yaml new file mode 100644 index 0000000..fb6df7c --- /dev/null +++ b/zuul.d/docker-images/haproxy-statsd.yaml @@ -0,0 +1,20 @@ +# haproxy-statsd jobs +- job: + name: system-config-build-image-haproxy-statsd + description: Build a haproxy-statsd image. + parent: system-config-build-image + vars: &haproxy-statsd_vars + container_images: + - context: docker/haproxy-statsd + registry: quay.io + repository: opentelekomcloud/haproxy-statsd + tags: ['latest'] + files: &haproxy-statsd_files + - docker/haproxy-statsd/ + +- job: + name: system-config-upload-image-haproxy-statsd + description: Build and upload a haproxy-statsd image. 
+ parent: system-config-upload-image + vars: *haproxy-statsd_vars + files: *haproxy-statsd_files diff --git a/zuul.d/docker-images/vault.yaml b/zuul.d/docker-images/vault.yaml new file mode 100644 index 0000000..0b5bfc7 --- /dev/null +++ b/zuul.d/docker-images/vault.yaml @@ -0,0 +1,20 @@ +# vault jobs +- job: + name: system-config-build-image-vault + description: Build a vault image with kubectl included + parent: system-config-build-image + vars: &vault_vars + container_images: + - context: docker/vault + registry: quay.io + repository: opentelekomcloud/vault + tags: ['latest'] + files: &vault_files + - docker/vault/ + +- job: + name: system-config-upload-image-vault + description: Build and upload vault image with kubectl included + parent: system-config-upload-image + vars: *vault_vars + files: *vault_files diff --git a/zuul.d/docker-images/zookeeper-statsd.yaml b/zuul.d/docker-images/zookeeper-statsd.yaml new file mode 100644 index 0000000..6c2332d --- /dev/null +++ b/zuul.d/docker-images/zookeeper-statsd.yaml @@ -0,0 +1,20 @@ +# zookeeper-statsd jobs +- job: + name: system-config-build-image-zookeeper-statsd + description: Build a zookeeper-statsd image. + parent: system-config-build-image + vars: &zookeeper-statsd_vars + container_images: + - context: docker/zookeeper-statsd + registry: quay.io + repository: opentelekomcloud/zookeeper-statsd + tags: ['latest'] + files: &zookeeper-statsd_files + - docker/zookeeper-statsd/ + +- job: + name: system-config-upload-image-zookeeper-statsd + description: Build and upload a zookeeper-statsd image. + parent: system-config-upload-image + vars: *zookeeper-statsd_vars + files: *zookeeper-statsd_files diff --git a/zuul.d/docker-images/zuul.yaml b/zuul.d/docker-images/zuul.yaml new file mode 100644 index 0000000..f6a5abe --- /dev/null +++ b/zuul.d/docker-images/zuul.yaml @@ -0,0 +1,42 @@ +# zuul jobs +- job: + name: system-config-build-image-zuul + description: Build zuul images. 
+ parent: system-config-build-image + vars: &zuul_vars + container_images: + - context: "docker/zuul" + registry: quay.io + repository: opentelekomcloud/zuul + target: zuul + tags: + &imagetag ["latest", "change_859940"] + - context: "docker/zuul" + registry: quay.io + repository: opentelekomcloud/zuul-executor + target: zuul-executor + tags: *imagetag + - context: "docker/zuul" + registry: quay.io + repository: opentelekomcloud/zuul-merger + target: zuul-merger + tags: *imagetag + - context: "docker/zuul" + registry: quay.io + repository: opentelekomcloud/zuul-scheduler + target: zuul-scheduler + tags: *imagetag + - context: "docker/zuul" + registry: quay.io + repository: opentelekomcloud/zuul-web + target: zuul-web + tags: *imagetag + files: &zuul_files + - docker/zuul + +- job: + name: system-config-upload-image-zuul + description: Build and upload zuul images. + parent: system-config-upload-image + vars: *zuul_vars + files: *zuul_files diff --git a/zuul.d/infra-prod.yaml b/zuul.d/infra-prod.yaml new file mode 100644 index 0000000..2434a28 --- /dev/null +++ b/zuul.d/infra-prod.yaml @@ -0,0 +1,193 @@ +# Make sure only one run of a system-config playbook happens at a time +- semaphore: + name: infra-prod-playbook + max: 1 + +- job: + name: infra-prod-playbook + parent: otc-infra-prod-base + description: | + Run the specified playbook against production hosts. + + This is a parent job designed to be inherited to enable + CD deployment of our infrastructure. Set playbook_name to + specify the playbook relative to + /home/zuul/src/github.com/opentelekomcloud-infra/system-config/playbooks + on bridgeXX.eco.tsi-dev.otc-service.com.
+ abstract: true + semaphore: infra-prod-playbook + run: playbooks/zuul/run-production-playbook.yaml + post-run: playbooks/zuul/run-production-playbook-post.yaml + required-projects: + - opentelekomcloud-infra/system-config + vars: + infra_prod_ansible_forks: 10 + infra_prod_playbook_collect_log: false + infra_prod_playbook_encrypt_log: true + nodeset: + nodes: [] + +- job: + name: infra-prod-bootstrap-bridge + parent: otc-infra-prod-setup-keys + description: | + Configure the bastion host (bridge) + This job does minimal configuration on the bastion host + (bridge.openstack.org) to allow it to run system-config + playbooks against our production hosts. It sets up Ansible on + the host. + Note that this is separate from infra-prod-service-bridge; + bridge in its role as the bastion host actually runs that + against itself; it includes things not strictly needed to make + the host able to deploy system-config. + run: playbooks/zuul/run-production-bootstrap-bridge.yaml + required-projects: + - name: github.com/stackmon/ansible-collection-apimon + override-checkout: main + - name: github.com/opentelekomcloud/ansible-collection-cloud + override-checkout: main + - name: github.com/opentelekomcloud/ansible-collection-gitcontrol + override-checkout: main + - name: opendev.org/openstack/ansible-collections-openstack + override-checkout: main + files: + - playbooks/bootstrap-bridge.yaml + - playbooks/zuul/run-production-bootstrap-bridge.yaml + - playbooks/zuul/run-production-bootstrap-bridge-add-rootkey.yaml + - playbooks/roles/install-ansible/ + - playbooks/roles/root-keys/ + - inventory/service/host_vars/bridge.eco.tsi-dev.otc-service.com.yaml + - inventory/base/hosts.yaml + - inventory/service/group_vars/bastion.yaml + vars: + install_ansible_collections: + - namespace: opentelekomcloud + name: apimon + repo: stackmon/ansible-collection-apimon + - namespace: opentelekomcloud + name: cloud + repo: opentelekomcloud/ansible-collection-cloud + - namespace: opentelekomcloud +
name: gitcontrol + repo: opentelekomcloud/ansible-collection-gitcontrol + - namespace: openstack + name: cloud + repo: openstack/ansible-collections-openstack + git_provider: opendev.org + install_ansible_requirements: + - hvac + +- job: + name: infra-prod-base + parent: infra-prod-playbook + description: Run the base playbook everywhere. + vars: + playbook_name: base.yaml + infra_prod_ansible_forks: 50 + files: + - inventory/ + - inventory/service/host_vars/ + - inventory/service/group_vars/ + - playbooks/base.yaml + - playbooks/roles/base/ + +- job: + name: infra-prod-service-base + parent: infra-prod-playbook + description: Base job for most service playbooks. + abstract: true + irrelevant-files: + - inventory/service/group_vars/zuul.yaml + +- job: + name: infra-prod-base-ext + parent: infra-prod-service-base + description: Run base-ext.yaml playbook. + vars: + playbook_name: base-ext.yaml + files: + - inventory/ + - playbooks/base-ext.yaml + - playbooks/roles/base/audit/ + +- job: + name: infra-prod-service-bridge + parent: infra-prod-service-base + description: Run service-bridge.yaml playbook. + vars: + playbook_name: service-bridge.yaml + files: + - inventory/ + - playbooks/service-bridge.yaml + - inventory/service/host_vars/bridge.eco.tsi-dev.otc-service.com.yaml + - playbooks/roles/logrotate/ + - playbooks/roles/edit-secrets-script/ + - playbooks/roles/install-kubectl/ + - playbooks/roles/firewalld/ + - playbooks/roles/configure-kubectl/ + - playbooks/roles/configure-openstacksdk/ + - playbooks/templates/clouds/ + +- job: + name: infra-prod-service-x509-cert + parent: infra-prod-service-base + description: Run x509-certs.yaml playbook. + vars: + playbook_name: x509-certs.yaml + files: + - inventory/ + - playbooks/x509-certs.yaml + - playbooks/roles/x509_cert + +- job: + name: infra-prod-service-gitea + parent: infra-prod-service-base + description: Run service-gitea.yaml playbook.
+ vars: + playbook_name: service-gitea.yaml + files: + - inventory/ + - playbooks/service-gitea.yaml + - playbooks/roles/gitea/ + +- job: + name: infra-prod-gitea-sync + parent: infra-prod-service-base + description: Run sync-gitea-data.yaml playbook + vars: + playbook_name: sync-gitea-data.yaml + files: + - playbooks/sync-gitea-data.yaml + +- job: + name: infra-prod-service-acme-ssl + parent: infra-prod-service-base + description: Run acme-certs.yaml playbook. + vars: + playbook_name: acme-certs.yaml + files: + - inventory/ + - playbooks/acme-certs.yaml + - playbooks/roles/acme + +- job: + name: infra-prod-service-vault + parent: infra-prod-service-base + description: Run service-vault.yaml playbook. + vars: + playbook_name: service-vault.yaml + files: + - inventory/ + - playbooks/service-vault.yaml + - playbooks/roles/hashivault + +- job: + name: infra-prod-install-cce + parent: infra-prod-service-base + description: Install cloud CCE clusters + vars: + playbook_name: cloud-cce.yaml + files: + - inventory/service/group_vars/cloud-launcher.yaml + - playbooks/cloud-cce.yaml + - playbooks/roles/cloud_cce diff --git a/zuul.d/project.yaml b/zuul.d/project.yaml new file mode 100644 index 0000000..f4a3aed --- /dev/null +++ b/zuul.d/project.yaml @@ -0,0 +1,86 @@ +--- +- project: + merge-mode: squash-merge + default-branch: main + check: + jobs: + - otc-tox-linters + - system-config-run-base + - system-config-run-acme-ssl + - system-config-run-gitea + gate: + jobs: + - otc-tox-linters + - system-config-run-base + - system-config-run-acme-ssl + - system-config-run-vault + - system-config-run-gitea + deploy: + jobs: + # This installs the ansible on bridge that all the infra-prod + # jobs will run with. Note the jobs use this ansible to then + # run against zuul's checkout of system-config. 
+ - infra-prod-bootstrap-bridge + + # From now on, all jobs should depend on base + - infra-prod-base: &infra-prod-base + dependencies: + - name: infra-prod-bootstrap-bridge + soft: true + + - infra-prod-base-ext: &infra-prod-base-ext + dependencies: + - name: infra-prod-base + soft: true + + - infra-prod-service-bridge: &infra-prod-service-bridge + dependencies: + - name: infra-prod-base + soft: true + + - infra-prod-install-helm-chart: &infra-prod-install-helm-chart + dependencies: + - name: infra-prod-base + soft: true + + - infra-prod-service-acme-ssl: &infra-prod-service-acme-ssl + dependencies: + - name: infra-prod-base + soft: true + + - infra-prod-service-apimon-k8s: &infra-prod-service-apimon-k8s + dependencies: + - name: infra-prod-base + soft: true + + - infra-prod-service-gitea: &infra-prod-service-gitea + dependencies: + - name: infra-prod-base + soft: true + + - infra-prod-service-vault: &infra-prod-service-vault + dependencies: + - name: infra-prod-base + soft: true + + periodic: + # Nightly execution + jobs: + - infra-prod-bootstrap-bridge + + - infra-prod-base: *infra-prod-base + - infra-prod-base-ext: *infra-prod-base-ext + - infra-prod-service-bridge: *infra-prod-service-bridge + + - infra-prod-install-helm-chart: *infra-prod-install-helm-chart + + - infra-prod-service-acme-ssl: *infra-prod-service-acme-ssl + - infra-prod-service-gitea: *infra-prod-service-gitea + - infra-prod-service-vault: *infra-prod-service-vault + + periodic-hourly: + # hourly execution + jobs: + - infra-prod-bootstrap-bridge + + - infra-prod-service-bridge: *infra-prod-service-bridge diff --git a/zuul.d/secrets.yaml b/zuul.d/secrets.yaml new file mode 100644 index 0000000..76b2762 --- /dev/null +++ b/zuul.d/secrets.yaml @@ -0,0 +1,16 @@ +- secret: + name: zuul_eco_system_config_vault + data: + vault_addr: https://vault.eco.tsi-dev.otc-service.com:8200 + role_id: 1cf0e942-0a7b-add6-26b1-76753d54ddab + secret_id: !encrypted/pkcs1-oaep + - 
f7gYQUTi+HqrCuKsPXsnByv2RO0+A8f84rTjAvf4GOfslBGZixZCuiU1BkmPZtWgc+gNw + LsUAovNcqhldF+QkS8awz/XbyEh7jm/YVOXcRx4h81U1x87pGBkmBDI6kY8WkbhjyjoVM + yeyRDxx7vqjBm1ZJTL+KvgzF7iG8Bo5dC57hqbXHkllPFBAypuJ/sPbqYaCQ1rDPVpaJG + Bzo0Cn982b+9Zl1efAPm7wieD4ukXLBJVDFDF6KXE5cx5Tryz8vOJjCWWpFrsvvgLW9VO + 0z4mYkuDWSeT1elXF8oudTaEZdjg1AlUAnTugneB13npqsOMwV1tnJf7YaUjjOjaxvOms + yZi6dQlSIf07nYAg7yFlJF9pT0JoxkDd6PVlpq/Ey64bctve1JDa14UKUYVFG7Vzl/v6A + vIgANKC2yy3GENxtfCyc6t7wWuNE1q3nGYgAnASwtMQi0qxiCegoYelyY1ylYlsOQkhjS + QgM+r6afuIw1Na0e19YTodhtXpKi3lNSAAUHfOfDFkpPe8P1DLhH6y6OqXux8LeIXgiPk + AAotLRGFiwiWbEPoHHllNUh/A7r6st5gqQznLSe0J1Q+Id6PuOWJmpX9Jp4l2P/TphzcE + mCTjOxYqez0mim4Ov5VBbdnB+/rRUTNr/rrnX+553z5PNu1YeE6InnVJfvn4Dg= diff --git a/zuul.d/system-config-run.yaml b/zuul.d/system-config-run.yaml new file mode 100644 index 0000000..3ed7d25 --- /dev/null +++ b/zuul.d/system-config-run.yaml @@ -0,0 +1,175 @@ +- job: + name: system-config-run + description: | + Run the "base" playbook for system-config hosts. + + This is a parent job designed to be inherited. 
+ abstract: true + pre-run: playbooks/zuul/run-base-pre.yaml + run: playbooks/zuul/run-base.yaml + post-run: playbooks/zuul/run-base-post.yaml + vars: + zuul_copy_output: "{{ copy_output | combine(host_copy_output | default({})) }}" + stage_dir: "{{ ansible_user_dir }}/zuul-output" + copy_output: + '/var/log/syslog': logs_txt + '/var/log/messages': logs_txt + '/var/log/docker': logs + '/var/log/containers': logs + install_ansible_collections: + - namespace: opentelekomcloud + name: apimon + repo: stackmon/ansible-collection-apimon + - namespace: opentelekomcloud + name: cloud + repo: opentelekomcloud/ansible-collection-cloud + - namespace: opentelekomcloud + name: gitcontrol + repo: opentelekomcloud/ansible-collection-gitcontrol + - namespace: openstack + name: cloud + repo: openstack/ansible-collections-openstack + git_provider: opendev.org + required-projects: + - name: github.com/opentelekomcloud/ansible-collection-cloud + override-checkout: main + - name: github.com/stackmon/ansible-collection-apimon + override-checkout: main + - name: github.com/opentelekomcloud/ansible-collection-gitcontrol + override-checkout: main + - name: opendev.org/openstack/ansible-collections-openstack + override-checkout: master + host-vars: + bridge*.eco.tsi-dev.otc-service.com: + install_ansible_collections: + - namespace: opentelekomcloud + name: apimon + repo: stackmon/ansible-collection-apimon + - namespace: opentelekomcloud + name: cloud + repo: opentelekomcloud/ansible-collection-cloud + - namespace: opentelekomcloud + name: gitcontrol + repo: opentelekomcloud/ansible-collection-gitcontrol + - namespace: openstack + name: cloud + repo: openstack/ansible-collections-openstack + git_provider: opendev.org + host_copy_output: + '{{ zuul.project.src_dir }}/junit.xml': logs + '{{ zuul.project.src_dir }}/test-results.html': logs + '{{ zuul.project.src_dir }}/inventory/base/gate-hosts.yaml': logs + '/var/log/screenshots': logs + +- job: + name: system-config-run-base + parent: 
system-config-run + description: | + Run the "base" playbook on each of the node types + currently in use. + nodeset: + nodes: + - &bridge_node_x86 {name: bridge99.eco.tsi-dev.otc-service.com, label: ubuntu-jammy} + groups: + # Each job should define this group -- to avoid hard-coding + # the bastion hostname in the job setup, playbooks/tasks refer + # to it only by this group. This should only have one entry + # -- in a couple of places the jobs use the actual hostname + # and assume element [0] here is that hostname. + # + # Note that this shouldn't be confused with the group in + # inventory/service/groups.yaml -- this group contains the + # host that Zuul, running on the executor, will setup as the + # bridge node. This node will then run a nested Ansible to + # test the production playbooks -- *that* Ansible has a + # "bastion" group too + - &bastion_group { name: prod_bastion, nodes: [ bridge99.eco.tsi-dev.otc-service.com ] } + files: + - tox.ini + - playbooks/ + - roles/ + - testinfra/ + +- job: + name: system-config-run-x509-cert + parent: system-config-run + description: | + Run the playbook for the x509 certificates. + nodeset: + nodes: + - <<: *bridge_node_x86 + groups: + - <<: *bastion_group + vars: + run_playbooks: + - playbooks/x509-certs.yaml + files: + - playbooks/bootstrap-bridge.yaml + - playbooks/x509-certs.yaml + - playbooks/roles/x509_cert + +- job: + name: system-config-run-acme-ssl + parent: system-config-run + description: | + Run the playbook for the acme-ssl servers. 
+ nodeset: + nodes: + - <<: *bridge_node_x86 + - name: le1 + label: ubuntu-focal + groups: + - <<: *bastion_group + vars: + run_playbooks: + - playbooks/acme-certs.yaml + files: + - playbooks/bootstrap-bridge.yaml + - playbooks/acme-ssl.yaml + - playbooks/roles/acme_create_certs + - playbooks/roles/acme_request_certs + - playbooks/roles/acme_install_txt_records + - playbooks/roles/acme_drop_txt_records + +- job: + name: system-config-run-vault + parent: system-config-run + description: | + Run the playbook for the vault servers. + nodeset: + nodes: + - <<: *bridge_node_x86 + - name: vault1.eco.tsi-dev.otc-service.com + label: ubuntu-focal + groups: + - <<: *bastion_group + vars: + run_playbooks: + # We do not want to create the CA as part of the ZK setup, therefore we only invoke the additional playbook in the test. + - playbooks/acme-certs.yaml + - playbooks/service-vault.yaml + files: + - playbooks/bootstrap-bridge.yaml + - playbooks/service-vault.yaml + - playbooks/roles/hashivault + +- job: + name: system-config-run-gitea + parent: system-config-run + description: | + Run the playbook for the gitea servers. + nodeset: + nodes: + - <<: *bridge_node_x86 + - name: gitea.focal + label: ubuntu-jammy + groups: + - <<: *bastion_group + vars: + run_playbooks: + - playbooks/service-gitea.yaml + files: + - playbooks/bootstrap-bridge.yaml + - playbooks/service-gitea.yaml + - playbooks/roles/gitea/ + - testinfra/test_gitea.py
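
One detail of `tools/install_modules_acceptance.sh` in this change worth spelling out: `install_openstack` derives each module's checkout location from its repository path, taking the project scope via `basename $(dirname $MOD)` and the short module name via `cut -f2- -d-`. A minimal Python sketch of that derivation, assuming a hypothetical module path (the real entries live in `modules.env`):

```python
import posixpath


def split_module_path(mod):
    """Mirror the shell logic: scope = basename(dirname(mod)),
    repo = basename(mod), short name = everything after the first '-'
    (as with ``cut -f2- -d-``, a name without '-' is returned whole)."""
    project_scope = posixpath.basename(posixpath.dirname(mod))
    repo_name = posixpath.basename(mod)
    short_name = repo_name.split('-', 1)[-1]
    return project_scope, repo_name, short_name


# Hypothetical module path, for illustration only
print(split_module_path('opendev.org/openstack/puppet-httpd'))
# → ('openstack', 'puppet-httpd', 'httpd')
```

As in the shell version, a repository name without a dash is left unchanged, because `cut -f2-` returns the whole field when the delimiter is absent.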