
fixing wrong references

This commit is contained in:
Hasko, Vladimir 2023-05-21 20:24:41 +00:00
parent 536c48b5c3
commit b7483b5dc0


@@ -17,11 +17,11 @@ mentioned test jobs do not need to take care of generating data implicitly. Since
the API-related tasks in the playbooks rely on the Python OpenStack SDK (and its
OTC extensions), metric data is generated automatically by a logging interface of
the SDK ('openstack_api' metrics). Those metrics are collected by statsd and
-stored to `graphite TSDB <metric_databases>`.
+stored to :ref:`graphite TSDB <metric_databases>`.
Additional metric data are also generated by the executor service, which collects
the playbook names, results, and durations ('ansible_stats' metrics) and
-stores them to `postgresql relational database <metric_databases>`.
+stores them to :ref:`postgresql relational database <metric_databases>`.
The playbooks with monitoring scenarios are stored in a separate repository on
`github <https://github.com/opentelekomcloud-infra/apimon-test>`_ (the location
@@ -32,12 +32,12 @@ The metrics generated by the Executor are described on the :ref:`Metric
Definitions <metrics_definition>` page.
In addition to metrics generated and captured by a playbook, ApiMon also captures
-`stdout of the execution <logs>`. and saves this log for additional analysis to OpenStack
+:ref:`stdout of the execution <logs>` and saves this log for additional analysis to OpenStack
Swift storage, where logs are uploaded with a configurable retention
policy.
-New test scenario introduction
+New Test Scenario Introduction
==============================
@@ -59,7 +59,7 @@ Rules for Test Scenarios
Ansible playbooks need to follow some basic regression testing principles to
ensure the sustainability of the continuous execution of such scenarios:
-- **use OpenTelekomCloud collection and OpenStack collection**
+- **OpenTelekomCloud and OpenStack collections**
  - When developing test scenarios, use the available `Opentelekomcloud.Cloud
    <https://docs.otc-service.com/ansible-collection-cloud/>`_ or
@@ -71,24 +71,24 @@ ensure the sustainability of the continuous execution of such scenarios:
    script module and directly call a Python SDK script to invoke the required
    request towards the cloud
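
For illustration, a minimal sketch of such a task using the OpenStack
collection (the surrounding playbook and the usual ``clouds.yaml`` cloud
configuration are assumed):

.. code-block:: yaml

   # query the compute API through the OpenStack collection; the cloud
   # connection comes from the standard clouds.yaml configuration
   - name: List compute servers via the OpenStack collection
     openstack.cloud.server_info:
     register: servers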
-- **unique names of resources**
+- **Unique names of resources**
  - Make sure that resources don't conflict with each other and are easily
    trackable by their unique names
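
One possible sketch, assuming a random numeric suffix is an acceptable
uniqueness mechanism (the ``apimon-test`` prefix is hypothetical):

.. code-block:: yaml

   - name: Generate a unique resource prefix once per run
     ansible.builtin.set_fact:
       prefix: "apimon-test-{{ 999999999 | random }}"

   - name: Create a keypair that is easy to track by its unique name
     openstack.cloud.keypair:
       name: "{{ prefix }}-keypair"
       state: present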
-- **teardown of the resources**
+- **Teardown of the resources**
  - Make sure that deletion / cleanup of the resources is triggered even if some
    of the tasks in the playbook fail, as shown in the sketch below
  - Make sure that deletion / cleanup is triggered in the right order
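
In Ansible this can be achieved with a ``block`` / ``always`` construct; a
minimal sketch reusing the hypothetical prefix from above:

.. code-block:: yaml

   - block:
       - name: Create a keypair
         openstack.cloud.keypair:
           name: "{{ prefix }}-keypair"
           state: present
       # ... tasks exercising the created resource ...
     always:
       # cleanup runs even when a task inside the block fails
       - name: Delete the keypair
         openstack.cloud.keypair:
           name: "{{ prefix }}-keypair"
           state: absent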
-- **simplicity**
+- **Simplicity**
  - Do not overcomplicate the test scenario. Use default, auto-filled parameters
    wherever possible
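
For illustration, a minimal sketch that leans on defaults, assuming the module
and the cloud fill in everything not stated (the name reuses the hypothetical
prefix from above):

.. code-block:: yaml

   # only name and state are given; description, rules and other
   # parameters are left to the module / cloud defaults
   - name: Create a security group with defaults auto-filled
     openstack.cloud.security_group:
       name: "{{ prefix }}-sg"
       state: present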
-- **only basic / core functions are in scope of testing**
+- **Only basic / core functions in scope of testing**
  - ApiMon is not supposed to validate full service functionality. Such
    cases are covered by a different team / framework within QA
@@ -99,14 +99,14 @@ ensure the sustainability of the continuous execution of such scenarios:
  - The fewer functions you use, the lower the potential failure rate of a
    running scenario, for whatever reason
-- **minimize hardcoding**
+- **No hardcoding**
  - Every single hardcoded parameter in a scenario will sooner or later lead to
    an outage of the scenario's runs when that parameter changes
  - Try to obtain all such parameters dynamically from the cloud directly.
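
One hedged sketch of such a dynamic lookup, assuming the ``flavors`` return
key of the openstack.cloud 2.x collection (names here are illustrative):

.. code-block:: yaml

   - name: Look up available flavors instead of hardcoding a flavor ID
     openstack.cloud.compute_flavor_info:
     register: flavor_result

   - name: Pick a flavor dynamically from the query result
     ansible.builtin.set_fact:
       flavor_name: "{{ flavor_result.flavors | map(attribute='name') | first }}"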
-- **use special tags for combined metrics**
+- **Special tags for combined metrics**
  - In case you want to combine multiple tasks of a playbook into a single
    custom metric, you can do so by using the tags parameter of the tasks
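
A minimal sketch, assuming a ``metric=<name>`` tag convention as seen in
typical ApiMon scenarios (verify the exact tag format against the apimon-test
repository linked above); both tasks below would then report into one combined
metric:

.. code-block:: yaml

   - name: Attach the volume
     openstack.cloud.server_volume:
       server: "{{ prefix }}-server"
       volume: "{{ prefix }}-volume"
       state: present
     tags: ["metric=volume_lifecycle"]

   - name: Detach the volume
     openstack.cloud.server_volume:
       server: "{{ prefix }}-server"
       volume: "{{ prefix }}-volume"
       state: absent
     tags: ["metric=volume_lifecycle"]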