diff --git a/doc/source/internal/apimon_training/test_scenarios.rst b/doc/source/internal/apimon_training/test_scenarios.rst
index cb31ce8..3b66956 100644
--- a/doc/source/internal/apimon_training/test_scenarios.rst
+++ b/doc/source/internal/apimon_training/test_scenarios.rst
@@ -17,11 +17,11 @@
 mentioned test jobs do not need to take care of generating data implicitly. Since
 the API related tasks in the playbooks rely on the Python OpenStack SDK (and
 its OTC extensions), metric data are generated automatically by a logging interface
 of the SDK ('openstack_api' metrics). Those metrics are collected by statsd and
-stored to `graphite TSDB `.
+stored to :ref:`graphite TSDB `.
 
 Additional metric data are also generated by the executor service which collects
 the playbook names, results and duration time ('ansible_stats' metrics) and
-stores them to `postgresql relational database `.
+stores them to :ref:`postgresql relational database `.
 
 The playbooks with monitoring scenarios are stored in a separate repository on
 `github `_ (the location
@@ -32,12 +32,12 @@
 The metrics generated by Executor are described on the :ref:`Metric Definitions
 ` page.
 
 In addition to metrics generated and captured by a playbook, ApiMon also captures
-`stdout of the execution `. and saves this log for additional analysis to OpenStack
+:ref:`stdout of the execution ` and saves this log for additional analysis to OpenStack
 Swift storage, where logs are uploaded with a configurable retention
 policy.
 
-New test scenario introduction
+New Test Scenario Introduction
 ==============================
@@ -59,7 +59,7 @@ Rules for Test Scenarios
 Ansible playbooks need to follow some basic regression testing principles to
 ensure sustainability of the endless execution of such scenarios:
 
-- **use OpenTelekomCloud collection and OpenStack collection**
+- **OpenTelekomCloud and OpenStack collection**
 
   - When developing test scenarios use the available `Opentelekomcloud.Cloud
     `_ or
@@ -71,24 +71,24 @@ ensure sustainability of the endless execution of such scenarios:
     script module and directly call a Python SDK script to invoke the
     required request towards the cloud
 
-- **unique names of resources**
+- **Unique names of resources**
 
   - Make sure that resources don't conflict with each other and are easily
     trackable by their unique names
 
-- **teardown of the resources**
+- **Teardown of the resources**
 
   - Make sure that deletion / cleanup of the resources is triggered even if
     some of the tasks in the playbook fail (see the first sketch below)
 
   - Make sure that deletion / cleanup is triggered in the right order
 
-- **simplicity**
+- **Simplicity**
 
   - Do not overcomplicate the test scenario. Use default auto-filled
     parameters wherever possible
 
-- **only basic / core functions are in scope of testing**
+- **Only basic / core functions in scope of testing**
 
   - ApiMon is not supposed to validate full service functionality. For such
     cases we have a different team / framework within QA responsibility
@@ -99,14 +99,14 @@ ensure sustainability of the endless execution of such scenarios:
   - The fewer functions you use, the lower the potential failure rate of a
     running scenario, for whatever reason
 
-- **minimize hardcoding**
+- **No hardcoding**
 
   - Every hardcoded parameter in a scenario can later lead to an outage of
     the scenario's run when that parameter changes
 
  - Try to obtain all such parameters dynamically from the cloud directly.
 
-- **use special tags for combined metrics**
+- **Special tags for combined metrics**
 
   - If you want to combine multiple tasks of a playbook into a single custom
     metric, you can do so using the tags parameter of the tasks, as shown in
     the second sketch below
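+
+For illustration, below is a minimal sketch of a scenario playbook applying
+the rules above: a unique resource name, a dynamically looked-up image
+instead of a hardcoded ID, and teardown in an ``always`` section so that
+cleanup runs even when a task fails. The image name, flavor and name prefix
+are assumptions for the example (not taken from the ApiMon repository), and
+the key under which ``image_info`` returns its result may differ between
+collection versions:
+
+.. code-block:: yaml
+
+   - name: Scenario sketch (hypothetical)
+     hosts: localhost
+     vars:
+       # random suffix keeps the resource name unique and trackable
+       test_server_name: "scenario_test_server_{{ 999999999 | random }}"
+     tasks:
+       - block:
+
+           # obtain parameters dynamically instead of hardcoding an image ID
+           - name: Look up the image by name
+             openstack.cloud.image_info:
+               image: Standard_Debian_11_latest
+             register: image_result
+
+           - name: Create server
+             openstack.cloud.server:
+               name: "{{ test_server_name }}"
+               image: "{{ image_result.images[0].id }}"
+               flavor: s3.medium.1
+               auto_ip: false
+
+         always:
+
+           # teardown is triggered even if one of the tasks above failed
+           - name: Delete server
+             openstack.cloud.server:
+               name: "{{ test_server_name }}"
+               state: absent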
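+
+And a second sketch, for the combined-metric tagging from the last rule: both
+tasks carry the same tag, so they can be reported as one custom metric. The
+tag value and variable names are illustrative; check the :ref:`Metric
+Definitions ` page for the exact format ApiMon expects:
+
+.. code-block:: yaml
+
+   - name: Create keypair
+     openstack.cloud.keypair:
+       name: "{{ test_keypair_name }}"
+     # hypothetical tag value, shared by both tasks
+     tags: ["metric=create_server_with_keypair"]
+
+   - name: Create server with that keypair
+     openstack.cloud.server:
+       name: "{{ test_server_name }}"
+       key_name: "{{ test_keypair_name }}"
+       image: "{{ image_result.images[0].id }}"
+       flavor: s3.medium.1
+     tags: ["metric=create_server_with_keypair"]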