adding new content for apimon training

Hasko, Vladimir 2023-05-19 14:47:45 +00:00
parent 992f20b2fb
commit e2609f9ac5
9 changed files with 169 additions and 5 deletions

File diff suppressed because it is too large


@@ -1,3 +1,5 @@
+.. _EpMon Overview:
+
 ============================
 Endpoint Monitoring overview
 ============================


@@ -11,6 +11,7 @@ Apimon Training
 test_scenarios
 epmon_checks
 dashboards
+metrics
 alerts
 notifications
 logs


@@ -0,0 +1,49 @@
.. _Metrics:

=======
Metrics
=======

The Ansible playbook scenarios generate metrics in two ways:

- The Ansible playbook internally invokes method calls to the **OpenStack SDK
  libraries**, which in turn generate metrics about each API call they make.
  This requires some special configuration in the ``clouds.yaml`` file
  (currently, exposing metrics to statsd and InfluxDB is supported); for
  details, refer to the `config documentation
  <https://docs.openstack.org/openstacksdk/latest/user/guides/stats.html>`_
  of the OpenStack SDK, and see the configuration sketch after this list.
  The following metrics are captured:

  - response HTTP code
  - duration of the API call
  - name of the API call
  - method of the API call
  - service type

- Ansible plugins may **expose additional metrics** (e.g. whether the overall
  scenario succeeded or not) with the help of the `callback plugin
  <https://github.com/stackmon/apimon/tree/main/apimon/ansible/callback>`_.
  Since knowing only the timing of each API call is sometimes not sufficient,
  Ansible callbacks are used to report the overall execution time and result
  (whether the scenario succeeded and how long it took). The following
  metrics are captured:

  - test case
  - playbook name
  - environment
  - action name
  - result code
  - result string
  - service type
  - state type
  - total number of failed, passed, ignored, and skipped tests
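
A minimal sketch of the ``clouds.yaml`` metrics configuration described in the
first bullet, assuming a statsd agent on ``127.0.0.1:8125`` and a local
InfluxDB; the endpoints, database name, cloud entry, and credentials are
placeholder assumptions, and the full set of supported keys is listed in the
OpenStack SDK stats guide linked above:

.. code-block:: yaml

   # Sketch only: endpoints, database, cloud name and credentials
   # are assumptions; see the OpenStack SDK stats guide for all keys.
   metrics:
     statsd:
       host: 127.0.0.1          # statsd agent receiving per-API-call metrics
       port: 8125
     influxdb:
       host: 127.0.0.1
       port: 8086
       database: apimon         # hypothetical database name
       measurement: openstack_api
   clouds:
     production:                # hypothetical cloud the scenarios run against
       auth:
         auth_url: https://identity.example.com/v3
         username: apimon
         password: secret
         project_name: apimon
         user_domain_name: Default
         project_domain_name: Default
       region_name: RegionOne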

Custom metrics
--------------

In some situations, more complex metric generation is required, consisting of
the execution of multiple tasks in a scenario. For such cases the ``tags``
parameter is used: once the relevant tasks in the playbook are tagged with a
specific metric name, the metric is calculated as the sum of all executed
tasks carrying the respective tag. This is useful when the measured metric
spans multiple steps needed to reach the desired state of a service or
service resource, for example booting a virtual machine from deployment until
a successful login via SSH.

.. code-block:: yaml

   tags: ["metric=delete_server"]
   tags: ["az={{ availability_zone }}", "service=compute", "metric=create_server{{ metric_suffix }}"]


@@ -1,3 +1,5 @@
+.. _Test Scenarios:
+
 ==============
 Test Scenarios
 ==============

Binary files not shown (four images added; sizes 188 KiB, 142 KiB, 157 KiB, and 184 KiB).