diff --git a/doc/source/internal/apimon_training/test_scenarios.rst b/doc/source/internal/apimon_training/test_scenarios.rst
index 660543c..206b850 100644
--- a/doc/source/internal/apimon_training/test_scenarios.rst
+++ b/doc/source/internal/apimon_training/test_scenarios.rst
@@ -12,7 +12,7 @@ python script). With Ansible on it's own having nearly limitless capability and
 availability to execute anything else ApiMon can do pretty much anything. The
 only expectation is that whatever is being done produces some form of metric
 for further analysis and evaluation. Otherwise there is no sense in monitoring. The
-scenarios are collected in a `Git repository
+scenarios are collected in a `Github
 `_ and updated in real-time.
 In general mentioned test jobs do not need take care of generating data
 implicitly. Since the API related tasks in the playbooks rely on the Python
@@ -25,7 +25,7 @@ the playbook names, results and duration time ('ansible_stats' metrics) and
 stores them to :ref:`postgresql relational database `.
 
 The playbooks with monitoring scenarios are stored in separate repository on
-`github `_ (the location
+`Github `_ (the location
 will change with CloudMon replacement in `future
 `_). Playbooks address
 the most common use cases with cloud services conducted by end customers.
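
To illustrate the point made in the changed section, a monitoring scenario is
just an ordinary Ansible playbook with no metric-handling tasks of its own:
the per-request metrics come from the Python OpenStack SDK layer, and ApiMon
adds the 'ansible_stats' data (playbook name, result, duration) around it.
The following is a minimal sketch, not one of the playbooks from the
repository; the cloud name and resource name are placeholders, and it assumes
the openstack.cloud Ansible collection is installed::

    ---
    # Hypothetical ApiMon-style scenario: every task calls the cloud API via
    # the Python OpenStack SDK, so request metrics are emitted implicitly.
    - name: Scenario check compute API
      hosts: localhost
      gather_facts: false
      tasks:
        - name: List servers
          openstack.cloud.server_info:
            cloud: test_cloud          # assumed clouds.yaml entry, placeholder

        - name: Create and delete a keypair
          openstack.cloud.keypair:
            cloud: test_cloud
            name: apimon-test-keypair  # hypothetical resource name
            state: "{{ item }}"
          loop:
            - present
            - absent

Note that nothing in the playbook produces metrics explicitly; it only
exercises the API operations whose timing and results are then evaluated.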