We introduce a patroni_api fixture, defined in tests/conftest.py, which
sets up an HTTP server serving files from a temporary directory. The
server itself is defined by the PatroniAPI class; its routes() context
manager method is used in actual tests to set up the expected responses
from specified JSON files.
We set up some logging in order to improve debugging.
The direct advantage is that the PatroniResource.rest_api() method is
now covered by the test suite.
Coverage before this commit:
Name                        Stmts   Miss  Cover
-----------------------------------------------
check_patroni/__init__.py       3      0   100%
check_patroni/cli.py          193     18    91%
check_patroni/cluster.py      113      0   100%
check_patroni/convert.py       23      5    78%
check_patroni/node.py         146      1    99%
check_patroni/types.py         50     23    54%
-----------------------------------------------
TOTAL                         528     47    91%
and after this commit:
Name                        Stmts   Miss  Cover
-----------------------------------------------
check_patroni/__init__.py       3      0   100%
check_patroni/cli.py          193     18    91%
check_patroni/cluster.py      113      0   100%
check_patroni/convert.py       23      5    78%
check_patroni/node.py         146      1    99%
check_patroni/types.py         50      9    82%
-----------------------------------------------
TOTAL                         528     33    94%
In actual test functions, we either invoke patroni_api.routes() to
configure which JSON file(s) should be served for each endpoint, or we
define dedicated fixtures (e.g. cluster_config_has_changed()) to
configure this for several test functions or the whole module.
The 'old_replica_state' parametrized fixture is used, when needed, to
adjust such fixtures, e.g. in cluster_has_replica_ok(), by modifying the
JSON content through cluster_api_set_replica_running() (previously in
tests/tools.py, now in tests/__init__.py).
The dependency on pytest-mock is no longer needed.
Instead of defining the CliRunner value in each test, we use a fixture.
The CliRunner is also configured with stdout and stderr separated,
because mixing them would pose problems if we used stderr for other
purposes in tests, e.g. to emit log messages from a forthcoming HTTP
server.
* Change all replica status from `running` to `streaming`
* Add an option to pytest to change the state back to `running`
* Also test the output of the script
* Add a quick test script for live clusters
The checks `cluster_config_has_changed` and `node_tl_has_changed` use a
state file to store the previous value of the config hash and the
timeline.
Previously, the check would fail if something changed, and the new value
was saved directly. This behaviour has changed: the new value is now
saved only if `--save` is passed to the check.
This mimics the way [check_pgactivity] manages this kind of check.
[check_pgactivity]: https://github.com/OPMDG/check_pgactivity