
09 Dec 2021

211209-1354 Python testing basics with poetry and pytest

(From a Python workshop I attended)

Pytest

Basics

Fixtures for boilerplate code

Fixtures are useful bits of setup you don’t want to repeat in every test, like connecting to a database.

A fixture is a function that may or may not take arguments, and may or may not return something.

Tests request a fixture by naming it as an argument, and it’s basically done like this:

import pytest

@pytest.fixture
def my_fixture():
    return "fix"

def test_with_fixture(my_fixture):
    assert my_fixture == "fix"

# fixtures can request other fixtures
@pytest.fixture
def next_fixture(my_fixture):
    return my_fixture + "fix"

They are run independently for each test, to keep tests as isolated as possible. There are ways to widen their scope (see the sketch below), but that’s rarely needed.
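
A minimal sketch of a widened scope; the dictionary here is a stand-in for something genuinely expensive, like a database connection:

import pytest

@pytest.fixture(scope="session")
def expensive_resource():
    # built once for the whole test session, not once per test
    return {"connection": "pretend-this-is-expensive"}

def test_uses_resource(expensive_resource):
    assert expensive_resource["connection"]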

You can also use them to change settings like logging, by adding a fixture that adjusts the configuration before each test (a sketch follows).
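
A minimal sketch, assuming you want DEBUG logs in every test; with autouse=True the fixture applies without being requested explicitly:

import logging

import pytest

@pytest.fixture(autouse=True)
def debug_logging(caplog):
    # caplog is a pytest builtin; this runs before every test
    caplog.set_level(logging.DEBUG)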

Marks are used to select what you run

“By using the pytest.mark helper you can easily set metadata on your test functions.”

Defining marks

Default marks
import pytest
import torch

# @pytest.mark.skip(reason="there's a good reason")  # unconditional skip
@pytest.mark.skipif(torch.cuda.is_available(), reason="there's a good reason")
def test_always_skip():
    assert False

That way you don’t have to do anything inside the test itself; whether it runs is decided by the Python environment.

Custom marks
import pytest

# simple marks
@pytest.mark.whatever
def test_whatever():
    pass

# complex marks (defined beforehand)
cuda = pytest.mark.skipif(True, reason="...")

@cuda
def test_require_cuda():
    assert False

Marks can be combined

@pytest.mark.one
@cuda
def test_combined():
    pass

Selecting marks when running

Assuming @pytest.mark.gpu:

pytest -m "not gpu"
pytest -m "gpu"

Registering marks

Registering them is recommended, to keep track of them and to get things like pytest --markers. In pyproject.toml:

[tool.pytest.ini_options]
markers = [
  "gpu: marks test which require a gpu"
]

Mocking

Replace some functions, including ones deep inside code. Lives in the PyPI package pytest-mock · PyPI.

You can patch calls, objects, etc.

import os

from pytest_mock import MockerFixture

def test_mock(mocker: MockerFixture) -> None:
    env_mock = mocker.patch("os.environ.get")
    os.environ.get("something")
    assert env_mock.call_count == 1

# Do stuff to dictionaries:
def test_mock_dict(mocker: MockerFixture) -> None:
    mocker.patch.dict("os.environ", {"sth": "test"})
    assert os.environ.get("sth") == "test"
    assert os.environ.get("not_there") is None

# classes, function calls, etc.

TODO - does this work for class instances created after the mock?
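
A sketch suggesting it does, at least for names looked up at call time (my reading of the underlying unittest.mock behaviour, not from the workshop): patching replaces the name in the module namespace, so instances created after the patch come from the mock.

class Greeter:
    def greet(self):
        return "hello"

def test_instance_created_after_patch(mocker):
    # replace the Greeter name in this test module's namespace
    mocked = mocker.patch(f"{__name__}.Greeter")
    g = Greeter()  # the name now resolves to the mock
    assert g is mocked.return_value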

Spying to keep track of function calls

A mocker.spy sample from the documentation:

def test_spy_method(mocker):
    class Foo(object):
        def bar(self, v):
            return v * 2

    foo = Foo()
    spy = mocker.spy(foo, 'bar')
    assert foo.bar(21) == 42

    spy.assert_called_once_with(21)
    assert spy.spy_return == 42

Running stuff

Selecting tests

  • By filesystem: pytest test_mod.py and pytest testing/
  • By markers: pytest -m mark, pytest -m "not mark"
  • Keywords:
    • pytest -k "MyClass and not method" would run TestMyClass.test_something but not TestMyClass.test_method_something
  • Node ids: pytest test_mod.py::test_func or pytest test_mod.py::TestClass::test_method

Useful bits

Loop on fail

The pytest-xdist package provides pytest --looponfail (or -f), which re-runs the failing tests whenever files change, so you can watch the results in near real time.
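
Usage:

pip install pytest-xdist
pytest -f   # short for --looponfail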

Logging and output

Setting loglevel globally

logger.warning("test") inside tests doesn’t get shown by default, but you can enable this in the pytest output:

[tool.pytest.ini_options]
log_cli = true
log_cli_level = "DEBUG"

Setting it for a single test

You can change it in a single test: caplog.set_level(logging.DEBUG)

This is useful if you’re fixing a specific bug and want more logging on a specific test.
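
A minimal sketch using the built-in caplog fixture:

import logging

def test_with_debug_logs(caplog):
    caplog.set_level(logging.DEBUG)
    logging.getLogger(__name__).debug("now visible")
    assert "now visible" in caplog.text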
