Intro
Pytest is a very powerful test framework / runner for Python, but if you are new to it, the depth and surface area can be a bit overwhelming. There are lots of pytest docs, but once you are past the basics, it can be unclear what features you might be missing out on. One of my favorite types of documentation is a “getting productive with ___” guide or “___ cheatsheet”, and I feel like pytest could use one. This is not exactly that, but it at least lays out a few “things I wish I had known sooner about pytest” type tips.
My last blog post was about the importance of high-level documentation and was inspired by situations like my experience getting up-to-speed with pytest. The pytest docs are comprehensive, but not the easiest to navigate and could use some improvement “at the top of the funnel” (see linked blog post for more on what this means).
Pytest Real-Time Usage
Similar to many other test frameworks, pytest by default acts in a sort of black-box, one-shot way:
- Input: tests, source code, pytest config files
- Black box: pytest runs
- Output: pytest spits out everything at once – which tests passed or failed, stdout and stderr, etc.
However, this doesn’t have to be how you use pytest; there are different ways you can interact with pytest as it runs and gain better visibility into how your tests are working.
You Can Stream Live Logs
Rather than getting all your log messages at the very end of the entire pytest run, you can stream them as they run with the `log_cli` live log feature: set `log_cli` to `true` (in the config file or via the CLI).
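For example, here is a minimal config sketch, assuming a `pytest.ini` (the same keys work under `[tool.pytest.ini_options]` in `pyproject.toml`); the level and format options are optional extras:

```ini
# pytest.ini
[pytest]
log_cli = true
log_cli_level = INFO
log_cli_format = %(asctime)s %(levelname)s %(message)s
```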
By combining live logging with `tee` (in *nix shells), you can even see the output in real-time in your terminal while also saving it to a log file:

```shell
pytest -o log_cli=true 2>&1 | tee pytest_output.log
```
You Can See and Interact With Exceptions As They Happen
Test runners, by their very nature, have to catch errors as they happen, but that same feature makes it tricky to interact with real-time errors generated by your code as it gets executed through the test runner.
For example, if you tell your interactive debugger to break on exceptions and then invoke pytest, you will end up looking at pytest’s own code, because pytest intercepts errors in order to determine the pass / fail outcome of a given test.
What I wish I had discovered sooner (it’s a bit buried in the docs) is that pytest exposes some lifecycle hooks that you can “register” functions for and hook into – including one for interacting with exceptions as they happen!
By hooking into the `pytest_exception_interact` hook, we can interact with an exception as pytest catches it and do whatever we want (including calling a third-party debugger, printing special messages, etc.).
The easiest way to register this hook is through the `conftest.py` file. Here is an example, complete with optional typings:
""""
@file conftest.py
"""
from typing import Any
from pytest import CallInfo, Item, TestReport
def pytest_exception_interact(node: Item, call: CallInfo[Any], report: TestReport):
# This is just an example
# You could invoke an interactive debugger here, call an API, or really do anything you want
print(
f"""
=====
Intercepted pytest test failure!:
Node: {node.nodeid}
Marks: {', '.join([m.name for m in node.own_markers])}
Duration: {call.duration}
Exception: \n{report.longreprtext}
=====
"""
)
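As one concrete use, here is a rough sketch of dropping into the standard-library `pdb` debugger from the same hook (note that pytest’s built-in `--pdb` flag already covers this common case):

```python
"""
@file conftest.py
"""
import pdb
from typing import Any

from pytest import CallInfo, Item, TestReport


def pytest_exception_interact(node: Item, call: CallInfo[Any], report: TestReport):
    # call.excinfo holds the caught exception (if any);
    # pdb.post_mortem() opens the debugger at the frame where it was raised
    if call.excinfo is not None:
        pdb.post_mortem(call.excinfo.tb)
```

Since pytest captures output by default, you will probably want to run with `pytest -s` (aka `--capture=no`) so the debugger prompt is usable; if all you want is post-mortem debugging on failures, the built-in `--pdb` flag handles this for you.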
You Can Debug at Any Level
There is a tendency when dealing with tests to think of them as running completely detached from your normal environment, and to a certain extent this is (and should be) true. However, that doesn’t mean that you should abandon time-saving tools like interactive debuggers.
Although the above section showed hooking into only exception catching in pytest, the truth is that you can invoke an interactive debugger anywhere in your tests, just like how you would normally do it in the rest of your codebase.
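As a quick illustration (the file, function, and test names here are made up), you can drop a built-in `breakpoint()` call anywhere in a test; pytest recognizes it and should suspend its output capturing while you are at the prompt:

```python
# test_totals.py (hypothetical example)
def compute_total(prices):
    return sum(prices)


def test_compute_total():
    prices = [1.25, 2.50, 3.75]
    breakpoint()  # pauses execution here, mid-test, in the debugger
    assert compute_total(prices) == 7.50
```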
If you use a debugger that needs to establish a connection at the start of the session, like Microsoft’s `debugpy`, you can hook into the `pytest_sessionstart` hook to do so before any tests run:
"""
@file conftest.py
"""
from pytest import Session
import debugpy
def pytest_sessionstart(session: Session):
debugpy.listen(("localhost", 5678))
print("Waiting for client connection...")
debugpy.wait_for_client()
Visual Studio Code (aka VSCode), along with PyCharm, offers a test GUI with a one-click “debug test” button that eliminates the need for this boilerplate code in most situations.
If you need help getting Python debugging set up with VSCode, I have written a guide that covers it!
Support for Flaky Tests
A flaky test is one that is inconsistent in whether or not it passes, even when no code has changed, usually due to some subtle edge-case, accidental dependency on execution order, or timing-related quirk. If you have a flaky test, it should go without saying that the best solution is to fix it, but sometimes that is easier said than done. In those instances, it is worth knowing that pytest maintains a test cache by default and has some built-in tools to help with flaky tests.
Some examples:
- `pytest --last-failed` will re-run only the tests that failed in the last run
- `pytest --failed-first` will re-run all tests, but start with the failed ones first
- `pytest --exitfirst` or `pytest -x` will exit the entire test run on the first failed test
- `pytest --stepwise` will exit on test failure and continue from the last failing test next time
There is also an installable plugin that does configurable re-runs all in one command: the `pytest-rerunfailures` plugin.
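Here is a rough sketch of how that plugin is commonly used, assuming it is installed (`pip install pytest-rerunfailures`); both the CLI flags and the `flaky` marker come from the plugin, not from core pytest, and the test name is hypothetical:

```python
import pytest


# Equivalent CLI form (applies to every test): pytest --reruns 3 --reruns-delay 1
@pytest.mark.flaky(reruns=3, reruns_delay=1)
def test_occasionally_times_out():
    ...  # a flaky test; it gets re-run up to 3 times before being reported as failed
```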
You Can Get Granular With Running Tests
Pytest has many different ways it can be invoked, including:
- For specific files: `pytest FILEPATH`
- For single test functions or methods: `pytest "MODULE::MY_TEST"`
- For pattern matching: `pytest -k "SEARCH_STRING"`
If you are trying to see which tests are matched against a given invocation command, you can use the `--collect-only` flag to get back a list of the tests that will run, without actually running them.
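A few concrete invocations (the file and test names here are hypothetical):

```shell
# All tests in one file
pytest tests/test_orders.py

# A single test function in that file
pytest "tests/test_orders.py::test_refund_flow"

# Any test whose name matches a keyword expression
pytest -k "refund and not slow"

# Preview what the above would collect, without running anything
pytest -k "refund and not slow" --collect-only
```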
Certain IDEs, like PyCharm and VSCode, have built-in support for running tests via a granular GUI.
Magical Fixtures
Fixtures are a common part of many testing setups, across different programming languages and frameworks, but they are extra special with pytest. I won’t go into detail, but the basic “magic” of pytest fixtures is that they are auto-injected (as isolated copies) based on naming, so a lot of manual setup / teardown can be replaced with a decorated fixture function.
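Here is a minimal sketch of that pattern (names made up for illustration): the fixture is injected into any test that lists it as an argument, and everything after `yield` runs as teardown.

```python
import pytest


@pytest.fixture
def temp_user():
    # Setup: runs before each test that requests this fixture
    user = {"name": "test-user", "active": True}
    yield user
    # Teardown: runs after the test finishes, even if it failed
    user.clear()


def test_user_is_active(temp_user):
    # pytest matched the argument name to the fixture above and injected a fresh copy
    assert temp_user["active"] is True
```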
For more information, refer to the official docs.
Conclusion
I hope this helps someone out there! If you are a fan of to-the-point documentation focused on getting things done, you might enjoy my personal documentation / cheatsheet site.