A Coding Implementation of Advanced PyTest: Building Custom, Automated Tests with Plugins, Fixtures, and JSON Reports
In this tutorial, we explore the advanced features of PyTest, one of the most powerful testing frameworks in Python. We build a complete mini-project from scratch, demonstrating fixtures, markers, plugins, parametrization, and custom configurations. We focus on showing how PyTest evolves from a simple test runner into a robust, scalable system suitable for real-world applications. By the end, we understand not only how to write tests, but also how to control and customize PyTest's behavior to meet the needs of any project.
import sys, subprocess, os, textwrap, pathlib, json
subprocess.run([sys.executable, "-m", "pip", "install", "-q", "pytest>=8.0"], check=True)
root = pathlib.Path("pytest_advanced_tutorial").absolute()
if root.exists():
    import shutil; shutil.rmtree(root)
(root / "calc").mkdir(parents=True)
(root / "app").mkdir()
(root / "tests").mkdir()
We first set up the environment and import the basic Python libraries for file handling and subprocess execution. We install a recent version of PyTest to ensure compatibility, and then create a clean project structure with folders for the core code, application modules, and tests. This gives us a solid foundation to organize everything neatly before writing any test logic.
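As a quick optional sanity check (not part of the original script), we can confirm the skeleton exists before writing any files into it:
# Optional sanity check: verify the three sub-folders were created.
for sub in ("calc", "app", "tests"):
    assert (root / sub).is_dir(), f"missing folder: {sub}"
print("Project skeleton ready:", sorted(p.name for p in root.iterdir()))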
(root / "pytest.ini").write_text(textwrap.dedent("""
[pytest]
addopts = -q -ra --maxfail=1 -m "not slow"
testpaths = tests
markers =
slow: slow tests (use --runslow to run)
io: tests hitting the file system
api: tests patching external calls
""").strip()+"n")
(root / "conftest.py").write_text(textwrap.dedent(r'''
import os, time, pytest, json
def pytest_addoption(parser):
parser.addoption("--runslow", action="store_true", help="run slow tests")
def pytest_configure(config):
config.addinivalue_line("markers", "slow: slow tests")
config._summary = {"passed":0,"failed":0,"skipped":0,"slow_ran":0}
def pytest_collection_modifyitems(config, items):
if config.getoption("--runslow"):
return
skip = pytest.mark.skip(reason="need --runslow to run")
for item in items:
if "slow" in item.keywords: item.add_marker(skip)
def pytest_runtest_logreport(report):
cfg = report.config._summary
if report.when=="call":
if report.passed: cfg["passed"]+=1
elif report.failed: cfg["failed"]+=1
elif report.skipped: cfg["skipped"]+=1
if "slow" in report.keywords and report.passed: cfg["slow_ran"]+=1
def pytest_terminal_summary(terminalreporter, exitstatus, config):
s=config._summary
terminalreporter.write_sep("=", "SESSION SUMMARY (custom plugin)")
terminalreporter.write_line(f"Passed: {s['passed']} | Failed: {s['failed']} | Skipped: {s['skipped']}")
terminalreporter.write_line(f"Slow tests run: {s['slow_ran']}")
terminalreporter.write_line("PyTest finished successfully ✅" if s["failed"]==0 else "Some tests failed ❌")
@pytest.fixture(scope="session")
def settings(): return {"env":"prod","max_retries":2}
@pytest.fixture(scope="function")
def event_log(): logs=[]; yield logs; print("\nEVENT LOG:", logs)
@pytest.fixture
def temp_json_file(tmp_path):
p=tmp_path/"data.json"; p.write_text('{"msg":"hi"}'); return p
@pytest.fixture
def fake_clock(monkeypatch):
t={"now":1000.0}; monkeypatch.setattr(time,"time",lambda: t["now"]); return t
'''))
We now create the PyTest configuration and plugin files. In pytest.ini, we define default options and test paths to control how tests are discovered and filtered. In conftest.py, we implement a custom plugin that tracks passed, failed, and skipped tests, add the --runslow option, and provide fixtures for reusable test resources. This lets us extend PyTest's core behavior while keeping the setup clean and modular.
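Note that the session-scoped settings fixture defined in conftest.py is not consumed by any of the tests below; a minimal, hypothetical test (not part of the generated project) showing how such a fixture would be used could look like this:
# Hypothetical example: any test placed under tests/ can request the
# session-scoped `settings` fixture by name and read the shared dict.
def test_settings_defaults(settings):
    assert settings["env"] == "prod"
    assert settings["max_retries"] >= 1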
(root/"calc"/"__init__.py").write_text(textwrap.dedent('''
from .vector import Vector
def add(a,b): return a+b
def div(a,b):
if b==0: raise ZeroDivisionError("division by zero")
return a/b
def moving_avg(xs,k):
if klen(xs): raise ValueError("bad window")
out=[]; s=sum(xs[:k]); out.append(s/k)
for i in range(k,len(xs)):
s+=xs[i]-xs[i-k]; out.append(s/k)
return out
'''))
(root/"calc"/"vector.py").write_text(textwrap.dedent('''
class Vector:
__slots__=("x","y","z")
def __init__(self,x=0,y=0,z=0): self.x,self.y,self.z=float(x),float(y),float(z)
def __add__(self,o): return Vector(self.x+o.x,self.y+o.y,self.z+o.z)
def __sub__(self,o): return Vector(self.x-o.x,self.y-o.y,self.z-o.z)
def __mul__(self,s): return Vector(self.x*s,self.y*s,self.z*s)
__rmul__=__mul__
def norm(self): return (self.x**2+self.y**2+self.z**2)**0.5
def __eq__(self,o): return abs(self.x-o.x)
We now build the core computation modules for our project. In the calc package, we define simple mathematical utilities, including addition, division with error handling, and a moving-average function, to demonstrate logic testing. We also create a Vector class that supports arithmetic operations, approximate equality checks, and norm calculation, a perfect example of using PyTest to test custom objects and comparisons.
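The test suite below only exercises vector addition and equality; as an illustrative extra (not written into the tests folder by the script), assertions covering subtraction, scalar multiplication, and the norm might look like this:
# Illustrative extra test for Vector, assuming the class defined above.
import math
from calc.vector import Vector

def test_vector_scalar_and_norm():
    v = Vector(3, 4, 0)
    assert math.isclose(v.norm(), 5.0)            # 3-4-5 right triangle
    assert 2 * v == Vector(6, 8, 0)               # __rmul__ delegates to __mul__
    assert v - Vector(1, 1, 0) == Vector(2, 3, 0)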
(root/"app"/"io_utils.py").write_text(textwrap.dedent('''
import json, pathlib, time
def save_json(path,obj):
path=pathlib.Path(path); path.write_text(json.dumps(obj)); return path
def load_json(path): return json.loads(pathlib.Path(path).read_text())
def timed_operation(fn,*a,**kw):
t0=time.time(); out=fn(*a,**kw); t1=time.time(); return out,t1-t0
'''))
(root/"app"/"api.py").write_text(textwrap.dedent('''
import os, time, random
def fetch_username(uid):
if os.environ.get("API_MODE")=="offline": return f"cached_{uid}"
time.sleep(0.001); return f"user_{uid}_{random.randint(100,999)}"
'''))
(root/"tests"/"test_calc.py").write_text(textwrap.dedent('''
import pytest, math
from calc import add,div,moving_avg
from calc.vector import Vector
@pytest.mark.parametrize("a,b,exp",[(1,2,3),(0,0,0),(-1,1,0)])
def test_add(a,b,exp): assert add(a,b)==exp
@pytest.mark.parametrize("a,b,exp",[(6,3,2),(8,2,4)])
def test_div(a,b,exp): assert div(a,b)==exp
@pytest.mark.xfail(raises=ZeroDivisionError)
def test_div_zero(): div(1,0)
def test_avg(): assert moving_avg([1,2,3,4,5],3)==[2,3,4]
def test_vector_ops(): v=Vector(1,2,3)+Vector(4,5,6); assert v==Vector(5,7,9)
'''))
(root/"tests"/"test_io_api.py").write_text(textwrap.dedent('''
import pytest, os
from app.io_utils import save_json,load_json,timed_operation
from app.api import fetch_username
@pytest.mark.io
def test_io(temp_json_file,tmp_path):
d={"x":5}; p=tmp_path/"a.json"; save_json(p,d); assert load_json(p)==d
assert load_json(temp_json_file)=={"msg":"hi"}
def test_timed(capsys):
val,dt=timed_operation(lambda x:x*3,7); print("dt=",dt); out=capsys.readouterr().out
assert "dt=" in out and val==21
@pytest.mark.api
def test_api(monkeypatch):
monkeypatch.setenv("API_MODE","offline")
assert fetch_username(9)=="cached_9"
'''))
(root/"tests"/"test_slow.py").write_text(textwrap.dedent('''
import time, pytest
@pytest.mark.slow
def test_slow(event_log,fake_clock):
event_log.append(f"start@{fake_clock['now']}")
fake_clock["now"]+=3.0
event_log.append(f"end@{fake_clock['now']}")
assert len(event_log)==2
'''))
We add lightweight application utilities for JSON I/O and a mock API so we can exercise real-world behavior without external services. We write focused tests using parametrization, xfail, markers, tmp_path, capsys, and monkeypatch to verify logic and side effects. We include a slow test wired to the event_log and fake_clock fixtures to demonstrate controlled timing and session-wide state.
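The division-by-zero case is expressed with xfail(raises=ZeroDivisionError); an equivalent, stricter style (shown here only as an illustration) asserts the exception directly with pytest.raises, so the test passes outright rather than being reported as xfailed:
# Illustrative alternative to the xfail-based test: assert the exception explicitly.
import pytest
from calc import div

def test_div_zero_raises():
    with pytest.raises(ZeroDivisionError, match="division by zero"):
        div(1, 0)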
print("📦 Project created at:", root)
print("n▶️ RUN #1 (default, skips @slow)n")
r1=subprocess.run([sys.executable,"-m","pytest",str(root)],text=True)
print("n▶️ RUN #2 (--runslow)n")
r2=subprocess.run([sys.executable,"-m","pytest",str(root),"--runslow"],text=True)
summary_file=root/"summary.json"
summary={
"total_tests":sum("test_" in str(p) for p in root.rglob("test_*.py")),
"runs": ["default","--runslow"],
"results": ["success" if r1.returncode==0 else "fail",
"success" if r2.returncode==0 else "fail"],
"contains_slow_tests": True,
"example_event_log":["[email protected]","[email protected]"]
}
summary_file.write_text(json.dumps(summary,indent=2))
print("n📊 FINAL SUMMARY")
print(json.dumps(summary,indent=2))
print("n✅ Tutorial completed — all tests & summary generated successfully.")
We now run the test suite twice: first with the default configuration, which skips slow tests, and again with the --runslow flag to include them. After both runs, we generate a JSON summary containing the results, the total number of test files, and a sample event log. The final summary gives us a clear snapshot of the project's testing health, confirming that all components work as intended from start to finish.
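Besides the hand-rolled summary.json, pytest can emit its own machine-readable report; a small optional sketch using the built-in --junitxml option (the report path here is arbitrary) would be:
# Optional: ask pytest for a machine-readable report alongside summary.json.
report_path = root / "report.xml"
subprocess.run([sys.executable, "-m", "pytest", str(root), f"--junitxml={report_path}"], text=True)
print("JUnit XML report written to:", report_path)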
In summary, we see how PyTest helps us test smarter, not harder. We design a plugin that tracks results, use fixtures for state management, and gate slow tests behind a custom command-line option, all while keeping the workflow clean and modular. Finally, we produce a detailed JSON summary, showing how PyTest integrates easily with modern CI and analytics pipelines. With this foundation in place, we can extend the suite further with coverage, benchmarking, and even large-scale parallel execution.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for the benefit of society. His most recent endeavor is the launch of Marktechpost, an artificial intelligence media platform that stands out for its in-depth coverage of machine learning and deep learning news that is technically sound and easy to understand for a broad audience. The platform draws more than 2 million monthly views, reflecting its popularity among readers.