Python: Zero to Hero

Chapter 28: Testing with pytest

Code without tests is code you can't safely change. Every refactor becomes a gamble. Every bug fix might break something else. Tests are the net that catches you.

This chapter teaches you to write tests with pytest — Python's most popular testing framework. You'll go from zero tests to a full suite with fixtures, parametrize, mocking, and coverage. By the end you'll write tests as naturally as you write functions.

Why Test?

Tests give you three superpowers:

  1. Confidence — you know the code does what you think it does.
  2. Safety — when you change code, tests tell you immediately if something broke.
  3. Documentation — tests show how code is supposed to be used, with real examples.

The best time to write a test is right after you write the function. The second best time is right now.

Installing and Running pytest

pip install pytest pytest-cov

Create a file that starts with test_ or ends with _test.py. Put your test functions inside it, naming each one test_something:

my_project/
├── calculator.py
└── test_calculator.py

Run all tests:

pytest                     # run everything
pytest test_calculator.py  # run one file
pytest -v                  # verbose output (shows each test name)
pytest -k "add"            # run tests whose names contain "add"
pytest --tb=short          # shorter tracebacks

Your First Tests

Let's test a simple calculator module:

# calculator.py
def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

def multiply(a, b):
    return a * b

def divide(a, b):
    if b == 0:
        raise ZeroDivisionError("Cannot divide by zero.")
    return a / b

def power(base, exp):
    return base ** exp

# test_calculator.py
from calculator import add, subtract, multiply, divide, power


def test_add_integers():
    assert add(2, 3) == 5

def test_add_floats():
    assert add(1.5, 2.5) == 4.0

def test_add_negative():
    assert add(-1, -2) == -3

def test_add_zero():
    assert add(5, 0) == 5

def test_subtract():
    assert subtract(10, 3) == 7

def test_multiply():
    assert multiply(4, 5) == 20

def test_divide():
    assert divide(10, 2) == 5.0

def test_divide_by_zero():
    import pytest
    with pytest.raises(ZeroDivisionError):
        divide(5, 0)

def test_power():
    assert power(2, 10) == 1024

Run them:

$ pytest test_calculator.py -v

test_calculator.py::test_add_integers   PASSED
test_calculator.py::test_add_floats     PASSED
test_calculator.py::test_add_negative   PASSED
test_calculator.py::test_add_zero       PASSED
test_calculator.py::test_subtract       PASSED
test_calculator.py::test_multiply       PASSED
test_calculator.py::test_divide         PASSED
test_calculator.py::test_divide_by_zero PASSED
test_calculator.py::test_power          PASSED

9 passed in 0.05s

Reading a failure

Break a test on purpose:

def test_add_integers():
    assert add(2, 3) == 99   # wrong expected value

FAILED test_calculator.py::test_add_integers

    def test_add_integers():
>       assert add(2, 3) == 99
E       AssertionError: assert 5 == 99
E        +  where 5 = add(2, 3)

pytest shows you the exact expression that failed, the actual value, and how it was computed. No print statements needed.

assert — The Only Assertion You Need

pytest rewrites assert statements to give detailed error messages. Use plain assert for everything:

# Values
assert result == expected
assert result != unexpected
assert result > 0
assert result is None
assert result is not None

# Strings
assert "error" in message
assert message.startswith("Hello")

# Collections
assert len(items) == 3
assert item in collection
assert collection == [1, 2, 3]

# Floats — never use == for floats
assert abs(result - expected) < 1e-9
# Or pytest's built-in approx:
import pytest
assert result == pytest.approx(expected)
assert result == pytest.approx(3.14, rel=1e-3)  # relative tolerance
assert result == pytest.approx(0.0,  abs=1e-6)  # absolute tolerance

# Examples with approx
from calculator import divide
import pytest

def test_divide_result():
    assert divide(1, 3) == pytest.approx(0.3333, rel=1e-3)

def test_celsius_to_fahrenheit():
    def c_to_f(c):
        return c * 9 / 5 + 32

    assert c_to_f(100) == pytest.approx(212.0)
    assert c_to_f(0)   == pytest.approx(32.0)
    assert c_to_f(37)  == pytest.approx(98.6, rel=1e-3)

pytest.raises — Testing Exceptions

import pytest
from calculator import divide

def test_divide_by_zero_raises():
    with pytest.raises(ZeroDivisionError):
        divide(10, 0)

def test_divide_by_zero_message():
    with pytest.raises(ZeroDivisionError, match="Cannot divide by zero"):
        divide(10, 0)

def test_raises_captures_exception():
    with pytest.raises(ValueError) as exc_info:
        int("not a number")

    assert "invalid literal" in str(exc_info.value)
    assert exc_info.type is ValueError

match takes a regular expression; pytest checks it against the exception message with re.search, so the test passes only if the pattern matches somewhere in the message.
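
Because match is a regex, you can assert on the shape of a message rather than its exact text. A small sketch (parse_age is a hypothetical helper, not part of the calculator module):

```python
import pytest


def parse_age(text):
    # Hypothetical helper used only for this example.
    age = int(text)
    if age < 0:
        raise ValueError(f"age must be non-negative, got {age}")
    return age


def test_match_accepts_patterns():
    # \d+ matches any digits, so the test survives changes to the number
    with pytest.raises(ValueError, match=r"non-negative, got -\d+"):
        parse_age("-5")
```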

Parametrize — Test Many Inputs at Once

@pytest.mark.parametrize runs the same test function with different inputs. Instead of writing five nearly identical test functions, write one:

import pytest
from calculator import add, divide, power

@pytest.mark.parametrize("a, b, expected", [
    (2,   3,    5),
    (0,   0,    0),
    (-1, -2,   -3),
    (1.5, 2.5,  4.0),
    (100, -100,  0),
])
def test_add(a, b, expected):
    assert add(a, b) == expected


@pytest.mark.parametrize("base, exp, expected", [
    (2,  0,   1),
    (2,  1,   2),
    (2,  10,  1024),
    (3,  3,   27),
    (10, 3,   1000),
])
def test_power(base, exp, expected):
    assert power(base, exp) == expected


@pytest.mark.parametrize("a, b", [
    (1, 0),
    (0, 0),
    (-5, 0),
])
def test_divide_by_zero(a, b):
    with pytest.raises(ZeroDivisionError):
        divide(a, b)

pytest generates one test case per set of parameters — each is named and runs independently:

test_calculator.py::test_add[2-3-5]          PASSED
test_calculator.py::test_add[0-0-0]          PASSED
test_calculator.py::test_add[-1--2--3]       PASSED
test_calculator.py::test_add[1.5-2.5-4.0]    PASSED
test_calculator.py::test_add[100--100-0]     PASSED
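
Two more parametrize tricks worth knowing: stacked decorators run the cross-product of all combinations, and pytest.param lets you attach readable ids to individual cases. A quick sketch:

```python
import pytest


@pytest.mark.parametrize("a", [1, 2, 3])
@pytest.mark.parametrize("b", [10, 20])
def test_addition_commutes(a, b):
    # stacked decorators yield 3 x 2 = 6 test cases, one per (a, b) pair
    assert a + b == b + a


@pytest.mark.parametrize("value, expected", [
    pytest.param("42", 42, id="plain-int"),
    pytest.param("  7 ", 7, id="surrounding-whitespace"),
])
def test_int_parsing(value, expected):
    # the ids show up in the report: test_int_parsing[plain-int], ...
    assert int(value) == expected
```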

Fixtures — Shared Setup and Teardown

A fixture is a function that prepares something your tests need. Fixtures replace repetitive setUp/tearDown code. pytest injects them by matching the parameter name:

import pytest
from pathlib import Path
import tempfile, os


@pytest.fixture
def sample_numbers():
    """Provide a list of numbers for tests."""
    return [1, 2, 3, 4, 5, 10, -3, 0]


@pytest.fixture
def temp_file(tmp_path):
    """Create a temporary file with some content. tmp_path is a pytest built-in."""
    file = tmp_path / "test_data.txt"
    file.write_text("line one\nline two\nline three\n")
    return file


def test_sum(sample_numbers):
    assert sum(sample_numbers) == 22

def test_min(sample_numbers):
    assert min(sample_numbers) == -3

def test_max(sample_numbers):
    assert max(sample_numbers) == 10

def test_file_exists(temp_file):
    assert temp_file.exists()

def test_file_content(temp_file):
    lines = temp_file.read_text().splitlines()
    assert len(lines) == 3
    assert lines[0] == "line one"

Fixture scope — how long it lives

By default, fixtures run fresh for every test. Change scope to share them:

@pytest.fixture(scope="session")   # once for the entire test session
def database_connection():
    conn = connect_to_db()
    yield conn      # yield instead of return when you need cleanup
    conn.close()

@pytest.fixture(scope="module")    # once per test file
def shared_config():
    return load_config("test_config.yaml")

@pytest.fixture(scope="function")  # default — once per test
def fresh_list():
    return []
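
Fixtures can themselves be parametrized: every test that uses the fixture then runs once per parameter. A minimal sketch:

```python
import pytest


@pytest.fixture(params=["csv", "json", "xml"])
def file_format(request):
    # request.param holds the value for the current run
    return request.param


def test_format_is_supported(file_format):
    # runs three times, once per fixture parameter
    assert file_format in ("csv", "json", "xml")
```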

Fixtures with teardown

Use yield to run cleanup code after the test:

@pytest.fixture
def database(tmp_path):
    """Create a test database, yield it, then delete it."""
    import sqlite3
    db_path = tmp_path / "test.db"
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.commit()

    yield conn   # test runs here

    conn.close()
    # db_path is automatically cleaned up by tmp_path fixture


def test_insert_user(database):
    database.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))
    database.commit()
    row = database.execute("SELECT name FROM users").fetchone()
    assert row[0] == "Alice"

def test_empty_database(database):
    rows = database.execute("SELECT * FROM users").fetchall()
    assert rows == []

Built-in fixtures

pytest ships with several useful fixtures:

def test_tmp_path(tmp_path):
    # tmp_path: a temporary directory unique to the test invocation
    file = tmp_path / "output.txt"
    file.write_text("hello")
    assert file.read_text() == "hello"

def test_capsys(capsys):
    # capsys: capture stdout and stderr
    print("Hello, pytest!")
    captured = capsys.readouterr()
    assert captured.out == "Hello, pytest!\n"

def test_monkeypatch(monkeypatch):
    # monkeypatch: temporarily replace functions, env vars, attributes
    monkeypatch.setenv("API_KEY", "test-key-123")
    import os
    assert os.environ["API_KEY"] == "test-key-123"
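
monkeypatch.setattr is the workhorse: it swaps out any attribute or function for the duration of one test and restores it afterwards. A sketch, where get_user_name is a hypothetical helper that reads stdin:

```python
def get_user_name():
    # Hypothetical helper: prompts the user and normalizes the answer.
    return input("Name: ").strip().title()


def test_get_user_name(monkeypatch):
    # Replace builtins.input so the test never blocks on real stdin
    monkeypatch.setattr("builtins.input", lambda prompt: "  alice ")
    assert get_user_name() == "Alice"
```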

Mocking — Replace Real Dependencies

Tests should be fast, isolated, and repeatable. Network calls, databases, and the clock make tests slow and flaky. Mocking replaces real dependencies with fake ones you control.

from unittest.mock import Mock, MagicMock, patch, call

Mock — a fake object

from unittest.mock import Mock

# Create a mock
m = Mock()

# Call it — returns a Mock
result = m(1, 2, 3)
print(type(result))   # <class 'unittest.mock.Mock'>

# Configure the return value
m.return_value = 42
assert m() == 42

# Configure side effects
m.side_effect = ValueError("boom")
try:
    m()
except ValueError as e:
    print(e)   # boom

# Check calls
m = Mock(return_value="ok")
m("hello", key="value")
m.assert_called_once_with("hello", key="value")
print(m.call_count)    # 1
print(m.call_args)     # call('hello', key='value')
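
Two more Mock features come up constantly: side_effect also accepts an iterable (each call returns the next value), and spec= restricts the mock to a real object's attributes so typos fail loudly instead of silently returning a child Mock. A quick sketch:

```python
from unittest.mock import Mock

# side_effect as an iterable: successive calls return successive values
m = Mock(side_effect=[1, 2, 3])
assert [m(), m(), m()] == [1, 2, 3]

# spec= makes the mock reject attributes the real object lacks
import json
strict = Mock(spec=json)
strict.dumps({"x": 1})        # fine: json really has dumps
try:
    strict.dmups({"x": 1})    # typo: raises AttributeError
except AttributeError:
    print("caught the typo")
```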

patch — replace real objects temporarily

# email_sender.py
import smtplib

def send_email(to, subject, body):
    with smtplib.SMTP("smtp.gmail.com") as server:
        server.sendmail("from@example.com", to, f"Subject: {subject}\n\n{body}")
    return True

# test_email.py
from unittest.mock import patch, MagicMock

def test_send_email():
    with patch("smtplib.SMTP") as mock_smtp:
        # mock_smtp is called when smtplib.SMTP() is used
        mock_server = MagicMock()
        mock_smtp.return_value.__enter__.return_value = mock_server

        from email_sender import send_email
        result = send_email("alice@example.com", "Hello", "World")

        assert result is True
        mock_server.sendmail.assert_called_once_with(
            "from@example.com",
            "alice@example.com",
            "Subject: Hello\n\nWorld"
        )

patch as a decorator

from unittest.mock import patch
import pytest


class WeatherService:
    def get_temperature(self, city):
        # real implementation calls an external API
        raise NotImplementedError


def get_weather_report(city):
    service = WeatherService()
    temp = service.get_temperature(city)
    if temp < 0:
        return f"{city}: Freezing ({temp}°C)"
    elif temp < 15:
        return f"{city}: Cold ({temp}°C)"
    else:
        return f"{city}: Warm ({temp}°C)"


@patch.object(WeatherService, "get_temperature", return_value=25)
def test_warm_weather(mock_get_temp):
    report = get_weather_report("London")
    assert "Warm" in report
    assert "25" in report
    mock_get_temp.assert_called_once_with("London")


@patch.object(WeatherService, "get_temperature", return_value=-5)
def test_freezing_weather(mock_get_temp):
    report = get_weather_report("Oslo")
    assert "Freezing" in report

Mocking time and random

from unittest.mock import patch
import datetime

# Keep a reference to the real class. Inside a test decorated with
# @patch("datetime.datetime"), the name datetime.datetime IS the mock,
# so building the fake timestamp needs this saved real class.
REAL_DATETIME = datetime.datetime


def get_greeting():
    hour = datetime.datetime.now().hour
    if hour < 12:
        return "Good morning!"
    elif hour < 18:
        return "Good afternoon!"
    else:
        return "Good evening!"


@patch("datetime.datetime")
def test_morning_greeting(mock_dt):
    mock_dt.now.return_value = REAL_DATETIME(2026, 1, 1, 9, 0, 0)
    assert get_greeting() == "Good morning!"


@patch("datetime.datetime")
def test_evening_greeting(mock_dt):
    mock_dt.now.return_value = REAL_DATETIME(2026, 1, 1, 20, 0, 0)
    assert get_greeting() == "Good evening!"

Testing Classes

# bank.py
class InsufficientFundsError(Exception):
    pass

class BankAccount:
    def __init__(self, owner, balance=0.0):
        self.owner   = owner
        self._balance = balance
        self._history = []

    @property
    def balance(self):
        return self._balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("Deposit amount must be positive.")
        self._balance += amount
        self._history.append(("deposit", amount))
        return self._balance

    def withdraw(self, amount):
        if amount <= 0:
            raise ValueError("Withdrawal amount must be positive.")
        if amount > self._balance:
            raise InsufficientFundsError(
                f"Cannot withdraw {amount:.2f}, balance is {self._balance:.2f}."
            )
        self._balance -= amount
        self._history.append(("withdraw", amount))
        return self._balance

    def history(self):
        return list(self._history)

# test_bank.py
import pytest
from bank import BankAccount, InsufficientFundsError


@pytest.fixture
def account():
    """A fresh account with $100 balance."""
    return BankAccount("Alice", balance=100.0)


class TestDeposit:
    def test_deposit_increases_balance(self, account):
        account.deposit(50)
        assert account.balance == 150.0

    def test_deposit_returns_new_balance(self, account):
        result = account.deposit(50)
        assert result == 150.0

    def test_deposit_recorded_in_history(self, account):
        account.deposit(50)
        assert ("deposit", 50) in account.history()

    @pytest.mark.parametrize("amount", [0, -1, -100])
    def test_deposit_invalid_amount(self, account, amount):
        with pytest.raises(ValueError, match="positive"):
            account.deposit(amount)


class TestWithdraw:
    def test_withdraw_decreases_balance(self, account):
        account.withdraw(30)
        assert account.balance == 70.0

    def test_withdraw_exact_balance(self, account):
        account.withdraw(100)
        assert account.balance == 0.0

    def test_withdraw_overdraft(self, account):
        with pytest.raises(InsufficientFundsError, match=r"balance is 100\.00"):
            account.withdraw(150)

    @pytest.mark.parametrize("amount", [0, -10])
    def test_withdraw_invalid_amount(self, account, amount):
        with pytest.raises(ValueError):
            account.withdraw(amount)

    def test_withdraw_recorded_in_history(self, account):
        account.withdraw(40)
        assert ("withdraw", 40) in account.history()


class TestHistory:
    def test_empty_history_for_new_account(self):
        acc = BankAccount("Bob")
        assert acc.history() == []

    def test_history_records_sequence(self, account):
        account.deposit(50)
        account.withdraw(30)
        account.deposit(10)
        assert account.history() == [
            ("deposit",  50),
            ("withdraw", 30),
            ("deposit",  10),
        ]

    def test_history_returns_copy(self, account):
        h = account.history()
        h.append(("fake", 999))
        assert ("fake", 999) not in account.history()

Test Coverage

Coverage tells you which lines of your code are exercised by tests:

pip install pytest-cov
pytest --cov=calculator --cov-report=term-missing

Output:

Name            Stmts   Miss  Cover   Missing
---------------------------------------------
calculator.py      12      1    92%    15
---------------------------------------------
TOTAL              12      1    92%

Line 15 is not covered — that's probably an edge case your tests don't reach yet. Generate an HTML report for a visual view:

pytest --cov=. --cov-report=html
# Open htmlcov/index.html in your browser

Coverage configuration in pyproject.toml:

[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = "--cov=. --cov-report=term-missing --cov-fail-under=80"

[tool.coverage.run]
omit = ["tests/*", "setup.py"]

--cov-fail-under=80 makes pytest exit with a failure if coverage drops below 80%. Good for CI pipelines.

Test Organization

my_project/
├── src/
│   └── my_project/
│       ├── __init__.py
│       ├── calculator.py
│       ├── bank.py
│       └── utils.py
├── tests/
│   ├── conftest.py          <- shared fixtures go here
│   ├── test_calculator.py
│   ├── test_bank.py
│   └── test_utils.py
├── pyproject.toml
└── README.md

conftest.py is automatically loaded by pytest. Put fixtures used by multiple test files here:

# tests/conftest.py
import pytest
import sqlite3


@pytest.fixture(scope="session")
def db_connection(tmp_path_factory):
    """One database for the entire test session."""
    db_path = tmp_path_factory.mktemp("data") / "test.db"
    conn    = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.commit()
    yield conn
    conn.close()


@pytest.fixture
def clean_db(db_connection):
    """Start each test with an empty database."""
    yield db_connection
    db_connection.execute("DELETE FROM users")
    db_connection.commit()

Marks — Categorizing Tests

pytest marks let you tag and selectively run tests:

import sys

import pytest

@pytest.mark.slow
def test_large_dataset():
    # takes 30 seconds
    ...

@pytest.mark.integration
def test_real_database():
    # requires actual database
    ...

@pytest.mark.skip(reason="Not implemented yet")
def test_future_feature():
    ...

@pytest.mark.skipif(sys.platform == "win32", reason="Unix only")
def test_unix_permissions():
    ...

@pytest.mark.xfail(reason="Known bug, fix in progress")
def test_known_broken():
    assert 1 == 2   # expected to fail

Run only fast tests:

pytest -m "not slow"
pytest -m "not slow and not integration"
pytest -m slow

Register custom marks in pyproject.toml to avoid warnings:

[tool.pytest.ini_options]
markers = [
    "slow: marks tests as slow (deselect with '-m \"not slow\"')",
    "integration: marks tests requiring external services",
]

Test-Driven Development (TDD)

TDD means writing the test before the code. The cycle is:

  1. Red — write a failing test for the feature you want
  2. Green — write the minimum code to make it pass
  3. Refactor — clean up the code; the test keeps it working

# Step 1 — Red: write the test first
def test_fizzbuzz_three():
    assert fizzbuzz(3) == "Fizz"

def test_fizzbuzz_five():
    assert fizzbuzz(5) == "Buzz"

def test_fizzbuzz_fifteen():
    assert fizzbuzz(15) == "FizzBuzz"

def test_fizzbuzz_other():
    assert fizzbuzz(7) == "7"

# Tests fail because fizzbuzz doesn't exist yet

# Step 2 — Green: write the minimum code
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Tests pass

# Step 3 — Refactor: maybe make it cleaner
def fizzbuzz(n):
    result = ""
    if n % 3 == 0:
        result += "Fizz"
    if n % 5 == 0:
        result += "Buzz"
    return result or str(n)

# Tests still pass — refactor was safe

TDD's biggest benefit isn't the tests — it's that it forces you to think about the interface before the implementation.
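
Once the implementation settles, the four red-phase tests can be folded into a single parametrized test, which makes adding the next edge case a one-line change:

```python
import pytest


def fizzbuzz(n):
    result = ""
    if n % 3 == 0:
        result += "Fizz"
    if n % 5 == 0:
        result += "Buzz"
    return result or str(n)


@pytest.mark.parametrize("n, expected", [
    (3,  "Fizz"),
    (5,  "Buzz"),
    (15, "FizzBuzz"),
    (7,  "7"),
    (30, "FizzBuzz"),   # new edge case: one extra line, no new function
])
def test_fizzbuzz(n, expected):
    assert fizzbuzz(n) == expected
```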

Project: Test Suite for a Shopping Cart

# cart.py
from dataclasses import dataclass, field
from typing import Optional


class CartError(Exception):
    pass


@dataclass
class Product:
    name:  str
    price: float
    sku:   str


@dataclass
class CartItem:
    product:  Product
    quantity: int = 1

    @property
    def subtotal(self):
        return self.product.price * self.quantity


class ShoppingCart:
    def __init__(self, tax_rate: float = 0.0):
        self.tax_rate = tax_rate
        self._items: dict[str, CartItem] = {}

    def add(self, product: Product, quantity: int = 1) -> None:
        if quantity <= 0:
            raise CartError("Quantity must be positive.")
        if product.sku in self._items:
            self._items[product.sku].quantity += quantity
        else:
            self._items[product.sku] = CartItem(product, quantity)

    def remove(self, sku: str, quantity: Optional[int] = None) -> None:
        if sku not in self._items:
            raise CartError(f"Product {sku!r} not in cart.")
        if quantity is None or quantity >= self._items[sku].quantity:
            del self._items[sku]
        else:
            self._items[sku].quantity -= quantity

    def clear(self) -> None:
        self._items.clear()

    @property
    def subtotal(self) -> float:
        return sum(item.subtotal for item in self._items.values())

    @property
    def tax(self) -> float:
        return round(self.subtotal * self.tax_rate, 2)

    @property
    def total(self) -> float:
        return round(self.subtotal + self.tax, 2)

    @property
    def item_count(self) -> int:
        return sum(item.quantity for item in self._items.values())

    def items(self) -> list[CartItem]:
        return list(self._items.values())

# test_cart.py
import pytest
from cart import ShoppingCart, Product, CartItem, CartError


# ── Fixtures ──────────────────────────────────────────────────────────────────

@pytest.fixture
def book():
    return Product("Python Book", 29.99, "BOOK-001")

@pytest.fixture
def pen():
    return Product("Pen", 2.50, "PEN-001")

@pytest.fixture
def laptop():
    return Product("Laptop", 999.99, "LAP-001")

@pytest.fixture
def empty_cart():
    return ShoppingCart()

@pytest.fixture
def taxed_cart():
    return ShoppingCart(tax_rate=0.08)

@pytest.fixture
def cart_with_items(empty_cart, book, pen):
    empty_cart.add(book, quantity=2)
    empty_cart.add(pen, quantity=3)
    return empty_cart


# ── Adding items ──────────────────────────────────────────────────────────────

class TestAdd:
    def test_add_product(self, empty_cart, book):
        empty_cart.add(book)
        assert empty_cart.item_count == 1

    def test_add_same_product_twice_increases_quantity(self, empty_cart, book):
        empty_cart.add(book, 2)
        empty_cart.add(book, 3)
        assert empty_cart.item_count == 5

    def test_add_multiple_products(self, empty_cart, book, pen):
        empty_cart.add(book)
        empty_cart.add(pen)
        assert empty_cart.item_count == 2

    @pytest.mark.parametrize("quantity", [0, -1, -100])
    def test_add_invalid_quantity_raises(self, empty_cart, book, quantity):
        with pytest.raises(CartError, match="positive"):
            empty_cart.add(book, quantity)


# ── Removing items ────────────────────────────────────────────────────────────

class TestRemove:
    def test_remove_all(self, cart_with_items, book):
        cart_with_items.remove(book.sku)
        assert cart_with_items.item_count == 3   # only pens remain

    def test_remove_partial_quantity(self, cart_with_items, pen):
        cart_with_items.remove(pen.sku, quantity=1)
        remaining = next(
            i for i in cart_with_items.items() if i.product.sku == pen.sku
        )
        assert remaining.quantity == 2

    def test_remove_all_of_quantity(self, cart_with_items, book):
        cart_with_items.remove(book.sku, quantity=2)
        skus = [i.product.sku for i in cart_with_items.items()]
        assert book.sku not in skus

    def test_remove_nonexistent_raises(self, empty_cart):
        with pytest.raises(CartError, match="not in cart"):
            empty_cart.remove("FAKE-SKU")


# ── Totals ────────────────────────────────────────────────────────────────────

class TestTotals:
    def test_empty_cart_subtotal(self, empty_cart):
        assert empty_cart.subtotal == pytest.approx(0.0)

    def test_subtotal(self, cart_with_items):
        # 2 books at 29.99 + 3 pens at 2.50
        expected = 2 * 29.99 + 3 * 2.50
        assert cart_with_items.subtotal == pytest.approx(expected)

    def test_no_tax_by_default(self, cart_with_items):
        assert cart_with_items.tax == pytest.approx(0.0)

    def test_tax_calculation(self, taxed_cart, book):
        taxed_cart.add(book)
        expected_tax = round(book.price * 0.08, 2)
        assert taxed_cart.tax == pytest.approx(expected_tax)

    def test_total_with_tax(self, taxed_cart, book):
        taxed_cart.add(book, 2)
        subtotal = 2 * book.price
        expected = round(subtotal * 1.08, 2)
        assert taxed_cart.total == pytest.approx(expected)

    def test_clear_empties_cart(self, cart_with_items):
        cart_with_items.clear()
        assert cart_with_items.item_count == 0
        assert cart_with_items.subtotal == 0.0


# ── Edge cases ────────────────────────────────────────────────────────────────

class TestEdgeCases:
    def test_large_order(self, empty_cart, book):
        empty_cart.add(book, 1000)
        assert empty_cart.subtotal == pytest.approx(1000 * book.price)

    def test_cart_is_independent(self, book):
        cart1 = ShoppingCart()
        cart2 = ShoppingCart()
        cart1.add(book)
        assert cart2.item_count == 0   # cart2 unaffected

    def test_items_returns_copy(self, cart_with_items):
        items = cart_with_items.items()
        items.clear()
        assert cart_with_items.item_count > 0   # original unchanged

Run the full suite:

$ pytest test_cart.py -v --tb=short

test_cart.py::TestAdd::test_add_product                          PASSED
test_cart.py::TestAdd::test_add_same_product_twice_increases...  PASSED
...
test_cart.py::TestEdgeCases::test_items_returns_copy             PASSED

19 passed in 0.15s

What You Learned in This Chapter

  • Tests give you confidence, safety, and living documentation.
  • Name test files test_*.py and test functions test_*. Run with pytest.
  • Use plain assert — pytest rewrites it for detailed failure messages.
  • pytest.approx compares floats with tolerance.
  • pytest.raises tests that the right exception is raised; match= checks the message.
  • @pytest.mark.parametrize runs one test function with many inputs.
  • Fixtures provide shared setup/teardown, injected by parameter name. Use yield for cleanup.
  • Built-in fixtures: tmp_path, capsys, monkeypatch.
  • unittest.mock.patch replaces real dependencies with controllable fakes during tests.
  • pytest-cov measures which lines your tests exercise. Aim for 80%+ on core logic.
  • conftest.py holds fixtures shared across multiple test files.
  • @pytest.mark categorizes tests; run subsets with -m.
  • TDD: write the failing test first, then write the code, then refactor.

What's Next?

Chapter 29 covers Debugging: pdb, breakpoint(), logging, and the techniques experienced developers use to track down bugs systematically. Because even well-tested code has bugs, and knowing how to find them quickly is a skill on its own.

© 2026 Abhilash Sahoo. Python: Zero to Hero.