Chapter 53: Logging
`print()` is fine for scripts. For real applications, it's a disaster.
`print()` has no timestamps, no severity levels, no way to filter messages, no way to write to a file without redirecting stdout, and no way to turn it off in production without deleting it. The moment something goes wrong at 3am, you'll wish you had proper logging.
Python's built-in `logging` module solves all of this. It takes ten minutes to set up and pays dividends every time you debug a production issue.
The Problem with print()
```python
# This is everywhere in your codebase
print("Starting server...")
print("Connecting to database")
print("Processing 1000 records")
print("Done!")
```
Problems:
- No timestamp — when did this happen?
- No severity — is this normal or a warning?
- Can't filter — want only errors? Too bad.
- Can't write to file without shell tricks
- Can't disable without editing code
- No context — which module, which function?
The Logging Module — Basics
```python
import logging

logging.basicConfig(level=logging.DEBUG)

logging.debug("Detailed debugging info")
logging.info("Normal operation message")
logging.warning("Something unexpected happened")
logging.error("A serious error occurred")
logging.critical("Application cannot continue")
```
Output:
```
DEBUG:root:Detailed debugging info
INFO:root:Normal operation message
WARNING:root:Something unexpected happened
ERROR:root:A serious error occurred
CRITICAL:root:Application cannot continue
```
Log Levels
There are five standard log levels, in increasing severity:
| Level | Value | When to use |
|---|---|---|
| DEBUG | 10 | Detailed info for debugging. Off in production. |
| INFO | 20 | Confirm things are working as expected. |
| WARNING | 30 | Something unexpected, but the app continues. Default level. |
| ERROR | 40 | A serious problem. The app may keep running. |
| CRITICAL | 50 | Fatal error. The app cannot continue. |
Setting a level filters out everything below it:
```python
logging.basicConfig(level=logging.WARNING)

# DEBUG and INFO messages are now silently ignored
# WARNING, ERROR, CRITICAL are shown
```
In development: DEBUG. In production: INFO or WARNING.
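One way to honor that split without editing code is to read the level from the environment. A small sketch — the `LOG_LEVEL` variable name is an assumption for this example, not a `logging` convention:

```python
import logging
import os

# LOG_LEVEL is a hypothetical environment variable chosen for this sketch.
# getattr falls back to WARNING if the value is not a real level name.
level_name = os.environ.get("LOG_LEVEL", "WARNING").upper()
logging.basicConfig(level=getattr(logging, level_name, logging.WARNING))

logging.debug("visible only when LOG_LEVEL=DEBUG")
logging.warning("visible at the default level and above")
```

Run with `LOG_LEVEL=DEBUG` set in development and leave it unset in production.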
Named Loggers — The Right Way
Never use the root logger (`logging.debug(...)`) in real code. Always create a named logger per module:
```python
# In every Python file
import logging

log = logging.getLogger(__name__)
# __name__ is the module's dotted path, e.g. "myapp.services.auth"

log.debug("Starting authentication")
log.info("User %s logged in", username)
log.warning("Failed login attempt for %s", username)
log.error("Database connection failed: %s", error)
```
Why `__name__`? It creates a hierarchy:
```
myapp
├── myapp.api
├── myapp.services
│   └── myapp.services.auth
└── myapp.database
```
You can configure `myapp` as a whole, or silence just `myapp.database`. The hierarchy gives you fine-grained control without touching every file.
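A sketch of that per-subtree control, using the hypothetical `myapp` layout above:

```python
import logging

# Everything under "myapp" logs at DEBUG...
logging.getLogger("myapp").setLevel(logging.DEBUG)

# ...except the database subtree, which only reports problems
logging.getLogger("myapp.database").setLevel(logging.WARNING)

# Child loggers with no level of their own inherit from the nearest
# ancestor that has one set.
auth_log = logging.getLogger("myapp.services.auth")
db_log = logging.getLogger("myapp.database")

assert auth_log.getEffectiveLevel() == logging.DEBUG
assert db_log.getEffectiveLevel() == logging.WARNING
```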
Configuring Logging Properly
`logging.basicConfig()` is fine for scripts. For applications, configure logging explicitly:
```python
import logging
import sys


def setup_logging(level: str = "INFO") -> None:
    """Configure logging for the application."""
    # Create a formatter
    formatter = logging.Formatter(
        fmt="%(asctime)s %(levelname)-8s %(name)s — %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
    )

    # Console handler — writes to stdout
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setLevel(logging.DEBUG)
    console_handler.setFormatter(formatter)

    # Configure the root logger
    root_logger = logging.getLogger()
    root_logger.setLevel(getattr(logging, level.upper(), logging.INFO))
    root_logger.addHandler(console_handler)

    # Silence noisy third-party libraries
    logging.getLogger("urllib3").setLevel(logging.WARNING)
    logging.getLogger("httpx").setLevel(logging.WARNING)
    logging.getLogger("sqlalchemy.engine").setLevel(logging.WARNING)


# Call this once at application startup
setup_logging(level="DEBUG")
```
Output now looks like:
```
2026-03-09 14:22:01 INFO     myapp.main — Starting application
2026-03-09 14:22:01 DEBUG    myapp.database — Connecting to database
2026-03-09 14:22:02 INFO     myapp.services.auth — User alice logged in
2026-03-09 14:22:05 WARNING  myapp.api — Rate limit exceeded for 192.168.1.1
2026-03-09 14:22:10 ERROR    myapp.database — Query timeout after 30s
```
Logging to Files
```python
import logging
from logging.handlers import RotatingFileHandler
from pathlib import Path


def setup_logging(
    level: str = "INFO",
    log_dir: str = "logs",
    log_file: str = "app.log",
) -> None:
    Path(log_dir).mkdir(exist_ok=True)
    log_path = Path(log_dir) / log_file

    formatter = logging.Formatter(
        fmt="%(asctime)s %(levelname)-8s %(name)s — %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
    )

    # Console handler
    console = logging.StreamHandler()
    console.setFormatter(formatter)
    console.setLevel(logging.DEBUG)

    # A plain logging.FileHandler(log_path, encoding="utf-8") also works,
    # but the file grows forever — prefer rotation:

    # Rotating file handler — max 10 MB per file, keep 5 backups
    rotating = RotatingFileHandler(
        log_path,
        maxBytes=10 * 1024 * 1024,  # 10 MB
        backupCount=5,
        encoding="utf-8",
    )
    rotating.setFormatter(formatter)
    rotating.setLevel(logging.INFO)  # less noise in the file

    root = logging.getLogger()
    root.setLevel(getattr(logging, level.upper(), logging.INFO))
    root.addHandler(console)
    root.addHandler(rotating)
```
`RotatingFileHandler` creates `app.log`, then `app.log.1`, `app.log.2`, etc. when the file reaches 10 MB. It keeps 5 backup files and deletes the oldest. Your disk never fills up.
For daily rotation use `TimedRotatingFileHandler`:
```python
from logging.handlers import TimedRotatingFileHandler

daily = TimedRotatingFileHandler(
    "logs/app.log",
    when="midnight",  # rotate at midnight
    interval=1,
    backupCount=30,   # keep 30 days
    encoding="utf-8",
)
```
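To watch rotation happen without waiting for 10 MB of logs, a quick sketch with deliberately tiny limits (the logger name, directory, and sizes are made up for the demo):

```python
import logging
import tempfile
from logging.handlers import RotatingFileHandler
from pathlib import Path

log_dir = Path(tempfile.mkdtemp())

# Absurdly small maxBytes so rotation triggers after a few messages
handler = RotatingFileHandler(log_dir / "app.log", maxBytes=200, backupCount=2)
handler.setFormatter(logging.Formatter("%(message)s"))

log = logging.getLogger("rotation_demo")
log.setLevel(logging.INFO)
log.addHandler(handler)

for i in range(50):
    log.info("message number %d fills up the file quickly", i)
handler.close()

# Only the current file plus backupCount backups survive
print(sorted(p.name for p in log_dir.iterdir()))
# → ['app.log', 'app.log.1', 'app.log.2']
```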
Logging Exceptions
Always log the full traceback when catching exceptions:
```python
import logging

log = logging.getLogger(__name__)


def process_file(path: str) -> None:
    try:
        with open(path) as f:
            data = f.read()
    except FileNotFoundError:
        log.error("File not found: %s", path)
        raise
    except PermissionError:
        log.error("Permission denied reading: %s", path)
        raise
    except Exception:
        # exc_info=True includes the full traceback
        log.exception("Unexpected error reading file: %s", path)
        raise
```
`log.exception(...)` is equivalent to `log.error(..., exc_info=True)` — it logs the message AND the full traceback. Use it inside `except` blocks.
Output:
```
2026-03-09 14:30:15 ERROR    myapp.processor — Unexpected error reading file: data.csv
Traceback (most recent call last):
  File "processor.py", line 8, in process_file
    data = f.read()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 4
```
Structured Logging — JSON Logs for Production
In production, logs are often parsed by tools like Datadog, Splunk, or Elasticsearch. Plain text logs are hard to query. JSON logs are easy.
```bash
pip install python-json-logger
```
```python
import logging
import sys

from pythonjsonlogger import jsonlogger


def setup_json_logging(level: str = "INFO") -> None:
    formatter = jsonlogger.JsonFormatter(
        fmt="%(asctime)s %(levelname)s %(name)s %(message)s",
        datefmt="%Y-%m-%dT%H:%M:%S",
    )
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(formatter)

    root = logging.getLogger()
    root.setLevel(getattr(logging, level.upper(), logging.INFO))
    root.addHandler(handler)
```
Each log line is now a JSON object:
```json
{"asctime": "2026-03-09T14:22:01", "levelname": "INFO", "name": "myapp.auth", "message": "User logged in", "username": "alice", "ip": "192.168.1.1"}
{"asctime": "2026-03-09T14:22:10", "levelname": "ERROR", "name": "myapp.db", "message": "Query timeout", "query": "SELECT * FROM orders", "duration_ms": 30412}
```
Add extra fields to any log call:
```python
log.info("User logged in", extra={"username": username, "ip": client_ip})
log.error("Query timeout", extra={"query": sql, "duration_ms": elapsed_ms})
```
Adding Context with LoggerAdapter
When processing a request, you want every log message to include the request ID without passing it to every function:
```python
import logging
from typing import Any

log = logging.getLogger(__name__)


class RequestAdapter(logging.LoggerAdapter):
    """Injects request_id into every log record."""

    def process(self, msg: str, kwargs: dict[str, Any]) -> tuple[str, dict[str, Any]]:
        extra = kwargs.setdefault("extra", {})
        extra["request_id"] = self.extra.get("request_id", "-")
        return msg, kwargs


# In a FastAPI middleware or request handler:
def get_request_logger(request_id: str) -> logging.LoggerAdapter:
    return RequestAdapter(log, {"request_id": request_id})


# Usage
def handle_request(request_id: str, user_id: int) -> None:
    rlog = get_request_logger(request_id)
    rlog.info("Processing request for user %d", user_id)
    rlog.debug("Fetching from database")
    rlog.info("Request complete")
```
Output (every line has the request ID):
```
2026-03-09 14:22:01 INFO     myapp — [req-abc123] Processing request for user 42
2026-03-09 14:22:01 DEBUG    myapp — [req-abc123] Fetching from database
2026-03-09 14:22:01 INFO     myapp — [req-abc123] Request complete
```
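Note that the bracketed ID comes from the formatter, not the adapter: the format string has to reference the injected field. A minimal sketch — the `%(request_id)s` format string is an assumption, not shown in the earlier setup, and it is only safe on records that always carry that field:

```python
import logging
import sys

handler = logging.StreamHandler(sys.stdout)
# %(request_id)s reads the field the adapter merged into the LogRecord.
# A record WITHOUT request_id would make this formatter raise, so route
# only adapter-produced records through it.
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)-8s %(name)s — [%(request_id)s] %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
))

log = logging.getLogger("myapp")
log.setLevel(logging.INFO)
log.addHandler(handler)

# extra merges request_id into the record, exactly as the adapter does
log.info("Processing request for user %d", 42, extra={"request_id": "req-abc123"})
```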
Complete Production Logging Setup
```python
# logging_config.py
"""
Centralized logging configuration.

Call setup_logging() once at application startup.
"""
import logging
import sys
from pathlib import Path

NOISY_LIBRARIES = [
    "urllib3",
    "httpx",
    "httpcore",
    "sqlalchemy.engine",
    "sqlalchemy.pool",
    "multipart",
    "passlib",
    "boto3",
    "botocore",
    "s3transfer",
]


def setup_logging(
    level: str = "INFO",
    log_dir: str | None = "logs",
    json_output: bool = False,
) -> None:
    """
    Configure application-wide logging.

    Args:
        level: Minimum log level (DEBUG, INFO, WARNING, ERROR).
        log_dir: Directory for log files. None = console only.
        json_output: If True, output JSON (for production log aggregators).
    """
    numeric_level = getattr(logging, level.upper(), logging.INFO)

    if json_output:
        try:
            from pythonjsonlogger import jsonlogger
            formatter: logging.Formatter = jsonlogger.JsonFormatter(
                fmt="%(asctime)s %(levelname)s %(name)s %(message)s",
                datefmt="%Y-%m-%dT%H:%M:%SZ",
            )
        except ImportError:
            formatter = _text_formatter()
    else:
        formatter = _text_formatter()

    handlers: list[logging.Handler] = []

    # Console
    console = logging.StreamHandler(sys.stdout)
    console.setFormatter(formatter)
    console.setLevel(numeric_level)
    handlers.append(console)

    if log_dir:
        from logging.handlers import RotatingFileHandler

        Path(log_dir).mkdir(parents=True, exist_ok=True)

        # Rotating file
        file_handler = RotatingFileHandler(
            Path(log_dir) / "app.log",
            maxBytes=10 * 1024 * 1024,
            backupCount=5,
            encoding="utf-8",
        )
        file_handler.setFormatter(formatter)
        file_handler.setLevel(logging.INFO)
        handlers.append(file_handler)

        # Separate error log
        error_handler = RotatingFileHandler(
            Path(log_dir) / "error.log",
            maxBytes=10 * 1024 * 1024,
            backupCount=10,
            encoding="utf-8",
        )
        error_handler.setFormatter(formatter)
        error_handler.setLevel(logging.ERROR)
        handlers.append(error_handler)

    # Configure root logger
    root = logging.getLogger()
    root.setLevel(numeric_level)
    for h in handlers:
        root.addHandler(h)

    # Silence noisy libraries
    for library in NOISY_LIBRARIES:
        logging.getLogger(library).setLevel(logging.WARNING)

    logging.getLogger(__name__).info(
        "Logging configured: level=%s, log_dir=%s, json=%s",
        level, log_dir, json_output,
    )


def _text_formatter() -> logging.Formatter:
    return logging.Formatter(
        fmt="%(asctime)s %(levelname)-8s %(name)s — %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
    )
```
Use it:
```python
# main.py
import logging

from logging_config import setup_logging

setup_logging(level="DEBUG")  # development
# setup_logging(level="INFO", json_output=True)  # production

log = logging.getLogger(__name__)
log.info("Application started")
```
Logging in FastAPI
```python
# main.py
import logging
import time
import uuid
from contextlib import asynccontextmanager

from fastapi import FastAPI, Request

from logging_config import setup_logging

setup_logging(level="INFO")
log = logging.getLogger(__name__)


@asynccontextmanager
async def lifespan(app: FastAPI):
    log.info("Application starting up")
    yield
    log.info("Application shutting down")


app = FastAPI(lifespan=lifespan)


@app.middleware("http")
async def log_requests(request: Request, call_next):
    request_id = str(uuid.uuid4())[:8]
    start = time.time()
    log.info(
        "-> %s %s [%s]",
        request.method, request.url.path, request_id,
        extra={"request_id": request_id},
    )
    response = await call_next(request)
    duration_ms = (time.time() - start) * 1000
    log.info(
        "<- %s %s %dms [%s]",
        response.status_code, request.url.path, duration_ms, request_id,
        extra={"request_id": request_id, "duration_ms": round(duration_ms)},
    )
    return response
```
Output for every request:
```
2026-03-09 14:22:01 INFO     myapp.main — -> GET /products [a3f2b1c4]
2026-03-09 14:22:01 INFO     myapp.main — <- 200 /products 12ms [a3f2b1c4]
2026-03-09 14:22:02 INFO     myapp.main — -> POST /auth/token [d7e1a8b5]
2026-03-09 14:22:02 INFO     myapp.main — <- 200 /auth/token 45ms [d7e1a8b5]
2026-03-09 14:22:03 INFO     myapp.main — -> GET /products/9999 [c2f3d4e5]
2026-03-09 14:22:03 INFO     myapp.main — <- 404 /products/9999 3ms [c2f3d4e5]
```
What You Learned in This Chapter
- `print()` has no timestamps, severity, filtering, or file output. Use `logging` for anything beyond a quick script.
- Five levels in order: `DEBUG < INFO < WARNING < ERROR < CRITICAL`. Set the level to filter below it.
- Always create named loggers with `logging.getLogger(__name__)`. Never use the root logger directly.
- Format strings: `%(asctime)s`, `%(levelname)s`, `%(name)s`, `%(message)s`. Use `%s` style in log messages (not f-strings) — lazy evaluation skips string building if the level is filtered.
- `RotatingFileHandler` caps file size and keeps backups. `TimedRotatingFileHandler` rotates daily.
- `log.exception("msg")` logs the full traceback. Use it inside `except` blocks.
- Structured JSON logging (`python-json-logger`) makes production logs queryable by log aggregators.
- `LoggerAdapter` injects context (request ID, user ID) into every log message in a scope.
- Silence noisy third-party libraries with `logging.getLogger("urllib3").setLevel(logging.WARNING)`.
- Call your setup function once at startup. Every module just does `log = logging.getLogger(__name__)`.
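The lazy-evaluation point is worth seeing concretely. A small sketch (the logger name and `Expensive` class are made up for the demo):

```python
import logging

log = logging.getLogger("lazy_demo")
log.setLevel(logging.WARNING)  # DEBUG messages are filtered out


class Expensive:
    """str() on this object is costly — we count how often it runs."""
    calls = 0

    def __str__(self) -> str:
        Expensive.calls += 1
        return "expensive value"


obj = Expensive()

# %s style: the level check fails first, so the message is never
# formatted and __str__ never runs
log.debug("value: %s", obj)
assert Expensive.calls == 0

# f-string: the string is built BEFORE logging even checks the level
log.debug(f"value: {obj}")
assert Expensive.calls == 1
```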
What's Next?
Chapter 54 covers Design Patterns in Python — proven solutions to recurring problems in software design: Singleton, Factory, Observer, Strategy, Decorator, and more. You'll learn which patterns are genuinely useful in Python and which the language already handles for you.