Decorators Explained: From Functions to Frameworks
Decorators are just functions wrapping functions. Once you see that, Flask routes, pytest fixtures, and dataclass fields all make sense.
Abstract Algorithms
The Copy-Paste Crisis: When Timing Code Invades Twenty Functions
Sofia is three months into her first Python backend role. The team runs a performance review and discovers the data-processing API is slow. The tech lead asks her to add timing instrumentation to every endpoint handler: 20 functions spread across four files.
She does it the obvious way. She copies the same five lines to the top and bottom of each function:
import time

def process_orders(orders):
    start = time.perf_counter()
    # ... fifty lines of real logic ...
    elapsed = time.perf_counter() - start
    print(f"[process_orders] {elapsed:.4f}s")
    return result

def process_returns(returns):
    start = time.perf_counter()
    # ... forty lines of real logic ...
    elapsed = time.perf_counter() - start
    print(f"[process_returns] {elapsed:.4f}s")
    return result
Four files and two hours later, she has 100 lines of identical boilerplate threaded through the codebase. The tech lead reviews the PR and leaves one comment: "This is exactly what decorators are for."
The refactored version:
import time
import functools

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"[{func.__name__}] {elapsed:.4f}s")
        return result
    return wrapper

@timer
def process_orders(orders):
    # ... fifty lines of real logic ...
    return result

@timer
def process_returns(returns):
    # ... forty lines of real logic ...
    return result
Every function is timed. One decorator. Zero repeated boilerplate. When the timing format changes, you edit one place.
The @timer syntax is syntactic sugar. When the def statement executes, Python translates it into exactly one assignment:
process_orders = timer(process_orders)
That is the entire magic. timer is a function that takes process_orders as an argument and returns a new function. The returned function, wrapper, does the timing work, then calls the original process_orders, then returns its result. Seeing the @ symbol as shorthand for this assignment is the mental model that makes every decorator in the Python ecosystem immediately readable.
Python Functions as First-Class Citizens: The Foundation Decorators Build On
Before a decorator can work, you need to see functions the way Python sees them: not as named blocks of code but as objects you can pass, store, and return just like integers or strings.
Functions Are Objects You Can Pass Around
def greet(name):
    return f"Hello, {name}!"

# Assign to a variable: no call parentheses
say_hello = greet
print(say_hello("Alice"))  # Hello, Alice!

# Store in a list alongside other callables
actions = [greet, str.upper, len]
print([f("world") for f in actions])  # ['Hello, world!', 'WORLD', 5]
When Python executes def greet(name): it allocates a function object and binds the name greet to it. greet is just a label pointing at that object. Assigning say_hello = greet copies the reference; both names point at the same function object.
Returning a Function from a Function: The Closure
A closure is the mechanism that makes decorators possible. When an inner function references a variable from its enclosing scope, Python keeps that variable alive after the outer function returns by storing a reference to it in the inner function's __closure__ attribute.
def make_multiplier(factor):
    def multiply(value):
        return value * factor  # factor is captured from make_multiplier's scope
    return multiply

double = make_multiplier(2)
triple = make_multiplier(3)
print(double(10))  # 20
print(triple(10))  # 30

# The captured variable lives in __closure__
print(double.__closure__[0].cell_contents)  # 2
make_multiplier(2) returns multiply with factor=2 captured. Even after make_multiplier has returned and its stack frame is gone, double.__closure__ keeps factor alive. This is exactly what the wrapper inside timer does: it captures func from the enclosing timer scope and calls it later.
Desugaring @decorator to an Explicit Assignment
The @ syntax is purely cosmetic. These two blocks are functionally equivalent:
# Style 1: decorator syntax
@timer
def process_orders(orders):
    ...

# Style 2: explicit assignment, what Python actually does
def process_orders(orders):
    ...

process_orders = timer(process_orders)
Python reads @timer directly before a def, evaluates timer, and passes the function being defined as the argument. The name process_orders is then rebound to whatever timer returns. If timer does not return a callable, every subsequent call to process_orders raises a TypeError.
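That failure mode is easy to reproduce. The broken decorator below is a made-up illustration: it defines a wrapper but forgets to return it, so the decorator implicitly returns None and the function's name is rebound to None.

```python
def broken(func):
    # BUG for illustration: defines a wrapper but never returns it,
    # so the decorator implicitly returns None
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)

@broken
def add(a, b):
    return a + b

print(add)  # None, because the name 'add' was rebound to broken's return value
try:
    add(1, 2)
except TypeError as exc:
    print(exc)  # 'NoneType' object is not callable
```

The error surfaces only at the first call site, which is why a missing return statement in a decorator can be surprisingly hard to trace back.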
Crafting Decorators by Hand: Timer, Logger, Retry, and functools.wraps
The Minimal Wrapper Pattern
Every function-based decorator follows this three-part structure: accept the original function as an argument, define an inner wrapper that adds behavior before and after calling the original, and return wrapper.
def my_decorator(func):
    def wrapper(*args, **kwargs):
        # before: run code before the original function
        result = func(*args, **kwargs)
        # after: run code after the original function
        return result
    return wrapper
The *args, **kwargs signature in wrapper is essential. It means the wrapper accepts any combination of positional and keyword arguments and forwards them transparently to func. Without it, the decorator would only work with one specific signature.
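To see why, here is a hypothetical wrapper with a hard-coded single-argument signature (the logged decorator is illustrative, not from the examples above). It works for one function and breaks on any other:

```python
def logged(func):
    def wrapper(x):  # hard-coded: accepts exactly one positional argument
        print(f"calling with {x}")
        return func(x)
    return wrapper

@logged
def square(n):
    return n * n

print(square(4))  # prints "calling with 4", then 16

@logged
def add(a, b):
    return a + b

try:
    add(1, 2)  # wrapper() takes 1 positional argument but 2 were given
except TypeError as exc:
    print(exc)
```

With *args, **kwargs in the wrapper's signature, the same decorator would forward any call shape unchanged.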
Why functools.wraps Matters
Without functools.wraps, the decorated function loses its identity. The wrapper replaces the original, including its __name__, __doc__, and __module__:
def bad_timer(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@bad_timer
def calculate_total(items):
    """Sum all item prices and return the total."""
    return sum(item.price for item in items)

print(calculate_total.__name__)  # wrapper (wrong!)
print(calculate_total.__doc__)   # None (lost!)
Debugging tools, logging frameworks, pytest, and help() all rely on __name__ and __doc__. When they show wrapper instead of calculate_total, stack traces become confusing and help(calculate_total) returns nothing useful. functools.wraps copies all the original function's metadata to the wrapper:
import functools
import time

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"[{func.__name__}] {elapsed:.4f}s")
        return result
    return wrapper
functools.wraps also sets wrapper.__wrapped__ = func, which lets inspect.unwrap() and testing frameworks peel off decorators to test the raw function directly.
Decorators with Arguments: The Factory Pattern
A plain decorator receives one argument, the function. When you want @retry(max_attempts=3), you need an extra outer function that returns the decorator:
import functools
import random
import time

def retry(max_attempts=3, delay=1.0, exceptions=(Exception,)):
    """Decorator factory: returns a decorator that retries on specified exceptions."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_exc = None
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions as exc:
                    last_exc = exc
                    if attempt < max_attempts:
                        print(f"[{func.__name__}] attempt {attempt} failed: {exc}. Retrying in {delay}s...")
                        time.sleep(delay)
            raise last_exc
        return wrapper
    return decorator

@retry(max_attempts=3, delay=0.5, exceptions=(ConnectionError,))
def fetch_user(user_id):
    # Simulates a flaky network call
    if random.random() < 0.6:
        raise ConnectionError("Network timeout")
    return {"id": user_id, "name": "Alice"}
The call order is: retry(max_attempts=3, delay=0.5, exceptions=(ConnectionError,)) executes first and returns decorator. Python then calls decorator(fetch_user) and rebinds fetch_user to the resulting wrapper. This two-level nesting is the standard factory pattern for parameterized decorators.
Stacking Multiple Decorators
When you stack decorators, Python applies them bottom-up at decoration time but executes them top-down at call time:
@retry(max_attempts=3)
@timer
@log
def fetch_price(product_id):
    ...

This expands to:

fetch_price = retry(max_attempts=3)(timer(log(fetch_price)))
At call time, the outermost wrapper (retry) runs first, calls timer's wrapper, which calls log's wrapper, which calls the original fetch_price. The returns unwind in the reverse order. Decoration order matters whenever the decorators interact: a @cache placed outside @retry caches only the final result after the retry logic has run, while a @cache inside @retry might cache a None result from a partial failure.
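The call-time ordering is easy to verify with a hypothetical tracing decorator (announce below is an illustration, not one of the decorators in this post) that prints when its wrapper starts and finishes:

```python
import functools

def announce(label):
    # Illustrative tracing decorator: prints when its wrapper
    # starts and finishes, so stacking order becomes visible
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            print(f"{label} start")
            result = func(*args, **kwargs)
            print(f"{label} end")
            return result
        return wrapper
    return decorator

@announce("outer")
@announce("inner")
def work():
    print("work body")

work()
# outer start
# inner start
# work body
# inner end
# outer end
```

The "start" lines print top-down and the "end" lines unwind bottom-up, matching the a(b(c(f))) expansion.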
How Python Applies Decorators Under the Hood
The Internals of Decorator Application
Import time versus call time. Decorators run at import time, when Python executes the def statement, not when the function is called. This distinction has real consequences. A module-level @app.route("/") registers the route with Flask the moment the module is imported, not when the first request arrives. A poorly written decorator with a side effect in its body (not inside wrapper) will trigger that side effect once per import, not once per call.
import functools

def loud_decorator(func):
    print(f"Decorating {func.__name__}")  # runs at import time
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")  # runs at call time
        return func(*args, **kwargs)
    return wrapper

@loud_decorator
def compute():
    return 42

# Output when the module is imported (before compute() is called):
#   Decorating compute
# Output when compute() is called:
#   Calling compute
__wrapped__ and decorator transparency. functools.wraps sets wrapper.__wrapped__ = func. The inspect.unwrap() function follows the __wrapped__ chain to reach the original function regardless of how many decorator layers are applied. Test frameworks use this to mock or bypass decorators in unit tests:
import inspect

@timer
@retry(max_attempts=2)
def process(data):
    return data

# Unwrap to the original, ignoring all decorator layers
original = inspect.unwrap(process)
print(original.__name__)  # process
The descriptor protocol and method decorators. When a decorator is applied to a method inside a class, the descriptor protocol becomes relevant. @staticmethod and @classmethod are descriptors: objects that implement __get__. When Python looks up MyClass.my_method, the descriptor's __get__ is called, returning either the plain function (static) or a bound-to-class callable (class method). Plain functions are themselves descriptors, which is how self binding works, so a simple function-based decorator on an instance method generally behaves correctly. The trouble comes when a naive decorator is stacked on top of @staticmethod or @classmethod: the decorator then receives a descriptor object rather than a plain callable, and the wrapper no longer behaves like the original. The wrapt library (covered below) solves this by implementing descriptor-aware wrappers.
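A minimal sketch of the safe ordering, using a hypothetical log_call decorator: keep the built-in descriptor (@classmethod here) outermost, so the plain function decorator wraps the undecorated function underneath it.

```python
import functools

def log_call(func):
    # Illustrative function-based decorator
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

class Service:
    # @classmethod stays outermost: it is the descriptor that binds cls,
    # and log_call wraps the plain function underneath it
    @classmethod
    @log_call
    def create(cls):
        return cls()

svc = Service.create()  # prints "calling create"
print(type(svc).__name__)  # Service
```

Reversing the two decorators would hand log_call a classmethod object instead of a function, which is exactly the case where wrapt earns its keep.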
Performance Analysis of Wrapper Call Overhead
Every decorator layer adds overhead: a Python function call. Let's measure it:
import timeit
import functools

def noop_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

def bare():
    return 42

@noop_decorator
def wrapped_once():
    return 42

@noop_decorator
@noop_decorator
@noop_decorator
def wrapped_three():
    return 42

# Measure overhead
base = timeit.timeit(bare, number=1_000_000)
one = timeit.timeit(wrapped_once, number=1_000_000)
three = timeit.timeit(wrapped_three, number=1_000_000)
print(f"bare: {base:.3f}s")
print(f"1 decorator: {one:.3f}s (+{(one - base) / base * 100:.0f}%)")
print(f"3 decorators: {three:.3f}s (+{(three - base) / base * 100:.0f}%)")
# Typical output (CPython 3.12):
# bare: 0.029s
# 1 decorator: 0.066s (+127%)
# 3 decorators: 0.135s (+365%)
For a function that does trivial work, each decorator layer more than doubles the execution time. For a function that performs real I/O or computation, a few microseconds of wrapper overhead is irrelevant. The rule is: apply decorators freely to functions that do real work; avoid stacking many decorators on tight inner loops that call trivially fast functions millions of times per second.
functools.lru_cache is the exception to the overhead rule. It is implemented in C, so its wrapper cost is negligible. Its caching benefit, turning an O(2ⁿ) recursive Fibonacci into O(n), completely dominates the profile:
import functools

@functools.lru_cache(maxsize=None)
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(50))  # 12586269025, computed instantly
print(fibonacci.cache_info())
# CacheInfo(hits=48, misses=51, maxsize=None, currsize=51)
Use lru_cache for pure functions with hashable arguments where repeated calls with the same inputs are expected. Use manual caching (a dict inside the decorator) when you need TTL, cache invalidation by key, or persistence across process restarts.
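The hashable-arguments requirement is enforced at call time: lru_cache builds a dict key from the arguments, so an unhashable value such as a list raises TypeError. A quick illustrative sketch (the total function is hypothetical):

```python
import functools

@functools.lru_cache(maxsize=None)
def total(items):
    return sum(items)

print(total((1, 2, 3)))  # 6; a tuple is hashable, so it caches fine
try:
    total([1, 2, 3])     # a list is not hashable
except TypeError as exc:
    print(exc)           # unhashable type: 'list'
```

Converting mutable arguments to tuples (or frozensets) at the call site is the usual workaround.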
Tracing a Stacked Decorator: Call Order, Execution, and Return
When you write @retry, @timer, and @log on the same function, understanding which runs first at call time versus return time is not obvious. The diagram below traces the exact execution path for a three-decorator stack.
graph TD
A[Caller invokes fetch_price] --> B[retry wrapper starts]
B --> C[timer wrapper starts]
C --> D[log wrapper starts]
D --> E[original fetch_price executes]
E --> F[log wrapper finishes and returns result]
F --> G[timer records elapsed time and returns result]
G --> H[retry checks for exceptions and returns result]
H --> I[result returned to caller]
Read this diagram top to bottom for the call path and bottom to top for the return path. retry is outermost: it starts first, receives the result last, and is the only layer that can re-invoke the inner stack on failure. timer is in the middle: it measures wall-clock time from after retry decides to make an attempt until log and fetch_price both finish. log is innermost: it runs immediately before the real function and sees the exact arguments passed to fetch_price. Each layer has its own view of the execution: outermost layers see retry behavior, innermost layers see raw inputs and outputs.
The corresponding decoration-time expansion makes the nesting concrete:
@retry(max_attempts=3)  # outermost
@timer                  # middle
@log                    # innermost
def fetch_price(product_id):
    ...

# Python internally does this at decoration time:
fetch_price = retry(max_attempts=3)(timer(log(fetch_price)))
Where Decorators Run the Python Ecosystem
Decorators are not an advanced niche feature; they are the primary API surface of the most popular Python frameworks. Recognizing the pattern makes every framework immediately more readable.
Flask and FastAPI: Routes Are Decorated Functions
Flask's @app.route() is a decorator factory. At import time, app.route("/products") returns a decorator that registers list_products in the Flask URL map:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/products", methods=["GET"])
def list_products():
    return jsonify([{"id": 1, "name": "Widget"}])

# Equivalent to:
# list_products = app.route("/products", methods=["GET"])(list_products)
FastAPI uses the same pattern but also reads the function's type annotations to generate OpenAPI documentation and perform request validation at the framework level, all powered by the fact that the decorated function is inspectable at runtime.
pytest: Fixtures Are Parameterized Decorators
@pytest.fixture decorates a generator function and registers it as a test fixture. pytest discovers all fixtures at collection time and injects them into tests that declare a matching parameter name:
import pytest

@pytest.fixture
def db_connection():
    conn = create_test_database()
    yield conn    # setup done; test runs here
    conn.close()  # teardown runs after the test

def test_user_creation(db_connection):
    # db_connection is injected by pytest; no import needed
    user = db_connection.create_user("alice@example.com")
    assert user.id is not None
Built-in Descriptors: @property, @staticmethod, @classmethod
Python's @property is a decorator that converts a method into a managed attribute, with optional setter and deleter:
class Temperature:
    def __init__(self, celsius):
        self._celsius = celsius

    @property
    def fahrenheit(self):
        return self._celsius * 9 / 5 + 32

    @fahrenheit.setter
    def fahrenheit(self, value):
        self._celsius = (value - 32) * 5 / 9

t = Temperature(100)
print(t.fahrenheit)  # 212.0
t.fahrenheit = 32
print(t._celsius)    # 0.0
@staticmethod removes self from the signature: the method does not receive the instance or class. @classmethod replaces self with cls: the method receives the class object, enabling alternative constructors:
class User:
    def __init__(self, username, email):
        self.username = username
        self.email = email

    @classmethod
    def from_dict(cls, data):
        """Alternative constructor: creates a User from a dictionary."""
        return cls(data["username"], data["email"])

user = User.from_dict({"username": "bob", "email": "bob@example.com"})
functools.lru_cache: Memoization as a Built-in Decorator
@functools.lru_cache wraps a pure function with an LRU (Least Recently Used) cache keyed by the arguments. It is the single most impactful performance decorator in the standard library:
import functools

@functools.lru_cache(maxsize=128)
def get_exchange_rate(from_currency, to_currency):
    # Simulates an expensive external API call
    print(f"Fetching rate {from_currency} -> {to_currency}")
    return 1.08  # EUR/USD stub

print(get_exchange_rate("EUR", "USD"))  # Fetching rate... 1.08
print(get_exchange_rate("EUR", "USD"))  # (cached) 1.08
print(get_exchange_rate.cache_info())   # hits=1, misses=1
Python 3.8 added @functools.cached_property, which caches the result of a property computation on the instance, useful for expensive derived attributes that should be computed once:
import functools

class DataSet:
    def __init__(self, records):
        self._records = records

    @functools.cached_property
    def summary_stats(self):
        # Computed once on first access, cached on the instance
        values = [r.value for r in self._records]
        return {"mean": sum(values) / len(values), "count": len(values)}
Decorator Versus Subclass, Middleware, and Class-Based Wrappers
Not every cross-cutting concern belongs in a decorator. Knowing when to use each approach prevents over-engineering.
| Concern | Decorator | Subclass | Middleware | Class-based decorator |
|---|---|---|---|---|
| Adding behavior to a single function | Best fit | Overkill | N/A | Only if stateful |
| Adding behavior to all methods in a class | Use __init_subclass__ or a metaclass | Inheritance | N/A | Class decorator on the class |
| HTTP request lifecycle (auth, CORS, rate-limit) | Function decorator | N/A | Best fit | N/A |
| Stateful wrappers (rate limiter, per-user cache) | Class-based decorator | Possible | Possible | Best fit |
| Varying behavior per subclass | Decorator limits this | Inheritance wins | N/A | N/A |
Decorator versus subclass. A decorator adds behavior from the outside without requiring the original function to know about it. A subclass adds behavior from the inside: the child class controls what changes. Choose a decorator when the concern is orthogonal to the function's purpose (timing, logging, retry). Choose a subclass when the behavior is an intrinsic variation of the base behavior (a PremiumUser that overrides can_access).
Decorator versus middleware. Web frameworks (Django, FastAPI, Starlette) have middleware stacks for concerns that apply to every request: authentication, compression, request ID injection. Applying a @require_auth decorator to 200 route functions is far worse than one middleware that handles authentication globally. Use decorators for opt-in behavior on specific functions; use middleware for default behavior on all requests.
Class-based decorators versus function-based decorators. A class-based decorator stores state across calls explicitly in instance attributes; a function-based decorator can only do the same through closures over mutable containers. A rate limiter that counts requests per window is a natural fit for a class:
import time
import functools

class RateLimit:
    def __init__(self, calls_per_second):
        self.calls_per_second = calls_per_second
        self._call_times = []

    def __call__(self, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            window_start = now - 1.0
            self._call_times = [t for t in self._call_times if t > window_start]
            if len(self._call_times) >= self.calls_per_second:
                raise RuntimeError(
                    f"Rate limit exceeded: {self.calls_per_second} calls/second"
                )
            self._call_times.append(now)
            return func(*args, **kwargs)
        return wrapper

rate_limit = RateLimit(calls_per_second=5)

@rate_limit
def send_notification(user_id, message):
    print(f"Notifying {user_id}: {message}")
The RateLimit instance persists its _call_times list across all calls to the decorated function. A function-based decorator could do the same with a mutable list in the closure, but the class-based approach is more explicit about the state it manages.
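For comparison, here is a sketch of that closure-based equivalent: a hypothetical call counter (not one of the post's decorators) whose state lives in a mutable dict captured by the wrapper, playing the role that self plays in the class-based version.

```python
import functools

def count_calls(func):
    # State lives in a mutable container captured by the closure,
    # standing in for the instance attributes of a class-based decorator
    state = {"calls": 0}

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        state["calls"] += 1
        return func(*args, **kwargs)

    # Expose the state for callers; a class would just read self.count
    wrapper.call_count = lambda: state["calls"]
    return wrapper

@count_calls
def greet(name):
    return f"Hello, {name}!"

greet("a"); greet("b"); greet("c")
print(greet.call_count())  # 3
```

It works, but the state is hidden inside the closure; the class-based form makes the same state visible and testable as an ordinary attribute.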
Picking the Right Decorator Pattern: A Decision Table
Use this table when deciding how to implement a new decorator. The right column lists the pattern to reach for based on what you need.
| What you need | Pattern to use |
|---|---|
| Add logging, timing, or tracing to a function | Simple function decorator with functools.wraps |
| Accept configuration (@retry(max_attempts=3)) | Decorator factory: outer function returns the decorator |
| Share state across all calls (rate limiter, call counter) | Class-based decorator: __call__ returns the wrapper |
| Cache results of a pure function with hashable args | @functools.lru_cache |
| Cache a computed property on an instance (compute once) | @functools.cached_property |
| Apply the same decorator to every method in a class | Class decorator using inspect.getmembers to iterate methods |
| Cross-cutting concern on all HTTP requests | Framework middleware, not a decorator |
| Wrap a method that must remain a descriptor | Use wrapt.decorator to preserve the descriptor protocol |
| Test the original function without decorator side effects | Use inspect.unwrap(func) in the test |
Three Production-Ready Decorators: Retry with Backoff, Rate Limiter, and TTL Cache
The following three decorators go beyond demonstration: each one is production-grade code that handles the edge cases that naive versions miss. The section walks through what each decorator does, why the implementation is structured the way it is, and what to look for in the output.
Example 1: Retry with Exponential Backoff
A retry decorator for network calls needs exponential backoff (to avoid thundering herd), jitter (to spread retries across time), and configurable exception filtering (so it does not retry on ValueError):
import functools
import time
import random

def retry_with_backoff(
    max_attempts=3,
    base_delay=1.0,
    max_delay=30.0,
    jitter=True,
    exceptions=(Exception,),
):
    """
    Retry a function with exponential backoff.
    Delay doubles with each attempt, capped at max_delay.
    Jitter adds up to 1 second of randomness to avoid thundering-herd.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_exc = None
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions as exc:
                    last_exc = exc
                    if attempt == max_attempts:
                        break
                    delay = min(base_delay * (2 ** (attempt - 1)), max_delay)
                    if jitter:
                        delay += random.uniform(0, 1)
                    print(
                        f"[{func.__name__}] attempt {attempt}/{max_attempts} "
                        f"failed ({exc}). Retrying in {delay:.2f}s"
                    )
                    time.sleep(delay)
            raise last_exc
        return wrapper
    return decorator

@retry_with_backoff(max_attempts=4, base_delay=0.5, exceptions=(ConnectionError,))
def fetch_weather(city):
    raise ConnectionError("upstream timeout")  # simulate a flaky service
Example 2: Token Bucket Rate Limiter
A token bucket rate limiter allows short bursts up to a maximum capacity while enforcing a long-term average rate. It is the algorithm used by AWS API Gateway and Stripe's SDKs:
import functools
import time
import threading

def rate_limit(calls_per_second, burst=None):
    """
    Token bucket rate limiter.
    Refills at calls_per_second tokens/second, up to burst (default = calls_per_second).
    Raises RuntimeError immediately when the bucket is empty.
    """
    capacity = burst if burst is not None else calls_per_second
    lock = threading.Lock()

    def decorator(func):
        tokens = [float(capacity)]
        last_check = [time.monotonic()]

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            with lock:
                now = time.monotonic()
                elapsed = now - last_check[0]
                last_check[0] = now
                # Refill tokens proportionally to elapsed time
                tokens[0] = min(capacity, tokens[0] + elapsed * calls_per_second)
                if tokens[0] < 1.0:
                    raise RuntimeError(
                        f"[{func.__name__}] rate limit exceeded "
                        f"({calls_per_second} calls/second)"
                    )
                tokens[0] -= 1.0
            # Call the function outside the lock so slow calls do not
            # serialize every other caller behind the bucket
            return func(*args, **kwargs)
        return wrapper
    return decorator

@rate_limit(calls_per_second=10, burst=20)
def send_sms(recipient, message):
    print(f"SMS to {recipient}: {message}")
Example 3: In-Memory Cache with TTL
functools.lru_cache has no time-to-live. For API responses or database queries that are valid for a limited window, a TTL cache is more appropriate:
import functools
import time

def ttl_cache(maxsize=128, ttl_seconds=60):
    """
    LRU-style cache with per-entry TTL.
    Entries older than ttl_seconds are treated as cache misses.
    """
    def decorator(func):
        cache = {}  # {args_key: (result, expiry_timestamp)}
        order = []  # access order for LRU eviction, oldest first

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            now = time.monotonic()
            if key in cache:
                result, expiry = cache[key]
                if now < expiry:
                    # Refresh recency so eviction is true LRU
                    order.remove(key)
                    order.append(key)
                    return result
                del cache[key]
                order.remove(key)
            result = func(*args, **kwargs)
            expiry = now + ttl_seconds
            if len(cache) >= maxsize:
                oldest = order.pop(0)
                cache.pop(oldest, None)
            cache[key] = (result, expiry)
            order.append(key)
            return result

        wrapper.cache_clear = lambda: (cache.clear(), order.clear())
        return wrapper
    return decorator

@ttl_cache(maxsize=256, ttl_seconds=300)
def fetch_product_price(product_id, currency="USD"):
    print(f"Fetching price for {product_id} in {currency}")
    return 29.99  # simulate a database read

print(fetch_product_price(42))  # cache miss, fetches
print(fetch_product_price(42))  # cache hit, returns instantly
fetch_product_price.cache_clear()
print(fetch_product_price(42))  # cache cleared, fetches again
The wrapper exposes a cache_clear() method for tests that need a clean slate. The TTL check happens on every read, so stale entries are evicted lazily on the next miss for that key rather than through a background sweep thread.
functools and wrapt: What the Standard Library and the Community Built
functools: Python's Decorator Toolkit
The functools module is the official toolkit for higher-order functions and decorators. The table below summarizes the utilities most relevant to decorator authors:
| Function | What it does | When to use it |
|---|---|---|
| functools.wraps(func) | Copies __name__, __doc__, __module__, __qualname__, __annotations__, __dict__; sets __wrapped__ | Every function-based decorator, without exception |
| functools.lru_cache(maxsize) | C-implemented LRU cache; cache_info() and cache_clear() included | Pure functions with hashable args where repeated calls are likely |
| functools.cached_property | Property computed once, then stored on the instance | Expensive derived attributes on long-lived objects |
| functools.partial(func, *args, **kwargs) | Returns a new callable with some arguments pre-filled | Adapting a function signature without writing a full wrapper |
| functools.singledispatch | Turns a function into a single-dispatch generic with different implementations per argument type | Type-based dispatch without isinstance chains |
import functools

# functools.partial: pre-fill arguments
def power(base, exponent):
    return base ** exponent

square = functools.partial(power, exponent=2)
cube = functools.partial(power, exponent=3)
print(square(5), cube(3))  # 25 27

# functools.singledispatch: dispatch by type
@functools.singledispatch
def serialize(value):
    raise NotImplementedError(f"No serializer for {type(value)}")

@serialize.register(int)
def _(value):
    return str(value)

@serialize.register(list)
def _(value):
    return "[" + ", ".join(serialize(v) for v in value) + "]"

print(serialize(42))         # "42"
print(serialize([1, 2, 3]))  # "[1, 2, 3]"
For a full exploration of the functools module, the Python documentation is the canonical reference.
wrapt: Community-Standard Robust Wrapping
The wrapt library by Graham Dumpleton (available via pip install wrapt) solves the one problem functools.wraps does not: it makes decorated functions indistinguishable from the original with respect to the descriptor protocol. A wrapt.decorator works correctly on plain functions, instance methods, class methods, and static methods without the caller needing to handle each case separately.
import wrapt

@wrapt.decorator
def log_call(func, instance, args, kwargs):
    """wrapt signature: func=original, instance=self/cls/None, args=positional, kwargs=keyword."""
    print(f"Calling {func.__name__} with args={args} kwargs={kwargs}")
    return func(*args, **kwargs)

class OrderService:
    @log_call
    def create_order(self, user_id, items):
        return {"order_id": 1, "user_id": user_id, "items": items}

service = OrderService()
service.create_order(42, ["widget", "gadget"])
# Calling create_order with args=(42, ['widget', 'gadget']) kwargs={}
wrapt is the recommended choice when writing decorators that ship as part of a library: it guarantees correct introspection behavior regardless of how the decorated callable is defined. For application-level decorators that only wrap module-level functions, functools.wraps is sufficient and has no additional dependencies.
Lessons Learned from Decorator Debugging
functools.wraps is not optional. Every decorator that omits it breaks stack traces, logging, help(), and any tool that reads __name__ or __doc__. Make it the first thing you add whenever you start a new wrapper. The cost is one import and one line; the benefit is that your decorated functions look identical to the originals in every diagnostic tool.
Decorators run at import time, not call time. Any code in the decorator body but outside wrapper executes once when the module loads. This is powerful (Flask route registration) and dangerous (a decorator that opens a database connection in its body holds that connection for the lifetime of the module). If a decorator needs resources, acquire them inside wrapper, not at decoration time.
Stacking order is counterintuitive. @a @b @c def f() reads top-to-bottom but wraps inside-out: c wraps first, then b, then a. With @retry stacked above @cache, the cache sits inside the retry loop, so a result cached from a partial failure is returned without retrying. Reversing the order, @cache above @retry, puts the cache around the retry: a successful cached result bypasses the retry machinery entirely. Draw the expansion as a(b(c(f))) whenever you are unsure about order.
Class-based decorator instances share state across every function they decorate. When you write rate_limit = RateLimit(calls_per_second=5) and then apply @rate_limit to two different functions, both functions share the same RateLimit instance and its state. If you need independent rate limits per function, instantiate a new decorator for each: @RateLimit(calls_per_second=5) on every function instead of one shared rate_limit instance.
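A minimal sketch of the shared-state behavior, using a hypothetical CallCounter class decorator applied to two functions:

```python
import functools

class CallCounter:
    # Illustrative stateful class-based decorator
    def __init__(self):
        self.count = 0

    def __call__(self, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            self.count += 1
            return func(*args, **kwargs)
        return wrapper

counter = CallCounter()

@counter
def ping():
    return "ping"

@counter
def pong():
    return "pong"

ping(); ping(); pong()
print(counter.count)  # 3; both functions increment the same shared instance
```

Whether that sharing is a bug or a feature depends on intent: a global rate budget wants one instance, per-function limits want one instance each.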
Testing decorated functions: use inspect.unwrap. When you need to test the raw behavior of a function without any decorator side effects (retry delays, rate limit errors, cache hits), use inspect.unwrap(func)(*args) to call the original. This makes tests fast, deterministic, and independent of decorator state.
Avoid using decorators for business logic. A decorator is the right home for cross-cutting concerns: logging, timing, caching, retrying, authorization. If you find yourself putting if-else business rules inside a decorator ("skip this step for premium users"), it is a signal that the decorator is doing too much. Prefer explicit function calls for anything that varies by domain context.
๐ Key Takeaways
TLDR: A decorator is syntactic sugar for `func = decorator(func)`. The `@` symbol applies at import time, not call time. Every decorator should use `functools.wraps` to preserve the original function's metadata. Use the factory pattern (`@retry(max_attempts=3)`) when you need configuration. Stack decorators bottom-up (they execute top-down). Use class-based decorators when you need per-instance state. Reach for `wrapt` when your decorator must survive the descriptor protocol.
Seven things to remember from this post:

- `@decorator` is shorthand for `func = decorator(func)`; once you see that, all framework decorators are just function calls.
- `functools.wraps` copies `__name__` and `__doc__`, and sets `__wrapped__`. It is not optional.
- Decorators execute at import time (the body); wrappers execute at call time (inside `wrapper`).
- The factory pattern adds an outer layer: `@retry(max_attempts=3)` means `retry(max_attempts=3)` runs first and returns the actual decorator.
- Stacking `@a @b @c def f()` wraps inside-out: `a(b(c(f)))`. Call time executes a → b → c → f. Return time unwinds c → b → a.
- Class-based decorators (`__call__` returning a wrapper) are the right choice for stateful decorators like rate limiters or per-key caches.
- `inspect.unwrap(func)` reaches the original function through any number of decorator layers; essential for testing.
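The factory pattern from the takeaways can be sketched as follows; this is a bare-bones retry for illustration, not a production implementation (no backoff, and it catches all exceptions):

```python
import functools

def retry(max_attempts=3):
    # Outer call: retry(max_attempts=3) runs first and returns
    # the actual decorator, which then receives the function.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(max_attempts):
                try:
                    return func(*args, **kwargs)
                except Exception as exc:
                    last_exc = exc
            raise last_exc
        return wrapper
    return decorator

attempts = {"n": 0}

@retry(max_attempts=3)   # equivalent to: flaky = retry(max_attempts=3)(flaky)
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ValueError("transient")
    return "ok"

print(flaky())        # ok
print(attempts["n"])  # 3: failed twice, succeeded on the third attempt
```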
๐ Practice Quiz
- What does `@timer` above a function definition actually cause Python to do when the module is loaded?
Correct Answer:
Python evaluates `timer` (a callable), calls it with the function being defined as its sole argument, and rebinds the function's name to whatever `timer` returns. It is exactly equivalent to writing `func = timer(func)` after the `def` block. This happens at import time, before any call to the function is made.
- You apply `@functools.lru_cache` to a function and call it ten times with the same two arguments. `cache_info()` reports `hits=9, misses=1`. Why is `misses` not 10?
Correct Answer:
The first call is a cache miss: `lru_cache` has no entry yet, so it calls the original function and stores the result. Every subsequent call with the same arguments is a cache hit: the stored result is returned without calling the original function. One miss (first call) plus nine hits (calls 2–10) gives `hits=9, misses=1`.
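The counts can be reproduced directly:

```python
import functools

@functools.lru_cache(maxsize=None)
def add(a, b):
    return a + b

# Ten calls with identical arguments: one miss, then nine hits.
for _ in range(10):
    add(2, 3)

info = add.cache_info()
print(info.hits, info.misses)  # 9 1
```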
- You write a retry decorator without `functools.wraps`. After decorating `send_email`, you call `send_email.__name__`. What does it print, and why is that a problem in production?
Correct Answer:
It prints `"wrapper"`, the name of the inner function defined inside the decorator. In production, logging frameworks record `__name__` in log entries, so every log line from `send_email` shows `wrapper` instead of `send_email`. Stack traces become confusing because the function name no longer matches any name in the source code. `help(send_email)` shows the wrapper's docstring, which is typically empty. Monitoring dashboards that group metrics by function name will group all decorated functions under `wrapper`. Adding `@functools.wraps(func)` to the inner wrapper fixes all of these at once.
- What is the difference between `@staticmethod` and `@classmethod`, and when would you choose one over the other?
Correct Answer:
`@staticmethod` removes the implicit first parameter entirely: the method receives neither `self` nor `cls`. It behaves like a plain function that happens to live inside the class namespace. Use it for utility logic that is logically related to the class but does not need to access instance or class state.
`@classmethod` replaces `self` with `cls`: the method receives the class object (not an instance). This allows the method to create new instances (alternative constructors) or access class-level attributes without needing a specific instance. Use `@classmethod` for factory methods like `User.from_dict(data)` or `Config.from_environment()`.
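A compact illustration of both, using a hypothetical User class:

```python
class User:
    def __init__(self, name, email):
        self.name = name
        self.email = email

    @staticmethod
    def normalize_email(email):
        # No self, no cls: a pure utility that lives in the class namespace.
        return email.strip().lower()

    @classmethod
    def from_dict(cls, data):
        # cls is the class object, so this alternative constructor
        # also works correctly for subclasses of User.
        return cls(data["name"], cls.normalize_email(data["email"]))

u = User.from_dict({"name": "Sofia", "email": "  Sofia@Example.COM "})
print(u.email)  # sofia@example.com
```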
- Given this stacking: `@cache @retry(max_attempts=3) def fetch(url)`, which layer handles an exception raised by `fetch`? What happens to a cached result if you move `@cache` to be below `@retry`?
Correct Answer:
With `@cache @retry(max_attempts=3)`, `retry` is the innermost decorator (closest to `fetch`), so `retry` handles exceptions from `fetch` and will attempt up to 3 retries before propagating the exception. `cache` wraps `retry`, so it caches the result of a successful retry run.
If you write `@retry(max_attempts=3) @cache` instead, `cache` is innermost: it wraps `fetch` directly. On a successful first call, `cache` stores the result. Future calls hit the cache and bypass `retry` entirely. On a failure, `cache` propagates the exception to `retry`, which retries. But if `cache` ever stores a failed or stale result (for example, a custom TTL cache that caches exceptions), `retry` would never see it because `cache` would serve it directly. Decorator order matters whenever the decorators can observe each other's outputs.
- Open-ended challenge: Design a `@require_permission(permission)` decorator for a Flask API. The decorator should read the permission string from the JWT token in the request's `Authorization` header, compare it against the required permission argument, and return a 403 response if the permission is missing. Write out the full implementation, explain how you use the factory pattern to pass the `permission` argument, and describe one edge case your implementation must handle (for example: expired tokens, missing header, or case-sensitivity in permission names).
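One possible shape for the challenge, as a framework-agnostic sketch: the `request` dict, the `decode_token` helper, and the status-dict responses are stand-ins for `flask.request`, a real JWT library such as PyJWT, and Flask response objects. The factory pattern is the outer `require_permission(permission)` call, which closes over the required permission and returns the actual decorator.

```python
import functools

def require_permission(permission):
    # Factory: captures `permission`, returns the decorator.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(request, *args, **kwargs):
            auth = request.get("headers", {}).get("Authorization")
            if auth is None:
                # Edge case: missing header is 401 (unauthenticated),
                # not 403 (authenticated but forbidden).
                return {"status": 401, "error": "missing Authorization header"}
            token = auth.removeprefix("Bearer ").strip()
            payload = decode_token(token)  # stand-in for real JWT decoding
            # Edge case: compare permissions case-insensitively.
            granted = {p.lower() for p in payload.get("permissions", [])}
            if permission.lower() not in granted:
                return {"status": 403, "error": "forbidden"}
            return func(request, *args, **kwargs)
        return wrapper
    return decorator

def decode_token(token):
    # Hypothetical decoder: maps a token string to a decoded payload.
    fake_tokens = {"alice-token": {"permissions": ["Orders:Read"]}}
    return fake_tokens.get(token, {})

@require_permission("orders:read")
def list_orders(request):
    return {"status": 200, "orders": []}

ok = list_orders({"headers": {"Authorization": "Bearer alice-token"}})
denied = list_orders({"headers": {"Authorization": "Bearer unknown"}})
print(ok["status"], denied["status"])  # 200 403
```

A real implementation would also handle expired tokens and signature verification inside the decoding step, returning 401 for both.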
๐ Related Posts
- Functions in Python: Parameters, Return Values, and Scope – The direct prerequisite for this post. Covers first-class functions, closures, LEGB scope, `*args`/`**kwargs`, and the higher-order function pattern that every decorator relies on. Read this first if closures or `*args` forwarding are unfamiliar.
- Python OOP: Classes, Dataclasses, and Dunder Methods – Covers `@property`, `@staticmethod`, `@classmethod`, and the descriptor protocol in the context of class design. Complements the method-decorator section in this post with a deeper treatment of how Python resolves attribute access on instances and classes.
- Pythonic Code: Idioms Every Developer Should Know – Decorators are one of the most recognizable Pythonic patterns. This post covers the broader set of idioms (comprehensions, context managers, `enumerate`, `zip`, and unpacking) that make code idiomatic across the whole language, not just in function wrappers.

Written by
Abstract Algorithms
@abstractalgorithms