Category
data engineering
14 articles across 3 sub-topics

Medallion Architecture: Bronze, Silver, and Gold Layers in Practice
TLDR: Medallion Architecture solves the "data swamp" problem by organizing a data lake into three progressively refined zones — Bronze (raw, immutable), Silver (cleaned, conformed), Gold (aggregated, business-ready) — so teams always build on a trust...
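The three-layer refinement this TLDR describes can be sketched in plain Python (records, field names, and helper functions here are illustrative assumptions, not from the article; a real pipeline would use Spark or a lakehouse table format):

```python
# Minimal sketch of Bronze -> Silver -> Gold refinement.
# Records and field names are hypothetical.

raw_events = [  # Bronze: raw and immutable, kept exactly as ingested
    {"user": "alice", "amount": "42.50", "country": "US"},
    {"user": "bob", "amount": "oops", "country": "US"},   # malformed row
    {"user": "alice", "amount": "10.00", "country": "DE"},
]

def to_silver(bronze):
    """Clean and conform: drop unparseable rows, cast types."""
    silver = []
    for row in bronze:
        try:
            silver.append({**row, "amount": float(row["amount"])})
        except ValueError:
            continue  # a real pipeline would quarantine and log this row
    return silver

def to_gold(silver):
    """Aggregate into a business-ready metric: revenue per country."""
    gold = {}
    for row in silver:
        gold[row["country"]] = gold.get(row["country"], 0.0) + row["amount"]
    return gold

gold = to_gold(to_silver(raw_events))
print(gold)  # {'US': 42.5, 'DE': 10.0}
```

Because Bronze is never mutated, Silver and Gold can always be rebuilt from it when cleaning rules change.
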

Kappa Architecture: Streaming-First Data Pipelines
TLDR: Kappa architecture replaces Lambda's batch + speed dual codebases with a single streaming pipeline backed by a replayable Kafka log. Reprocessing becomes replaying from offset 0. One codebase, no drift. Kappa is the right call when your t...
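The "reprocessing becomes replaying from offset 0" idea can be illustrated without a broker, using an in-memory stand-in for a Kafka topic (names and event shapes are illustrative):

```python
# Sketch: a replayable append-only log, standing in for a Kafka topic.
# Reprocessing with new logic is just re-reading from offset 0.

log = []  # append-only; never mutated in place

def produce(event):
    log.append(event)

def consume(from_offset, handler, state):
    """Fold events into state, starting at the given offset."""
    for event in log[from_offset:]:
        state = handler(state, event)
    return state

produce({"amount": 10})
produce({"amount": 5})

# v1 logic: count events
v1_total = consume(0, lambda s, e: s + 1, 0)            # 2 events

# v2 logic deployed later: sum amounts. No second codebase, no batch
# backfill job -- just replay the same log from offset 0 with the new handler.
v2_total = consume(0, lambda s, e: s + e["amount"], 0)  # 15
```
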

Big Data 101: The 5 Vs, Ecosystem, and Why Scale Breaks Everything
TLDR: Traditional databases fail at big data scale for three concrete reasons — storage saturation, compute bottleneck, and write-lock contention. The 5 Vs (Volume, Velocity, Variety, Veracity, Value) frame what makes data "big." A layered ecosystem ...

Stream Processing Pipeline Pattern: Stateful Real-Time Data Products
TLDR: Stream pipelines succeed when event-time semantics, state management, and replay strategy are designed together — and Kafka Streams lets you build all three directly inside your Spring Boot service. Stripe's real-time fraud detection processes...
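The event-time state management this TLDR alludes to can be sketched in plain Python rather than Kafka Streams (window size, event fields, and key names are illustrative assumptions):

```python
# Sketch: event-time tumbling-window counts with explicit state.
# Bucketing by the event's own timestamp (not arrival time) means a
# late-arriving event still lands in the window it belongs to.

WINDOW_SECONDS = 60  # illustrative window size

def window_start(event_ts):
    return event_ts - (event_ts % WINDOW_SECONDS)

def process(events):
    state = {}  # (key, window_start) -> count; a real app uses a state store
    for event in events:
        k = (event["card"], window_start(event["ts"]))
        state[k] = state.get(k, 0) + 1
    return state

events = [
    {"card": "c1", "ts": 10},
    {"card": "c1", "ts": 50},
    {"card": "c1", "ts": 65},   # falls in the next window
    {"card": "c1", "ts": 59},   # arrives late, still counted in window 0
]
counts = process(events)
# {('c1', 0): 3, ('c1', 60): 1}
```
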

Lambda Architecture Pattern: Balancing Batch Accuracy with Streaming Freshness
TLDR: Lambda architecture is justified when replay correctness and sub-minute freshness are both non-negotiable despite dual-path complexity. It is a fit only when you need both low-latency views and deterministic recompute fro...

Dimensional Modeling and SCD Patterns: Building Stable Analytics Warehouses
TLDR: Dimensional modeling with explicit SCD policy is the foundation for reproducible metrics and trustworthy historical analytics. Dimensional models stay trustworthy only when teams define grain, history rules, and reload procedures before d...

Data Pipeline Orchestration Pattern: DAG Scheduling, Retries, and Recovery
TLDR: Pipeline orchestration is an operational control plane problem that requires explicit dependency, retry, and backfill contracts. It is less about drawing DAGs and more about controlling freshness, replay, and recovery ...

Change Data Capture Pattern: Log-Based Data Movement Without Full Reloads
TLDR: Change data capture moves committed database changes into downstream systems without full reloads. It is most useful when freshness matters, replay matters, and the source database must remain the system of record. CDC becomes production-...

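"Committed changes without full reloads" can be sketched as applying an ordered change feed to a downstream replica. The op codes below loosely follow Debezium conventions (c=create, u=update, d=delete); the key and row shapes are illustrative:

```python
# Sketch: applying an ordered CDC feed to a downstream table replica,
# instead of periodically re-copying the whole source table.

def apply_changes(replica, changes):
    for ch in changes:
        if ch["op"] in ("c", "u"):
            replica[ch["key"]] = ch["row"]   # upsert
        elif ch["op"] == "d":
            replica.pop(ch["key"], None)     # idempotent delete
    return replica

replica = {}
feed = [
    {"op": "c", "key": 1, "row": {"name": "alice", "balance": 100}},
    {"op": "u", "key": 1, "row": {"name": "alice", "balance": 80}},
    {"op": "c", "key": 2, "row": {"name": "bob", "balance": 50}},
    {"op": "d", "key": 2},
]
apply_changes(replica, feed)
# replica == {1: {"name": "alice", "balance": 80}}
```

Because the feed is ordered and replayable, the replica can always be rebuilt by re-applying the log from the beginning.
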
Big Data Architecture Patterns: Lambda, Kappa, CDC, Medallion, and Data Mesh
TLDR: A serious data platform is defined less by where files are stored and more by how changes enter the system, how serving layers are materialized, and who owns quality over time. Lambda, Kappa, CDC, Medallion, and Data Mesh are patterns for makin...

Data Warehouse vs Data Lake vs Data Lakehouse: Which One to Choose?
TLDR: Warehouse = structured, clean data for BI and SQL dashboards (Snowflake, BigQuery). Lake = raw, messy data for ML and data science (S3, HDFS). Lakehouse = open table formats (Delta Lake, Iceberg) that bring SQL performance to raw storage — the ...

Spark Architecture: Driver, Executors, DAG Scheduler, and Task Scheduler Explained
TLDR: Spark's architecture is a precise chain of responsibility. The Driver converts user code into a DAG, the DAGScheduler breaks it into stages at shuffle boundaries, the TaskScheduler dispatches tasks to Executors respecting data locality, and the...
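The stage-splitting rule this TLDR names — stages break at shuffle boundaries — can be illustrated with a toy splitter (the operator names and the wide/narrow classification here are illustrative, not Spark's internal API):

```python
# Sketch: splitting a logical plan into stages at shuffle boundaries,
# mimicking what Spark's DAGScheduler does. Narrow ops (map, filter)
# pipeline together inside a stage; wide ops (groupBy, join) require a
# shuffle and therefore end the current stage.

WIDE_OPS = {"groupBy", "join", "repartition"}  # illustrative set

def split_into_stages(plan):
    stages, current = [], []
    for op in plan:
        current.append(op)
        if op in WIDE_OPS:        # shuffle boundary: close the stage
            stages.append(current)
            current = []
    if current:
        stages.append(current)
    return stages

plan = ["read", "filter", "map", "groupBy", "map", "join", "write"]
stages = split_into_stages(plan)
# [['read', 'filter', 'map', 'groupBy'], ['map', 'join'], ['write']]
```
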

Modern Table Formats: Delta Lake vs Apache Iceberg vs Apache Hudi
TLDR: Delta Lake, Apache Iceberg, and Apache Hudi are open table formats that wrap Parquet files with a transaction log (or snapshot tree) to deliver ACID guarantees, time travel, schema evolution, and efficient upserts on object storage. Choose Delt...
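The transaction-log idea shared by these formats — a table is a log of immutable snapshots, and time travel is just reading an older one — can be sketched as follows (an illustrative toy, not any format's actual on-disk layout):

```python
# Sketch: a table as an append-only list of immutable snapshots.
# Each commit appends a new snapshot; time travel reads an old version.

snapshots = []  # version N is snapshots[N]

def commit(new_rows):
    base = snapshots[-1] if snapshots else ()
    snapshots.append(base + tuple(new_rows))  # immutable snapshot
    return len(snapshots) - 1                 # new version number

def read(version=None):
    """Read the latest snapshot by default; pass a version to time travel."""
    v = len(snapshots) - 1 if version is None else version
    return list(snapshots[v])

v0 = commit(["a"])
v1 = commit(["b", "c"])
read()      # ['a', 'b', 'c']
read(v0)    # ['a']  -- time travel back to the first commit
```

Real formats commit metadata (added/removed data files) rather than rows, which is what makes ACID guarantees possible on plain object storage.
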

Apache Spark for Data Engineers: RDDs, DataFrames, and Structured Streaming
TLDR: Apache Spark distributes Python DataFrame jobs across a cluster of executors, using lazy evaluation and the Catalyst query optimizer to process terabytes with the same code that works on gigabytes. Master partitioning, shuffle-awareness, and St...
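Spark's lazy evaluation — transformations only record a plan, and an action triggers execution — can be mimicked in a few lines of plain Python (a toy for intuition, not PySpark's API):

```python
# Toy lazy pipeline: map/filter only *record* steps; collect() runs them.
class LazyData:
    def __init__(self, data, steps=None):
        self.data = data
        self.steps = steps or []          # the recorded "plan"

    def map(self, fn):                    # transformation: lazy
        return LazyData(self.data, self.steps + [("map", fn)])

    def filter(self, pred):               # transformation: lazy
        return LazyData(self.data, self.steps + [("filter", pred)])

    def collect(self):                    # action: executes the plan
        out = list(self.data)
        for kind, fn in self.steps:
            if kind == "map":
                out = [fn(x) for x in out]
            else:
                out = [x for x in out if fn(x)]
        return out

pipeline = LazyData(range(6)).map(lambda x: x * x).filter(lambda x: x > 5)
# Nothing has executed yet; calling the action runs the whole chain:
result = pipeline.collect()   # [9, 16, 25]
```

Laziness is what lets an optimizer like Catalyst see the full plan before running it and reorder or fuse steps.
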

How CDC Works Across Databases: PostgreSQL, MySQL, MongoDB, and Beyond
A data engineering team at a fintech company built what they believed was a robust Change Data Capture pipeline: three source databases (PostgreSQL, MongoDB, and Cassandra), Debezium connectors wired to Kafka, and a downstream data warehouse receivin...
