
The Ultimate Guide to Acing the System Design Interview

Abstract Algorithms · 8 min read

TL;DR


System Design interviews aren't about finding the "right" answer; they are about demonstrating your ability to navigate trade-offs. This guide is a comprehensive glossary and framework covering the essential theory, from Load Balancing and Caching strategies to the CAP Theorem and Database Sharding.


Part 1: The Core Concepts (The Jargon)

Before you design a system, you must speak the language. Here are the fundamental concepts you need to master.

1. Scalability vs. Performance

  • Performance: Refers to how fast a system operates for a single user, focusing on optimizing code and algorithms to reduce latency and improve response times.

  • Scalability: Describes how well a system can handle increasing loads by adding more resources, ensuring the system remains efficient as demand grows.

    • Vertical Scaling (Scale Up): Involves upgrading to a bigger server with more RAM and CPU. It's easy to implement but has a hard limit due to hardware costs and capacity.

    • Horizontal Scaling (Scale Out): Involves adding more servers, which is harder to manage due to the complexity of distributed systems but offers theoretically infinite scalability.

2. Latency vs. Throughput

  • Latency: The time it takes to perform a single action, such as loading a page in 100ms. Lower latency means faster response times.

  • Throughput: The number of actions a system can handle per second, like processing 10,000 requests per second. Higher throughput indicates better capacity to handle large volumes of requests.

  • Goal: Aim for high throughput with acceptable latency to ensure a responsive and efficient system.

3. Availability vs. Consistency (The CAP Theorem)

In a distributed system, you can guarantee at most two of the following three properties:

  1. Consistency (C): Ensures every read receives the most recent write or an error, meaning all nodes see the same data at the same time.

  2. Availability (A): Guarantees every request receives a (non-error) response, without ensuring it contains the most recent write.

  3. Partition Tolerance (P): The system continues to operate despite network failures, such as dropped messages between nodes.

  • The Reality: In distributed systems over the internet, P is mandatory because network failures are inevitable.

  • The Choice: Choose between CP (Consistency + Partition Tolerance) or AP (Availability + Partition Tolerance).

    • CP (Banking): If the network splits, stop accepting writes to prevent data errors, prioritizing correctness over availability.

    • AP (Social Media): If the network splits, continue accepting writes, allowing for eventual consistency but maintaining availability.

4. Consistency Patterns

  • Strong Consistency: After a write, reads will always see that write, but it's hard to scale due to the need for synchronization across nodes.

  • Weak Consistency: After a write, reads may or may not see it, offering best-effort consistency, such as in video chat where real-time updates are prioritized over accuracy.

  • Eventual Consistency: After a write, reads will eventually see it, common in high-scale systems like DNS and email, where temporary inconsistencies are acceptable.


Part 2: Networking & Infrastructure

1. DNS (Domain Name System)

Acts as the internet's phonebook, translating domain names like google.com into IP addresses like 172.217.0.46.

  • Interview Tip: Mention DNS lookup time when calculating latency, as it can impact the overall response time of a system.
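
To see that cost for yourself, here is a minimal Python sketch that times a lookup using only the standard library (note that your OS resolver cache may make repeated lookups near-instant):

```python
import socket
import time

start = time.perf_counter()
# Resolve the hostname to socket addresses (a DNS lookup on a cold cache).
infos = socket.getaddrinfo("google.com", 443)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"Resolved to {infos[0][4][0]} in {elapsed_ms:.1f} ms")
```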

2. CDN (Content Delivery Network)

A network of globally distributed servers (Edge Locations) that store static content like images, CSS, JS, and videos.

  • How it works: When a user in London requests an image hosted in New York, the CDN serves a cached copy from a London server.

  • Benefit: Significantly reduces latency and server load, improving user experience by delivering content faster.

3. Load Balancers (LB)

Distribute incoming traffic across multiple servers to prevent any single server from crashing, ensuring high availability and reliability.

  • Layer 4 LB (Transport Layer): Routes based on IP and Port, offering simplicity and speed.

  • Layer 7 LB (Application Layer): Routes based on content like URL, Cookies, and Headers, allowing smart routing but at a higher cost.

  • Algorithms:

    • Round Robin: Distributes requests sequentially (1, 2, 3, 1...).

    • Least Connections: Sends requests to the server with the fewest active users, balancing the load effectively.

    • Consistent Hashing: Essential for distributed caches, minimizing data movement when servers are added or removed.
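
Consistent hashing is easiest to grasp in code. Below is a minimal Python sketch of a hash ring with virtual nodes (the node names and vnode count are illustrative, not production tuning):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """A hash ring with virtual nodes; keys map to the next node clockwise."""

    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes   # virtual nodes per server smooth the distribution
        self._ring = []        # sorted list of (hash, node) points on the ring
        for node in nodes:
            self.add(node)

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node: str) -> None:
        for i in range(self.vnodes):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def remove(self, node: str) -> None:
        self._ring = [(h, n) for h, n in self._ring if n != node]

    def get(self, key: str) -> str:
        # Walk clockwise to the first virtual node at or after the key's hash.
        idx = bisect.bisect(self._ring, (self._hash(key), ""))
        return self._ring[idx % len(self._ring)][1]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
print(ring.get("user:42"))  # the same key always lands on the same node
ring.add("cache-d")         # only ~1/4 of keys move to the new node
```

Because each node owns many small slices of the ring, adding or removing one node only remaps the keys in its slices instead of reshuffling everything, which is exactly why it's preferred for distributed caches.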

4. Reverse Proxy

A server that sits in front of your web servers.

  • Role: Manages security (SSL termination), logging, compression, and load balancing, while hiding the existence of internal servers from the public, enhancing security and performance.

Part 3: Protocols & Communication

1. REST (Representational State Transfer)

  • Standard: HTTP-based and stateless, using JSON for data exchange.

  • Verbs: Includes GET, POST, PUT, and DELETE, providing a standardized way to interact with resources.

  • Pros: Easy to understand with universal support, making it a popular choice for web APIs.

  • Cons: Text-based, which can be slower, and may require multiple calls ("Chatty"), potentially increasing latency.
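
As a quick illustration, here is what those verbs look like from a client's perspective, using Python's requests library against a hypothetical /users resource (the base URL and fields are made up for the example):

```python
import requests

BASE = "https://api.example.com"  # hypothetical API, used only for illustration

requests.post(f"{BASE}/users", json={"name": "Ada"})             # create
resp = requests.get(f"{BASE}/users/42")                          # read
requests.put(f"{BASE}/users/42", json={"name": "Ada Lovelace"})  # replace
requests.delete(f"{BASE}/users/42")                              # delete

# Each call is stateless: all context travels in the request itself.
print(resp.status_code)
```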

2. RPC (Remote Procedure Call) / gRPC

  • Concept: Allows calling a function on a remote server as if it were local code, simplifying distributed computing.

  • gRPC: Google's version, using Protobuf (binary format) instead of JSON, offering efficient serialization.

  • Pros: Faster and smaller than REST, ideal for internal microservices where performance is critical.

  • Cons: Harder to debug due to binary format and tight coupling, requiring more effort to implement and maintain.

3. TCP vs. UDP

  • TCP (Transmission Control Protocol): Reliable and ordered, ensuring data delivery with retransmissions if needed (used in Web, Email).

  • UDP (User Datagram Protocol): "Fire and forget" approach, sending data without guarantees (used in Video streaming, Gaming), offering lower latency at the cost of reliability.
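
The difference is visible in the socket API itself. A rough Python sketch (example.com and the local port are placeholders):

```python
import socket

# TCP: a handshake establishes a connection; delivery is reliable and ordered.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))
tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(tcp.recv(64))  # we actually hear back from the peer
tcp.close()

# UDP: fire and forget -- no handshake, no delivery guarantee.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"frame-1042", ("127.0.0.1", 9999))  # succeeds even if nobody listens
udp.close()
```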


Part 4: Databases & Storage

1. SQL (Relational) vs. NoSQL (Non-Relational)

  • SQL (MySQL, Postgres): Structured with ACID transactions and Joins, suitable for complex relationships like Users and Orders, but hard to scale horizontally.

  • NoSQL (Cassandra, DynamoDB, MongoDB): Offers flexible schema and BASE (Basically Available, Soft state, Eventual consistency), easy to scale horizontally, ideal for massive data like Logs and Metadata.

2. Database Scaling Techniques

  • Replication:

    • Master-Slave: Writes go to Master, reads from Slaves, suitable for read-heavy workloads, improving read performance.

    • Master-Master: Allows writes to any node, requiring complex conflict resolution, offering high availability and redundancy.

  • Sharding (Partitioning): Splits data across multiple servers.

    • Vertical: Divides by feature (e.g., User DB, Photo DB), simplifying management and scaling.

    • Horizontal: Divides by rows (e.g., Users A-M on Server 1, N-Z on Server 2), distributing load evenly across servers (see the routing sketch after this list).

  • Federation: Splits databases by function (e.g., Forum DB, Users DB) to minimize load, allowing independent scaling and management.

  • Denormalization: Adds redundant data to speed up reads, avoiding Joins, improving performance at the cost of increased storage and potential data inconsistency.
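
To make horizontal sharding concrete, here is a minimal routing sketch in Python (the shard names are hypothetical stand-ins for real connection strings):

```python
import hashlib

SHARDS = ["users-db-0", "users-db-1", "users-db-2", "users-db-3"]  # hypothetical DSNs

def shard_for(user_id: str) -> str:
    """Pick a shard by hashing the row key (hash-based horizontal sharding)."""
    # Use a stable hash, not Python's randomized hash(), so routing survives restarts.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("alice"))  # the same user always routes to the same shard
```

The catch: plain modulo routing remaps most keys whenever the shard count changes, which is why consistent hashing (Part 2) often reappears here.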


Part 5: Caching

Caching is the #1 way to speed up a system, storing the result of expensive operations in fast memory (RAM).

Caching Strategies

  1. Cache-Aside (Lazy Loading):

    • The application checks the cache first. If there's a miss, it reads from the database and then updates the cache (see the sketch after this list).

    • Pros: Only caches requested data and is resilient to cache failure, reducing unnecessary data storage.

    • Cons: The first request is slow due to a cache miss, impacting initial response time.

  2. Write-Through:

    • The application writes to both the cache and the database simultaneously.

    • Pros: Ensures data consistency between cache and database, providing reliable data access.

    • Cons: Slower writes due to the need to update both cache and database.

  3. Write-Behind (Write-Back):

    • The application writes to the cache, and the cache asynchronously writes to the database later.

    • Pros: Offers super fast writes, improving system responsiveness.

    • Cons: Risk of data loss if the cache crashes before updating the database, requiring careful management.
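
Here is a minimal cache-aside sketch in Python, using an in-process dict with a TTL as a stand-in for Redis or Memcached:

```python
import time

TTL_SECONDS = 60
cache: dict[str, tuple[float, str]] = {}  # key -> (expiry timestamp, value)

def read_from_db(key: str) -> str:
    return f"row-for-{key}"  # stand-in for a real (slow) database query

def get(key: str) -> str:
    """Cache-aside read: try the cache, fall back to the DB, then populate the cache."""
    entry = cache.get(key)
    if entry and entry[0] > time.time():
        return entry[1]                   # cache hit: fast path
    value = read_from_db(key)             # cache miss: slow path
    cache[key] = (time.time() + TTL_SECONDS, value)
    return value

print(get("user:42"))  # first call misses and populates; later calls hit the cache
```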


Part 6: Asynchronism

Message Queues (Kafka, RabbitMQ)

Decouple components by allowing Server A to put a message in a Queue instead of calling Server B and waiting. Server B picks it up when free.

  • Benefits:

    • Backpressure: If Server B is slow, the Queue absorbs the load, preventing Server A from crashing and ensuring system stability.

    • Decoupling: Enables teams to work independently, allowing for more flexible and scalable system architecture.
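
A toy illustration of both benefits using only Python's standard library (the job payloads and the sleep are placeholders for real work): the bounded queue blocks the fast producer whenever the slow consumer falls 100 jobs behind.

```python
import queue
import threading
import time

jobs: queue.Queue = queue.Queue(maxsize=100)  # bounded: this is the backpressure

def producer():
    for i in range(1_000):
        jobs.put(f"job-{i}")  # blocks when the queue is full, pacing the producer

def consumer():
    while True:
        job = jobs.get()
        time.sleep(0.01)      # stand-in for slow downstream work
        jobs.task_done()

threading.Thread(target=consumer, daemon=True).start()
producer()
jobs.join()  # wait until every queued job has been processed
```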


Part 7: The 45-Minute Framework (PEDALS)

Now that you know the jargon, here is how to use it in the interview.

  1. Process Requirements (5 min): Ask questions and define the scope, ensuring a clear understanding of the problem.

  2. Estimations (5 min): Perform back-of-envelope calculations for storage, bandwidth, and QPS (queries per second), providing a basis for design decisions.

  3. Design High-Level (10 min): Draw the big boxes like LB, App, and DB, outlining the system architecture.

  4. API & Database Schema (5 min): Define endpoints and tables, ensuring data flow and storage are well-planned.

  5. Look for Bottlenecks / Deep Dive (15 min): Apply the jargon like Sharding, Caching, and Queues, identifying potential issues and solutions.

  6. Summary (5 min): Discuss trade-offs, highlighting the strengths and weaknesses of your design.


Part 8: Practical Example: Designing TinyURL

1. Requirements: Shorten URL, Redirect, High Availability.

2. Estimation: 100M writes/month with a 100:1 read ratio.
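
(A quick back-of-envelope: 100M writes/month ÷ (30 days × 86,400 s) ≈ 40 writes/sec, and the 100:1 ratio implies ≈ 4,000 reads/sec. Assuming roughly 500 bytes per stored record, an illustrative figure, that is about 50 GB of new data per month.)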

3. High Level: Client -> LB -> Web Server -> DB.

4. DB Schema: Use NoSQL (Key-Value) with Key: ShortID and Value: LongURL.

5. Deep Dive:

  • Algorithm: Use Base62 encoding to generate short, unique identifiers.
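
A minimal Base62 encoder in Python (encoding a numeric ID such as an auto-increment key; the alphabet ordering is a convention, not a standard):

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def base62_encode(n: int) -> str:
    """Encode a numeric ID (e.g. from a sequence or KGS) as a short Base62 string."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out))

# Seven characters cover 62**7 (about 3.5 trillion) IDs -- ample for 100M writes/month.
print(base62_encode(125))  # "21"
```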

  • Scaling:

    • Cache: Implement Redis for popular URLs using Cache-Aside, improving access speed for frequently requested URLs.

    • DB: Shard based on Hash(ShortID) to ensure even distribution, avoiding hotspots and improving performance.

  • ID Generation: Use a standalone Key Generation Service (KGS) to pre-generate unique keys and avoid collisions, ensuring efficient and reliable ID creation.


Conclusion

System Design is about knowing your toolbox. You don't need to use every tool (don't use a Message Queue for a simple CRUD app), but you need to know they exist.

Checklist for your next interview:

  • [ ] Did I mention CAP Theorem?

  • [ ] Did I consider Caching (and eviction policies like LRU)?

  • [ ] Did I handle Database Scaling (Sharding/Replication)?

  • [ ] Did I separate Read vs Write paths?

  • [ ] Did I mention Monitoring/Logging?

Master these concepts, and you won't just pass the interview—you'll enjoy designing the system.

Written by Abstract Algorithms (@abstractalgorithms)