System Design Requirements and Constraints: Ask Better Questions Before You Draw

A practical framework for clarifying functional scope, non-functional targets, and trade-off boundaries in interviews.

Abstract Algorithms · 9 min read

TL;DR: In system design interviews, weak answers fail early because the requirements stay fuzzy. Strong answers start by turning a vague prompt into explicit functional scope, measurable non-functional targets, and clear trade-off boundaries before any architecture diagram appears. Clarify requirements well, and the architecture almost chooses itself.

📖 Why Requirement Clarity Is the Real Beginning of System Design

Most candidates think the first minute of a system design interview should sound technical: "We should use Kafka," "Let's add Redis," "I would shard the database." Interviewers usually hear that as a red flag, not confidence.

Architecture choices are consequences. Requirements are causes.

If the problem statement is "Design a notification system," you cannot pick a sound architecture until you know whether the product needs:

  • In-app only or also SMS/email/push.
  • Best-effort delivery or strict delivery guarantees.
  • Real-time delivery within seconds or relaxed delivery windows.
  • Global support with regulatory constraints.

Without that clarity, every design is either over-engineered or under-powered.

| Candidate behavior | Interview impression |
| --- | --- |
| Starts with tools and vendors | Premature optimization |
| Clarifies user flows and SLO-like targets first | Structured systems thinking |
| Avoids assumptions | Afraid to reason under uncertainty |
| States assumptions and validates them | Comfortable with ambiguity |

This is why requirement work is not "soft" work. It is the highest-leverage technical activity in the interview.

๐Ÿ” The Requirement Stack: Functional, Non-Functional, and Business Constraints

A reliable way to avoid chaos is to classify requirements into layers.

Functional requirements answer "What should the system do?"

Examples:

  • Users can create short links.
  • Users can view a personalized feed.
  • Drivers can request rides and track status.

Non-functional requirements answer "How should it behave?"

Examples:

  • p99 read latency under 150 ms.
  • 99.95% availability.
  • Eventual consistency accepted for feeds, strong consistency required for balances.

Business and operational constraints answer "What limits shape the design?"

Examples:

  • Budget ceiling for first six months.
  • Data residency in specific regions.
  • Team size and operational maturity.

| Requirement layer | Typical interview question | Design impact |
| --- | --- | --- |
| Functional | "What are the core user actions?" | Defines APIs and entities |
| Non-functional | "What latency and availability targets matter?" | Defines caching, replication, and failover choices |
| Business constraints | "What budget and compliance limits apply?" | Defines architecture complexity and deployment scope |

When you explicitly separate these layers, you avoid the common mistake of solving a non-problem. For instance, active-active multi-region writes are unnecessary if the product is regional and budget-constrained.
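The three layers can also be captured as a small structured record, so that every later design decision can point back to a named requirement. This is an illustrative sketch, not a standard schema; the field names and example values are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class RequirementStack:
    """Three-layer requirement record for a design exercise (illustrative)."""
    functional: list[str] = field(default_factory=list)            # what the system does
    non_functional: dict[str, str] = field(default_factory=dict)   # measurable targets
    constraints: list[str] = field(default_factory=list)           # business/operational limits

# Example: the notification system from earlier, with hypothetical targets
notifications = RequirementStack(
    functional=["send notification", "view delivery status"],
    non_functional={"p99_read_latency_ms": "150", "availability": "99.95%"},
    constraints=["regional data residency", "six-month budget ceiling"],
)

# An empty layer is a signal to ask more questions, not to guess.
assert notifications.non_functional, "no measurable targets captured yet"
```

Writing the stack down this explicitly makes gaps visible: an empty `constraints` list usually means a clarifying question was skipped.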

โš™๏ธ A Practical Requirement Interview Script You Can Reuse

Candidates often ask: "What exactly should I ask first?"

Use a short script in this order:

  1. Define the primary user journey.
  2. Define scale assumptions.
  3. Define success metrics.
  4. Define strict consistency boundaries.
  5. Define out-of-scope items.

Here is a reusable checklist table:

| Question | Why ask it now | Example answer |
| --- | --- | --- |
| What is the primary user action? | Prevents feature sprawl | "Send message" and "read inbox" only |
| What is expected daily and peak traffic? | Sizes the compute/storage path | 20M DAU, peak 8x average in evenings |
| What latency is acceptable? | Determines cache and data path | p95 under 200 ms for reads |
| Which operations require strict correctness? | Determines transaction strategy | Payments and inventory cannot be stale |
| What is explicitly out of scope? | Protects interview time and focus | Search and recommendation omitted |
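The traffic answer in the checklist converts directly into numbers you can design against. A back-of-envelope sketch, assuming each DAU performs about 20 reads per day (an invented figure for illustration; only the 20M DAU and 8x peak multiplier come from the table):

```python
def peak_qps(dau: int, actions_per_user_per_day: float, peak_multiplier: float) -> float:
    """Back-of-envelope peak requests/sec derived from daily active users."""
    seconds_per_day = 86_400
    average_qps = dau * actions_per_user_per_day / seconds_per_day
    return average_qps * peak_multiplier

# 20M DAU, ~20 reads/user/day (assumed), evening peak at 8x average
qps = peak_qps(20_000_000, 20, 8)
print(f"peak read QPS ≈ {qps:,.0f}")   # roughly 37,000 req/s
```

The exact figure matters less than showing the arithmetic: it anchors later choices like cache sizing and replica counts to a stated assumption.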

This script works because it does not require perfect numbers. It requires transparent assumptions and explicit boundaries.

A strong candidate says: "If these assumptions change, I will adapt the design in this direction." That sentence signals architectural maturity.

🧠 Deep Dive: Translating Requirements Into Enforceable Design Decisions

Requirement gathering is useful only if it drives specific architecture decisions. The translation step is where many interviews are won.

The Internals: Requirement-to-Component Mapping

Every clarified constraint should map to one or more design mechanisms.

  • Low read latency target -> cache layer, denormalized read model, or edge routing.
  • High write throughput target -> partitioning strategy, queue-based ingestion, or write-optimized storage.
  • Strong consistency requirement -> single write authority, synchronous commit scope, and transactional boundaries.
  • High availability requirement -> replication, automated failover, and controlled degradation paths.

This mapping can be captured in a compact matrix:

| Requirement | First mechanism | Secondary mechanism |
| --- | --- | --- |
| p95 reads < 150 ms | Cache-aside for hot reads | Read replicas |
| 50k writes/sec | Partitioned write path | Async downstream fan-out |
| No overselling | Transactional inventory updates | Idempotent retries |
| 99.95% availability | Multi-AZ replication | Failover automation |

The interview gain is huge: when asked "Why this component?" you can always point back to an explicit requirement.
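The "cache-aside for hot reads" mechanism in the matrix is simple enough to sketch. A minimal illustration using an in-memory dict as the cache; in a real system this would be Redis or Memcached, and the TTL value and `db_read` stand-in are assumptions:

```python
import time

CACHE: dict[str, tuple[float, str]] = {}
TTL_SECONDS = 60  # assumed freshness window for hot reads

def db_read(key: str) -> str:
    # Stand-in for the authoritative (slower) data store.
    return f"row-for-{key}"

def cache_aside_read(key: str) -> str:
    """Serve from cache when fresh; otherwise read through and populate."""
    entry = CACHE.get(key)
    if entry and time.monotonic() - entry[0] < TTL_SECONDS:
        return entry[1]                       # cache hit: fast path
    value = db_read(key)                      # cache miss: pay DB latency once
    CACHE[key] = (time.monotonic(), value)
    return value

assert cache_aside_read("user:42") == "row-for-user:42"   # first read misses, populates cache
assert "user:42" in CACHE                                  # subsequent reads hit the cache
```

In the interview, the point is the traceability: this component exists because of the p95 read-latency row in the matrix, not because caching is fashionable.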

Performance Analysis: Requirement Drift, Latency Budgets, and Scope Risk

Performance failures often begin as requirement failures.

Requirement drift: The scope silently grows mid-design. You started with "timeline read" and now you are discussing full-text search, ranking, and recommendations. If not controlled, the architecture loses coherence.

Latency budget confusion: Teams quote one latency number but do not allocate it. End-to-end latency is a sum of API gateway, service logic, network, storage, and optional cache miss penalties.
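Allocating the end-to-end target across hops makes latency budget confusion concrete. A sketch with invented per-hop figures; the point is that the component allocations must sum within the agreed target:

```python
P95_TARGET_MS = 200  # the end-to-end target agreed with the interviewer

# Hypothetical per-hop p95 allocations (illustrative numbers, not measurements)
budget_ms = {
    "api_gateway": 10,
    "service_logic": 40,
    "cache_lookup": 5,
    "storage_on_cache_miss": 90,
    "network_overhead": 25,
}

spent = sum(budget_ms.values())
headroom = P95_TARGET_MS - spent
print(f"allocated {spent} ms of {P95_TARGET_MS} ms budget, {headroom} ms headroom")
assert spent <= P95_TARGET_MS, "hop allocations exceed the end-to-end budget"
```

If a proposed component pushes the sum over the target, either the component changes or the target is renegotiated; quoting a single unallocated number hides that trade-off.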

Unbounded scope risk: If out-of-scope is never declared, every follow-up appears mandatory.

| Risk signal | What it means | Mitigation |
| --- | --- | --- |
| New features appear every 2 minutes | Scope is unstable | Freeze MVP scope and defer extras |
| "Fast" is undefined | Non-functional ambiguity | Define a p95/p99 target per operation |
| Conflicting consistency assumptions | Hidden correctness gaps | Mark strict vs eventual boundaries explicitly |

In interview settings, saying "Let's lock the MVP and mark search as phase two" is often stronger than trying to solve everything at once.

📊 Requirement Funnel: From Vague Prompt to Defensible Architecture

```mermaid
flowchart TD
    A[Vague interview prompt] --> B[Clarify functional scope]
    B --> C[Capture non-functional targets]
    C --> D[Set constraints and assumptions]
    D --> E[Define out-of-scope boundaries]
    E --> F[Map constraints to components]
    F --> G[Present architecture with trade-offs]
```

This funnel is your anti-chaos mechanism. If the interview starts drifting, return to the funnel and show what changed in assumptions.

๐ŸŒ Real-World Applications: Notification, Feed, and Checkout Systems

The same requirement framework applies across very different domains.

Notification platform:

  • Functional: send notification, view delivery status.
  • Non-functional: near-real-time delivery for push, eventual for email.
  • Constraints: provider rate limits, regional SMS regulations.

Social feed service:

  • Functional: create post, read timeline.
  • Non-functional: low read latency, high read fan-out.
  • Constraints: partial staleness acceptable, budget sensitive.

E-commerce checkout:

  • Functional: place order, reserve inventory, charge payment.
  • Non-functional: strict correctness and high availability.
  • Constraints: compliance, auditing, and transactional integrity.

Once requirements are explicit, the architecture differences become obvious instead of ideological.

โš–๏ธ Trade-offs & Failure Modes: What Goes Wrong When Requirements Are Weak

| Failure mode | Symptom | Root cause | First fix |
| --- | --- | --- | --- |
| Over-engineered design | Too many components for small load | No clear scale assumptions | Re-scope around measured traffic |
| Under-designed reliability | Outage from single-node failure | Availability target not clarified | Add replication and failover |
| Conflicting data behavior | Users see inconsistent critical state | Consistency boundaries unclear | Mark strict vs eventual operations |
| Endless design expansion | Interview runs out of time | Out-of-scope never declared | Freeze MVP and defer extras |

A strong candidate explicitly narrates these failure modes and shows how requirement discipline prevents them.

🧭 Decision Guide: Which Requirement Style Fits the Interview Prompt?

| Situation | Recommendation |
| --- | --- |
| Prompt is broad and vague | Spend extra time on scope and exclusions |
| Prompt includes strict SLOs | Prioritize non-functional decomposition first |
| Prompt is domain-heavy (payments, healthcare) | Clarify correctness and compliance early |
| Prompt is startup MVP style | Emphasize simplicity and an evolution path |

This decision table helps you adapt your questioning style without sounding scripted.

🧪 Practical Example: Requirement Breakdown for "Design a Chat System"

Suppose the interviewer says: "Design WhatsApp."

A structured response starts with narrowing:

  • Phase 1: one-to-one messaging only.
  • Exclude group chat, media compression, and end-to-end encryption details from MVP.

Then define measurable assumptions:

| Item | Assumption |
| --- | --- |
| DAU | 30 million |
| Peak concurrent users | 3 million |
| Message sends at peak | 120k/sec |
| Read consistency | Eventual is acceptable for unread counters; ordered delivery required per conversation |
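These assumptions translate directly into capacity numbers. A rough sketch using the table's 120k/sec peak send rate; the 1 KB average stored message size and the 4x peak-to-average ratio are invented figures for illustration:

```python
MSGS_PER_SEC_PEAK = 120_000    # from the assumptions table
AVG_MSG_BYTES = 1_024          # assumed stored size including metadata
SECONDS_PER_DAY = 86_400
PEAK_TO_AVG = 4                # assumed: peak is ~4x the daily average

avg_msgs_per_sec = MSGS_PER_SEC_PEAK / PEAK_TO_AVG
daily_messages = avg_msgs_per_sec * SECONDS_PER_DAY
daily_storage_gb = daily_messages * AVG_MSG_BYTES / 1024**3

print(f"~{daily_messages / 1e9:.1f}B messages/day, ~{daily_storage_gb:,.0f} GB/day")
# roughly 2.6 billion messages and ~2.5 TB of new storage per day
```

Stating these derived numbers out loud, with their assumptions, lets the interviewer challenge a specific input rather than the whole design.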

Now architecture decisions follow naturally:

  1. Per-conversation ordering requirement -> partition messages by conversation ID.
  2. High send throughput -> async fan-out and queue-backed ingestion.
  3. Availability target -> replicated state and failover for message store.
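Step 1 above can be sketched directly: hashing on the conversation ID keeps every message of a conversation on one partition, which is what makes per-conversation ordering enforceable. A minimal sketch; the partition count of 64 is an arbitrary assumption:

```python
import hashlib

NUM_PARTITIONS = 64  # assumed; real systems size this from per-partition throughput

def partition_for(conversation_id: str) -> int:
    """Stable hash so every message in a conversation lands on the same partition."""
    digest = hashlib.sha256(conversation_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_PARTITIONS

# All messages for one conversation map to a single partition, so a single
# ordered consumer per partition can apply them in send order.
p1 = partition_for("conv-alice-bob")
p2 = partition_for("conv-alice-bob")
assert p1 == p2 and 0 <= p1 < NUM_PARTITIONS
```

Note the trade-off this encodes: ordering is guaranteed only within a conversation, which is exactly the boundary the requirements table declared.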

This sequence demonstrates what interviewers want: requirement-first reasoning, not random component listing.

📚 Lessons Learned

  • Requirements are architecture inputs, not interview formalities.
  • Functional, non-functional, and business constraints should be separated explicitly.
  • Every component choice should trace back to a stated constraint.
  • Scope control is a technical skill, not avoidance.
  • The best designs evolve from assumptions that can be revised under pressure.

📌 Summary & Key Takeaways

  • Clarify scope first, then scale, then success metrics.
  • Define consistency boundaries early to avoid hidden correctness bugs.
  • Use requirement-to-component mapping to justify architecture choices.
  • Protect interview time by locking MVP and labeling phase-two items.
  • Requirement clarity is often the single biggest predictor of design quality.

๐Ÿ“ Practice Quiz

  1. Which question most directly prevents over-engineering in an interview?

A) "Should we use Kafka?"
B) "What is in scope for MVP and what is out of scope?"
C) "How many microservices should we start with?"

Correct Answer: B

  2. Why should latency targets be clarified early?

A) Because they only affect frontend choices
B) Because they influence cache, storage, and routing decisions across the stack
C) Because they are optional if availability is high

Correct Answer: B

  3. What is the strongest way to justify a design component in an interview?

A) "This is what big tech companies use."
B) "It directly addresses the p95 latency and availability targets we agreed on."
C) "I prefer this tool personally."

Correct Answer: B

  4. Open-ended challenge: if the interviewer doubles your traffic assumption and tightens latency requirements halfway through, which part of your design would you re-evaluate first and why?