
πŸ›οΈ The System Design Playbook - Part 2 πŸ“–

A deeply-synthesized, opinionated reference distilled from five canonical sources: donnemartin/system-design-primer Β· ByteByteGoHq/system-design-101 Β· karanpratapsingh/system-design Β· ashishps1/awesome-system-design-resources Β· binhnguyennus/awesome-scalability

Use it as: a study guide for interviews, a checklist for design reviews, and a vocabulary for cross-team discussions.


Table of Contents

  1. 📖 How to Use This Playbook
  2. 🧠 The System Design Mindset
  3. 🔑 Core Mental Models
  4. 🎯 The Interview Framework (RAPID-S)
  5. 🔢 Back-of-Envelope Math
  6. 🌐 Networking Fundamentals
  7. 🌍 DNS, CDN, and Proxies
  8. ⚖️ Load Balancing & API Gateways
  9. 🗄️ Databases: Pick Your Engine
  10. 🔀 Replication, Sharding, Federation
  11. 🔒 Consistency, Transactions & Isolation
  12. ⚡ Caching
  13. 📨 Asynchronous Communication
  14. 🔌 API Design
  15. 🏗️ Architectural Patterns
  16. 🕸️ Distributed Systems Primitives
  17. 🛡️ Reliability & Resilience Patterns
  18. 📊 Observability, SLA/SLO/SLI
  19. 🔐 Security
  20. 📈 Capacity Planning & Scaling Playbook
  21. 🏭 Data Engineering & Analytics
  22. 🚀 Deployment, Release & Schema Evolution
  23. 📋 Tradeoffs Cheat Sheet
  24. 💡 Interview Problem Templates
  25. 🌟 Real-World Case Studies
  26. ⚠️ Anti-Patterns to Avoid
  27. 📚 Must-Read Papers & Further Reading

Sections 1–12: read Part 1 here: https://viblo.asia/p/the-system-design-playbook-part-1-PoL7e0D24vk

13. 📨 Asynchronous Communication

13.1 Why Async

Decouples producer from consumer in time, fault-domain, and rate. The producer publishes a message; the consumer processes when it can. The system absorbs spikes and isolates failures.

13.2 Message Queue vs Event Stream

| | Message Queue (RabbitMQ, SQS, ActiveMQ) | Event Stream (Kafka, Pulsar, Kinesis) |
|---|---|---|
| Model | Point-to-point or routing | Pub-sub log |
| Consumption | Message removed after ack | Messages retained, consumers track offset |
| Replay | Generally no | Yes (rewind to offset) |
| Ordering | Per-queue | Per-partition |
| Throughput | High (10k–100k/s) | Very high (1M+/s) |
| Use for | Job processing, RPC | Event sourcing, log aggregation, stream processing |

Use a queue for: send-email jobs, video transcoding, retryable RPC, fan-out to one worker. Use a stream for: event sourcing, change data capture, multi-consumer fan-out, analytics, audit trail.

13.3 Delivery Semantics

  • At-most-once – fire and forget. Messages may be lost. Use for telemetry where the exact count is unimportant.
  • At-least-once – guaranteed delivery, possible duplicates. The default and the realistic target.
  • Exactly-once – guaranteed delivery, no duplicates. Practically achieved via at-least-once + an idempotent consumer (deduplicate by message ID; sketched below). Kafka offers transactional producer + read-process-write within Kafka, but end-to-end exactly-once across systems is an idempotency design problem, not a guarantee you buy.
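A minimal sketch of the idempotent-consumer half of that equation, assuming each message carries a stable msg_id. SQLite here is only to stay self-contained; in production the dedup table lives in the same DB the handler writes to:

```python
import sqlite3

db = sqlite3.connect(":memory:")  # stand-in for the service's own transactional DB
db.execute("CREATE TABLE processed (msg_id TEXT PRIMARY KEY)")

def handle(msg_id: str, payload: dict, process) -> None:
    """At-least-once delivery + this dedup = effective exactly-once."""
    try:
        with db:  # one transaction: side effects and dedup marker commit together
            db.execute("INSERT INTO processed (msg_id) VALUES (?)", (msg_id,))
            process(payload)  # must write into the same store for true atomicity
    except sqlite3.IntegrityError:
        pass  # duplicate delivery: already processed, just ack
```

If process raises, the dedup marker rolls back with it, so the next delivery retries cleanly.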

13.4 Patterns

  • Work queue: N producers → queue → M workers, one worker per message. Auto-scales.
  • Pub-sub / fan-out: one publish → N subscribers each get a copy.
  • Routing / topic: message tagged; subscribers filter.
  • Dead-letter queue (DLQ): messages that fail repeatedly land in the DLQ for manual / scripted recovery. Always configure one.
  • Outbox + CDC: atomic write to DB + event table; CDC publishes. Eliminates dual-write inconsistency.

13.5 Backpressure

When consumers can't keep up, the queue grows unbounded → memory blow-up → cascading failure.

Defenses:

  • Bounded queues – drop or block when full (sketched below).
  • HTTP 503 + Retry-After – push back to clients, who retry with exponential backoff + jitter.
  • Token bucket / leaky bucket rate limiting – at the producer side.
  • Auto-scaling consumers – but watch the downstream: scaling consumers without scaling the DB just moves the bottleneck.
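A sketch of the first defense with an in-process bounded queue; the queue size and the HTTP translation are illustrative assumptions:

```python
import queue

jobs = queue.Queue(maxsize=1_000)  # bounded: the queue itself can say "no"

def submit(job) -> bool:
    """Producer-side admission control: fail fast instead of growing unbounded."""
    try:
        jobs.put_nowait(job)
        return True
    except queue.Full:
        # Surface as HTTP 503 + Retry-After at the API layer; clients back off.
        return False
```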

13.6 Kafka Mental Model

  • Topic = ordered log split into partitions. Order preserved per partition only.
  • Partition key decides which partition (similar to a shard key). Choose for distribution + ordering needs.
  • Consumers organized into consumer groups; one partition consumed by exactly one consumer in a group.
  • Retention by time or size. The topic is the source of truth in event-sourced systems.
  • Compaction keeps the latest value per key – useful for materializing a current-state table from a log.

13.7 Stream Processing Fundamentals

When data is unbounded (clicks, sensor readings, financial ticks), batch jobs aren't enough. Stream processing runs continuous queries on top of Kafka / Kinesis / Pulsar.

Three time concepts – pick the right one:

  • Event time: when the event actually occurred (in the data).
  • Ingestion time: when the broker received it.
  • Processing time: when the operator handled it.

Always aggregate by event time when correctness matters – processing time is sensitive to backlog and replay.

Windows:

  • Tumbling – fixed, non-overlapping (every 1 min, no overlap).
  • Sliding – overlapping (every 1 min, 5-min look-back).
  • Session – gaps define boundaries (per-user activity sessions).

Watermarks declare "I believe all events with timestamp ≤ T have arrived." They let windows close even when out-of-order events trickle in. Options for late events: drop them, route to a side output, or trigger window updates.
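A toy sketch of event-time tumbling windows closed by a watermark, in plain Python. The window size, lateness bound, and count aggregation are illustrative assumptions; real deployments use Flink or Kafka Streams:

```python
from collections import defaultdict

WINDOW = 60    # 1-minute tumbling windows, assigned by EVENT time
LATENESS = 5   # watermark trails the max event time seen by 5 s

counts, closed, max_ts = defaultdict(int), set(), 0

def on_event(event_ts: int) -> None:
    global max_ts
    start = event_ts - event_ts % WINDOW      # window assignment by event time
    if start in closed:
        return                                # late event: dropped in this sketch
    counts[start] += 1
    max_ts = max(max_ts, event_ts)
    watermark = max_ts - LATENESS             # "all events <= watermark arrived"
    for w in [w for w in counts if w + WINDOW <= watermark]:
        print(f"window [{w}, {w + WINDOW}): {counts.pop(w)} events")  # final emit
        closed.add(w)
```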

State management: stateful operators (joins, aggregations) need durable state. Frameworks checkpoint state to durable storage (RocksDB local + S3 backup in Flink) for fault tolerance.

Exactly-once in practice: Kafka transactions + framework checkpoint barriers, paired with idempotent or transactional sinks (UPSERT into DB; transactional Kafka producer; or end-of-pipeline dedup).

Frameworks:

  • Flink – true streaming, low-latency, sophisticated state, native event-time. Default modern choice.
  • Spark Structured Streaming – micro-batch, integrates with the Spark batch ecosystem.
  • Kafka Streams – a library, no separate cluster, stateful via local RocksDB.
  • Apache Beam – unified batch+stream API; runs on Flink/Spark/Dataflow.
  • Materialize / RisingWave – streaming SQL with materialized views.

14. 🔌 API Design

14.1 The Big Four Styles

| | REST | GraphQL | gRPC | WebSocket |
|---|---|---|---|---|
| Transport | HTTP/1.1 + HTTP/2 | HTTP | HTTP/2 | TCP via HTTP upgrade |
| Encoding | JSON | JSON | Protobuf (binary) | Anything |
| Schema | OpenAPI (optional) | Strongly typed | Strongly typed (.proto) | App-defined |
| Direction | Request-response | Request-response | Uni / streaming both ways | Bi-directional |
| Use | Public APIs | BFF, mobile, complex queries | Service-to-service, low-latency | Real-time, chat, gaming |

14.2 REST Best Practices

  • Resources, not actions: POST /orders, not POST /createOrder.
  • Verbs: GET (safe + idempotent), PUT (idempotent replace), PATCH (partial), POST (create / non-idempotent), DELETE (idempotent).
  • Status codes: 200 OK, 201 Created, 204 No Content, 301/302 redirects, 400 bad request, 401 unauth, 403 forbidden, 404 not found, 409 conflict, 429 rate limit, 500 server, 502/503/504 upstream.
  • Versioning: URL (/v2/...) is most pragmatic; header (Accept: application/vnd.api+json;v=2) is purer; never break v1.
  • Pagination:
    • Offset/limit (?page=3&size=50) – easy, breaks under inserts, slow at deep offsets.
    • Cursor / keyset (?after=abc123) – consistent, scales, the right default for large datasets.
  • Idempotency: require an Idempotency-Key header on POSTs that must not duplicate (payments, signup); see the sketch after this list.
  • Filter / sort / fields: ?status=active&sort=-createdAt&fields=id,name.
  • HATEOAS is academically nice, practically rare.
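A minimal, framework-agnostic sketch of Idempotency-Key handling. The in-memory store and response shape are illustrative assumptions; production wants an atomic set-if-absent (Redis SETNX or a DB unique constraint) plus a TTL:

```python
# Illustrative in-memory store; concurrent retries need atomic set-if-absent.
idempotency_store: dict[str, dict] = {}

def create_payment(idempotency_key: str, body: dict) -> tuple[int, dict]:
    """POST handler: the same key always yields the same response, one side effect."""
    if idempotency_key in idempotency_store:
        return 200, idempotency_store[idempotency_key]  # replay the stored result
    result = {"payment_id": f"pay_{abs(hash(idempotency_key)) % 10**8}",
              "status": "created"}
    idempotency_store[idempotency_key] = result
    return 201, result
```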

14.3 GraphQL β€” When and When Not

When: Many clients with different shape needs (mobile + web + partners), aggregation across many sources, rapidly evolving UI. Not when: Simple CRUD, public APIs (cacheability is harder), file uploads, RPC-style.

Risks: N+1 query explosion (mitigate with DataLoader / batching), unbounded queries (depth + cost limits), caching loss (no HTTP cache for POSTed queries – use persisted queries).

14.4 gRPC

  • Use: internal service-to-service in polyglot orgs.
  • Wins: schema enforcement, code generation, HTTP/2 multiplexing, streaming, smaller payloads.
  • Pitfalls: browser support requires gRPC-Web + proxy; harder to debug (binary); load balancing needs L7 awareness or a service mesh.

14.5 Real-Time Push: Long Polling vs SSE vs WebSocket

| | Long Polling | SSE | WebSocket |
|---|---|---|---|
| Direction | Client pulls | Server → client | Both |
| Connection | Repeated requests | Persistent (HTTP/1.1) | Persistent upgrade |
| Browser support | Universal | Modern browsers | Universal |
| Best for | Legacy systems | Server notifications, news feeds | Chat, gaming, collaborative editing |

14.6 Webhooks

Server-to-server callback. Provider POSTs to your URL when an event happens. Always: verify signature, return 2xx fast and process async, dedupe by event ID, expect retries.
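A sketch of a compliant receiver, assuming an HMAC-SHA256 signature scheme; the header name, secret, and enqueue target vary by provider:

```python
import hashlib
import hmac

SECRET = b"shared-secret-from-provider-dashboard"  # assumption: HMAC-SHA256 scheme

def enqueue(raw_body: bytes) -> None:
    """Stand-in for an SQS/Rabbit/Kafka publish; real processing happens async."""

def handle_webhook(raw_body: bytes, signature_header: str) -> int:
    """Verify, ack fast, process async. Returns the HTTP status to send."""
    expected = hmac.new(SECRET, raw_body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature_header):
        return 401          # reject forgeries
    enqueue(raw_body)       # do the real work off the request path
    return 200              # ack immediately so the provider stops retrying
```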


15. πŸ—οΈ Architectural Patterns

15.1 Monolith vs Microservices vs Modular Monolith

Monolith – single deployable, single DB. Pro: simple, fast to develop. Con: deploys couple teams together; scaling is all-or-nothing.

Modular monolith – one deployable, strict module boundaries with explicit interfaces. Often the right answer for teams of < 50 engineers.

Microservices – many deployables, each owned by one team, ideally each with its own DB. Pro: independent deploys, polyglot, fault isolation. Con: the distributed-systems tax (networking, observability, data consistency, deployment complexity, on-call). Conway's Law: the architecture mirrors the org chart – microservices succeed only when the org is structured for them.

Rule of thumb: start monolith. Split a service out only when (a) it has a clear domain boundary, (b) a team can own it, (c) the cost of co-deployment is provably hurting you.

15.2 N-Tier Architecture

Classic: Presentation → Business Logic → Data. Modern translation: SPA → API → Service → DB. Useful as a thinking frame, not a religion.

15.3 Event-Driven Architecture (EDA)

Services communicate via events on a bus rather than RPC. Decouples producers from consumers. Excellent for: workflows, integrations, audit, analytics. Pitfall: distributed debugging is hard – invest in correlation IDs and tracing from day one.

15.4 Event Sourcing

Persist state as an append-only sequence of events; current state is a fold of events. Excellent for: audit, time-travel debugging, deriving multiple read models from one source.

Pairs with CQRS: writes go to event store; reads go to one or more materialized projections optimized for query patterns.

Costs: event schema evolution, replay cost, harder ad-hoc querying. Reach for it when audit / temporal queries are core to the domain.

15.5 CQRS (Command Query Responsibility Segregation)

Two models: a command model that mutates state, a query model that reads denormalized projections. Lets reads and writes scale independently and have different schemas. Often paired with event sourcing but doesn't require it.

15.6 Saga Pattern

Already covered in §11.3. Workflow of local transactions with compensations. The de facto answer to "distributed transaction" in microservices.

15.7 Circuit Breaker

State machine: Closed (normal) → Open (fail fast after a threshold of errors) → Half-Open (probe) → Closed. Prevents cascading failure when a downstream is slow or dead. Tools: Hystrix (deprecated), resilience4j, Polly, Envoy.
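A minimal, single-threaded sketch of that state machine. The threshold and reset window are illustrative; the libraries above add sliding error windows, probe counting, and thread safety:

```python
import time

class CircuitBreaker:
    """Closed -> Open -> Half-Open -> Closed, tracked via consecutive failures."""
    def __init__(self, threshold: int = 5, reset_after: float = 30.0):
        self.threshold, self.reset_after = threshold, reset_after
        self.failures, self.opened_at = 0, None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None                     # half-open: allow one probe
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()     # trip to open
            raise
        self.failures = 0                             # success closes the circuit
        return result
```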

15.8 Bulkhead

Isolate resource pools so a flood in one cannot starve another. E.g., a separate thread pool per downstream, a separate DB connection pool per workload. Named after a ship's bulkheads – one flooded compartment doesn't sink the ship.

15.9 Sidecar (and Service Mesh)

A helper container deployed alongside each service to handle cross-cutting concerns: TLS, retries, observability, rate limiting. Implementations: Envoy as sidecar with Istio / Linkerd as control plane. Lifts these concerns out of every language's library mess into a single, language-agnostic layer.

15.10 Strangler Fig

Migration pattern: route some traffic to the new system, leave the rest on the legacy, gradually shift, retire legacy when traffic = 0. The safe alternative to big-bang rewrites.

15.11 BFF (Backend for Frontend)

A thin API per client type (web BFF, iOS BFF, partner BFF). Aggregates internal services and shapes responses for one client. Avoids the "lowest common denominator" general API.

15.12 Serverless / FaaS

Functions on demand (Lambda, Cloud Functions). Pro: zero idle cost, autoscale, no server ops. Con: cold start, runtime limits, harder local dev, vendor lock-in, observability. Use for: event handlers, glue, low-volume APIs, scheduled jobs.


16. 🕸️ Distributed Systems Primitives

16.1 Consensus & Coordination

Already covered in §11.4 (Paxos, Raft). Practical use: etcd / Zookeeper / Consul for leader election, distributed locks, configuration, service discovery.

16.2 Leader Election

Many algorithms (Bully, Raft-style). Practical: use a coordination service. Critical: design for split-brain – two nodes thinking they're leader. Defenses: quorum-based election, fencing tokens, lease + heartbeat.

16.3 Gossip Protocol

Each node periodically exchanges state with random peers. Probabilistic eventual convergence. Used by: Cassandra (membership), Dynamo, Consul (LAN), serf. Scales to thousands of nodes without central authority.

16.4 Bloom Filter

Probabilistic set membership: "definitely not in the set" or "maybe in the set." Tiny memory, no false negatives, tunable false positive rate.

Use: "is this URL crawled?", "has this user seen this article?", filtering DB reads β€” query bloom filter first, hit DB only on positive.

16.5 Count-Min Sketch / HyperLogLog

  • Count-Min Sketch: approximate frequency of items in a stream. Top-K trending.
  • HyperLogLog: approximate cardinality (distinct count) in tiny memory. Redis PFCOUNT.

16.6 Merkle Tree

A tree of hashes where each non-leaf is a hash of its children. Quickly identifies which subtree differs between two replicas. Used by: Cassandra anti-entropy, DynamoDB, Git, blockchains, ZFS.

16.7 Vector Clocks & CRDTs

  • Vector clock: logical timestamp tracking causality across nodes. Detects concurrent writes (which can then be resolved or surfaced to app).
  • CRDT (Conflict-free Replicated Data Type): data structures that automatically merge concurrent updates without coordination. G-Counter, OR-Set, LWW-Register, etc. Powers offline-first apps (Riak, Redis Enterprise, collaborative editors).

16.8 Geohash & Quadtree

  • Geohash: encode (lat, lng) as a string; common prefix ≈ spatial proximity. Easy to index in a regular B-tree. Use for "within X km of me".
  • Quadtree: recursive 2D partitioning. Good when density varies wildly across regions. Use for game worlds, map tile rendering, Uber's H3 (a hexagonal variant).

16.9 Distributed Lock

Lock service across nodes. Implementations: Redis Redlock (controversial), Zookeeper, etcd. Fundamental gotcha: a client crashes holding the lock → the lock must expire. Solution: fencing tokens – every operation includes a monotonically increasing token; storage rejects stale tokens.
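A sketch of the storage side of fencing, under the assumption that the lock service hands out a strictly increasing token with every grant:

```python
class FencedStore:
    """Storage that rejects writes carrying a stale fencing token."""
    def __init__(self):
        self.highest_seen = -1
        self.data = {}

    def write(self, token: int, key: str, value) -> None:
        if token < self.highest_seen:
            # A newer lock holder exists: this client's lease already expired.
            raise PermissionError(f"stale fencing token {token}")
        self.highest_seen = max(self.highest_seen, token)
        self.data[key] = value
```

A client that stalled past its lease (GC pause, network hiccup) can then no longer corrupt state, because its token is behind the current holder's.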


17. 🛡️ Reliability & Resilience Patterns

17.1 Failure Modes Inventory

For every component ask:

  • What if it's slow (high latency)?
  • What if it's down (no response)?
  • What if it lies (corrupted / wrong response)?
  • What if it's partitioned (some clients reach it, some don't)?
  • What if it fills up (storage / queue / connection pool)?

17.2 Timeouts

Non-negotiable: every network call needs a timeout. Without one, your service inherits the slowness of every downstream and your thread pool dies. Set timeouts shorter than your own SLA budget – otherwise you've blown the budget before you can even retry.

17.3 Retries

  • Exponential backoff with jitter – never retry immediately, never retry in lockstep (sketched below).
  • Limit attempts – usually 3.
  • Idempotency required – never retry a non-idempotent operation without an idempotency key.
  • Retry only on retriable errors – 5xx, 429, network timeouts. Never retry 4xx (you'll get the same answer).
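A minimal sketch of full-jitter backoff (AWS-style); attempt count, base, and cap are tuning assumptions:

```python
import random
import time

class RetriableError(Exception):
    """Raise for 5xx / 429 / timeouts; 4xx must surface immediately, never retry."""

def call_with_retries(fn, attempts: int = 3, base: float = 0.1, cap: float = 5.0):
    """Full jitter: sleep uniformly in [0, min(cap, base * 2^n)) between attempts."""
    for n in range(attempts):
        try:
            return fn()
        except RetriableError:
            if n == attempts - 1:
                raise                                  # retry budget exhausted
            time.sleep(random.uniform(0, min(cap, base * 2 ** n)))
```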

17.4 Circuit Breaker

Already covered in §15.7. Combine with retries: an open circuit prevents wasteful retries during an outage.

17.5 Bulkhead

§15.8. Per-dependency thread pools / connection limits.

17.6 Rate Limiting

Algorithms:

| Algorithm | How | Pro | Con |
|---|---|---|---|
| Fixed window | N tokens per minute, reset at the boundary | Simple | Burst at the boundary |
| Sliding window log | Store timestamps, count the last N s | Accurate | Memory |
| Sliding window counter | Weighted blend of two fixed windows | Cheap + accurate | Approximation |
| Token bucket | Bucket fills at rate r, request takes 1 | Allows bursts | Tuning |
| Leaky bucket | Queue with constant outflow | Smooths spikes | Latency |

Apply at: edge (API gateway, per IP / API key), per service (per dependency), per user, per tenant. Use distributed counter (Redis) for cluster-wide limits.
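A minimal single-process token bucket (rate and capacity are tuning assumptions; for cluster-wide limits, keep this state in Redis as noted above):

```python
import time

class TokenBucket:
    """Refills at `rate` tokens/sec up to `capacity`: bursts allowed, average bounded."""
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False   # caller translates this into 429 + Retry-After
```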

17.7 Backpressure

§13.5. Push back on the producer when consumers can't keep up. The alternative is silent queue blow-up.

17.8 Graceful Degradation

When a non-critical dependency fails, return a degraded response (cached value, default, partial). Examples:

  • Recommendation service down → show last-known popular items.
  • Personalization service down → show generic homepage.
  • Comment count service down → show "comments" without the count.

17.9 Disaster Recovery

| Term | Meaning | Question to ask |
|---|---|---|
| RTO (Recovery Time Objective) | Maximum acceptable downtime | "How long can we be down?" |
| RPO (Recovery Point Objective) | Maximum acceptable data loss | "How much data can we lose?" |

DR strategies, in order of cost and speed:

  • Backup & restore β€” slow restore, low cost. RTO hours, RPO hours.
  • Pilot light β€” minimum infra running, scale up on disaster. RTO minutes, RPO seconds.
  • Warm standby β€” scaled-down full copy, scale up. RTO seconds.
  • Active-active multi-region β€” full capacity in each region. RTO ~0, RPO ~0. Most expensive, hardest to test.

Test your DR. Untested DR is theatre.

17.10 Chaos Engineering

Deliberately inject failure in production to validate resilience. Pioneered by Netflix Chaos Monkey. Modern: Gremlin, AWS Fault Injection Simulator, ChaosMesh on Kubernetes.

17.11 Tail Latency: "The Tail at Scale"

Average latency lies. p99 dictates user experience β€” and tail effects compound when one request fans out to many services.

The math that should scare you: if a service has p99 = 1 s and a request fans out to 10 such services awaiting all responses, the chance all 10 finish within 1 s is 0.99^10 ≈ 90% – the per-service p99 threshold has become roughly the p90 of the gathered call. With 100 fan-outs, only 37% of requests stay within the per-service p99 window. Tail latency is not negligible – it is the design problem.

Sources of tail latency:

  • GC pauses, JIT compilation warm-up.
  • Lock contention, queueing under load.
  • Slow node (degraded disk, network microburst, neighboring container).
  • Background tasks (compaction, vacuum) competing for resources.
  • TCP retransmits, head-of-line blocking on HTTP/2 streams.

Mitigations (Dean & Barroso, The Tail at Scale, 2013):

  • Hedged requests: after a p95-based timeout, send to a second replica; take the first response (sketched after this list).
  • Tied requests: send to two replicas simultaneously; each carries the other's identity; whichever starts first cancels its sibling.
  • Micro-batching at the connection level instead of single-request RPCs.
  • Per-class queueing: prioritize short interactive requests over background scans.
  • Slow-node detection + drain: continuously remove the slowest replica from rotation.
  • Request-level parallelism with first-N-of-M responses when business semantics allow (recommendations, search re-rank).
  • Reduce fan-out depth: every extra hop multiplies tail probability.
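A sketch of hedging with asyncio. The `fetch` coroutine, replica list, and 50 ms hedge delay are assumptions; in practice the delay should track your observed p95, and hedges should be capped at a few percent of traffic:

```python
import asyncio

async def hedged_get(fetch, replicas, hedge_after: float = 0.05):
    """Issue to one replica; if it hasn't answered by ~p95, race a second copy."""
    first = asyncio.create_task(fetch(replicas[0]))
    done, _ = await asyncio.wait({first}, timeout=hedge_after)
    if done:
        return first.result()                     # fast path: no hedge needed
    second = asyncio.create_task(fetch(replicas[1]))   # the hedge
    done, pending = await asyncio.wait(
        {first, second}, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()                             # reap the loser to cap extra load
    return done.pop().result()
```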

Operational rule: alarm on p99 (or p99.9), never the mean. The mean hides everything that hurts users.


18. 📊 Observability, SLA/SLO/SLI

18.1 The Three Pillars

Metrics – numerical time-series. Dashboards, alerts. Examples: QPS, error rate, p99 latency, queue depth, CPU. Cheap. Tools: Prometheus, Datadog, Atlas (Netflix), M3 (Uber).

Logs – discrete events with context. Debugging, audit. Examples: request logs, app logs, security audit. Expensive at scale. Tools: ELK, Splunk, Loki, CloudWatch.

Traces – the causal chain of one request across services. Pinpoint the slow span. Tools: Jaeger, Zipkin, Tempo, AWS X-Ray. Modern standard: OpenTelemetry.

18.2 RED (services) and USE (resources)

  • RED: Rate, Errors, Duration – the three metrics every service owes you.
  • USE: Utilization, Saturation, Errors – the three metrics every resource (CPU, disk, queue) owes you.

18.3 SLI / SLO / SLA

  • SLI (Service Level Indicator) – what you measure (availability %, p99 latency).
  • SLO (Service Level Objective) – internal target (99.9% availability monthly).
  • SLA (Service Level Agreement) – external contract with consequences (refund if < 99.5%).

Error budget: 1 − SLO. If the SLO is 99.9%, you have 43 minutes of monthly downtime budget. Spend it on shipping risky features. When you blow it, stop shipping and fix reliability. This is the SRE-vs-product peace treaty.

18.4 Alerting Rules

  • Alert on symptoms (user pain), not causes. A pegged CPU is fine if latency is OK. Alert on "p99 > 500 ms" not "CPU > 80%".
  • Page only when human action is required, now. Everything else → ticket / dashboard.
  • Every alert must link to a runbook.

19. 🔐 Security

19.1 Authentication vs Authorization

  • AuthN: "who are you?" – passwords, MFA, SSO.
  • AuthZ: "what can you do?" – RBAC, ABAC, ACL.

19.2 OAuth 2.0 vs OIDC

  • OAuth 2.0: delegated authorization. "User lets app A access their resources at provider B" via access tokens. Flows: authorization code (with PKCE for SPAs/mobile), client credentials (machine-to-machine).
  • OpenID Connect: identity layer on top of OAuth 2.0. Adds an ID token (JWT) describing the user. This is what powers "Sign in with Google".
  • Rule of thumb: if you want login → OIDC. If you want "let an app act on behalf of a user" → OAuth.

19.3 JWT (JSON Web Token)

header.payload.signature, base64url-encoded. Pros: stateless, self-contained. Cons: revocation is hard (use short expiry + refresh tokens), payload is not encrypted (only signed), size grows with claims.

Practical rules: sign with an asymmetric algorithm (RS256/EdDSA) so resource servers can verify without the private key; keep the TTL short (≤ 15 min); use refresh tokens for sessions; never put secrets in the payload.

19.4 SSO and SAML

  • SSO – log in once, access many systems. Implemented via OIDC (modern) or SAML (enterprise legacy).
  • SAML – XML-based assertions, common in enterprise IdPs (Okta, AD FS). Bigger and older than OIDC; choose OIDC for new builds unless mandated.

19.5 TLS, mTLS, HTTPS

  • TLS – encryption + integrity + server authentication. Replaces SSL (deprecated).
  • mTLS – mutual TLS: both sides present certificates. Standard for service-to-service inside a mesh / zero-trust network.
  • HTTPS = HTTP + TLS. The cert is managed by the LB / CDN / reverse proxy in production.

19.6 Encryption

  • In transit: TLS everywhere. No internal cleartext.
  • At rest: disk-level (LUKS, KMS-managed S3, EBS); column-level for PII.
  • Symmetric (AES-256-GCM) is fast – use for bulk data. Asymmetric (RSA, Ed25519) for key exchange + signatures.
  • Key management: never roll your own. Use AWS KMS, GCP KMS, HashiCorp Vault.

19.7 Password Storage

  • Never store plaintext.
  • Hash with slow, salted function: bcrypt, scrypt, Argon2id. Never MD5/SHA-256 directly (too fast).
  • Per-user salt is mandatory.

19.8 OWASP Top 10 β€” Drill List

Injection, broken auth, sensitive data exposure, XXE, broken access control, security misconfig, XSS, insecure deserialization, vulnerable components, insufficient logging. Internalize this list and the controls for each.

19.9 Defense in Depth

WAF at the edge → rate limiting at the gateway → input validation at the service → least-privilege IAM at the infra → encryption at rest → audit logs. Assume any single layer will fail.


20. 📈 Capacity Planning & Scaling Playbook

20.1 Scaling Axes

  • Vertical (scale up): bigger box. Simple, eventually impossible.
  • Horizontal (scale out): more boxes. Required for true scale; demands statelessness or sharding.
  • Functional (scale by service): split by domain (federation / microservices).
  • Data (scale by partition): shard.

20.2 The Scale Sequence (apply in order)

  1. Profile. Where is the actual bottleneck? CPU, memory, disk, network, lock contention?
  2. Cache. First and cheapest. Identify hot reads, add Redis/Memcached, target 90%+ hit rate.
  3. Optimize. Indexes, query plans, N+1 elimination, payload size.
  4. Add read replicas. Read-heavy workloads scale here for free.
  5. Vertical scale. Often cheaper than re-architecting at small scale.
  6. Async-ify writes. Move expensive work off the request path: queue + worker.
  7. Functional split. Federate by domain.
  8. Shard. Last resort because operationally expensive. Pick the shard key carefully (§10.2).

20.3 Capacity Estimation Worksheet

For any service, compute on paper:

```
DAU               = ?
peak QPS          = DAU × actions/user/day ÷ 86,400 × peak_factor (5–10×)
storage growth/yr = QPS × bytes/record × 86,400 × 365 × replication
network bandwidth = QPS × payload × replication
```

Compare to a rough capacity per box (e.g., a modern app server: 10K QPS, 16 GB RAM; a single Postgres node: 50K read QPS, 5K write QPS with proper indexes; Redis: 100K ops/sec; Kafka broker: 100 MB/s).
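A worked instance of the worksheet; every input below is an assumption picked for round numbers:

```python
# Hypothetical product: 10M DAU, 20 actions/user/day, 500-byte records, 3x replication.
DAU, actions_per_day, record_bytes, replication, peak_factor = 10_000_000, 20, 500, 3, 5

avg_qps  = DAU * actions_per_day / 86_400                # ~2,300 QPS average
peak_qps = avg_qps * peak_factor                         # ~11,600 QPS at peak
tb_per_year = avg_qps * record_bytes * 86_400 * 365 * replication / 1e12  # ~110 TB

servers = peak_qps / 10_000 * 2   # / 10K QPS per box, 2x headroom -> ~2.3, round up to 3
print(f"peak {peak_qps:,.0f} QPS, {tb_per_year:.0f} TB/yr, {servers:.1f} app servers")
```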

20.4 Hot Spots

Skewed access destroys partitioned systems. Identify with histograms; fix with:

  • Key salting: userId:randomBucket for write fan-out.
  • In-process caching at app layer for celebrity reads.
  • Replication of hot keys across multiple shards.
  • Application-level sharding of one logical key into N physical keys.

20.5 Autoscaling

  • Reactive: CPU / memory / queue depth thresholds. Cheap, but lags behind demand.
  • Predictive: ML-based forecast (Netflix Scryer). Hard, but flattens cold starts.
  • Schedule-based: known peak hours.
  • Don't autoscale stateful tiers (DB, cache) the same way as stateless. Stateful scaling = sharding + rebalance, not "add a node".

20.6 Multi-Region Patterns

Going multi-region buys disaster tolerance and lower user-perceived latency, at a steep operational cost.

| Pattern | Behavior | RTO | Use when |
|---|---|---|---|
| Single-region + DR backup | Backups in another region; restore on disaster | Hours | Small product, regulatory minimum |
| Active-passive | Standby region with live replica; manual or automated failover | Minutes | Tier-1 service, occasional disasters acceptable |
| Active-active read | All regions serve reads; one region writes | Minutes for writes, ~0 for reads | Read-heavy global apps |
| Active-active write | All regions serve writes | Seconds | Truly global scale |

Write strategies for active-active:

  • Home region per user/tenant. Each user pinned to one region; cross-region requests proxy back. Used by Slack, Zoom, GitHub. Simplest correct option for user-scoped data.
  • Single global write region. Writes funnel to one region, replicated out. Strong consistency, latency for far users (Spanner with leader near majority).
  • Multi-master with conflict resolution. Cassandra / DynamoDB Global Tables. LWW or app-level merge. Strong availability, weak consistency.

Routing: Geo-DNS (Route 53 latency or geo policies), Anycast IPs, or client-side region selection based on a config endpoint.

Compliance: GDPR, India DPDP, China, Russia mandate data residency. Region pinning is a product feature, not just an architecture choice. Build it in early – retrofitting tenant-scoped data residency is a migration nightmare.

Failure modes specific to multi-region:

  • Cross-region replication lag spikes during regional incidents.
  • Partial-region outages (some AZs up, some down) confuse health checks.
  • Slow DNS propagation → stragglers stay pinned to the dead region for minutes.
  • Asymmetric routing (writes go to region A, reads to B) → read-your-writes anomalies.

20.7 Multi-Tenancy (SaaS)

| Model | Sharing | Pros | Cons |
|---|---|---|---|
| Pool | Shared infra, tenant_id column | Cheap, easy ops | Noisy neighbor, blast radius, per-tenant scale ceiling |
| Silo | Dedicated stack per tenant | Isolated, per-tenant tunable, compliance-friendly | Expensive, ops complexity multiplies |
| Bridge / Hybrid | Most pooled, big customers siloed | Right-sized | Two systems to maintain |

Required across all tenancy models:

  • Tenant ID in every query, cache key, log line, metric label. No exceptions – leakage is a P0 incident.
  • Per-tenant rate limits and quotas. Prevents one tenant's bad actor from consuming all capacity.
  • Per-tenant encryption keys (BYOK) for regulated tenants.
  • Per-tenant observability: metrics aggregated by tenant for support, debugging, cost attribution.
  • Schema strategies: shared schema with tenant_id (most common), schema-per-tenant (Postgres schemas), DB-per-tenant (silo).

The biggest pool-vs-silo question: can a tenant's load realistically threaten others? If yes → silo or bulkhead the largest tenants.

20.8 Capacity Reference Card

Numbers to anchor estimates. Always benchmark, but expect this order of magnitude on commodity cloud hardware.

| Component | Capacity per instance |
|---|---|
| Modern app server (4–8 vCPU) | 5K–20K QPS for stateless HTTP |
| Postgres / MySQL primary | 10K–50K read QPS, 1K–5K write QPS with proper indexes |
| Postgres read replica | Same as primary, for reads |
| Redis (single node) | 100K ops/sec, sub-ms latency |
| Memcached (single node) | 200K+ ops/sec |
| Kafka broker | 100 MB/s sustained, 10K+ msg/s per partition |
| Cassandra node | ~10K writes/sec, ~5K reads/sec |
| Elasticsearch node | 1K+ index ops/sec (depends on doc size) |
| Nginx / Envoy | 50K+ RPS per core for proxying |
| CDN edge (cache hit) | ~1 ms in-region |
| Cross-AZ network | RTT < 1 ms |
| Cross-region, intra-continent | 10–60 ms |
| Cross-region, intercontinental | 100–200 ms |
| 1 Gbps NIC | 125 MB/s, ~83K pps at MTU 1500 |
| 10 Gbps NIC | 1.25 GB/s |
| NVMe SSD | 500K+ IOPS, several GB/s sequential |
| Spinning disk | ~100 IOPS, ~100 MB/s sequential |

Use: when sizing, divide your peak QPS by the per-instance numbers to get a rough box count. Add 2× headroom for spikes, 1.3× for redundancy across AZs.


21. 🏭 Data Engineering & Analytics

The product database (OLTP) is bad at analytics, and the analytics warehouse (OLAP) is bad at transactions. Modern systems run both, connected by a pipeline. Knowing the boundary is essential to scaling either side.

21.1 OLTP vs OLAP

| | OLTP | OLAP |
|---|---|---|
| Workload | Many small transactions | Few large scans |
| Latency | ms | seconds–minutes |
| Storage | Row-oriented | Column-oriented |
| Consistency | ACID | Eventually consistent (often replicated from OLTP) |
| Examples | Postgres, MySQL, MongoDB, DynamoDB | Snowflake, BigQuery, Redshift, ClickHouse, Druid |

Why columnar wins for analytics: queries touch few columns of many rows; columnar storage skips the rest; same-type values compress 10–20×; SIMD aggregates blocks of values at once.

21.2 Data Warehouse vs Data Lake vs Lakehouse

  • Data warehouse: structured, schema-on-write, governed, expensive per TB. Fast SQL on cleaned data. Snowflake, BigQuery, Redshift, Synapse.
  • Data lake: raw files (Parquet, ORC, Avro, JSON) on object storage (S3/GCS/ADLS); schema-on-read; cheap. Tends to become a swamp without governance.
  • Lakehouse: open table formats (Delta Lake, Apache Iceberg, Apache Hudi) on object storage that add ACID transactions, schema evolution, and time travel. Best of both worlds; powering modern Databricks, Snowflake-on-Iceberg, AWS Athena workloads.

21.3 ETL vs ELT

  • ETL (legacy): transform before loading. Heavy upfront modeling, brittle to schema change.
  • ELT (modern): load raw, transform inside the warehouse using SQL (dbt). Cheaper compute, faster iteration, easier reprocessing – just rerun the SQL.

21.4 CDC (Change Data Capture)

Stream the binlog/WAL of your OLTP DB into Kafka, then onward. Tools: Debezium (most popular, open source), AWS DMS, Fivetran, Airbyte.

Common destinations:

  • DB → Kafka → warehouse (analytics replication, near-real-time).
  • DB → Kafka → search index (Elasticsearch) – keeps search fresh without dual-writes.
  • DB → Kafka → cache invalidation.
  • DB → Kafka → derived stores in other microservices (lets services own their read models without distributed transactions).

Pair CDC with the outbox pattern (§13.4) to turn application events into first-class, reliably published messages.

21.5 Lambda vs Kappa Architecture

  • Lambda: two pipelines – batch (slow, accurate, source of truth) + speed (fast, approximate). Reconcile in the serving layer. Operational pain: maintain two codebases for the same logic.
  • Kappa: stream-only. Replay history through the same stream pipeline by re-reading Kafka from offset 0. Simpler, requires capable stream framework (Flink) + adequate retention.

Most modern data platforms are Kappa-leaning, with batch as a special case (bounded stream).

21.6 Reference Pipeline

```
Source DB ─Debezium CDC─→ Kafka ─→ Flink (cleanse, enrich, window)
                                       ↓
                          ┌────────────┼────────────┐
                          ↓            ↓            ↓
                     Iceberg/Delta  Elasticsearch  Online feature
                     (lakehouse)    (search)       store (Redis)
                          ↓
                       dbt models → BI dashboards
```

This shape – CDC → Kafka → stream processing → fan-out to lakehouse + search + online stores – is the modern default for any non-trivial data platform.


22. 🚀 Deployment, Release & Schema Evolution

Designing the system is half the job. Releasing it safely without downtime is the other half.

22.1 Deployment Strategies

| Strategy | How | Pros | Cons |
|---|---|---|---|
| Recreate | Stop old, start new | Simple | Downtime |
| Rolling | Replace instances incrementally | No downtime, gradual | Mixed versions live simultaneously |
| Blue-Green | Stand up parallel env, flip LB | Instant rollback, no version mixing | 2× infra during cutover |
| Canary | Send 1% → 5% → 25% → 100% to new | Catch issues with limited blast radius | Requires good metrics + auto-rollback |
| Shadow / Mirror | Copy traffic to new, discard responses | Test in prod with no user risk | Doesn't validate write path |

22.2 Feature Flags

Decouple deploy from release. Code ships dark; flags toggle behavior at runtime per user, tenant, percentage. Use for: progressive rollout, A/B testing, kill switches, dark launches, ops mode (read-only emergency).

Hygiene: every flag is technical debt. Set TTLs, owners, cleanup tasks. Tools: LaunchDarkly, Unleash, Flagsmith, in-house tables.

22.3 Schema Evolution: Expand-Contract (Parallel Change)

Never break running code. Apply changes in non-breaking phases:

  1. Expand – add the new column / table / field / version alongside the old. Both readable.
  2. Migrate writers – code writes to both old and new (dual-write). Backfill historical data into the new one.
  3. Migrate readers – code reads from new with fallback to old.
  4. Cutover – readers ignore old; writers stop writing old.
  5. Contract – drop the old one after a monitoring window.

Examples:

  • Rename column: add new, dual-write, switch readers, drop old.
  • Split table: create new tables, dual-write, migrate readers, retire old.
  • Change type: add _new column, backfill with cast, switch, drop.

This is the only safe pattern for online systems. "Big bang" migrations always break in production.

22.4 Online Schema Migration

Long ALTER TABLE on big tables blocks. Tools that copy and swap atomically:

  • gh-ost (GitHub) – uses the binlog for incremental sync, no triggers.
  • pt-online-schema-change (Percona) – trigger-based.
  • Postgres: CREATE INDEX CONCURRENTLY, partition swap, logical replication for major changes.

22.5 Schema Versioning for Messages and APIs

  • Avro / Protobuf with a schema registry. Enforce backward + forward compatibility.
  • Compatibility rules: never reuse field numbers, never change types, only add optional fields, never remove a required field.
  • Consumers should tolerate unknown fields (forward compat) and missing fields (backward compat).
  • For REST APIs: additive change preferred; breaking change → new version path (/v2).

22.6 Database Migration Tooling

  • Flyway, Liquibase (JVM); goose (Go); Alembic (Python); Prisma migrate (Node); Rails migrations.
  • Forward-only philosophy: never edit applied migrations; create a new migration to fix a previous one.
  • Test migrations on a recent prod-shaped snapshot – schema migrations on a tiny dev DB hide row-count and lock issues.

22.7 Progressive Delivery

Auto-rollback on SLO violation during canary. Tools: Argo Rollouts, Flagger, Spinnaker pipelines. Metrics-driven decisions remove the human from the rollback loop.

22.8 Twelve-Factor Highlights

The factors that matter most for system design:

  • Config in env – never in code.
  • Backing services as resources – DB, cache, queue addressable by URL; swappable.
  • Stateless processes – state in backing services, not in app memory.
  • Disposable processes – fast startup, graceful shutdown (SIGTERM → drain connections → exit within timeout).
  • Dev/prod parity – minimize the gap to make releases predictable.
  • Logs as event streams – write to stdout, let the infra route + aggregate.

23. 📋 Tradeoffs Cheat Sheet

| Choice | Win | Cost |
|---|---|---|
| Vertical scale | Simple, no app changes | Ceiling, single point of failure, downtime |
| Horizontal scale | Linear capacity, redundancy | Statelessness or sharding required |
| Cache | Latency, offload backend | Invalidation complexity, staleness |
| Read replica | Cheap read scale | Replica lag, read-after-write anomalies |
| Sharding | Parallel writes, smaller indexes | Hot keys, cross-shard joins, resharding pain |
| Denormalization | Read speed | Write complexity, redundancy |
| Strong consistency | Correctness, simpler app | Latency, lower availability |
| Eventual consistency | Latency, availability | App must tolerate staleness |
| Async (queue) | Decoupling, spike absorption | Latency, debug complexity, dup risk |
| Sync RPC | Simple, immediate response | Tight coupling, cascading failures |
| Microservices | Team autonomy, indep deploy | Distributed-systems tax |
| Monolith | Simplicity, perf, easy txns | Coupled deploys, scaling all-or-nothing |
| Push CDN | Bandwidth efficiency | Storage, manual upload |
| Pull CDN | Set and forget | First-request slow, possible stale |
| Master-slave | Simple, read scale | Failover complexity, lag |
| Master-master | Write scale, fast failover | Conflict resolution |
| 2PC | ACID across nodes | Blocking, slow, fragile |
| Saga | Liveness across services | Compensations, complexity |
| REST | Universal, cacheable | Over/under-fetching |
| GraphQL | Flexible queries | N+1, caching loss |
| gRPC | Perf, schema | Browser support, debug |
| WebSocket | Real-time, bidirectional | Stateful conns, scaling |
| SSE | Simple server push | One direction, HTTP/1.1 conn limits |
| JWT | Stateless | Hard to revoke |
| Server sessions | Easy revoke, smaller token | Stateful storage |
| Bloom filter | Tiny memory, fast | Probabilistic (false positives) |
| Consistent hashing | Smooth rebalance | Implementation complexity |

24. 💡 Interview Problem Templates

Each template lists the 4–6 things you must mention.

24.1 URL Shortener (TinyURL / bit.ly)

  • Encoding: base62 of an auto-incremented ID, or hash + collision retry. ID generation: range allocator, snowflake, or DB sequence. 7 chars of base62 ≈ 3.5T URLs (see the sketch after this list).
  • Storage: KV (id → long URL). Reads vastly outnumber writes (say 100:1).
  • Cache: LRU on hot short URLs. CDN for redirect responses (edge-cache the 301).
  • Analytics: async event stream → batch aggregation. Don't write a row per click on the hot path.
  • Custom aliases: uniqueness check; reserve namespace.
  • Expiration: TTL field; lazy delete.
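A minimal base62 encoder for the ID-to-code mapping (the alphabet ordering is a convention; any fixed 62-character ordering works):

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode_base62(n: int) -> str:
    """Map an auto-increment ID to a short code; 62**7 ~= 3.5 trillion codes."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out))

print(encode_base62(123_456_789))  # -> "8m0Kx" (deterministic, reversible)
```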

24.2 Pastebin / Document Service

  • Like URL shortener for IDs, plus blob storage (S3) for content.
  • Markdown rendering on read (cache the HTML), or on write.
  • Expiration, access control (link-only / private / public).

24.3 News Feed / Twitter Timeline

The classic fan-out decision:

  • Fan-out on write (push): when a celebrity tweets, copy to each follower's inbox. Read = O(1). Write = O(followers). Bad for users with 100M followers.
  • Fan-out on read (pull): read tweets of all followees, merge. Read = O(followees). Write = O(1). Bad for high-volume readers.
  • Hybrid: push for normal users, pull for celebrities (Twitter's actual approach).

Required mentions: timeline cache (Redis sorted set per user), media in CDN, ranking signals, async fan-out via queue, search via Elasticsearch.

24.4 Chat / Messaging (WhatsApp, Slack)

  • Connection layer: WebSocket gateways with sticky LB; presence in Redis.
  • Delivery: per-user inbox queue; ack from client; offline messages persisted.
  • Storage: Cassandra / wide-column, partition by (user_id, conversation_id). Discord stores trillions this way.
  • Group chat: fan-out on write to participants' inboxes; or fan-out on read with a single conversation log.
  • End-to-end encryption: Signal protocol – the server cannot read messages.
  • Push notifications when offline (APNs / FCM).

24.5 Video Streaming (Netflix, YouTube)

  • Upload + transcode: S3 + queue + worker farm transcoding into multiple bitrates (HLS / DASH segments).
  • Storage: segments in object store; metadata in SQL/NoSQL.
  • Delivery: multi-tier CDN, push popular segments to edge (Open Connect).
  • Adaptive bitrate (ABR): client picks bitrate based on bandwidth.
  • Recommendation: offline batch + online learning.

24.6 Ride-Sharing (Uber, Lyft)

  • Location ingest: drivers send GPS at, say, 4 Hz over WebSocket. 1M drivers × 4 = 4M events/s – Kafka.
  • Geospatial index: geohash / H3 hexes; a bucket of nearby drivers per cell, kept in Redis.
  • Matching: rider request → find drivers in adjacent cells → rank by ETA → dispatch.
  • State machine per trip; Saga for payment.
  • Surge pricing based on supply/demand per cell, computed every minute.

24.7 Search Autocomplete

  • Trie of prefixes → top-K completions (with frequencies).
  • Trie too big for one node? Shard by the first 2 chars.
  • Update from the query log via batch (daily) – autocomplete doesn't need to be fresh.
  • Cache top results per prefix in CDN.

24.8 Web Crawler

  • Frontier (URLs to crawl) in priority queue; politeness (per-host rate limit).
  • Bloom filter to dedupe URLs.
  • Distributed workers; DNS cache; robots.txt cache.
  • Storage: object store for raw pages; index pipeline → Elasticsearch / inverted index.
  • Detect spider traps (depth limit, content hash dedupe).

24.9 Distributed Rate Limiter

  • Token bucket per user/IP; counters in Redis with INCR + EXPIRE (the counter variant is sketched after this list).
  • For cluster-wide accuracy: leaky bucket via Redis sorted set, or sliding window.
  • For huge scale: approximate with local counters synced periodically (cost: small over-allowance).
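A sketch of the Redis counter variant, assuming redis-py and a reachable Redis; the key naming and limits are illustrative. It's a fixed window per user, the simplest cluster-wide limiter:

```python
import time

import redis  # assumes redis-py and a reachable Redis instance

r = redis.Redis()

def allow(user: str, limit: int = 100, window: int = 60) -> bool:
    """Fixed-window counter via INCR + EXPIRE, one round trip via pipelining."""
    key = f"rl:{user}:{int(time.time()) // window}"
    pipe = r.pipeline()
    pipe.incr(key)
    pipe.expire(key, window)      # GC the key after the window passes
    count, _ = pipe.execute()
    return count <= limit         # over limit -> 429 at the API layer
```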

24.10 Distributed Unique ID (Snowflake)

  • 64-bit ID = timestamp_ms (41) | machine_id (10) | sequence (12). ~4096 IDs/ms/machine (sketched after this list).
  • Required: clock sync, worker ID assignment (via Zookeeper / config).
  • Alternatives: UUIDv7 (timestamp-prefixed), KSUID, DB sequence + range allocation.
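A minimal single-process sketch of that bit layout; the custom epoch is an assumption, and real deployments add monotonic-clock protection against time going backwards:

```python
import threading
import time

EPOCH_MS = 1_700_000_000_000  # custom epoch (assumption); 41 bits of ms ~= 69 years

class Snowflake:
    """timestamp(41) | machine_id(10) | sequence(12) -> time-sortable 63-bit IDs."""
    def __init__(self, machine_id: int):
        assert 0 <= machine_id < 1024
        self.machine_id, self.seq, self.last_ms = machine_id, 0, -1
        self.lock = threading.Lock()

    def next_id(self) -> int:
        with self.lock:
            now = time.time_ns() // 1_000_000 - EPOCH_MS
            if now == self.last_ms:
                self.seq = (self.seq + 1) & 0xFFF        # 4096 IDs per ms
                if self.seq == 0:                        # exhausted: spin to next ms
                    while now <= self.last_ms:
                        now = time.time_ns() // 1_000_000 - EPOCH_MS
            else:
                self.seq = 0
            self.last_ms = now
            return (now << 22) | (self.machine_id << 12) | self.seq
```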

24.11 Notification System

  • Channels: push (APNs/FCM), SMS, email, in-app.
  • Per-channel queue with retry + DLQ.
  • Template service + user preferences (do-not-disturb, channel opt-out).
  • Idempotency key on send to prevent duplicates.

24.12 Payment System

  • Idempotency on every mutation (Idempotency-Key header + dedup table).
  • Double-entry ledger – every transaction is two balanced entries.
  • Saga for multi-step (charge → ship → fulfill); compensations for refund.
  • Async reconciliation with the payment processor.
  • PCI scope minimization – tokenize card data; never store the PAN.
  • Hot account problem (accounts with millions of writes) → shard by sub-account.

24.13 File Storage (Dropbox / S3)

  • Chunking (4–8 MB) with content-addressed hashes – enables dedup, partial sync, parallel upload.
  • Metadata DB (chunk list per file).
  • Object store for chunks (replicated 3×, or erasure-coded for cold storage – better space efficiency than 3× replication for rarely-read data).
  • Sync protocol with delta sync, conflict resolution (LWW or branched).

24.14 Distributed Cache

  • §10.4 + §12. Consistent hashing, replication for HA, eviction policy.
  • Watch out: thundering herd, hot key, cache penetration, cache stampede.

24.15 Distributed Search Index

  • Inverted index per shard; routing by document ID; query fan-out + merge.
  • Ranking: TF-IDF / BM25 baseline, learned-to-rank on top.
  • Tradeoff: more shards = faster query, more network overhead and harder relevance scoring.

24.16 Collaborative Editor (Google Docs)

  • Operational Transformation (OT) or CRDT for concurrent edits without locks. Y.js, Automerge are mature CRDT libraries.
  • WebSocket per session; one server is the merge authority for a given document.
  • Document partitioning: one shard owns one document; co-editors all connect there.
  • Snapshot + ops log: every op appended; periodic snapshots for fast loading.
  • Presence cursors as a separate ephemeral channel (lower durability needs than text ops).
  • For spreadsheets/drawings: domain-specific CRDTs (sequence, map, register).

24.17 Top-K Trending

  • Count-Min Sketch for approximate frequency of millions of distinct keys in fixed memory.
  • Heap of size K kept alongside; on each update, check if new freq > heap min.
  • Time decay: shard counts by minute/hour; sum windowed for "trending in last N min."
  • For accuracy at the top, combine sketch with full counters for the heap candidates.
  • Stream-process via Flink with tumbling/sliding windows.

24.18 Leaderboard

  • Redis sorted set (ZADD, ZINCRBY, ZREVRANGE). Sub-ms top-N reads (see the sketch after this list).
  • Sharding for huge games: hash range of users → many sorted sets, merge the top-K from each.
  • Tiered: top-100 cached aggressively; rank for arbitrary user computed on demand or approximated.
  • For 100M+ players: per-region leaderboards + global aggregation in batch.
  • Anti-cheat: rate-limit score updates, validate server-side.
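A sketch of the sorted-set core, assuming redis-py; the "lb" key name is illustrative:

```python
import redis  # assumes redis-py and a reachable Redis instance

r = redis.Redis(decode_responses=True)

def record_score(player: str, points: int) -> None:
    r.zincrby("lb", points, player)                       # O(log N) update

def top(k: int = 10):
    return r.zrevrange("lb", 0, k - 1, withscores=True)   # highest scores first

def rank_of(player: str):
    rank = r.zrevrank("lb", player)                       # 0-based, None if absent
    return None if rank is None else rank + 1
```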

24.19 Distributed Scheduler / Cron

  • Leader-elected coordinator (Zookeeper / etcd) – only one scheduler dispatches at a time.
  • Time-bucketed queue: jobs land in a sorted set keyed by next_run_at.
  • Worker pool pulls due jobs; at-least-once + idempotent jobs for safety.
  • Catch-up policy on outage (run all missed? skip? run latest only?). State this explicitly.
  • Production tools: Quartz, Airflow scheduler, Temporal/Cadence, AWS EventBridge.

24.20 Online Presence (Status / Last Seen)

  • Heartbeat: client pings every 30 s; server sets Redis key with TTL = 60 s.
  • Presence read = key exists.
  • Fan-out to friends via pub/sub on state transitions (online ↔ offline) – not on every heartbeat.
  • Sharded by user ID; cross-shard friend lookups batched.
  • Last-seen as LASTSEEN:user with debounced writes (1/min, not every heartbeat).

25. 🌟 Real-World Case Studies

Synthesized lessons from production write-ups (curated by awesome-scalability).

25.1 Netflix

  • Microservices with strong service ownership; chaos engineering native (Chaos Monkey, Simian Army).
  • EVCache (Memcached + custom) for distributed caching with cache warmer.
  • Open Connect CDN – Netflix-owned appliances deployed inside ISPs → 95% of traffic served from the edge.
  • Atlas for metrics, Mantis for stream processing, Spinnaker for CD.
  • Rule: observability is built before scale, never retrofitted.

25.2 Uber

  • Polyglot microservices (originally Python, moved core to Go + Java).
  • H3 geospatial index – hexagonal grid (uniform neighbor distance).
  • Schemaless (in-house MySQL sharding layer).
  • Migrated HDFS → S3 for analytics – data gravity dictates compute location.
  • Ringpop for application-layer sharding.

25.3 Twitter / X

  • Hybrid timeline: push for normal users, pull for celebrities – solves the fan-out asymmetry.
  • Manhattan distributed DB; Gizzard sharding framework.
  • Kafka for event pipeline; trillions of events/day.
  • Timeline construction in 1.5 s p99 via aggressive caching at every layer.

25.4 Discord

  • Cassandra for messages – partition by (channel_id, bucket_id), billions of messages/day.
  • Recently migrated to ScyllaDB for better tail latency.
  • Voice: separate WebRTC infrastructure, regional routing.
  • Elixir for connection-heavy services (BEAM scheduling shines).

25.5 Airbnb

  • Migrated from Rails monolith to service-oriented architecture.
  • Elasticsearch powers search (geo + facet + ranking).
  • Multi-currency, multi-payment-method ledger.
  • Lessons: service migration is a multi-year project; Strangler Fig is the only safe approach.

25.6 Pinterest

  • MySQL with sharding (vs going NoSQL) – a vindication of relational + sharding for relational data.
  • Functional partitioning by domain (pins, boards, users).
  • Heavy use of Memcached + Redis.

25.7 Instagram

  • Three rules: keep it simple, don't reinvent, use proven technologies.
  • Postgres + sharding for social graph.
  • Cassandra for activity feeds.
  • Aggressive caching, one-engineer-per-million-users efficiency.

25.8 Stripe

  • Idempotency-key first-class API design.
  • Veneer (in-house service framework) + machine learning fraud detection (Radar) on every transaction.
  • Distributed rate limiting on token-bucket primitive.

25.9 LinkedIn

  • Birthplace of Kafka, Samza, Pinot, Voldemort, Espresso.
  • Kafka clusters spanning DCs → cross-DC pipelines → real-time + batch unified.
  • Lesson: observability investment is a force multiplier. "Observability powers high availability for LinkedIn Feed."

25.10 Recurring Lessons (the 10 most important)

  1. Embrace operational complexity early. Observability + chaos before scale.
  2. Data gravity dominates. Compute moves to data, not the other way.
  3. Statelessness scales linearly. Push state down to a few specialized tiers.
  4. Database selection is multi-dimensional. Mix SQL + NoSQL + cache + search; one size never fits.
  5. Observability prevents outages. You can't fix what you can't see.
  6. Org structure mirrors architecture (Conway). Microservices fail without team realignment.
  7. Cost-perf tradeoffs are real and additive. Saving 10% in three places = 30%.
  8. Async/event-driven decouples failure. A queue between two services is a fault break.
  9. Replication lag is inevitable. Design for it (read-your-writes via session, version tokens).
  10. Test at scale via simulation. Chaos, load tests, dark traffic, shadow writes.

26. ⚠️ Anti-Patterns to Avoid

  • Premature microservices. Splitting before domains and teams are clear creates a distributed monolith – the worst of both.
  • Premature NoSQL. "We'll be web-scale" while you have 100K rows. Postgres scales further than you think.
  • Distributed transactions across services. Reach for sagas, idempotency, and outbox instead.
  • Sticky sessions as state strategy. Hides true stateful design until LB scaling reveals it.
  • No idempotency on POST. Every retry creates a duplicate. Plan for it day 1.
  • No timeouts. Cascading failure is one slow downstream away.
  • Retries without backoff. Self-DDoS during recovery.
  • Cache without TTL or invalidation strategy. Permanent staleness time bomb.
  • Single load balancer. SPOF, often invisible until it isn't.
  • Synchronous fan-out to many services. One slow node breaks p99 for everyone.
  • Logging PII. Compliance disaster.
  • No observability before scale. Retrofitting traces / metrics / structured logs costs 10Γ— more than building them in.
  • Over-engineered abstractions. "We might need to switch DB" – you won't, and the abstraction costs you forever.
  • No DLQ. Failed messages quietly disappear.
  • Untested DR. Backup that's never restored is not a backup.

27. 📚 Must-Read Papers & Further Reading

27.1 Foundational Papers

  • Lamport – Time, Clocks, and the Ordering of Events (1978). Logical time, causality.
  • Brewer – Towards Robust Distributed Systems (2000). CAP.
  • Gilbert & Lynch – CAP proof (2002).
  • Lamport – Paxos Made Simple (2001).
  • Ongaro & Ousterhout – In Search of an Understandable Consensus Algorithm (Raft) (2014).
  • Dean & Ghemawat – MapReduce (2004).
  • Ghemawat et al. – Google File System (2003).
  • Chang et al. – Bigtable (2006).
  • DeCandia et al. – Dynamo (2007).
  • Corbett et al. – Spanner (2012).
  • Kreps – The Log: What every software engineer should know (2013).

27.2 Books

  • Designing Data-Intensive Applications – Martin Kleppmann (the single most valuable systems book).
  • Site Reliability Engineering – Google.
  • Database Internals – Alex Petrov.
  • System Design Interview (Vol 1 + 2) – Alex Xu.
  • Building Microservices – Sam Newman.
  • Release It! – Michael Nygard (resilience patterns).

27.3 Engineering Blogs (read regularly)

Netflix Tech Blog · Uber Engineering · Airbnb Engineering · Discord Engineering · Stripe · Cloudflare · Slack · Shopify · Dropbox · LinkedIn Engineering · The Pragmatic Engineer · High Scalability.

27.4 Source Repositories Referenced

  • donnemartin/system-design-primer
  • ByteByteGoHq/system-design-101
  • karanpratapsingh/system-design
  • ashishps1/awesome-system-design-resources
  • binhnguyennus/awesome-scalability

Final principle: The best system design is the simplest one that meets the actual requirements – not the one that anticipates every imagined future. Build for the load you have plus 10×. When you reach 5×, design the next 10×. When you reach 9×, build it. Every "we might need it someday" abstraction is a tax you pay every day for a benefit you may never collect.


If you found this helpful, let me know by leaving a 👍 or a comment! And if you think this post could help someone, feel free to share it. Thank you very much! 😃


All rights reserved
