A senior engineer’s perspective on building highly available distributed systems

Table of Contents

Introduction: Why Dynamo Changed Everything
The CAP Theorem Trade-off
Core Architecture Components
  Consistent Hashing for Partitioning
  Replication Strategy (N, R, W)
  Vector Clocks for Versioning
  Sloppy Quorum and Hinted Handoff
Conflict Resolution: The Shopping Cart Problem
Read and Write Flow
Merkle Trees for Anti-Entropy
Membership and Failure Detection
Performance Characteristics: Real-World Numbers
Partitioning Strategy Evolution
Comparing Dynamo with Modern Systems
What Dynamo Does Not Give You
Practical Implementation Examples
Key Lessons in System Design
When NOT to Use Dynamo-Style Systems
Conclusion
Appendix: Design Problems and Approaches

This is a long-form reference. Each section stands on its own, so feel free to jump straight to whatever is most relevant to you.

Introduction: Why Dynamo Changed Everything

When Amazon published the Dynamo paper in 2007, it was not just another academic exercise. It was a battle-tested solution to real problems at massive scale. Dynamo is a distributed key-value store, designed to back Amazon's highest-traffic services such as the shopping cart and session management systems. There are no secondary indexes, no joins, no relational semantics: only keys and values, with an extreme focus on availability and scalability. It does not provide linearizability or global ordering guarantees, even at the strictest quorum settings.

The central problem Amazon faced was simple to state but brutal to solve: when someone tries to add an item to their shopping cart during a network partition or server failure, rejecting the write is not acceptable. How do you build a storage system that never says "no" to customers?
The CAP Theorem Trade-off: Why Dynamo Chooses Availability

Before diving into how Dynamo works, you need to understand the fundamental constraint it is designed around.

What is the CAP theorem? It describes a fundamental trade-off in distributed systems: when a network partition occurs, you must choose between consistency and availability.

- Consistency (C): all nodes see the same data at the same time
- Availability (A): every request gets a response (success or failure)
- Partition tolerance (P): the system keeps working despite network failures

A common shorthand is "pick 2 of 3", but this is an oversimplification. In practice, network partitions are unavoidable at scale, so the real design choice is: when partitions occur (and they will), do you sacrifice consistency or availability?

The harsh reality: network partitions will happen. Cables get cut, switches fail, datacenters lose connectivity. You cannot avoid them, so you must choose: consistency or availability?

Traditional databases choose consistency:

```
Traditional approach
Database: "I can't guarantee all replicas are consistent,
           so I'll reject your write to be safe."
Result: Customer sees error, cart is empty
Impact: Lost revenue, poor experience
```

Dynamo chooses availability:

Dynamo’s approach
Dynamo: "I'll accept your write with the replicas I can reach. The unreachable replica will catch up later."
Result: Customer sees success, item in cart
Impact: Sale continues, happy customer

The trade-off, visualized

```
When a partition occurs:

Traditional Database: Choose C over A → Sacrifice Availability
- ✓ All replicas always have the same data
- ✓ No conflicts to resolve
- ❌ Rejects writes during failures
- ❌ Poor customer experience
- ❌ Lost revenue

Dynamo: Choose A over C → Sacrifice Strong Consistency
- ✓ Accepts writes even during failures
- ✓ Excellent customer experience
- ✓ No lost revenue
- ❌ Replicas might temporarily disagree
- ❌ Application must handle conflicts
```

Real Amazon example: the Black Friday shopping cart

Imagine it is Black Friday. Millions of customers are shopping. A network cable is cut between datacenters.

```
With traditional database:
Time: 10:00 AM - Network partition occurs
Result:
- All shopping cart writes fail
- "Service Unavailable" errors
- Customers can't check out
- Twitter explodes with complaints
- Estimated lost revenue: $100,000+ per minute

With Dynamo:
Time: 10:00 AM - Network partition occurs
Result:
- Shopping cart writes continue
- Customers see success
- Some carts might have conflicts (rare)
- Application merges conflicting versions
- Estimated lost revenue: $0
- A few edge cases need conflict resolution (acceptable)
```

Why this choice makes sense for e-commerce

Amazon did the math:
- Cost of rejecting a write: an immediately lost sale ($50-200)
- Cost of accepting a conflicting write: occasionally having to merge shopping carts (rare, easy to fix)
- Business decision: accept writes, handle the rare conflicts

Types of data where availability > consistency:
- Shopping carts (merge conflicting additions)
- Session data (last-write-wins is fine)
- User preferences (eventual consistency acceptable)
- Best-seller lists (approximate is fine)

Types of data where consistency > availability:
- Bank account balances (cannot have conflicting balances)
- Inventory counts (cannot oversell)
- Transaction logs (must be ordered)

This is why Dynamo is not for everything. For Amazon's e-commerce use cases, though, choosing availability over strong consistency was the right trade-off.

Important nuance: while Dynamo is often described as an AP system, it is more accurate to call it a tunable consistency system. Depending on your R and W quorum configuration, it can behave closer to CP. The AP label applies to its default/recommended configuration optimized for e-commerce workloads.

Core Architecture Components

Consistent Hashing for Partitioning

Let me explain this with a concrete example, because consistent hashing is one of those concepts that seems magical until you see it in action.

The problem: traditional hash-based sharding

Imagine you have 3 servers and want to distribute data across them.

```python
# Traditional approach - DON'T DO THIS
def get_server(key, num_servers):
    hash_value = hash(key)
    return hash_value % num_servers  # Modulo operation

# With 3 servers:
get_server("user_123", 3)  # Returns server 0
get_server("user_456", 3)  # Returns server 1
get_server("user_789", 3)  # Returns server 2
```

This works... until you add or remove a server.
Let's see what happens when we go from 3 to 4 servers:

```
# Before (3 servers):
"user_123" → hash % 3 = 0 → Server 0
"user_456" → hash % 3 = 1 → Server 1
"user_789" → hash % 3 = 2 → Server 2

# After (4 servers):
"user_123" → hash % 4 = 0 → Server 0 ✓ (stayed)
"user_456" → hash % 4 = 1 → Server 1 ✓ (stayed)
"user_789" → hash % 4 = 2 → Server 2 ✓ (stayed)

# But wait - this is lucky! In reality, most keys MOVE:
"product_ABC" → hash % 3 = 2 → Server 2
"product_ABC" → hash % 4 = 3 → Server 3 ✗ (MOVED!)
```

The disaster: when you change the number of servers, almost ALL of your data must be redistributed.

The solution: consistent hashing

Consistent hashing solves this by treating the hash space as a circle (0 to 2^32 - 1, wrapping around).

Step 1: Place servers on the ring. Each server is assigned a random position on the ring (called a "token").

Step 2: Place data on the ring. When you want to store data, you:
1. Hash the key to get a position on the ring
2. Walk clockwise from that position
3. Store the data on the first server you encounter

Visual example: the complete ring

Here is the ring laid out in order. Keys walk clockwise to the next server.

Simple rule: a key moves clockwise until it hits a server.

Examples:
- user_123 at 30° → walks to 45° → Server A owns it
- user_456 at 150° → walks to 200° → Server C owns it
- cart_789 at 250° → walks to 280° → Server D owns it
- product_ABC at 300° → passes 360°, wraps to 0°, continues to 45° → Server A owns it

Who owns what range?
- Server A (45°): owns everything from 281° to 45° (wrapping around)
- Server B (120°): owns everything from 46° to 120°
- Server C (200°): owns everything from 121° to 200°
- Server D (280°): owns everything from 201° to 280°

The magic: adding a server

Now let's see why this is brilliant.
We add Server E at position 160°:

```
BEFORE:
Server A (45°)  → owns 281°-45°
Server B (120°) → owns 46°-120°
Server C (200°) → owns 121°-200°  ← THIS RANGE WILL SPLIT
Server D (280°) → owns 201°-280°

AFTER:
Server A (45°)  → owns 281°-45°   ← NO CHANGE
Server B (120°) → owns 46°-120°   ← NO CHANGE
Server E (160°) → owns 121°-160°  ← NEW! Takes part of C's range
Server C (200°) → owns 161°-200°  ← SMALLER range
Server D (280°) → owns 201°-280°  ← NO CHANGE
```

Result: only keys in the range 121°-160° need to move (from C to E). Servers A, B and D are completely untouched!

The virtual node optimization

There is a critical problem with the basic consistent hashing approach: random distribution can be extremely uneven.

The problem in detail: when you randomly assign one position per server, you are essentially throwing darts at a circular board. Sometimes the darts cluster together, sometimes they spread out. Let me give you a concrete example:

```
Scenario: 4 servers with single random tokens

Server A: 10°  }
Server B: 25°  } ← Clustered together! Tiny ranges
Server C: 100° }
Server D: 280°   ← 180° away from C! Huge range

Range sizes:
- Server A owns: 281° to 10°  = 89°  (25% of ring)
- Server B owns: 11° to 25°   = 14°  (4% of ring)  ← Underutilized!
- Server C owns: 26° to 100°  = 74°  (21% of ring)
- Server D owns: 101° to 280° = 179° (50% of ring) ← Overloaded!
```

Real-world consequences: Server D handles 50% of all data, while Server B handles only 4%.
The knock-on effects:
- Server D's CPU, disk and network are maxed out
- Server B sits mostly idle (wasted capacity)
- Your 99.9th percentile latency is dominated by Server D's overload

Hotspot cascading: when Server D becomes slow or fails:
- All of its 50% load shifts to Server A (the next server clockwise)
- Server A is now overloaded
- System performance degrades catastrophically

Inefficient scaling: adding servers does not help evenly, because new servers can land in already-small ranges.

Dynamo’s solution: each physical server gets multiple virtual positions (tokens) on the ring. Instead of throwing one dart per server, throw many darts. The more throws, the more even the distribution becomes (the law of large numbers).

How virtual nodes fix the problem: let's take the same 4 servers, but now each server gets 3 virtual nodes (tokens) instead of 1:

```
Physical Server A gets 3 tokens: 10°, 95°, 270°
Physical Server B gets 3 tokens: 25°, 180°, 310°
Physical Server C gets 3 tokens: 55°, 150°, 320°
Physical Server D gets 3 tokens: 75°, 200°, 340°

Now the ring looks like:
10° A, 25° B, 55° C, 75° D, 95° A, 150° C,
180° B, 200° D, 270° A, 310° B, 320° C, 340° D

Range sizes (each token owns the arc back to the previous token):
- Server A total: 30° + 20° + 70° = 120° (33% of ring)
- Server B total: 15° + 30° + 40° = 85°  (24% of ring)
- Server C total: 30° + 55° + 10° = 95°  (26% of ring)
- Server D total: 20° + 20° + 20° = 60°  (17% of ring)
```

Much better! The load now ranges from 17% to 33% instead of 4% to 50%.

Why this works:

Statistics: with more samples (tokens), the random distribution averages out. This is the law of large numbers in action.

Granular load distribution: when a server fails, its load is distributed across many servers, not just one neighbor:

```
Server A fails:
- Its token at 10°  → load shifts to Server B's token at 25°
- Its token at 95°  → load shifts to Server C's token at 150°
- Its token at 270° → load shifts to Server B's token at 310°

Result: The load is spread across multiple servers!
```
Smooth scaling: when adding a new server with 3 tokens, it steals small amounts from many servers instead of a huge chunk from one server.

Real Dynamo configurations: the paper describes several strategies that evolved over time.
- Early versions: 100-200 virtual nodes per physical server
- Later optimized to: Q/S tokens per node (where Q = total partitions, S = number of servers)
- Typical setup: each physical server may carry 128-256 virtual nodes

The trade-off: balance vs overhead

More virtual nodes means better load distribution, but there is a price.

What you gain with more virtual nodes:

```
With 1 token per server (4 servers):
  Load variance: 4% to 50% (±46% difference) ❌

With 3 tokens per server (12 virtual nodes):
  Load variance: 17% to 33% (±16% difference) ✓

With 128 tokens per server (512 virtual nodes):
  Load variance: 24% to 26% (±2% difference) ✓✓
```

What it costs:

Metadata size: each node maintains routing information.
- 1 token per server: track 4 entries
- 128 tokens per server: track 512 entries

Gossip overhead: nodes regularly exchange membership information. More tokens means more data to sync between nodes, and every second, nodes gossip their view of the ring.

Rebalancing complexity: when nodes join or leave, more virtual nodes means more partition transfers to coordinate. But each transfer is smaller (which is actually good for bootstrapping).

Dynamo’s evolution: the paper describes how Amazon optimized this over time:

```
Strategy 1 (Initial):
- 100-200 random tokens per server
- Problem: Huge metadata (multiple MB per node)
- Problem: Slow bootstrapping (had to scan for specific key ranges)

Strategy 3 (Current):
- Q/S tokens per server (Q = total partitions, S = number of servers)
- Equal-sized partitions
- Example: 1024 partitions / 8 servers = 128 tokens per server
- Benefit: Metadata reduced to KB
- Benefit: Fast bootstrapping (transfer whole partition files)
```

Real production sweet spot: most Dynamo implementations use 128-256 virtual nodes per physical server. This gives:
- Load distribution within 10-15% variance (good enough)
- Metadata overhead under 100KB per node (negligible)
- Fast failure recovery (load spreads across many nodes)

Why not more? Going from 128 to 512 tokens improves load balance by only 2-3%, but doubles metadata size and gossip traffic.

Physical servers (top) map to multiple virtual positions (bottom) on the ring.
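The ring walk and virtual-node placement described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not Dynamo's actual implementation: the `ConsistentHashRing` class, its MD5-based token placement, and the 32-bit ring size are choices made here for demonstration.

```python
import bisect
import hashlib


def _hash(key: str) -> int:
    # Stable hash onto a 32-bit ring (0 .. 2**32 - 1)
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2**32)


class ConsistentHashRing:
    def __init__(self, vnodes_per_server: int = 128):
        self.vnodes = vnodes_per_server
        self.tokens: list[int] = []      # sorted virtual-node positions
        self.owner: dict[int, str] = {}  # token -> physical server

    def add_server(self, server: str) -> None:
        # Each physical server contributes many tokens (virtual nodes)
        for i in range(self.vnodes):
            token = _hash(f"{server}#vnode{i}")
            bisect.insort(self.tokens, token)
            self.owner[token] = server

    def get_server(self, key: str) -> str:
        # Walk clockwise: first token at or after the key's position,
        # wrapping around to the start of the ring if necessary.
        pos = _hash(key)
        idx = bisect.bisect_left(self.tokens, pos) % len(self.tokens)
        return self.owner[self.tokens[idx]]
```

Because the hash space is decoupled from the server count, adding a fourth server moves only the keys whose clockwise walk now hits one of the new server's tokens first; no key ever moves between two pre-existing servers.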
Key concept, and its benefits:
- More even load distribution
- When a server fails, its load is distributed across many servers (not just one neighbor)
- When a server joins, it steals a small amount from many servers

Real-World Impact Comparison

Let's see the difference with real numbers:

```
Traditional Hashing (3 servers → 4 servers):
- Keys that need to move: ~75% (3 out of 4)
- Example: 1 million keys → 750,000 keys must migrate

Consistent Hashing (3 servers → 4 servers):
- Keys that need to move: ~25% (1 out of 4)
- Example: 1 million keys → 250,000 keys must migrate

With Virtual Nodes (150 vnodes total → 200 vnodes):
- Keys that need to move: ~12.5% (spread evenly)
- Example: 1 million keys → 125,000 keys must migrate
- Load is balanced across all servers
```

The “Aha!” Moment

The key insight is this: consistent hashing decouples the hash space from the number of servers.

```
Traditional: server = hash(key) % num_servers           ← num_servers is in the formula!
Consistent:  server = ring.findNextClockwise(hash(key)) ← num_servers is NOT in the formula!
```

The hash values never change. Only which server "owns" which range changes, and only locally. Think of it as a circular running track with water stations (servers). If you add a new water station, runners only switch stations if they fall between their old nearest station and the new one.

Replication Strategy (N, R, W)

The problem: the availability vs consistency trade-off

Imagine you’re building Amazon’s shopping cart. A customer adds an item to their cart, but at that exact moment:
- One server is rebooting for maintenance
- Another server has a network hiccup
- A third server is perfectly fine

Traditional database approach (strong consistency):

```
Client: "Add this item to my cart"
Database: "I need ALL replicas to confirm before I say yes"

Server 1: ✗ (rebooting)
Server 2: ✗ (network issue)
Server 3: ✓ (healthy)

Result: "Sorry, service unavailable. Try again later."
```
Customer experience: 😡 “I can’t add items to my cart during Black Friday?!”

This is unacceptable for e-commerce. Every rejected write is lost revenue.

Dynamo’s solution: tunable quorums

Dynamo gives you three knobs to tune the exact trade-off you want:

- N: the replication factor (how many copies of the data)
- R: read quorum (how many replicas must respond for a successful read)
- W: write quorum (how many replicas must acknowledge for a successful write)

The magic formula: R + W > N. When it holds, you guarantee quorum overlap, meaning at least one node that received the write will be queried during any read. This overlap makes it possible to detect the newest version, provided the reconciliation logic correctly identifies the highest vector clock.

Let me show you why this matters with real scenarios:

Scenario 1: shopping cart (prioritize availability)

```python
N = 3  # Three replicas for durability
R = 1  # Read from any single healthy node
W = 1  # Write to any single healthy node

# Trade-off analysis:
# ✓ Writes succeed even if 2 out of 3 nodes are down
# ✓ Reads succeed even if 2 out of 3 nodes are down
# ✓ Maximum availability - never reject customer actions
# ✗ Might read stale data
# ✗ Higher chance of conflicts (but we can merge shopping carts)
```

What happens during failure:

```
Client: "Add item to cart"
Coordinator tries N=3 nodes:
- Node 1: ✗ Down
- Node 2: ✓ ACK (W=1 satisfied!)
- Node 3: Still waiting...

Result: SUCCESS returned to client immediately
Node 3 eventually gets the update (eventual consistency)
```

Scenario 2: session state (balanced approach)

```python
N = 3
R = 2  # Must read from 2 nodes
W = 2  # Must write to 2 nodes

# Trade-off analysis:
# ✓ R + W = 4 > N = 3 → Read-your-writes guaranteed
# ✓ Tolerates 1 node failure
# ✓ Good balance of consistency and availability
# ✗ Write fails if 2 nodes are down
# ✗ Read fails if 2 nodes are down
```

Why R + W > N enables read-your-writes:

```
Write to W=2 nodes: [A, B]
Later, read from R=2 nodes: [B, C]

Because W + R = 4 > N = 3, there's guaranteed overlap!
At least one node (B in this case) will have the latest data.
The coordinator detects the newest version by comparing vector clocks.
```

This guarantees seeing the latest write as long as reconciliation picks the causally most recent version correctly.

Scenario 3: financial data (prioritize consistency)

```python
N = 3
R = 3  # Must read from ALL nodes
W = 3  # Must write to ALL nodes

# Trade-off analysis:
# ✓ Full replica quorum: reduces likelihood of divergent versions
# ✓ Any read will overlap every write quorum
# ✗ Write fails if ANY node is down
# ✗ Read fails if ANY node is down
# ✗ Poor availability during failures
```

Systems that require strict transactional guarantees typically choose CP systems instead. This configuration is technically supported by Dynamo, but it sacrifices the availability properties that motivate using Dynamo in the first place.
Configuration Comparison Table

| Config | N | R | W | Availability | Consistency | Use Case |
|---|---|---|---|---|---|---|
| High Availability | 3 | 1 | 1 | ⭐⭐⭐⭐⭐ | ⭐⭐ | Shopping cart, wish list |
| Balanced | 3 | 2 | 2 | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Session state, user preferences |
| Full Quorum | 3 | 3 | 3 | ⭐⭐ | ⭐⭐⭐⭐⭐ | High-stakes reads (not linearizable) |
| Read-Heavy | 3 | 1 | 3 | ⭐⭐⭐ (reads) | ⭐⭐⭐⭐ | Product catalog, CDN metadata |
| Write-Heavy | 3 | 3 | 1 | ⭐⭐⭐ (writes) | ⭐⭐⭐ | Click tracking, metrics |

Note on financial systems: systems that require strong transactional guarantees (e.g. bank account balances) should typically not use Dynamo.

The Key Insight

Most systems use N=3, R=2, W=2 because it gives:
- Durability: can tolerate up to 2 replica failures before permanent data loss (assuming independent failures and no correlated outages)
- Availability: tolerates 1 node failure for both reads and writes
- Consistency: R + W > N guarantees that read and write quorums overlap, enabling read-your-writes behavior in the absence of concurrent writes
- Performance: don't wait for the slowest node (only 2 out of 3)

Real production numbers from the paper: Amazon's shopping cart service at peak (holiday season):
- Configuration: N=3, R=2, W=2
- Handled millions of requests
- Over 3 million checkouts in a single day
- No downtime, even with server failures

This tunable approach is what made Dynamo revolutionary. You're not stuck with one-size-fits-all: you tune it based on your actual business requirements.
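The overlap rule behind the table can be checked mechanically. Here is a small simulation, a sketch written for this article (the `quorum_read_sees_write` helper is not from the paper): it samples random read and write quorums among N replicas and looks for a read quorum that misses the write quorum entirely.

```python
import random


def quorum_read_sees_write(N: int, R: int, W: int, trials: int = 10_000) -> bool:
    """Sample random write/read quorums among N replicas and report whether
    every sampled read quorum intersected the write quorum."""
    replicas = list(range(N))
    for _ in range(trials):
        write_set = set(random.sample(replicas, W))  # nodes that ACKed the write
        read_set = set(random.sample(replicas, R))   # nodes answering the read
        if not (write_set & read_set):
            return False  # this read missed the write entirely → stale read possible
    return True

# R + W > N → every read quorum intersects every write quorum (pigeonhole)
# R + W ≤ N → a read can miss the write entirely and return only stale data
```

With R + W > N the result is always True (by the pigeonhole principle, not luck); with R + W ≤ N the simulation quickly finds a disjoint pair, which is exactly the stale-read risk of the N=3, R=1, W=1 configuration.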
Vector Clocks for Versioning

The problem: detecting causality in distributed systems

When multiple nodes can accept writes independently, you must answer a critical question: are these two versions of the same data related, or were they created concurrently?

Why timestamps don’t work:

```
Scenario: Two users edit the same shopping cart simultaneously

User 1 at 10:00:01.500 AM: Adds item A → Writes to Node X
User 2 at 10:00:01.501 AM: Adds item B → Writes to Node Y

Physical timestamp says: User 2's version is "newer"
Reality: These are concurrent! Both should be kept!

Problem:
- Clocks on different servers are NEVER perfectly synchronized
- Clock skew can be seconds or even minutes
- Network delays are unpredictable
- Physical time doesn't capture causality
```

What we really need to know:
- Version A happened before Version B? → B can overwrite A
- Version A and B are concurrent? → Keep both, merge later
- Version A came from reading Version B? → We can track this!

The solution: vector clocks

A vector clock is a simple data structure: a list of (node_id, counter) pairs that tracks which nodes have seen which versions.

The rules:
1. When a node writes data, it increments its own counter
2. When a node reads data, it receives the vector clock
3. When comparing two vector clocks:
   - If every counter in A ≤ the corresponding counter in B → A is an ancestor of B (B is newer)
   - If some counters in A > B and some in B > A → A and B are concurrent (conflict!)

Step-by-step example

Let's follow a shopping cart through several updates and break down a conflict:

```
D3: [Sx:2, Sy:1]  vs  D4: [Sx:2, Sz:1]

Comparing:
- Sx: 2 == 2 ✓ (equal)
- Sy: 1 vs missing in D4 → D3 has something D4 doesn't
- Sz: missing in D3 vs 1 → D4 has something D3 doesn't

Conclusion: CONCURRENT! Neither is an ancestor of the other.
Both versions must be kept and merged.
```
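The comparison rules above translate directly into code. This is a minimal sketch (the `compare` helper and its return strings are illustrative names, not Dynamo's API); a missing node entry counts as 0.

```python
def compare(vc_a: dict[str, int], vc_b: dict[str, int]) -> str:
    """Classify the causal relationship between two vector clocks.
    Clocks are dicts of node_id -> counter; missing entries count as 0."""
    nodes = set(vc_a) | set(vc_b)
    a_le_b = all(vc_a.get(n, 0) <= vc_b.get(n, 0) for n in nodes)
    b_le_a = all(vc_b.get(n, 0) <= vc_a.get(n, 0) for n in nodes)
    if a_le_b and b_le_a:
        return "equal"
    if a_le_b:
        return "A is ancestor of B"  # B is newer; it may safely overwrite A
    if b_le_a:
        return "B is ancestor of A"
    return "concurrent"             # conflict: keep both versions
```

Running it on the D3/D4 clocks from the walkthrough classifies them as concurrent, which is exactly why both versions must be surfaced to the application.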
Real-world characteristics

These numbers reflect Amazon's specific workload (high read/write ratio, mostly single-user sessions) and should not be assumed to generalize to all Dynamo deployments:

- 99.94% - single version (no conflict)
- 0.00057% - 2 versions
- 0.00047% - 3 versions
- 0.00009% - 4 versions

Key insight: conflicts are RARE in practice!

Why conflicts happen:
- Not always because of network failures
- Mostly from concurrent writers (often automated processes / bots)
- Human users rarely create conflicts, because they are slow relative to network speed

The Size Problem

Vector clocks can grow without bound if many nodes coordinate writes. Dynamo's mitigation: truncate the oldest entries when the clock exceeds a size threshold.

```javascript
// When vector clock exceeds threshold (e.g., 10 entries)
// Remove the oldest entry based on wall-clock timestamp
vectorClock = {
  'Sx': {counter: 5, timestamp: 1609459200},
  'Sy': {counter: 3, timestamp: 1609459800},
  'Sz': {counter: 2, timestamp: 1609460400},
  // ... 10 more entries
}

// If size > 10, remove entry with oldest timestamp
// ⚠ Risk: Dropping an entry collapses causality information.
// Two versions that were causally related may now appear
// concurrent, forcing the application to resolve a conflict
// that didn't actually exist. In practice, Amazon reports
// this has not been a significant problem — but it is a
// real theoretical risk in high-churn write environments
// with many distinct coordinators.
```

Sloppy Quorum and Hinted Handoff

The problem: strict quorums kill availability

Traditional quorum systems are rigid and unforgiving.
Traditional strict quorum:

```
Your data is stored on nodes: A, B, C (preference list)
Write requirement: W = 2

Scenario: Node B is down for maintenance
Coordinator: "I need to write to 2 nodes from {A, B, C}"
Tries: A ✓, B ✗ (down), C ✓
Result: SUCCESS (got 2 out of 3)

Scenario: Nodes B AND C are down
Coordinator: "I need to write to 2 nodes from {A, B, C}"
Tries: A ✓, B ✗ (down), C ✗ (down)
Result: FAILURE (only got 1 out of 3)
Customer: "Why can't I add items to my cart?!" 😡
```

The problem: strict quorums require specific nodes. If those specific nodes are down, the system becomes unavailable.

Real scenario at Amazon:

```
Black Friday, 2:00 PM
- Datacenter 1: 20% of nodes being rebooted (rolling deployment)
- Datacenter 2: Network hiccup (1-2% packet loss)
- Traffic: 10x normal load

With strict quorum:
- 15% of write requests fail
- Customer support phones explode
- Revenue impact: Millions per hour
```

The Solution: Sloppy Quorum

Dynamo relaxes the quorum requirement: “Write to the first N healthy nodes in the preference list, walking further down the ring if needed.”

```
Preference list for key K: A, B, C
But B is down...

Sloppy Quorum says: "Don't give up!
Walk further down the ring: A, B, C, D, E, F, ..."

Coordinator walks until N=3 healthy nodes are found:
A, C, D (D is a temporary substitute for B)
```

How Hinted Handoff works

When a node temporarily substitutes for a failed node, it stores a “hint” with the data.
Detailed Hinted Handoff Process

Step 1: detect failure and substitute

```python
def write_with_hinted_handoff(key, value, N, W):
    preference_list = get_preference_list(key)  # [A, B, C]
    healthy_nodes = []

    for node in preference_list:
        if is_healthy(node):
            healthy_nodes.append((node, False))  # (node, is_hint)

    # If we don't have N healthy nodes, expand the list
    if len(healthy_nodes) < N:
        extended_list = get_extended_preference_list(key)
        for node in extended_list:
            if node not in preference_list and is_healthy(node):
                healthy_nodes.append((node, True))
            if len(healthy_nodes) >= N:
                break

    # Write to first N healthy nodes
    acks = 0
    for node, is_hint in healthy_nodes[:N]:
        if is_hint:
            # Store with hint metadata naming the intended replica
            intended_node = find_intended_node(preference_list, node)
            success = node.write_hinted(key, value, hint=intended_node)
        else:
            success = node.write(key, value)
        if success:
            acks += 1

    if acks >= W:
        return SUCCESS
    return FAILURE
```

Step 2: background hint transfer

```python
# Runs periodically on each node (e.g., every 10 seconds)
def transfer_hints():
    hints_db = get_hinted_replicas()
    for hint in hints_db:
        intended_node = hint.intended_for
        if is_healthy(intended_node):
            try:
                intended_node.write(hint.key, hint.value)
                hints_db.delete(hint)
                log(f"Successfully transferred hint to {intended_node}")
            except Exception:
                log(f"Will retry later for {intended_node}")
```

Why this is brilliant

Durability maintained:

```
Even though B is down:
- We still have N=3 copies: A, C, D
- Data won't be lost even if another node fails
- System maintains durability guarantee
```

Availability maximized:

```
Client perspective:
- Write succeeds immediately
- No error message
- No retry needed
- Customer happy

Traditional quorum would have failed:
- Only 2 nodes available (A, C)
- Need 3 for N=3
- Write rejected
- Customer sees error
```

Eventual consistency:

```
Timeline:
T=0:           Write succeeds (A, C, D with hint)
T=0-5min:      B is down, but system works fine
T=5min:        B recovers
T=5min+10sec:  D detects B is back, transfers hint
T=5min+11sec:  B has the data, D deletes hint

Result: Eventually, all correct replicas have the data
```

Configuration example

```javascript
// High availability configuration
const config = {
  N: 3,  // Want 3 replicas
  W: 2,  // Only need 2 ACKs to succeed
  R: 2,  // Read from 2 nodes

  // Sloppy quorum allows expanding the preference list
  sloppy_quorum: true,

  // How far to expand when looking for healthy nodes
  max_extended_preference_list: 10,

  // How often to check for hint transfers
  hint_transfer_interval: "10s",

  // How long to keep trying to transfer hints
  hint_retention: "7d"
};
```

Real-World Impact

From Amazon's production experience:

During normal operation:
- Hinted handoff rarely triggered
- Most writes go to preferred nodes
- Hints database is mostly empty

During failures:

```
Scenario: 5% of nodes failing at any time (normal at Amazon's scale)

Without hinted handoff:
- Write success rate: 85%
- Customer impact: 15% of cart additions fail

With hinted handoff:
- Write success rate: 99.9%+
- Customer impact: Nearly zero
```

During datacenter failure:

```
Scenario: Entire datacenter unreachable (33% of nodes)

Without hinted handoff:
- Many keys would lose entire preference list
- Massive write failures
- System effectively down

With hinted handoff:
- Writes redirect to other datacenters
- Hints accumulate temporarily
- When datacenter recovers, hints transfer
- Zero customer-visible failures
```

The trade-off

Benefits:
- ✓ Maximum write availability
- ✓ Durability maintained during failures
- ✓ Automatic recovery when nodes return
- ✓ No manual intervention required

Costs:
- ✗ Temporary inconsistency (data not on “correct” nodes)
- ✗ Extra storage for the hints database
- ✗ Background bandwidth for hint transfers
- ✗ Slightly more complicated code
- ✗ If a substitute node (like D) fails before it can transfer its hint back to B, the number of true replicas drops below N until the situation resolves. This is an important edge case to understand in failure planning. Hinted handoff provides temporary durability, not permanent replication.
Amazon’s verdict: the availability benefits far outweigh the costs for e-commerce workloads.

Conflict Resolution: The Shopping Cart Problem

Let's talk about the most famous example from the paper: the shopping cart.

What is a conflict (and why does it happen)?

A conflict happens when two writes land on the same key at different nodes, without either write "knowing about" the other. Here is a concrete sequence of events that creates a conflict:

```
Timeline:
T=0: Customer logs in. Cart has {shoes} on all 3 nodes.
T=1: Network partition: Node1 can't talk to Node2.
T=2: Customer adds {jacket} on their laptop → goes to Node1.
     Cart on Node1: {shoes, jacket} ← Vector clock: [N1:2]
T=3: Customer adds {hat} on their phone → goes to Node2.
     Cart on Node2: {shoes, hat}    ← Vector clock: [N2:2]
T=4: Network heals. Node1 and Node2 compare notes.
     Node1 says: "I have version [N1:2]"
     Node2 says: "I have version [N2:2]"
     Neither clock dominates the other → CONFLICT!
```

Neither version is “wrong”: both represent real actions the customer took. Dynamo’s job is to detect this situation (via vector clocks) and surface both versions to the application, so the application can decide what to do.

What does the application do with a conflict?

This is the crucial part that the paper delegates to you: the application must resolve conflicts using business logic. Dynamo hands you all the concurrent versions; your code decides how to merge them.

For the shopping cart, Amazon chose a union merge: keep all items from all concurrent versions. The rationale is simple: losing an item from a customer’s cart (missing a sale) is worse than occasionally showing a stale item they already deleted.

```
Conflict versions:
Version A (from Node1): {shoes, jacket}
Version B (from Node2): {shoes, hat}

Merge strategy: union
Merged cart: {shoes, jacket, hat} ← All items preserved
```

Here is the actual reconciliation code:

```python
from __future__ import annotations
from dataclasses import dataclass, field


class VectorClock:
    def __init__(self, clock: dict[str, int] | None = None):
        self.clock: dict[str, int] = clock.copy() if clock else {}

    def merge(self, other: "VectorClock") -> "VectorClock":
        """Merged clock = max of each node's counter across both versions."""
        all_keys = set(self.clock) | set(other.clock)
        merged = {k: max(self.clock.get(k, 0), other.clock.get(k, 0))
                  for k in all_keys}
        return VectorClock(merged)

    def __repr__(self):
        return f"VectorClock({self.clock})"


@dataclass
class ShoppingCart:
    items: list[str] = field(default_factory=list)
    vector_clock: VectorClock = field(default_factory=VectorClock)

    @staticmethod
    def reconcile(carts: list["ShoppingCart"]) -> "ShoppingCart":
        if len(carts) == 1:
            return carts[0]  # No conflict, nothing to do

        # Merge strategy: union of all items (never lose additions).
        # This is Amazon's choice for shopping carts.
        # A different application might choose last-write-wins or something else.
        all_items: set[str] = set()
        merged_clock = VectorClock()
        for cart in carts:
            all_items.update(cart.items)  # Union: keep everything
            merged_clock = merged_clock.merge(cart.vector_clock)

        return ShoppingCart(items=sorted(all_items), vector_clock=merged_clock)


# Example conflict scenario
cart1 = ShoppingCart(items=["shoes", "jacket"], vector_clock=VectorClock({"N1": 2}))
cart2 = ShoppingCart(items=["shoes", "hat"], vector_clock=VectorClock({"N2": 2}))

# Dynamo detected a conflict and passes both versions to our reconcile()
reconciled = ShoppingCart.reconcile([cart1, cart2])
print(reconciled.items)  # ['hat', 'jacket', 'shoes'] — union!
```

The deletion problem (why this gets tricky)

The union strategy has one unpleasant edge:
deleted items can come back from the dead.

T=0: Cart: {shoes, hat}
T=1: Customer removes hat → Cart: {shoes}          Clock: [N1:3]
T=2: Network partition — Node2 still has old state
T=3: Some concurrent write to Node2                Clock: [N2:3]
T=4: Network heals → conflict detected
T=5: Union merge: {shoes} ∪ {shoes, hat} = {shoes, hat}

Result: Hat is BACK! Customer removed it, but it reappeared.

Amazon explicitly accepts this trade-off. A "ghost" item in a cart is a minor annoyance. Losing a cart addition during a Black Friday sale is lost revenue.

Engineering depth note: merge logic must be domain-specific and carefully designed. Adding items is commutative (order doesn't matter) and easy to merge. Removing items is not: a deletion on one concurrent branch can be silently ignored under a union-based merge. This is a deliberate trade-off in Dynamo's design, but it means the application has to reason carefully about add vs. remove semantics. If your data doesn't naturally support union merges (e.g., a counter, a user's address), you need a different strategy, such as CRDTs, last-write-wins with timestamps, or simply rejecting concurrent writes for that data type.
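Where ghost items are unacceptable, the remove side can be tracked explicitly as tombstones. Here is a minimal sketch of a two-phase set, one of the simplest CRDTs; the class name and semantics are illustrative, not from the Dynamo paper:

```python
from dataclasses import dataclass, field

@dataclass
class TwoPhaseCartSet:
    """Simplified two-phase set: adds and removes are each merged by union,
    so a concurrent branch cannot resurrect a deleted item.

    The cost of this simplicity: an item can never be re-added after removal.
    Real systems use OR-Sets (unique tags per add) to lift that restriction.
    """
    adds: set = field(default_factory=set)
    removes: set = field(default_factory=set)

    def add(self, item: str) -> None:
        self.adds.add(item)

    def remove(self, item: str) -> None:
        self.removes.add(item)  # Tombstone: remembered forever

    def items(self) -> set:
        return self.adds - self.removes

    def merge(self, other: "TwoPhaseCartSet") -> "TwoPhaseCartSet":
        # Union BOTH sets — the tombstone survives the merge.
        return TwoPhaseCartSet(self.adds | other.adds,
                               self.removes | other.removes)

# Replay the "ghost hat" timeline with tombstones:
a = TwoPhaseCartSet({"shoes", "hat"})
a.remove("hat")                         # customer removes hat via Node1
b = TwoPhaseCartSet({"shoes", "hat"})   # Node2 still holds the old state

merged = a.merge(b)
print(sorted(merged.items()))           # ['shoes'] — the hat stays gone
```

Because the tombstone for "hat" is merged alongside the additions, the stale replica can no longer resurrect the item, which is exactly the failure mode of the plain union merge above.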
Read and Write Flow

The diagrams above show the high-level flow, but let's walk through what actually happens step-by-step during a read and a write. Understanding this concretely will make the earlier concepts click.

Write Path

Step-by-step narration of a PUT request:

1. The client sends the request to any node (via a load balancer) or directly to the coordinator.
2. The coordinator is determined: it is the first node in the preference list for the key's hash position on the ring.
3. The vector clock is updated: the coordinator increments its own counter in the vector clock, creating a new version.
4. The coordinator writes locally, then fans the write out to the other N-1 nodes in the preference list concurrently.
5. The coordinator waits for W acknowledgments. It does NOT wait for all N, only for the first W to respond. The remaining nodes that have not yet responded will get the write eventually (or via hinted handoff if they are down).
6. Once W ACKs arrive, the coordinator returns 200 OK to the client. From the client's perspective, the write is done.

Key insight about the write path: the client gets a successful response as soon as W nodes acknowledge. The other (N-W) nodes receive the write asynchronously. All N replicas will have the data, just not necessarily at the same moment.

Read Path

Step-by-step narration of a GET request:

1. The client sends the request to the coordinator for that key.
2. The coordinator sends read requests to all N nodes on the preference list concurrently (not just R).
3. Wait for R responses. The coordinator returns as soon as R nodes have replied, without waiting for the slower ones.
4. Compare the versions returned. The coordinator checks all the vector clocks: if all versions are identical → return the single version immediately. If one version's clock dominates the others (it is causally "newer") → return that version. If versions are concurrent (neither clock dominates) → return all versions to the client, which must merge them.
Read repair happens in the background: if the coordinator noticed that a node returned a stale version, it sends the newest version to that node to bring it up to date.

Why does the client receive the conflict instead of the coordinator resolving it? Because Dynamo is a general-purpose storage engine. It does not know whether you are storing a shopping cart, a user profile, or a session token. The coordinator hands your application the raw concurrent versions along with the vector clock context, and your application does the right thing for your use case.

The vector clock context is the key to closing the loop: when the client writes the merged version back, it must include the context (the merged vector clock). This tells Dynamo that the new write has "seen" all the concurrent versions, so the conflict is resolved. Without it, the write would just be another concurrent write on top of the still-unresolved conflict.

Merkle Trees for Anti-Entropy

The problem: how do you know when replicas are out of sync?

When a node recovers from a failure, it may have missed some writes. After a network partition heals, two replicas can diverge. The brute-force approach would be: "Every hour, compare every key on Node A against Node B, and sync anything that's different." But at Amazon's scale, a single node might store hundreds of millions of keys. Comparing them all one by one would be so slow and bandwidth-intensive that it would interfere with normal traffic.

Dynamo uses Merkle trees to solve this efficiently. The basic idea: instead of comparing individual keys, compare hashes of groups of keys. If the hash matches, that whole group is identical—skip it. Only drill down into groups where hashes differ.

Important: Merkle tree sync is a background anti-entropy mechanism. It is not on the hot read/write path. Normal reads and writes use vector clocks and quorums for versioning.
Merkle trees are for the repair process that runs periodically in the background to catch any inconsistencies that slip through.

How a Merkle Tree Is Built

Each node builds a Merkle tree over its data, organized by key ranges:

Leaf nodes contain the hash of a small range of actual data keys (e.g., the hash of all values for keys k1, k2, k3).
Internal nodes contain the hash of their children's hashes.
The root is a single hash representing all the data on the node.

How Two Nodes Sync Using Merkle Trees

When Node A and Node B want to check if they're in sync:

Step 1: Compare root hashes. If they're the same, everything is identical. Done! (No network traffic for the data itself.)
Step 2: If roots differ, compare their left children. Same? Skip that entire half of the key space.
Step 3: Keep descending only into subtrees where hashes differ, until you reach the leaf nodes.
Step 4: Sync only the specific keys in the differing leaf nodes.

Example: Comparing two nodes

Node A root: abc789 ← differs from Node B!
Node B root: abc788

Compare left subtrees:
  Node A left: xyz123
  Node B left: xyz123 ← same! Skip entire left half.

Compare right subtrees:
  Node A right: def456
  Node B right: def457 ← differs! Go deeper.

Compare right-left subtree:
  Node A right-left: ghi111
  Node B right-left: ghi111 ← same! Skip.

Compare right-right subtree:
  Node A right-right: jkl222
  Node B right-right: jkl333 ← differs! These are leaves.

→ Sync only the keys in the right-right leaf range (e.g., k10, k11, k12)

Instead of comparing all 1 million keys, we compared 6 hashes and synced only 3 keys!
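The bottom-up construction described above can be sketched in a few lines. This is an illustrative toy, not Dynamo's actual code: the function names, the use of MD5, and the leaf size are my assumptions:

```python
import hashlib

def h(data: bytes) -> str:
    # MD5 for brevity, matching the simplified ring code later in this
    # article; a production system would use a stronger hash.
    return hashlib.md5(data).hexdigest()

def build_merkle(keys_values: list, leaf_size: int = 2) -> list:
    """Build a Merkle tree bottom-up over sorted key/value pairs.

    Returns the tree as a list of levels, leaves first, root last.
    Each leaf hashes a small range of keys; each internal node hashes
    the concatenation of its children's hashes.
    """
    pairs = sorted(keys_values)
    # Leaves: hash a small contiguous range of key=value pairs each.
    leaves = [
        h("".join(f"{k}={v}" for k, v in pairs[i:i + leaf_size]).encode())
        for i in range(0, len(pairs), leaf_size)
    ]
    levels = [leaves]
    # Internal levels: pairwise-hash the level below until one root remains.
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([
            h("".join(prev[i:i + 2]).encode())
            for i in range(0, len(prev), 2)
        ])
    return levels

data = [("k1", "a"), ("k2", "b"), ("k3", "c"), ("k4", "d")]
tree = build_merkle(data)
root = tree[-1][0]

# Changing a single value changes the root — so divergence between two
# replicas is detectable from one hash comparison.
data2 = [("k1", "a"), ("k2", "b"), ("k3", "c"), ("k4", "X")]
print(root != build_merkle(data2)[-1][0])  # True
```

The key property: any single-key change propagates up to the root, so two replicas can rule out an entire subtree with one hash comparison.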
Synchronization process in code:

def sync_replicas(node_a, node_b, key_range):
    """
    Efficiently sync two nodes using Merkle trees.

    Instead of comparing all keys, we compare hashes top-down.
    Only the ranges where hashes differ need actual key-level sync.
    """
    tree_a = node_a.get_merkle_tree(key_range)
    tree_b = node_b.get_merkle_tree(key_range)

    # Step 1: Compare root hashes first.
    # If they match, every key in this range is identical — nothing to do!
    if tree_a.root_hash == tree_b.root_hash:
        return  # Zero data transferred — full match!

    # Step 2: Recursively find differences by traversing top-down.
    # Only descend into subtrees where hashes differ.
    differences = []
    stack = [(tree_a.root, tree_b.root)]
    while stack:
        node_a_subtree, node_b_subtree = stack.pop()
        if node_a_subtree.hash == node_b_subtree.hash:
            continue  # This whole subtree matches — skip it!
        if node_a_subtree.is_leaf:
            # Found a differing leaf — these keys need syncing
            differences.extend(node_a_subtree.keys)
        else:
            # Not a leaf yet — recurse into children
            for child_a, child_b in zip(node_a_subtree.children, node_b_subtree.children):
                stack.append((child_a, child_b))

    # Step 3: Sync only the specific keys that differed at leaf level.
    # This might be a handful of keys, not millions.
    for key in differences:
        sync_key(node_a, node_b, key)

Why This Is Efficient

The power of Merkle trees is that the number of hash comparisons you need scales with the depth of the tree (logarithmic in the number of keys), not the number of keys themselves.

Node with 1,000,000 keys:

Naive approach:
  Compare 1,000,000 keys individually
  Cost: 1,000,000 comparisons

Merkle tree:
  Compare O(log N) hashes top-down
  Tree depth ≈ 20 levels
  Cost: 20 comparisons to find differences
  Then sync only the differing leaves (~few keys)

Speedup: ~50,000x fewer comparisons!

And critically, if two nodes are mostly in sync (which is almost always true in a healthy cluster), the root hashes often match completely and zero data needs to be transferred.
Membership and Failure Detection

Gossip-Based Membership

Dynamo uses a gossip protocol for membership management. Each node periodically exchanges membership information with random peers. There is no master node: all coordination is fully decentralized.

Key design points:

No single coordinator: every node maintains its own view of cluster membership. There's no central registry, so there's no single point of failure for membership data.

Failure suspicion vs. detection: Dynamo uses an accrual-based failure detector (similar to Phi Accrual). Rather than a binary "alive/dead" judgment, nodes maintain a suspicion level that rises the longer a peer is unresponsive. This avoids false positives from transient network hiccups.

Node A's view of Node B:
- Last heartbeat: 3 seconds ago  → Suspicion low    → Healthy
- Last heartbeat: 15 seconds ago → Suspicion rising → Likely slow/degraded
- Last heartbeat: 60 seconds ago → Suspicion high   → Treat as failed

Decentralized bootstrapping: new nodes contact a seed node to join, then their presence spreads to the rest of the cluster via gossip. Ring membership is eventually consistent: different nodes may momentarily hold slightly different views of the ring, which is acceptable.

Performance Characteristics: Real-World Numbers

The paper provides fascinating performance data. Let me break it down:

Latency Distribution

Metric              | Average | 99.9th Percentile
--------------------|---------|------------------
Read latency        | ~10ms   | ~200ms
Write latency       | ~15ms   | ~200ms

Key insight: the 99.9th percentile is ~20x the average!

Why the huge gap? The 99.9th percentile is affected by:

Garbage collection pauses
Disk I/O variations
Network jitter
Load imbalance

This is why Amazon's SLAs are specified at the 99.9th percentile, not the average.

Conflict Rates

From 24 hours of Amazon's production shopping-cart traffic (as reported in the Dynamo paper).
Note that these reflect Amazon's specific workload characteristics, not a universal baseline:

99.94%   - Saw exactly one version (no conflict)
0.00057% - Saw 2 versions
0.00047% - Saw 3 versions
0.00009% - Saw 4 versions

Takeaway: conflicts are rare in practice! They are most often caused by concurrent automated writers (bots), not by failures.

Partitioning Strategy Evolution

Dynamo evolved through three partitioning strategies. This evolution teaches important lessons:

Strategy 1: Random Tokens (Initial)

Problem: Random token assignment → uneven load
Problem: Adding nodes → expensive data scans
Problem: Can't easily snapshot the system

Operational lesson: random token assignment sounds elegant but is a nightmare in practice. Each node gets a random position on the ring, which means wildly different data ownership ranges and uneven load distribution.

Strategy 2: Equal-Sized Partitions + Random Tokens

Improvement: Decouples partitioning from placement
Problem: Still has load balancing issues

Strategy 3: Q/S Tokens Per Node — Equal-Sized Partitions + Deterministic Placement (Current)

What Q and S mean:

Q = the total number of fixed partitions the ring is divided into (e.g., 1024). Think of these as equal-sized, pre-cut slices of the hash space that never change shape.
S = the number of physical servers currently in the cluster (e.g., 8).
Q/S = how many of those fixed slices each server is responsible for (e.g., 1024 / 8 = 128 partitions per server).

The key shift from earlier strategies: the ring is now divided into Q fixed, equal-sized partitions, and those partitions are assigned evenly to servers. Servers no longer get random positions: they each own exactly Q/S partitions, distributed evenly around the ring.
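The even assignment and the minimal data movement when a server joins can be sketched in a few lines. This is an illustrative model under my own assumptions (the helper names and the greedy "steal from the most loaded" policy are not Dynamo's actual code):

```python
def assign(q: int, servers: list) -> dict:
    """Initial round-robin assignment of Q fixed partitions to S servers."""
    own = {s: [] for s in servers}
    for p in range(q):
        own[servers[p % len(servers)]].append(p)
    return own

def rebalance(ownership: dict, new_server: str) -> dict:
    """Minimal-movement rebalance when a server joins.

    The new server steals just enough partitions from the most-loaded
    servers to reach ~Q/S each. Only the stolen partitions move; every
    other partition stays exactly where it was.
    """
    q = sum(len(p) for p in ownership.values())
    target = q // (len(ownership) + 1)          # new Q/S
    ownership = {s: list(p) for s, p in ownership.items()}  # copy
    ownership[new_server] = []
    while len(ownership[new_server]) < target:
        donor = max(ownership, key=lambda s: len(ownership[s]))
        ownership[new_server].append(ownership[donor].pop())
    return ownership

before = assign(12, ["A", "B", "C"])
print({s: len(p) for s, p in before.items()})   # {'A': 4, 'B': 4, 'C': 4}

after = rebalance(before, "D")
print({s: len(p) for s, p in after.items()})    # {'A': 3, 'B': 3, 'C': 3, 'D': 3}

moved = len(after["D"])
print(moved)                                    # 3 — only 3 of 12 partitions move
```

Contrast this with naive modulo hashing, where adding a server would reshuffle the owner of most partitions rather than just Q/S of them.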
Example: Q=12 partitions, S=3 servers

Ring divided into 12 equal slices (each covers 30° of the 360° ring):

Partition 1:   0°– 30° → Server A
Partition 2:  30°– 60° → Server B
Partition 3:  60°– 90° → Server C
Partition 4:  90°–120° → Server A
Partition 5: 120°–150° → Server B
Partition 6: 150°–180° → Server C
...and so on, round-robin

Each server owns exactly Q/S = 12/3 = 4 partitions → perfectly balanced.

When a 4th server joins (S becomes 4):
New Q/S = 12/4 = 3 partitions per server.
Each existing server hands off 1 partition to the new server.
Only 3 out of 12 partitions move — the rest are untouched.

This evolution, from random tokens to fixed, equal-sized partitions with balanced ownership, is one of the most instructive operational lessons from Dynamo.

Comparing Dynamo with Modern Systems

System          | Consistency Model                | Use Case               | Dynamo Influence
----------------|----------------------------------|------------------------|------------------
Cassandra       | Tunable (N, R, W)                | Time-series, analytics | Direct descendant — heavily inspired by Dynamo, uses the same consistent hashing and quorum concepts
Riak            | Tunable, vector clocks           | Key-value store        | Closest faithful Dynamo implementation
Amazon DynamoDB | Eventually consistent by default | Managed NoSQL          | ⚠️ Not the same as Dynamo! A completely different system internally, with no vector clocks and much simpler conflict resolution. Shares the name and high-level inspiration only.
Voldemort        | Tunable               | LinkedIn's data store | Open-source Dynamo implementation
Google Spanner   | Linearizable          | Global SQL            | Opposite choice to Dynamo — prioritizes CP via TrueTime clock synchronization
Redis Cluster    | Eventually consistent | Caching, sessions     | Uses consistent hashing; much simpler conflict resolution

The DynamoDB confusion: many engineers conflate Amazon DynamoDB with the Dynamo paper. They are very different. DynamoDB is a managed service optimized for operational simplicity. It does not expose vector clocks, does not use the same partitioning scheme, and uses a proprietary consistency model. The paper is about the internal Dynamo storage engine that predates DynamoDB.

What Dynamo Does NOT Give You

Every senior engineering blog should be honest about limitations. Here is what Dynamo explicitly trades away:

No transactions: operations are single-key only. You cannot atomically update multiple keys.
No secondary indexes: you can only look up data by primary key (at least in the original design).
No joins: it is a key-value store. There is no query language.
No global ordering: events across different keys have no guaranteed ordering.
No linearizability: even at R=W=N, Dynamo does not provide linearizable reads.
No automatic conflict resolution: the system detects conflicts and hands them to the application; the application must resolve them. If your engineers don't understand this, you will have subtle data bugs.
Repair costs at scale: the anti-entropy process (Merkle tree reconciliation) is not free.
Vector clock growth: in high-churn write environments with many coordinators, vector clocks can grow large enough to require truncation, which introduces potential causality loss.

Understanding these limitations is crucial for successfully operating Dynamo-style systems in production.

Practical Implementation Examples

The following is deliberately simplified — no real networking, no persistence — but it faithfully models how vector clocks, the consistent hash ring, quorum reads/writes, and conflict detection interact.

Part 1: Vector Clock

The VectorClock class is the foundation of version tracking. Under the hood it is just a dictionary mapping node_id → counter. Two important operations:

increment(node) — bump our own counter when we write
dominates(other) — check if one clock is causally "after" another; if neither dominates, the writes were concurrent (conflict)

from __future__ import annotations
from dataclasses import dataclass, field
from typing import Optional

class VectorClock:
    """
    Tracks causality across distributed writes.
    A clock like {"nodeA": 2, "nodeB": 1} means:
    - nodeA has coordinated 2 writes
    - nodeB has coordinated 1 write
    - Any version with these counters has "seen" those writes
    """

    def __init__(self, clock: dict[str, int] | None = None):
        self.clock: dict[str, int] = clock.copy() if clock else {}

    def increment(self, node_id: str) -> "VectorClock":
        """Return a new clock with node_id's counter bumped by 1."""
        new_clock = self.clock.copy()
        new_clock[node_id] = new_clock.get(node_id, 0) + 1
        return VectorClock(new_clock)

    def dominates(self, other: "VectorClock") -> bool:
        """
        Returns True if self is causally AFTER other.

        self dominates other when:
        - Every counter in self is >= the same counter in other, AND
        - At least one counter in self is strictly greater.

        Meaning: self has seen everything other has seen, plus more.
        """
        all_keys = set(self.clock) | set(other.clock)
        at_least_one_greater = False
        for key in all_keys:
            self_val = self.clock.get(key, 0)
            other_val = other.clock.get(key, 0)
            if self_val < other_val:
                return False  # self is missing something other has seen
            if self_val > other_val:
                at_least_one_greater = True
        return at_least_one_greater

    def merge(self, other: "VectorClock") -> "VectorClock":
        """
        Merge two clocks by taking the max of each counter.

        Used after resolving a conflict to produce a new clock that
        has "seen" everything both conflicting versions saw.
        """
        all_keys = set(self.clock) | set(other.clock)
        merged = {k: max(self.clock.get(k, 0), other.clock.get(k, 0)) for k in all_keys}
        return VectorClock(merged)

    def __repr__(self):
        return f"VectorClock({self.clock})"

Part 2: Versioned Value

Every value stored in Dynamo is wrapped with its vector clock. This pairing is what lets the coordinator compare versions during reads and detect conflicts.

@dataclass
class VersionedValue:
    """
    A value paired with its causal history (vector clock).

    When a client reads, they get back a VersionedValue.
    When they write an update, they must include the context (the
    vector clock they read) so Dynamo knows what version they're
    building on top of.
    """
    value: object
    vector_clock: VectorClock

    def __repr__(self):
        return f"VersionedValue(value={self.value!r}, clock={self.vector_clock})"

Part 3: Simulated Node

In real Dynamo, every node is a separate process. Here we simulate them as in-memory objects. A node can be marked down to simulate failure.

class DynamoNode:
    """
    Simulates a single Dynamo storage node.

    In production this would be a separate server with disk storage.
    Here it's an in-memory dict so we can demo the logic without networking.
    """

    def __init__(self, node_id: str, token: int):
        self.node_id = node_id
        self.token = token  # Position on the consistent hash ring
        self.storage: dict[str, list[VersionedValue]] = {}
        self.down = False  # Toggle to simulate node failures

    def write(self, key: str, versioned_value: VersionedValue) -> bool:
        """
        Store a versioned value. Returns False if the node is down.

        We store a LIST of versions per key, because a node might hold
        multiple concurrent (conflicting) versions until they are
        resolved by the application.
        """
        if self.down:
            return False
        # In a real node this would be written to disk (e.g. BerkeleyDB)
        self.storage[key] = [versioned_value]
        return True

    def read(self, key: str) -> list[VersionedValue] | None:
        """
        Return all versions of a key. Returns None if the node is down.
        A healthy node with no data for the key returns an empty list.
        """
        if self.down:
            return None
        return self.storage.get(key, [])

    def __repr__(self):
        status = "DOWN" if self.down else f"token={self.token}"
        return f"DynamoNode({self.node_id}, {status})"

Part 4: Consistent Hash Ring

We sort nodes by their token (position) and use a clockwise walk to find the coordinator and the preference list for any key.

import hashlib

class ConsistentHashRing:
    """
    Maps any key to an ordered list of N nodes (the preference list).
    Nodes are placed at fixed positions (tokens) on a conceptual ring
    from 0 to 2^32. A key hashes to a position, then walks clockwise
    to find its nodes.

    This means adding/removing one node only rebalances ~1/N of keys,
    rather than reshuffling everything like modulo hashing would.
    """

    def __init__(self, nodes: list[DynamoNode]):
        # Sort nodes by token so we can do clockwise lookup efficiently
        self.nodes = sorted(nodes, key=lambda n: n.token)

    def _hash(self, key: str) -> int:
        """Consistent hash of a key into the ring's token space."""
        # Use MD5 for a simple, evenly distributed hash.
        # Real Dynamo uses a more sophisticated hash (e.g., SHA-1).
        digest = hashlib.md5(key.encode()).hexdigest()
        return int(digest, 16) % (2**32)

    def get_preference_list(self, key: str, n: int) -> list[DynamoNode]:
        """
        Return the first N nodes clockwise from key's hash position.
        These are the nodes responsible for storing this key.

        The first node in the list is the coordinator — it receives
        the client request and fans out to the others.
        """
        if not self.nodes:
            return []

        key_hash = self._hash(key)

        # Find the first node whose token is >= key's hash (clockwise)
        start_idx = 0
        for i, node in enumerate(self.nodes):
            if node.token >= key_hash:
                start_idx = i
                break
        # If key_hash is greater than all tokens, wrap around to node 0
        else:
            start_idx = 0

        # Walk clockwise, collecting N unique nodes
        preference_list = []
        for i in range(len(self.nodes)):
            idx = (start_idx + i) % len(self.nodes)
            preference_list.append(self.nodes[idx])
            if len(preference_list) == n:
                break
        return preference_list

Part 5: The Dynamo Coordinator

This is the core of the system: the logic that handles client requests, fans out to replicas, waits for quorum, and detects conflicts.

class SimplifiedDynamo:
    """
    Coordinates reads and writes across a cluster of DynamoNodes.

    Any node can act as coordinator for any request — there's no
    dedicated master.
The coordinator is simply whichever node receives the client request (or the first node in the preference list, if using partition-aware routing). Configuration: N = total replicas per key R = minimum nodes that must respond to a read (read quorum) W = minimum nodes that must acknowledge a write (write quorum) """ def __init__(self, nodes: list[DynamoNode], N: int = 3, R: int = 2, W: int = 2): self.N = N self.R = R self.W = W self.ring = ConsistentHashRing(nodes) # ------------------------------------------------------------------ # # WRITE # # ------------------------------------------------------------------ # def put(self, key: str, value: object, context: VectorClock | None = None) -> VectorClock: """ Write a key-value pair to N replicas, wait for W ACKs. The 'context' is the vector clock from a previous read. Always pass context when updating an existing key — it tells Dynamo which version you're building on top of, so it can detect whether your write is concurrent with anything else. Returns the new vector clock, which the caller should store and pass back on future writes to this key. Raises: RuntimeError if fewer than W nodes acknowledged. """ preference_list = self.ring.get_preference_list(key, self.N) if not preference_list: raise RuntimeError("No nodes available") # The coordinator is always the first node in the preference list. coordinator = preference_list[0] # Increment the coordinator's counter in the vector clock. # If no context was provided (brand new key), start a fresh clock. base_clock = context if context is not None else VectorClock() new_clock = base_clock.increment(coordinator.node_id) versioned = VersionedValue(value=value, vector_clock=new_clock) # Fan out to all N replicas. # In a real system these would be concurrent RPC calls. # Here we call them sequentially for simplicity. ack_count = 0 for node in preference_list: success = node.write(key, versioned) if success: ack_count += 1 # Only need W ACKs to declare success. 
# The remaining replicas are updated asynchronously (or via # hinted handoff if they were down). if ack_count < self.W: raise RuntimeError( f"Write quorum not met: got {ack_count} ACKs, needed {self.W}" ) print(f"[PUT] key={key!r} value={value!r} clock={new_clock} " f"({ack_count}/{self.N} nodes wrote)") return new_clock # ------------------------------------------------------------------ # # READ # # ------------------------------------------------------------------ # def get(self, key: str) -> list[VersionedValue]: """ Read a key from N replicas, wait for R responses, reconcile. Returns a LIST of VersionedValues: - Length 1 → clean read, no conflict - Length >1 → concurrent versions detected; application must merge After reading, the caller should: 1. If no conflict: use the single value normally. 2. If conflict: merge the values using application logic, then call put() with the merged value and the merged vector clock as context. This "closes" the conflict. Read repair happens in the background: any replica that returned a stale version is silently updated with the latest version. """ preference_list = self.ring.get_preference_list(key, self.N) # Collect responses from all N nodes all_versions: list[VersionedValue] = [] responding_nodes: list[tuple[DynamoNode, list[VersionedValue]]] = [] for node in preference_list: result = node.read(key) if result is not None: # None means the node is down all_versions.extend(result) responding_nodes.append((node, result)) if len(responding_nodes) < self.R: raise RuntimeError( f"Read quorum not met: got {len(responding_nodes)} responses, needed {self.R}" ) # Reconcile: discard any version that is strictly dominated # (i.e., is a causal ancestor of) another version. # What remains is the set of concurrent versions. reconciled = self._reconcile(all_versions) # Background read repair: if any node returned something older # than the reconciled result, send it the latest version. 
# (Simplified: only meaningful when there's a single winner.) if len(reconciled) == 1: latest = reconciled[0] for node, versions in responding_nodes: if not versions or versions[0].vector_clock != latest.vector_clock: node.write(key, latest) # Repair silently in background status = "clean" if len(reconciled) == 1 else f"CONFLICT ({len(reconciled)} versions)" print(f"[GET] key={key!r} status={status} " f"({len(responding_nodes)}/{self.N} nodes responded)") return reconciled # ------------------------------------------------------------------ # # INTERNAL: VERSION RECONCILIATION # # ------------------------------------------------------------------ # def _reconcile(self, versions: list[VersionedValue]) -> list[VersionedValue]: """ Remove any version that is a causal ancestor of another version. If version A's clock is dominated by version B's clock, then B is strictly newer — A adds no new information and can be dropped. Whatever remains after pruning are CONCURRENT versions: writes that happened without either "knowing about" the other. The application must merge these using domain-specific logic. Example: versions = [clock={A:1}, clock={A:2}, clock={B:1}] {A:2} dominates {A:1} → drop {A:1} {A:2} and {B:1} are concurrent → both survive result = [{A:2}, {B:1}] ← conflict! 
        application must merge
        """
        dominated = set()
        for i, v1 in enumerate(versions):
            for j, v2 in enumerate(versions):
                if i != j and v2.vector_clock.dominates(v1.vector_clock):
                    dominated.add(i)  # v1 is an ancestor of v2, discard v1

        survivors = [v for i, v in enumerate(versions) if i not in dominated]

        # De-duplicate: identical clocks from different replicas are the same version
        seen_clocks: list[VectorClock] = []
        unique: list[VersionedValue] = []
        for v in survivors:
            if not any(v.vector_clock.clock == s.clock for s in seen_clocks):
                unique.append(v)
                seen_clocks.append(v.vector_clock)

        return unique if unique else versions

Part 6: Putting It All Together — A Demo

Let's run through a complete scenario: a normal write/read, then a simulated conflict where two nodes diverge and the application has to merge them.

def demo():
    # ── Setup ────────────────────────────────────────────────────────── #
    # Five nodes placed at evenly spaced positions on the hash ring.
    # In a real cluster these would span multiple datacenters.
    nodes = [
        DynamoNode("node-A", token=100),
        DynamoNode("node-B", token=300),
        DynamoNode("node-C", token=500),
        DynamoNode("node-D", token=700),
        DynamoNode("node-E", token=900),
    ]
    dynamo = SimplifiedDynamo(nodes, N=3, R=2, W=2)

    print("=" * 55)
    print("SCENARIO 1: Normal write and read (no conflict)")
    print("=" * 55)

    # Write the initial shopping cart
    ctx = dynamo.put("cart:user-42", {"items": ["shoes"]})

    # Read it back — should be a clean single version
    versions = dynamo.get("cart:user-42")
    print(f"Read result: {versions[0].value}\n")

    # Update the cart, passing the context from our earlier read.
    # The context tells Dynamo "this write builds on top of clock ctx".
ctx = dynamo.put("cart:user-42", {"items": ["shoes", "jacket"]}, context=ctx) versions = dynamo.get("cart:user-42") print(f"After update: {versions[0].value}\n") print("=" * 55) print("SCENARIO 2: Simulated conflict — two concurrent writes") print("=" * 55) # Write the base version base_ctx = dynamo.put("cart:user-99", {"items": ["hat"]}) # Now simulate a network partition: # node-A and node-B can't talk to each other. # We model this by writing directly to individual nodes. pref_list = dynamo.ring.get_preference_list("cart:user-99", 3) node_1, node_2, node_3 = pref_list[0], pref_list[1], pref_list[2] # Write 1: customer adds "scarf" via node_1 (e.g., their laptop) clock_1 = base_ctx.increment(node_1.node_id) node_1.write("cart:user-99", VersionedValue({"items": ["hat", "scarf"]}, clock_1)) # Write 2: customer adds "gloves" via node_2 (e.g., their phone) # This write also descends from base_ctx, not from clock_1. # Neither write knows about the other → they are concurrent. clock_2 = base_ctx.increment(node_2.node_id) node_2.write("cart:user-99", VersionedValue({"items": ["hat", "gloves"]}, clock_2)) # Read — coordinator sees two concurrent versions and surfaces the conflict versions = dynamo.get("cart:user-99") if len(versions) > 1: print(f"\nConflict detected! {len(versions)} concurrent versions:") for i, v in enumerate(versions): print(f" Version {i+1}: {v.value} clock={v.vector_clock}") # Application-level resolution: union merge (Amazon's shopping cart strategy) # Merge items: take the union so no addition is lost all_items = set() merged_clock = versions[0].vector_clock for v in versions: all_items.update(v.value["items"]) merged_clock = merged_clock.merge(v.vector_clock) merged_value = {"items": sorted(all_items)} print(f"\nMerged result: {merged_value}") # Write the resolved version back with the merged clock as context. # This "closes" the conflict — future reads will see a single version. 
final_ctx = dynamo.put("cart:user-99", merged_value, context=merged_clock) versions = dynamo.get("cart:user-99") print(f"\nAfter resolution: {versions[0].value}") assert len(versions) == 1, "Should be a single version after merge" if __name__ == "__main__": demo() Expected output: ======================================================= SCENARIO 1: Normal write and read (no conflict) ======================================================= [PUT] key='cart:user-42' value={'items': ['shoes']} clock=VectorClock({'node-A': 1}) (3/3 nodes wrote) [GET] key='cart:user-42' status=clean (3/3 nodes responded) Read result: {'items': ['shoes']} [PUT] key='cart:user-42' value={'items': ['shoes', 'jacket']} clock=VectorClock({'node-A': 2}) (3/3 nodes wrote) [GET] key='cart:user-42' status=clean (3/3 nodes responded) After update: {'items': ['shoes', 'jacket']} ======================================================= SCENARIO 2: Simulated conflict — two concurrent writes ======================================================= [PUT] key='cart:user-99' value={'items': ['hat']} clock=VectorClock({'node-A': 1}) (3/3 nodes wrote) [GET] key='cart:user-99' status=CONFLICT (2 versions) (3/3 nodes responded) Conflict detected! 2 concurrent versions: Version 1: {'items': ['hat', 'scarf']} clock=VectorClock({'node-A': 2}) Version 2: {'items': ['hat', 'gloves']} clock=VectorClock({'node-A': 1, 'node-B': 1}) Merged result: {'items': ['gloves', 'hat', 'scarf']} [PUT] key='cart:user-99' value={'items': ['gloves', 'hat', 'scarf']} ... (3/3 nodes wrote) [GET] key='cart:user-99' status=clean (3/3 nodes responded) After resolution: {'items': ['gloves', 'hat', 'scarf']} I scenario 2 identificerer koordinatoren korrekt, at og De er hverken lige eller i et dominansforhold - hverken er en forfader til den anden - så begge er overflader som samtidig. 
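That concurrency verdict follows directly from the vector clock partial order. Below is a minimal, self-contained sketch of the dominance rule using plain dicts; the `dominates` helper is an illustrative stand-in for the article's VectorClock method, not its actual code:

```python
# Minimal sketch of vector clock dominance on plain dicts.
# NOTE: illustrative helper, not the article's VectorClock implementation.
def dominates(a: dict, b: dict) -> bool:
    """True if clock `a` has seen everything `b` has, and strictly more."""
    # a must be at least as large as b on every node counter...
    if any(a.get(node, 0) < count for node, count in b.items()):
        return False
    # ...and strictly ahead on at least one counter
    return any(count > b.get(node, 0) for node, count in a.items())

clock_1 = {"node-A": 2}               # the "hat + scarf" write
clock_2 = {"node-A": 1, "node-B": 1}  # the "hat + gloves" write

print(dominates(clock_1, clock_2))  # False: clock_1 never saw node-B's write
print(dominates(clock_2, clock_1))  # False: clock_2 only saw node-A's first write
print(dominates({"node-A": 2}, {"node-A": 1}))  # True: plain ancestor case
```

Because neither of the first two calls returns True, the coordinator keeps both versions and hands the merge decision to the application.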
Key Lessons in System Design

After years of working with Dynamo-inspired systems, here are my most important takeaways:

1. Always-On Beats Strongly-Consistent

For user-facing applications, availability almost always wins. Users will tolerate seeing slightly stale data. They will not tolerate "Service Unavailable".

2. Application-Level Reconciliation is Powerful

Don't be afraid to push conflict resolution up into the application. The application understands the business logic and can make smarter decisions than the database ever could.

3. Tunable Consistency is Essential

Shopping cart additions need high availability (W=1). Financial transactions need stronger guarantees (W=N).

4. The 99.9th Percentile Matters More Than Average

Focus your optimization efforts on tail latencies. That is what users actually experience at peak times.

5. Gossip Protocols Scale Beautifully

Decentralized coordination via gossip eliminates single points of failure and scales to thousands of nodes.

When NOT to Use Dynamo-Style Systems

Be honest about the trade-offs. Don't use this approach when:

- Strong consistency is required (financial transactions, inventory management)
- Complex queries are required (reporting, analytics, aggregations)
- Transactions span multiple items (Dynamo offers single-key operations only)
- Your team cannot handle eventual consistency (if developers don't understand vector clocks and conflict resolution, you will have problems)

The Conclusion

Dynamo represents a fundamental shift in how we think about distributed systems. By embracing eventual consistency and offering tunable trade-offs, it makes it possible to build systems that scale to massive sizes while maintaining high availability. Whether you use Cassandra, Riak, or DynamoDB, you are benefiting from the insights first published in this paper.
As engineers, our job is to understand these trade-offs deeply and apply them appropriately. Dynamo gives us a powerful tool, but like any tool, it is only as good as our understanding of when and how to use it.

Further Reading

- Original Dynamo paper: SOSP 2007
- Werner Vogels' blog: All Things Distributed
- Cassandra documentation: see how these concepts are implemented in practice
- "Designing Data-Intensive Applications" by Martin Kleppmann - Chapter 5 on replication

Appendix: Design Problems and Approaches

Three open-ended problems that come up in system design interviews and real engineering work.

Problem 1: Conflict Resolution for a Collaborative Document Editor

The problem: You are building something like Google Docs backed by a Dynamo-style store. Two users edit the same paragraph at the same time.

Why shopping cart union doesn't work here: The shopping cart strategy (union of all items) is only safe because adding items is commutative: {A} ∪ {B} = {B} ∪ {A}. If user A deletes a sentence while user B edits the middle of it, the union of their changes is meaningless or contradictory.

The right approach: Operational Transformation (OT) or CRDTs. The industry solution is to represent the document not as a blob of text but as a sequence of operations, and to transform concurrent operations so they can both be applied without conflict:

```
User A's operation: delete(position=50, length=20)
User B's operation: insert(position=60, text="new sentence")

Without OT: B's insert position (60) is now wrong because A deleted 20 chars.
With OT:    Transform B's operation against A's:
            B's insert position shifts to 40 (60 - 20).
            Both operations now apply cleanly.
```

The conflict resolution strategy at the Dynamo layer would be:

1. Store operations (not full document snapshots) as the value for each key.
2. On conflict, collect all concurrent operation lists from each version.
3. Apply OT to merge them into a single coherent operation log.
4. Write the merged log back with the merged vector clock as context.

What to store in Dynamo: The per-document-segment operation log, not the rendered text. This makes merges deterministic and lossless.

Real-world reference: Production collaborative editors use either OT or a variant of CRDTs (Conflict-free Replicated Data Types), data structures that are mathematically guaranteed to merge without conflict regardless of operation order.

Problem 2: Choosing N, R, W for Different Use Cases

The problem: Which configuration would you choose for (a) a session store, (b) a product catalog, (c) user profiles?

The right way to think about this: identify the failure mode that costs more - a missed write (data loss) or a rejected write (unavailability).

Session store - prioritize availability

Sessions are ephemeral and user-specific. If a user's session is briefly stale or lost, they get logged out and log back in.

```
N=3, R=1, W=1

Rationale:
- W=1: Accept session writes even during heavy failures.
       A user can't log in if their session write is rejected.
- R=1: Read from any single node. Stale session data is harmless.
- N=3: Still replicate to 3 nodes for basic durability.

Trade-off accepted: Stale session reads are possible but inconsequential.
```

Product catalog - prioritize read performance and consistency

Product data is written rarely (by ops teams) but read millions of times a day.

```
N=3, R=2, W=3

Rationale:
- W=3: All replicas must confirm a catalog update before it's live.
       A price change half-published is worse than a brief write delay.
- R=2: Read quorum overlap with W=3 guarantees fresh data.
       Acceptable: catalog writes are rare, so write latency doesn't matter.
- N=3: Standard replication for durability.

Trade-off accepted: Writes are slow and fail if any node is down.
Acceptable because catalog updates are infrequent.
```

User profiles - balanced

Profile data (name, email, preferences) is moderately important.
A stale profile is annoying but not dangerous. A rejected update (e.g., the user cannot update their email address) is a real problem.

```
N=3, R=2, W=2

Rationale:
- The classic balanced configuration.
- R + W = 4 > N = 3, so quorums overlap: reads will see the latest write.
- Tolerates 1 node failure for both reads and writes.
- Appropriate for data that matters but doesn't require strict consistency.

Trade-off accepted: A second simultaneous node failure will cause errors.
Acceptable for non-critical user data.
```

Decision framework summary:

| Priority | R | W | When to use |
| --- | --- | --- | --- |
| Max availability | 1 | 1 | Sessions, ephemeral state, click tracking |
| Balanced | 2 | 2 | User profiles, preferences, soft state |
| Consistent reads | 2 | 3 | Catalogs, config, rarely-written reference data |
| Highest consistency | 3 | 3 | Anywhere you need R+W > N with zero tolerance for stale reads (still not linearizable) |

Problem 3: Testing a Dynamo-Style System Under Partition Scenarios

The problem: How do you verify that your system actually behaves correctly when nodes fail and partitions occur? This is one of the hardest problems in distributed systems testing, because bugs only appear in specific interleavings of concurrent events that are difficult to reproduce deterministically.

Layer 1: Unit tests for the logic in isolation

Before testing distributed behavior, verify the building blocks independently. Vector clock comparison logic, conflict detection, and reconciliation functions can all be tested with pure unit tests - no network required.
```python
def test_concurrent_clocks_detected_as_conflict():
    clock_a = VectorClock({"node-A": 2})
    clock_b = VectorClock({"node-B": 2})
    assert not clock_a.dominates(clock_b)
    assert not clock_b.dominates(clock_a)
    # Both survive reconciliation → conflict correctly detected

def test_ancestor_clock_is_discarded():
    old_clock = VectorClock({"node-A": 1})
    new_clock = VectorClock({"node-A": 3})
    assert new_clock.dominates(old_clock)
    # old_clock should be pruned during reconciliation
```

Layer 2: Deterministic fault injection

Instead of hoping that failures happen in the right order during load tests, inject them deliberately and repeatably. Setting node.down = True is a simple version of this. In production systems, tools like Jepsen or Chaos Monkey do it at the infrastructure level.

Key scenarios to test:

```
Scenario A: Write succeeds with W=2, third replica is down.
  → Verify: the data is readable after the down node recovers.
  → Verify: no data loss occurred.

Scenario B: Two nodes accept concurrent writes to the same key.
  → Verify: the next read surfaces exactly 2 conflicting versions.
  → Verify: after the application writes a merged version, the next read is clean.

Scenario C: Node goes down mid-write (wrote to W-1 nodes).
  → Verify: the write is correctly rejected (RuntimeError).
  → Verify: no partial writes are visible to readers.

Scenario D: All N nodes recover after a full partition.
  → Verify: no data was lost across the cluster.
  → Verify: vector clocks are still meaningful (no spurious conflicts).
```

Layer 3: Property-based testing

Instead of writing individual test cases, define invariants that must always hold, then generate thousands of random operation sequences to try to break them:

```python
# Invariant: after any sequence of writes and merges, a final get()
# should always return exactly one version (no unresolved conflicts).

# Invariant: a value written with a context derived from a previous read
# should never produce a conflict with that read's version
# (it should dominate it).

# Invariant: if R + W > N, a value written successfully should always
# be visible in the next read (read-your-writes, absent concurrent writes).
```

Tools like Hypothesis (Python) let you express these invariants and automatically search for counterexamples.

Layer 4: Linearizability checkers

For the highest assurance, record every operation's start time, end time, and result during a fault-injection test, then feed the history to a linearizability checker such as Knossos. It will tell you whether every observed history is consistent with some correct sequential execution - even for an eventually consistent system operating within its stated guarantees.
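To make the property-based layer concrete without pulling in a framework, the sketch below drives a toy model of the shopping cart union-merge with seeded random operation sequences and asserts a no-lost-writes invariant after each one. The names (union_merge, run_random_case) are illustrative; a real suite would express the same invariant over the article's SimplifiedDynamo using Hypothesis.

```python
import random

def union_merge(versions):
    """Resolve concurrent cart versions by taking the union of their items."""
    merged = set()
    for items in versions:
        merged |= items
    return merged

def run_random_case(rng):
    # Two replicas diverge from a common base during a simulated partition;
    # each random write lands on one side only.
    base = {"item-0"}
    replica_a, replica_b = set(base), set(base)
    accepted = set(base)  # every write the system acknowledged
    for _ in range(rng.randint(1, 20)):
        item = f"item-{rng.randint(1, 50)}"
        rng.choice([replica_a, replica_b]).add(item)
        accepted.add(item)
    # Invariant: union-merge reconciliation never loses an accepted write.
    assert union_merge([replica_a, replica_b]) == accepted

rng = random.Random(42)  # fixed seed keeps failing cases reproducible
for _ in range(1000):
    run_random_case(rng)
print("1000 randomized cases passed: no accepted write was lost")
```

Note that this only checks the add-only cart model; as Problem 1 discusses, a union merge would not be a valid invariant once deletions enter the picture.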