<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>System Design on nSkillHub</title>
    <link>https://nskillhub.com/categories/System-Design/</link>
    <description>Recent content in System Design on nSkillHub</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <copyright>© 2026 Lakshay Jawa</copyright>
    <lastBuildDate>Tue, 28 Apr 2026 10:27:07 +0530</lastBuildDate>
    <atom:link href="https://nskillhub.com/categories/System-Design/index.xml" rel="self" type="application/rss+xml" />
    
    <item>
      <title>Dropbox / Google Drive — Distributed File Sync at Scale</title>
      <link>https://nskillhub.com/system-design/classic/dropbox-google-drive-file-sync/</link>
      <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>https://nskillhub.com/system-design/classic/dropbox-google-drive-file-sync/</guid>
      <description>&lt;h2&gt;1. Hook&lt;/h2&gt;&#xA;&lt;p&gt;In 2011, Dropbox engineers discovered that roughly &lt;strong&gt;70% of all uploaded data was already on their servers&lt;/strong&gt; — users syncing the same PDFs, stock photos, and installer packages. Switching from file-level to block-level deduplication immediately cut bandwidth costs by more than two-thirds. That insight defines the whole discipline of cloud file sync: the hard problems are not storage capacity or even bandwidth, but &lt;strong&gt;delta detection, deduplication, conflict resolution, and consistency across an arbitrarily large fleet of devices&lt;/strong&gt;. Google Drive went further, embedding a collaborative editing layer (Docs, Sheets, Slides) on top of the same blob store. Today both systems handle hundreds of millions of users, billions of files, and near-real-time sync across mobile, desktop, and web clients — often over flaky connections.&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>Google Docs — Real-Time Collaborative Editing at Scale</title>
      <link>https://nskillhub.com/system-design/classic/google-docs-real-time-collaborative-editing/</link>
      <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>https://nskillhub.com/system-design/classic/google-docs-real-time-collaborative-editing/</guid>
      <description>&lt;h2&gt;1. Hook&lt;/h2&gt;&#xA;&lt;p&gt;In 2006, Google acquired Writely and within two years turned it into Google Docs — the first mainstream product that let multiple people type in the same document &lt;em&gt;at the same time&lt;/em&gt; without locking or &amp;ldquo;check-out&amp;rdquo; workflows. The core problem sounds deceptively simple: if Alice deletes character 5 while Bob inserts a character at position 4, whose version wins? The naïve answer (&amp;ldquo;last write wins&amp;rdquo;) produces corrupted documents. The real answer — &lt;strong&gt;Operational Transformation (OT)&lt;/strong&gt; — is the algorithm that makes collaborative editing feel like magic, and it is one of the most subtle distributed-systems problems you will encounter in an interview. Every major collaborative editor (Google Docs, Notion, Figma, Microsoft 365) is built on either OT or its younger sibling CRDT (Conflict-free Replicated Data Type). Understanding which to use, and why, separates candidates who have thought deeply about consistency from those who have memorised buzzwords.&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>Search Engine — Google-Scale Crawl, Index, Rank, and Serve</title>
      <link>https://nskillhub.com/system-design/classic/search-engine-google-scale/</link>
      <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>https://nskillhub.com/system-design/classic/search-engine-google-scale/</guid>
      <description>&lt;h2&gt;1. Hook&lt;/h2&gt;&#xA;&lt;p&gt;Google processes &lt;strong&gt;8.5 billion searches per day&lt;/strong&gt; — roughly 99,000 queries per second on average — and returns results in under 200 ms. Behind that sub-second response is a pipeline that never fully stops: a web crawler perpetually downloading ~20 billion pages, a MapReduce-scale indexing system converting raw HTML into a compressed inverted index, a multi-stage ranking pipeline that scores hundreds of signals in milliseconds, and a serving layer that shards the index across thousands of machines so no single query touches more than a fraction of the corpus. Building a search engine from scratch is perhaps the canonical &amp;ldquo;design a distributed system&amp;rdquo; problem because it combines almost every hard problem in the field: distributed crawling, large-scale data processing, near-real-time index updates, low-latency high-throughput query serving, and machine learning (ML)-based ranking. Even a simplified version at 1/1000th of Google&amp;rsquo;s scale teaches you more about distributed systems than almost any other exercise.&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>Netflix — Video Streaming Platform</title>
      <link>https://nskillhub.com/system-design/classic/netflix-video-streaming/</link>
      <pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>https://nskillhub.com/system-design/classic/netflix-video-streaming/</guid>
      <description>&lt;h2&gt;1. Hook&lt;/h2&gt;&#xA;&lt;p&gt;At peak, Netflix accounts for &lt;strong&gt;15% of global internet downstream traffic&lt;/strong&gt;, flowing to subscribers in 190 countries. What makes this feasible is not raw bandwidth: it is a carefully engineered pipeline that converts every raw title into over &lt;strong&gt;1,200 encoded video files&lt;/strong&gt; before a single subscriber presses play, then serves those files from ISP-embedded appliances called Open Connect Appliances (OCAs) rather than from a traditional cloud CDN. The streaming experience you see — where the picture quality silently improves while you watch — is ABR (Adaptive Bitrate) streaming dynamically switching between those pre-encoded variants based on your network conditions. Behind the personalised rows on the homepage sits a recommendation engine that runs 45+ algorithms to surface the title you are most likely to start watching in the next 30 seconds. Each of these subsystems operates at a scale where a 0.1% drop in streaming reliability translates to 250,000 subscribers unable to watch at that moment.&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>Uber / Ride-Sharing System</title>
      <link>https://nskillhub.com/system-design/classic/uber-ride-sharing/</link>
      <pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>https://nskillhub.com/system-design/classic/uber-ride-sharing/</guid>
      <description>&lt;h2&gt;1. Hook&lt;/h2&gt;&#xA;&lt;p&gt;Every time someone taps &amp;ldquo;Request Ride&amp;rdquo; on Uber, the platform must answer a deceptively hard spatial query in under a second: &lt;em&gt;which of the thousands of nearby drivers is the best match for this rider, given their location, heading, vehicle type, and current workload?&lt;/em&gt; Uber processes &lt;strong&gt;25 million trips per day&lt;/strong&gt; across 70+ countries, with peak demand spikes during commute hours, concerts, and bad weather — surges that often hit the same city blocks at the same time.&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>YouTube — Video Upload, Transcoding &amp; Global Delivery</title>
      <link>https://nskillhub.com/system-design/classic/youtube/</link>
      <pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>https://nskillhub.com/system-design/classic/youtube/</guid>
      <description>&lt;h2&gt;1. Hook&lt;/h2&gt;&#xA;&lt;p&gt;Every minute, creators upload &lt;strong&gt;500 hours of video&lt;/strong&gt; to YouTube — roughly 720,000 hours of raw footage per day that must be validated, transcoded into 10+ adaptive formats, and made globally available before viewers ever click play. Unlike Netflix (a closed catalogue of licensed titles transcoded offline), YouTube is a live upload platform: a creator in Lagos hits &amp;ldquo;publish&amp;rdquo; and expects global playback within minutes. The upload pipeline, transcoding infrastructure, and two-tier CDN (Content Delivery Network) that make this possible are among the most complex media-engineering systems on the planet. On the consumption side, 2 billion+ logged-in users watch over 1 billion hours of video daily — a recommendation challenge that dwarfs most advertising systems in latency sensitivity and business impact. If the recommendation model serves the wrong video, engagement drops; if the transcoder stalls, creators lose monetisation time.&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>WhatsApp / Chat Messaging System</title>
      <link>https://nskillhub.com/system-design/classic/whatsapp-chat-messaging/</link>
      <pubDate>Sat, 25 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>https://nskillhub.com/system-design/classic/whatsapp-chat-messaging/</guid>
      <description>&lt;h2&gt;1. Hook&lt;/h2&gt;&#xA;&lt;p&gt;WhatsApp delivers &lt;strong&gt;100 billion messages every day&lt;/strong&gt; to &lt;strong&gt;2 billion users&lt;/strong&gt; across 180+ countries — all end-to-end encrypted (E2EE), with sub-second latency, and with a global engineering team historically smaller than 50 engineers. The system does this while providing strong delivery guarantees (a message is either delivered exactly once or the sender knows it was not), preserving per-conversation message ordering even when users switch networks mid-send, and maintaining ephemeral server storage so that once a message is delivered it lives only on client devices.&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>Instagram</title>
      <link>https://nskillhub.com/system-design/classic/instagram/</link>
      <pubDate>Fri, 24 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>https://nskillhub.com/system-design/classic/instagram/</guid>
      <description>&lt;h2&gt;1. Hook&lt;/h2&gt;&#xA;&lt;p&gt;Instagram processes &lt;strong&gt;100 million photo and video uploads every day&lt;/strong&gt;, serves &lt;strong&gt;4.2 billion likes&lt;/strong&gt; a day, and delivers personalised feeds to 500 million daily users — all while keeping image loads under 200 ms anywhere in the world. The engineering challenge is three-layered: a media processing pipeline that converts every raw upload into five optimised variants before the first follower ever sees it; a hybrid fan-out feed that handles both 400-follower personal accounts and 300-million-follower celebrities without write amplification blowing up; and an Explore page that must surface genuinely relevant content from a corpus of 50 billion posts to users who have never explicitly stated what they want. Each layer has a distinct bottleneck, and solving one often creates pressure on the others.&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>Twitter / Social Media Feed</title>
      <link>https://nskillhub.com/system-design/classic/twitter-social-media-feed/</link>
      <pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>https://nskillhub.com/system-design/classic/twitter-social-media-feed/</guid>
      <description>&lt;h2&gt;1. Hook&lt;/h2&gt;&#xA;&lt;p&gt;Twitter at peak serves &lt;strong&gt;600K tweet reads per second&lt;/strong&gt; while simultaneously processing tens of thousands of new tweets per second. The naive approach — querying who you follow, then fetching all their tweets, then sorting — collapses instantly at scale. The real architecture is a masterclass in the write-amplification vs read-latency trade-off, and the edge cases (a celebrity like Lady Gaga tweeting to tens of millions of followers) reveal why no single strategy wins.&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>URL Shortener (bit.ly)</title>
      <link>https://nskillhub.com/system-design/classic/url-shortener/</link>
      <pubDate>Sat, 18 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>https://nskillhub.com/system-design/classic/url-shortener/</guid>
      <description>&lt;h2&gt;1. Hook&lt;/h2&gt;&#xA;&lt;p&gt;Every time you click a &lt;code&gt;bit.ly&lt;/code&gt; or &lt;code&gt;t.co&lt;/code&gt; link, a distributed system silently resolves a 7-character code to a full URL and redirects you — in under 10 milliseconds — before your browser even renders the loading spinner. Behind that invisible handshake sits a deceptively rich design problem: how do you build a service that creates billions of short codes, never loses a mapping, and serves hundreds of thousands of reads per second with single-digit millisecond latency, all while preventing abuse, surviving data-centre failures, and staying profitable?&lt;/p&gt;</description>
      
    </item>
    
  </channel>
</rss>
