Built a React Native demo system with three independent levers for post-deploy UI control: backend-driven UI (JSON component schema + client renderer), runtime feature flags, and OTA JS bundle updates via EAS. Cloudflare Worker + Durable Object serves config; admin UI edits flags and screen JSON live. Same APK, three mechanisms, no app store round-trip. Covers how Swiggy (Dynamic Widget), Airbnb (Ghost Platform), PhonePe (LiquidUI), and Nubank (70% of screens on BDC) use this pattern at scale — and when it stops being clever and starts being a liability.
Introduction
There's a specific kind of frustration I think every mobile engineer has felt: a fix is ready, tests pass, PR is merged. And then you wait. Three days. Sometimes five. The app store review queue is a black box, and you have no recourse except to sit on a working fix while users keep hitting the bug.
I've thought about this problem a lot, especially in the context of consumer apps that need to move fast — the Swiggy-scale apps, the PhonePe-scale apps, the Zeptos. These products push dozens of UI changes a week. They run experiments that change not just text but layout, card ordering, promotional banner hierarchy. You can't do that on a mobile release cycle. You'd need a new build for every experiment, and each build takes hours to produce and days to reach users.
The industry's answer is a set of overlapping patterns that go by different names: server-driven UI (SDUI), backend-driven UI (BDUI), dynamic widgets, remote configuration. The common thread: move the UI definition out of the binary and into the server response. The app becomes a renderer; the backend becomes the author.
I spent a weekend building a working implementation of this in React Native — BDUI with a JSON component schema, feature flags, and OTA JS updates via EAS — and using it to demo all three levers to a team. This post is what I built, why each piece works the way it does, and what the industry experience says about where this pattern earns its complexity and where it doesn't.
The Release Problem, Precisely
Before getting into the implementation, it's worth being precise about the problem. Mobile apps have a structural constraint that web apps don't: the binary is distributed and version-locked.
When you ship a React app, "deploy" means copying files to a CDN. Users get new code on their next page load. The gap between "code done" and "user has code" is measured in seconds.
On mobile, the binary goes through:
- Build (15–20 min for a React Native app).
- App store submission and review (1–7 days; longer if flagged).
- User update (voluntary, or you force it and lose sessions).
That full cycle, even optimistically, is 24–48 hours. For a UI tweak, that's insane overhead. For a bug fix, it's worse. For a time-sensitive promotional banner ("flash sale ends in 4 hours"), it's simply not viable.
Three patterns have emerged to break different parts of this loop:

- Lever A (server-driven config): BDUI screen definitions and feature flags. Changes what the app renders, in seconds, via a plain fetch.
- Lever B (OTA JS updates): replaces the JavaScript bundle itself, in minutes, with no store submission.
- Lever C (full native build and store release): the only option for anything that touches native code.
Each lever is strictly more powerful and more risky than the one above it. The goal is to reach for the weakest lever that solves the problem.
Who Is Already Doing This
The pattern is not new, and it's not esoteric. Every major consumer app company has converged on it independently.
Airbnb — Ghost Platform. Named for Guest + Host, Ghost Platform is Airbnb's unified server-driven UI system. A single GraphQL schema drives Web, iOS, and Android. The core abstractions are Sections (reusable, pre-formatted, localized UI component groups), Screens (layout configs with responsive variants), and Actions (server-defined interaction handlers). Search results, listing pages, and checkout — the majority of Airbnb's highest-traffic surfaces — now run entirely on GP. From their engineering blog: "The Ghost Platform is a unified, opinionated, server-driven UI system that enables us to iterate rapidly and launch features safely across web, iOS, and Android." They shipped the majority of core app features within roughly a year of Ghost Platform's existence.
Swiggy — Dynamic Widget. Swiggy's own SDUI engine uses JSON schema as its "language" for UI layout, rendering to both Facebook Litho (Android, for fine-grained view recycling and async layouts) and Jetpack Compose. The backend authors the widget tree; the app is a renderer. Swiggy uses this for personalized restaurant recommendations, location-aware promotional banners, dynamic search filters, and real-time offer cards. The motivations they cite are predictable: UI changes without app updates, A/B experimentation on layout (not just copy), cross-platform consistency, and reduced app binary size from stripping dead code paths.
PhonePe — LiquidUI. PhonePe's framework is the most comprehensive Indian example I've found publicly documented. LiquidUI has a web console for drag-and-drop screen spec building, a backend publishing system, a config store (Chimera), and native SDKs on both iOS and Android. The published numbers are striking: 9 diverse products, 130+ screens, and recent products like "Check vehicle policy" shipped without a single code change on older app versions. The framework ships with 40+ predefined widgets, 6 layout renderers, 40+ action types, and 30+ expression evaluators for business logic. When a company builds that infrastructure, they're not doing it for the novelty — they're doing it because the alternative (separate mobile releases per product change) is worse.
Nubank — BDC (Backend Driven Content). Nubank's implementation is built in Clojure, rendering to Flutter components via a tree-walk interpreter. The scale is notable: 70% of new screens at Nubank are authored through BDC, and 43% of the entire app runs on it. They describe it as "a Lego manual for the Nu app, telling how to use the 'building blocks' available to build the interface to the end user." The motivations are the same everywhere — release velocity, A/B rollouts same-day, deploying improvements within 24 hours of a fix being ready.
Flipkart — Proteus. An open-source Android JSON layout inflater that replaces Android's native LayoutInflater at runtime. JSON layouts can be server-hosted, enabling Flipkart to redesign the homepage for festive seasons (the biggest traffic events of the year) without pushing an app update. Proteus is on GitHub and worth studying if you're thinking about a native Android BDUI implementation.
Lyft — Canvas. Lyft's Bikes and Scooters team built Canvas, a protobuf-based SDUI system. The choice of protobuf over JSON is deliberate: built-in versioning support, compact binary format, and explicit schema enforcement. Lyft explicitly designs for "legacy app versions up to 2 years old" — if you have users who haven't updated in two years (and you do), your SDUI system must degrade gracefully for them. From their engineers: "SDUI has a snowball effect where the more you build on top of the platform the more powerful and useful it grows."
Delivery Hero pushed the methodology to its logical conclusion: using Apollo Federation to unify UI subgraphs (render-ready appearance data) with domain subgraphs (business data). The results they published: design cycle compressed from 3 months to 1 week; API payload reduced from 80+ fields to 4; load time ~100ms faster per view; config propagation time to production of 5 minutes. From Principal SWE Arne Wieding: "The goal was to move faster and let product teams run experiments without engineering having to release new apps."
The pattern keeps appearing — in different tech stacks, in different countries, in different verticals — because the constraint is universal. App stores create a deployment bottleneck. BDUI is a structural response to that bottleneck.
What I Built
For my demo, I wanted something that could prove all three levers in a single APK. The architecture:
The worker serves two things: a JSON screen definition and a flag map. The client fetches both, renders via a component registry, and respects flag-gates. The admin UI (also served from the worker) allows live edits with bearer-token auth. Cloudflare Durable Objects persist config with strong consistency — no KV propagation lag, no eventual consistency surprises.
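Concretely, a combined payload from the worker might look like this (illustrative values and field names, not the repo's exact shapes):

```json
{
  "flags": {
    "showPromo": true,
    "headline": "Hello, team",
    "ctaText": "Order now"
  },
  "screen": {
    "type": "Stack",
    "props": { "gap": 16 },
    "children": [
      {
        "type": "Banner",
        "props": { "text": "{{flags.headline}}", "tone": "info" },
        "flag": "showPromo"
      },
      { "type": "Button", "props": { "label": "{{flags.ctaText}}" } }
    ]
  }
}
```

The flags and the screen travel together, so one fetch primes both the renderer and the gating logic.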
EAS handles OTA separately. The APK is built once with `eas build --profile preview`, binding the binary to the `preview` channel. After that, any JS-only change ships via `eas update --branch preview` and lands on device on the next cold start.
The BDUI Component Model
The core of any BDUI implementation is the component registry — a mapping from string type identifiers to native components. The server describes UI in terms of types; the client maps types to implementations.
My schema defines five node types:
```ts
type Node =
  | { type: "Stack"; props?: { gap?: number }; children?: Node[]; flag?: string }
  | { type: "Text"; props: { text: string; size?: "sm" | "md" | "lg" | "xl"; weight?: "normal" | "bold" }; flag?: string }
  | { type: "Banner"; props: { text: string; tone?: "info" | "success" | "warn" }; flag?: string }
  | { type: "Button"; props: { label: string; href?: string }; flag?: string }
  | { type: "Card"; props?: { title?: string }; children?: Node[]; flag?: string };
```
The renderer is a recursive function over this tree:
```tsx
function BDUI({ node, flags }: { node: Node; flags: FlagMap }) {
  if (!isVisible(node, flags)) return null; // flag-gate check
  switch (node.type) {
    case "Stack":
      return (
        <View style={{ gap: node.props?.gap ?? 12 }}>
          {node.children?.map((c, i) => <BDUI key={i} node={c} flags={flags} />)}
        </View>
      );
    case "Text":
      return <Text>{interpolate(node.props.text, flags)}</Text>;
    case "Banner":
      return <Banner text={interpolate(node.props.text, flags)} tone={node.props.tone} />;
    // ... Card, Button
  }
}
```
Two mechanisms give this system most of its practical power:
Token interpolation — `{{flags.X}}` in any string field resolves to `flags[X]` at render time. A button labelled `{{flags.ctaText}}` shows whatever the server says `ctaText` is. Change the flag value; the button updates on the next refresh. No code change, no OTA.
Flag-gating — any node with a top-level `flag: "key"` only renders when `flags[key]` is truthy. A banner with `flag: "showPromo"` is invisible when `showPromo` is false and visible when it's true. Toggle it from the server; the change reaches all users on their next app load.
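Both mechanisms fit in a dozen lines. A sketch of the two helpers the renderer calls (the bodies are illustrative; the demo's exact code may differ):

```ts
type FlagMap = Record<string, string | number | boolean>;

// A node renders unless it names a flag that currently evaluates falsy.
function isVisible(node: { flag?: string }, flags: FlagMap): boolean {
  return node.flag === undefined || Boolean(flags[node.flag]);
}

// Resolve {{flags.key}} tokens against the current flag map at render time.
// Unknown keys resolve to the empty string rather than leaking the token.
function interpolate(text: string, flags: FlagMap): string {
  return text.replace(/\{\{flags\.(\w+)\}\}/g, (_match, key: string) =>
    key in flags ? String(flags[key]) : ""
  );
}
```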
The combination is expressive. A feature can be flag-gated at the component level (it renders or it doesn't), flag-varied in its content (different labels, different copy), and structurally different in its layout (different children, different ordering) — all from server config, all without touching client code.
What a live edit looks like
The admin UI serves JSON textareas for both the flag map and the screen definition. Save hits `PUT /api/config` with a bearer token; the Durable Object persists the change. The next time the app fetches `/api/screen?name=home`, it gets the updated tree. Pull-to-refresh surfaces it immediately.
During the demo, changing `flags.headline` from "Hello, team" to something specific to the audience landed in under 10 seconds wall-clock. That's the actual value proposition: UI velocity measured in seconds, not release cycles.
Feature Flags as a Separate Abstraction
Feature flags live in the same payload, but their semantics are distinct from layout. Layout nodes describe structure; flags are runtime control signals. I kept them as a first-class abstraction with their own context:
```ts
const { flags, loading, refresh } = useFlags();
const showBanner = useFlag("showBanner", false);
```
The `FlagsProvider` fetches on mount, on screen focus, and on explicit refresh. Flag values are immediately available throughout the component tree, not just in BDUI-rendered nodes. This matters when a flag should control imperative logic — navigation, analytics, network requests — not just visibility.
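Under the hood, a provider like this can wrap a framework-agnostic store. This sketch injects the fetcher so the logic stays testable; the class name and shape are mine, not the demo repo's:

```ts
type FlagMap = Record<string, string | number | boolean>;

// Framework-agnostic flag store; FlagsProvider would hold one of these in
// React context and re-render subscribers on change.
class FlagStore {
  private flags: FlagMap = {};
  private listeners = new Set<(flags: FlagMap) => void>();

  constructor(private fetcher: () => Promise<FlagMap>) {}

  // Typed read with a fallback: what useFlag("showBanner", false) resolves to.
  get(key: string, fallback: string | number | boolean) {
    return key in this.flags ? this.flags[key] : fallback;
  }

  subscribe(listener: (flags: FlagMap) => void): () => void {
    this.listeners.add(listener);
    return () => { this.listeners.delete(listener); };
  }

  // Called on mount, on screen focus, and on pull-to-refresh.
  async refresh(): Promise<void> {
    this.flags = await this.fetcher();
    this.listeners.forEach((fn) => fn(this.flags));
  }
}
```

Keeping the store free of React makes the flag-evaluation layer reusable from imperative code paths (analytics, navigation guards) as well as from hooks.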
The pattern Airbnb and Delivery Hero both use is: BDUI handles structure, flags handle control flow. You don't want your React component logic reaching into the JSON tree; you want a typed hook that returns a value.
OTA: The Second Lever
OTA updates via expo-updates work at a different layer. Where BDUI replaces the data feeding a render, OTA replaces the code doing the rendering. They're complementary, not alternatives.
The operational model:
```sh
# edit any JS/TS file in the project
eas update --branch preview -m "fix banner layout"
# app picks up the new bundle on next cold start
```
The constraints are strict:
- JS-only. Native code, native deps, manifest changes, app icons — none of these ship over OTA. Adding `expo-camera` requires a new build.
- Same `runtimeVersion`. The runtime version gates which updates a binary accepts. I use `policy: "appVersion"` in `app.json` — bumping `expo.version` changes the runtime version, and old binaries stop receiving updates from the new version's branch.
- Same channel. The APK is bound to a channel at build time. Updates must target that channel's branch or the binary ignores them.
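The runtime-version policy is a one-line piece of config. This is the standard `expo-updates` shape in `app.json`; the project ID in `updates.url` is a placeholder:

```json
{
  "expo": {
    "version": "1.0.0",
    "runtimeVersion": { "policy": "appVersion" },
    "updates": { "url": "https://u.expo.dev/<project-id>" }
  }
}
```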
The demo for OTA is simple but effective: edit the About page title, run `eas update`, tap "Fetch & reload" in the app. The update ID on the About page changes from `(embedded)` to a UUID, and the new title is there. Same APK. No reinstall. The audience watches it happen live.
What makes this stick for a mixed engineering and PM audience is the implication: any JS fix, any new screen, any BDUI engine improvement ships as an OTA. The native build isn't the iteration loop — it's the one-time bootstrapping step.
Layering Both Levers
The most powerful pattern is using both together:
1. Ship the feature code via OTA (Lever B).
2. Gate its visibility via a flag (Lever A).
3. Roll back by flipping the flag, not by reverting code (Lever A).
You decouple code shipping from feature activation. OTA gets the code to devices; the flag controls when users see it. If the feature behaves badly in production, you disable it at the flag layer without a code revert and another OTA push. This is roughly what Nubank and Delivery Hero describe in their rollout strategies — flag-controlled activation on top of version-decoupled deployments.
The three-lever decision tree:

- Copy, layout, visibility, an on/off switch: Lever A. Server config, live in seconds.
- A JS code change with no new native dependencies: Lever B. OTA, live in minutes.
- Anything that touches native code: Lever C. Build and ship through the store.
The practical test for Lever B vs C: search the diff for new entries in `package.json` dependencies. If anything new appears, run `npx expo prebuild --no-install` and grep the generated `android/` directory for new native modules. If any show up, it's Lever C.
When BDUI Is the Wrong Tool
This pattern earns its complexity only at a certain scale. I want to be honest about where it breaks down.
You're not shipping layout — you're building a rendering engine. Before your first BDUI feature lands, you have to build: a JSON schema, a parser, a component registry, a flag-evaluation layer, an action-handling system, a token-interpolation pass, a versioning strategy, and a server that generates valid payloads. Spotify built HubFramework, iterated on it for years, then deprecated it. Their retrospective is titled "The Silver Bullet That Wasn't." The abstraction didn't deliver sufficient value at their scale on iOS alone, without the cross-platform leverage.
Type safety degrades. Once UI structure lives in untyped JSON, the compiler can't catch mismatches between server payloads and client expectations. Airbnb partially solves this with GraphQL code generation producing Swift and Kotlin models. Without that tooling investment, you're shipping a dynamic system with static components — a mismatch that produces silent runtime failures and split-blame incidents.
Debuggability changes character. Native tooling — Xcode's canvas, Layout Inspector on Android, Compose previews — works on layout that exists at compile time. BDUI layouts don't exist until runtime. When something renders wrong, the question is: is the server sending bad data, or is the renderer misinterpreting it? Netflix's Christopher Luu explicitly flagged "a formalized testing strategy and design system alignment" as things they wish they had built earlier.
Old versions live forever. Lyft designs for clients that are two years behind. If you rename a component type on the server, every older client that receives that type will either silently skip it or crash. The backwards-compatibility tax is real and it compounds over time. You either never rename anything (accumulate debt) or build graceful-degradation logic (accumulate complexity).
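One mitigation is cheap when the schema is a tree: prune nodes whose `type` the client doesn't recognize instead of crashing on them, and keep walking their siblings. A sketch (the `KNOWN_TYPES` set mirrors this demo's five node types; `pruneUnknown` is my name for it, and a production system would also log each skip for telemetry):

```ts
type UnknownNode = { type: string; children?: UnknownNode[] };

// The node types this client build understands.
const KNOWN_TYPES = new Set(["Stack", "Text", "Banner", "Button", "Card"]);

// Drop unrecognized nodes before rendering; recurse into children so one
// renamed widget doesn't take the rest of the screen down with it.
function pruneUnknown(node: UnknownNode): UnknownNode | null {
  if (!KNOWN_TYPES.has(node.type)) return null; // skip, don't crash
  const children = (node.children ?? [])
    .map(pruneUnknown)
    .filter((c): c is UnknownNode => c !== null);
  return { ...node, children };
}
```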
Offline is harder. A native app with cached layouts renders offline; a BDUI app needs the schema to paint anything. Without aggressive caching (see: stale-while-revalidate pattern), every offline or slow-network launch shows the user a spinner or an error. My implementation doesn't have this cache yet — it's on the backlog, and it's a known limitation I'd fix before taking this to production.
Cultural resistance is real. Multiple engineering retrospectives on SDUI mention friction from mobile engineers who lose fast feedback loops (previews, hot reload, instant builds) in exchange for a JSON authoring experience. The value proposition is strongest for teams where PMs and designers are the primary UI authors post-launch — not for teams where engineers are expected to iterate on layout on a cadence.
The Tech Stack
| Layer | Choice | Why |
|---|---|---|
| App | React Native + Expo SDK 54, expo-router | First-class OTA support via expo-updates |
| OTA | EAS Update | JS bundle distribution with channels and rollback |
| Backend | Cloudflare Workers | Edge-distributed, < 5ms cold start globally |
| Config storage | Cloudflare Durable Object (SQLite-backed) | Strong consistency, no propagation lag |
| Schema | JSON (5 node types) | Minimal viable BDUI: prove the concept without framework overhead |
| Admin UI | Vanilla HTML served from worker | Zero build step, bearer-token auth, works on any device |
| Flags | Same JSON payload as screen config | Colocated, single fetch |
| Build target | Android APK via EAS, preview profile | Side-loadable, no store submission for demo |
What I'd Do Differently
Add a schema version field. Every config payload should carry a `schemaVersion` string. The renderer checks it before attempting to parse; if the version is too new, it falls back gracefully instead of crashing or rendering garbage. I skipped this for the demo; it matters for production.
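The guard itself is small. A sketch, assuming the version travels as a string and only the major component gates compatibility (`acceptPayload` and `SUPPORTED_SCHEMA` are hypothetical names):

```ts
// The highest schema major version this client build understands.
const SUPPORTED_SCHEMA = 1;

interface ConfigPayload {
  schemaVersion?: string; // e.g. "1" or "1.2"; absent on older servers
  flags?: Record<string, unknown>;
  screen?: unknown;
}

// Accept only payloads this build can parse; a rejected payload means
// fall back to cached config or built-in defaults, not a crash.
function acceptPayload(payload: ConfigPayload): boolean {
  const major = parseInt(payload.schemaVersion ?? "1", 10);
  return Number.isFinite(major) && major <= SUPPORTED_SCHEMA;
}
```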
Implement stale-while-revalidate. The current implementation shows a spinner on cold start while the fetch resolves. The fix is simple: read a cached payload from `AsyncStorage`, render immediately (stale paint), fetch fresh in parallel, update state if different. This should have been in from the start.
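A sketch of that load path, with storage injected behind a minimal interface so it isn't tied to `AsyncStorage` (the function and interface names are mine):

```ts
interface KVStore {
  getItem(key: string): Promise<string | null>;
  setItem(key: string, value: string): Promise<void>;
}

// Stale-while-revalidate: paint the cached config immediately, fetch
// fresh in parallel, and re-render only if the payload actually changed.
async function loadConfig(
  storage: KVStore,
  fetchFresh: () => Promise<string>,
  render: (configJson: string) => void
): Promise<void> {
  const cached = await storage.getItem("config");
  if (cached !== null) render(cached); // stale paint: no spinner
  try {
    const fresh = await fetchFresh();
    if (fresh !== cached) {
      await storage.setItem("config", fresh);
      render(fresh);
    }
  } catch {
    // Offline or slow network: the stale paint (or default UI) stands.
  }
}
```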
Generate TypeScript types from the JSON schema. Right now the client and server share the same `Node` type manually. In a production system, the server schema is the source of truth, and client types are code-generated from it. This is how Airbnb prevents version-skew bugs — the GraphQL schema generates the Swift and Kotlin models.
Add a component playground. The admin UI is a raw JSON textarea. A real system has a component palette with form inputs per prop, live preview, and schema validation. The PhonePe LiquidUI console (drag-and-drop, WYSIWYG) is what the tooling looks like when you invest in it.
Build the rollback UI. Durable Objects have a full config history via ctx.storage. A production admin should show a changelog and a "rollback to version N" button. For the demo, the reset-to-defaults button is enough.
Conclusion
The pattern that Airbnb calls Ghost Platform, Swiggy calls Dynamic Widget, PhonePe calls LiquidUI, and Nubank calls BDC is ultimately the same idea: the app store release cycle is not a constraint you can work around with enough process. It's a structural bottleneck, and the structural answer is to move UI authority from the binary to the server.
The three-lever model is how I think about the decision concretely: reach for the weakest lever that solves the problem. Text, layout, on/off — that's Lever A, server config, done in seconds. Code change, no new native deps — that's Lever B, OTA, done in minutes. Anything that touches native code — Lever C, build and ship properly.
What surprised me in building this is how small the implementation surface actually is. The component registry is a switch statement. The flag-gate is a ternary. The token interpolation is a regex replace. The complexity isn't in the renderer — it's in the schema versioning, the backwards compatibility, the caching, and the tooling. Those are the parts you have to invest in for production. For a demo, they're optional. For the teams at Delivery Hero running billions of queries through their GraphQL supergraph, they're the whole job.
The weekend build was about understanding the pattern in my hands, not just understanding it theoretically. I'd recommend it. Build the smallest possible version, feel where it strains, and then look at what Lyft added to Canvas (protobuf versioning), what PhonePe added to LiquidUI (expression evaluators, a console), and what Airbnb added to Ghost Platform (code-generated models, GraphQL subscriptions). Each addition addresses a real production failure. Now you know why.
The app store is not your deploy pipeline. BDUI + OTA is not a workaround — it's the correct architecture for consumer mobile apps that need to move at product speed.
References
- A Deep Dive into Airbnb's Server-Driven UI System — Airbnb Engineering
- A Deep Dive into Dynamic Widget — Swiggy Bytes
- Introducing LiquidUI — PhonePe Tech Blog
- Backend Driven Content: Nubank's SDUI Framework — Building Nubank
- The Journey to Server-Driven UI at Lyft Bikes and Scooters — Lyft Engineering
- How Delivery Hero Accelerates UX Experiments with SDUI and Apollo — Apollo GraphQL
- Primer on Delivery Hero's Server-Driven UI Platform — Delivery Hero Tech
- Netflix Saves Time and Money with Server-Driven Notifications — InfoQ
- Building Kimchi for Hack-a-Noodle 2022 — Zomato Blog
- flipkart-incubator/proteus — GitHub
- Backend-Driven UI: Fast A/B Testing and Unified Clients — NordSecurity
- Improving Development Velocity with Generic, Server-Driven UI — DoorDash Engineering
- The Production Playbook for OTA Updates — Expo Blog
- Spotify HubFramework — GitHub (Deprecated)
- bdui-demo — GitHub (this post's implementation)