A declarative, secure, and reactive approach that lets both humans and LLMs build rich user interfaces with JSON.

## The Problem with AI-Generated Interfaces

We are at an inflection point in software development. Large Language Models (LLMs) are rapidly evolving from text-generation tools into full-fledged software agents capable of building applications, analyzing data, and interacting with users in real time. But there's a critical bottleneck: how do these agents communicate visually with humans?

The traditional options are bleak:

- **Text-only responses:** The chatbot experience. Great for conversation, terrible for complex interactions. Ever tried to book a flight through a wall of text? It's exhausting.
- **Raw code generation (HTML/JS/React):** The agent spits out code, and you hope it works. This is a massive security risk. Arbitrary code execution is the original sin of computing. Do you really want to `eval()` whatever a cloud-based AI model sends you? The answer is no. And the flexibility of raw language generation tends to produce either brittle code or too much variance from one UI experience to the next, something humans do not handle well.

The probable end states are far away:

- **Direct generation of images and video** that users can interact with in real time is too expensive and slow.
- **Direct mind interfaces** are not yet possible except for the simplest of tasks.

Google recognized this gap (and perhaps the distance to an ultimate end state) when it introduced A2UI (Agent-to-User Interface) in late 2025. It's a declarative JSON format in which agents describe UI components like `text_field` or `button`, and client applications render them natively. It's a solid approach with a security-first architecture.

And I asked additional questions:

- Why just *describe* the UI? Why not make it *reactive* by design?
- Why limit this approach to LLMs? Why not provide a reactive, cross-platform way for *human developers* to safely express UIs too?

This is why I built cDOM (Computational DOM) and its expression language, JPRX (JSON Pointer Reactive eXpressions).
Let me show you how it works, step by step.

## Step 1: A Reactive UI with Zero Server Code or Custom JavaScript

Consider a simple counter. In cDOM, you write JSON with embedded JPRX expressions like this:

```json
{
  "div": {
    "onmount": "=state({ count: 0 }, { name: 'local', scope: $this })",
    "children": [
      { "p": ["Count: ", "=local/count"] },
      { "button": { "onclick": "=local/count++", "children": ["+"] } }
    ]
  }
}
```

That's it. No server. No round trip. No JavaScript except the underlying cDOM library (which could be JavaScript, Dart, or some other language).

At its core, cDOM is JSON that uses tag names as properties, each containing an element object whose properties represent attributes, plus a special reserved attribute, `children`, for child elements. (When used without JPRX expressions starting with `=`, I call this an Object DOM, or oDOM; it is similar to the Juris.js element representation. A vDOM of the form `{tag: <name>, attributes: {<attributes>…}, children: [<children>]}` and a compressed cDOM that supports unquoted attributes and JPRX expressions, along with comments, are also available with Lightview.)

- The `state` helper initializes reactive state (`{ count: 0 }`) scoped to this element.
- The `=local/count` expression is a live binding: the paragraph text updates automatically whenever `count` changes.
- The `=local/count++` operator is a direct increment.

The entire UI is reactive, self-contained, and runs instantly in the browser. This is the spreadsheet paradigm applied to UI: you define the relationships, and the system handles the updates. As you will see below, much like a spreadsheet, JPRX has over 100 helper functions, covering everything from math and string manipulation to complex array processing.

## Step 2: But What If You *Do* Want LLM Integration?

The counter example is great for client-only interactions. But what if you want to notify an LLM (or any server) when the user clicks a button? Just swap the operator for a `=fetch` helper:

```json
{
  "button": {
    "onclick": "=fetch('/api/notify', { method: 'POST', body: $event })",
    "children": ["Notify LLM"]
  }
}
```

When the button is clicked:

- The `=fetch` helper sends a POST request to `/api/notify` with a JSON body.
- Object bodies are automatically stringified, and `Content-Type: application/json` is set for you.
- The LLM (or your backend) receives the event and can respond however it likes.
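To make the auto-stringification behavior concrete, here is an illustrative sketch of how a `=fetch`-style helper might prepare its request options. This is an assumption about plausible internals, not the actual Lightview implementation:

```javascript
// Illustrative sketch of how a =fetch-style helper might prepare request
// options before calling the real fetch(). Not the actual Lightview code.
function prepareFetchOptions(options = {}) {
  const prepared = { ...options, headers: { ...(options.headers || {}) } };
  // Plain-object bodies are stringified and Content-Type is set automatically.
  if (prepared.body !== null && typeof prepared.body === "object") {
    prepared.body = JSON.stringify(prepared.body);
    if (!prepared.headers["Content-Type"]) {
      prepared.headers["Content-Type"] = "application/json";
    }
  }
  return prepared;
}

// Example: a serialized event payload like the button click above
const opts = prepareFetchOptions({ method: "POST", body: { type: "click" } });
// opts.body is now '{"type":"click"}' and Content-Type is application/json
```

String bodies pass through untouched, so the helper stays compatible with ordinary `fetch` usage.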
This is the event registration model: the LLM doesn't need to be notified of every interaction. It only hears about the events that matter, the ones where you've wired up a fetch. Developers can wire `fetch` to any event they want, and when LLMs generate a cDOM component, they can wire `fetch` to notify them of specific user actions.

This avoids the potentially chatty nature of A2UI. Do you really want the LLM notified of every mouse move? Keystroke? Scroll position? Is it time- and cost-effective to have an LLM deal with things like sorting and filtering, or should you let the client handle them?

LLM-driven architecture. Native-like responsiveness.

## Step 3: What If the LLM Wants to *Modify* the UI?

Here's where cDOM gets truly powerful. What if the server (or LLM) wants to push a new component into the page? Enter `=mount`:

```json
{
  "button": {
    "onclick": "=mount('/api/get-widget')",
    "children": ["Load Widget"]
  }
}
```

When clicked, `mount` fetches JSON from `/api/get-widget`, hydrates it as a reactive cDOM element, and appends it to the document body.

But wait: what if the LLM wants to place that widget somewhere specific, like a sidebar? That's where `move` comes in. The LLM simply includes a `=move` directive in the component it returns:

```json
{
  "div": {
    "id": "weather-widget",
    "onmount": "=move('#dashboard-sidebar', 'afterbegin')",
    "children": ["Sunny, 75°F"]
  }
}
```

Here's what happens:

1. `mount` fetches the widget and appends it to the body (a "safe landing").
2. The moment the widget mounts, `move` rips it out and teleports it to `#dashboard-sidebar`.
3. If a widget with the same `id` already exists there, it's replaced, making the operation idempotent.

The LLM doesn't need to regenerate the entire page. It just pushes components that know how to place themselves.

## The Full Picture: Three Levels of Power

You've now seen the three core capabilities:

| Level | Use Case | Helper | Server Involved? |
|---|---|---|---|
| 1 | Client-only reactivity | `=state`, `=++`, etc. | ❌ No |
| 2 | Notify LLM of user actions | `=fetch` | ✅ Yes (one-way) |
| 3 | LLM pushes new UI to the client | `=mount`, `=move` | ✅ Yes (returns UI) |

This layered approach means you can build:

- Fully offline-capable apps (Level 1)
- Hybrid apps where the LLM is notified selectively (Level 2)
- Agent-driven apps where the LLM controls the UI in real time (Level 3)

All with the same declarative JSON format.

## JPRX: The Expression Language Behind the Magic

You've been looking at JPRX expressions above. Let me explain what's under the hood.

JPRX (JSON Pointer Reactive eXpressions) is an extension of RFC 6901 JSON Pointer. It adds:

- **Expressions:** Any path or function name beginning with `=` is an expression. Only the first expression of a nested expression needs to start with `=`.
- **Reactivity:** Any path starting with `=` (e.g., `=app/user/name`) creates a live subscription. When the data changes, so does the UI.
- **Operators:** Prefix and postfix mutation (`=++/count`, `=count--`, `=!!/enabled`).
- **Helper Functions:** Over 100 Excel-like functions (`sum`, `if`, `upper`, `filter`, `map`, `formatDate`) plus state-mutating helpers (`set`, `push`, `assign`), e.g. `=set(/app/user/name, 'John Doe')`.
- **Relative Paths:** Use `../` to navigate up context hierarchies, just like a file system.
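The RFC 6901 foundation that JPRX extends is small. A minimal resolver for the base pointer syntax (without any of JPRX's reactive extensions) can be sketched like this:

```javascript
// Minimal RFC 6901 JSON Pointer resolver -- the base syntax JPRX extends.
// Illustrative only; JPRX layers reactivity, operators, and helpers on top.
function resolvePointer(doc, pointer) {
  if (pointer === "") return doc; // "" refers to the whole document
  let node = doc;
  for (const raw of pointer.replace(/^\//, "").split("/")) {
    // Unescape per RFC 6901: ~1 -> "/" then ~0 -> "~" (order matters)
    const token = raw.replace(/~1/g, "/").replace(/~0/g, "~");
    if (node == null) return undefined;
    node = node[token];
  }
  return node;
}

const app = { user: { name: "Ada" }, cart: { items: [{ price: 5 }] } };
resolvePointer(app, "/user/name");          // "Ada"
resolvePointer(app, "/cart/items/0/price"); // 5
```

Array elements are addressed by numeric tokens, and keys containing `/` or `~` round-trip through the escape sequences, which is all standard JSON Pointer behavior.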
Why is this perfect for LLMs?

- **Context-Free Grammar:** No closures, no callbacks. An LLM generates JPRX as easily as it generates a sentence.
- **Safe by Design:** No access to `globalThis`, `eval`, or the DOM API outside registered helpers. The application developer controls the helper catalog.
- **Streamable:** The LLM can stream components one by one. Each piece is self-describing and self-mounting.

And why is it great for human developers? Because JPRX reads like a spreadsheet formula. If you can write `=SUM(A1:A10)`, you can write `=sum(/cart/items...price)`.

## Head-to-Head: cDOM vs. A2UI

| Feature | cDOM/JPRX (Lightview) | A2UI (Google) |
|---|---|---|
| Core Paradigm | Declarative + reactive computation | Declarative description |
| Client-Side Reactivity | ✅ Built in via signals & computed expressions | ❌ Requires server round trip for state changes |
| Interaction Model | ✅ Handlers execute client-side (low latency) | ⚠️ Every interaction signals back to agent (chatty) |
| Security Model | ✅ Sandboxed helper functions, no arbitrary code | ✅ Sandboxed component catalog, no arbitrary code |
| Expression Language | ✅ Full formula language with 100+ helpers | ❌ Data binding only, no computation in the format |
| LLM Streaming | ✅ Components self-mount & teleport via `=move` | ✅ Flat component list with IDs for incremental updates |
| Two-Way Binding | ✅ `=bind(/path)` for inputs, selects, checkboxes | ⚠️ Component-specific, varies by renderer |
| State Management | ✅ Scoped, schema-validated state objects | ❌ External state managed by agent or backend |
| Framework | Lightview (vanilla JS, ~5KB core) | Client-agnostic (Flutter, React, Angular renderers) |

A2UI is an excellent wire format for describing *static snapshots* of a UI. It fits neatly into enterprise ecosystems where Flutter or Angular applications render components.

cDOM is designed for *dynamic, client-reactive applications* where the UI is a living computation. When your LLM generates a dashboard, you want it to react to user input (filter, sort, calculate) without a server round trip for every interaction.

And while the current reference implementation is in JavaScript, nothing prevents cDOM and JPRX from being implemented in other languages such as Dart or Swift. The core concepts (reactive pointers, helper functions, and declarative structure) are universal. You could build a Flutter renderer for cDOM just as easily as Google built one for A2UI.
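The "reactive pointer" concept is small enough to sketch in any language. Here is an illustrative JavaScript version of the core idea (subscribe to a path, get notified when the value at that path changes); this is a teaching sketch, not Lightview's internals:

```javascript
// Illustrative sketch of a reactive pointer store: subscribe to a path and
// get notified when the value at that path changes. Not Lightview internals.
function createStore(initial) {
  const data = structuredClone(initial);
  const subscribers = new Map(); // path -> Set of callbacks

  const keysOf = (path) => path.split("/").filter(Boolean);

  return {
    get(path) {
      return keysOf(path).reduce(
        (node, key) => (node == null ? undefined : node[key]),
        data
      );
    },
    set(path, value) {
      const keys = keysOf(path);
      const last = keys.pop();
      const parent = keys.reduce((node, key) => node[key], data);
      parent[last] = value;
      (subscribers.get(path) || []).forEach((cb) => cb(value));
    },
    subscribe(path, cb) {
      if (!subscribers.has(path)) subscribers.set(path, new Set());
      subscribers.get(path).add(cb);
    },
  };
}

// A binding like "=local/count" amounts to: subscribe and re-render on change.
const store = createStore({ count: 0 });
let rendered = "";
store.subscribe("/count", (v) => { rendered = `Count: ${v}`; });
store.set("/count", store.get("/count") + 1); // rendered is now "Count: 1"
```

A Dart or Swift port of this pattern is mechanical, which is the point: the format, not the host language, carries the semantics.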
## Custom Components: Extending the Vocabulary

A powerful feature of A2UI is *capability negotiation*: the agent queries the client's component catalog before generating UI. This ensures compatibility across different renderers.

cDOM takes a different approach. There's no formal negotiation protocol, but the ecosystem makes adding custom components trivial. Libraries like Lightview and Juris.js allow you to define custom HTML elements:

```javascript
const MyButton = () => {
  return {
    button: {
      style: 'font-size: 25px',
      children: ["My Button"]
    }
  }
}
```

This example is trivial, but far more complex elements can be defined. Once defined, these elements are immediately usable in cDOM:

```javascript
{
  div: {
    MyButton: {
      onclick: "=fetch('/api/notify', { method: 'POST', body: $event })"
    }
  }
}
```

The above would render as:

```html
<div onclick="<compiled handler>">
  <button style="font-size: 25px">My Button</button>
</div>
```

The philosophy: the application developer controls the component catalog, just as in A2UI. The difference is that cDOM skips the negotiation handshake; the LLM uses components it's been trained to expect for that application. If discovery is needed, a JSON manifest of available components can be provided to the LLM as context.

This trade-off prioritizes simplicity and low overhead for applications where the component set is known in advance, which is true in many situations.
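Such a manifest is easy to derive from whatever registry the application keeps. The sketch below assumes a hypothetical registry shape (`description` and `attributes` fields are my invention, not a Lightview API):

```javascript
// Hypothetical sketch: derive a JSON manifest of available custom components
// to hand to an LLM as context. The registry shape here is assumed, not a
// real Lightview API.
const componentRegistry = {
  MyButton: { description: "A styled button", attributes: ["onclick"] },
  WeatherWidget: { description: "Shows current conditions", attributes: ["city"] },
};

function buildManifest(registry) {
  return {
    components: Object.entries(registry).map(([name, meta]) => ({
      name,
      description: meta.description,
      attributes: meta.attributes,
    })),
  };
}

const manifest = buildManifest(componentRegistry);
// JSON.stringify(manifest) can be placed in the LLM's system prompt so it
// only generates components the client actually renders.
```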
## Why This Matters for the Future of Agentic UI

I believe the next wave of applications will be *agent-first*. Not chatbots with UI bolted on, but intelligent systems where conversation and application dissolve into one.

A user might ask: *"Show me my sales performance for Q4, highlight anything below target, and let me drill down by region."*

An LLM should respond not with a paragraph, but with a live dashboard: a chart that filters, a table that sorts, a summary that recalculates. When the user says "focus on EMEA," the agent streams a UI patch that updates the view in place.

cDOM and JPRX could be the foundation for this vision:

- **For Developers:** Write less JavaScript, define relationships, let reactivity handle the rest.
- **For LLMs:** A safe, structured, computable expression language as easy to generate as natural text.
- **For Users:** Rich, instantly reactive interfaces without server round-trip latency.

## Get Started with Lightview

cDOM and JPRX are part of Lightview, a reactive UI library combining aspects of Bau.js, Juris.js, and HTMX with a router and cDOM. It's browser-native and designed for both humans and machines.
- **Documentation:** lightview.dev - full guides and API reference
- **GitHub:** github.com/anywhichway/lightview
- **npm packages:**
  - Lightview (includes cDOM): https://www.npmjs.com/package/lightview
  - JPRX (standalone parser):

cDOM is currently in experimental preview and could easily be implemented outside the scope of Lightview. I'm actively refining the expression language and helper library based on real-world use cases. I'd love your feedback.

## Conclusion: The Spreadsheet Moment for UI

Spreadsheets democratized computation. You didn't need to be a programmer to define that `C3 = A1 + B2` and have it update forever.

cDOM is designed to be that moment for user interfaces: a format where an LLM, or a human, can declare *this text shows the count; this button increments it; this chart sums the sales*, and the system handles the rest.

In a world where AI agents are becoming the new developers, the language they speak to the browser matters. I think that language should be safe, reactive, and computational from the ground up.

That's cDOM. Welcome to the Computational DOM.