I have spent the last decade building systems at Microsoft, Twitter, and Stripe, watching the same pattern repeat itself. A seemingly small request like "let users download their Twitter data for the last month" turns into a multi-quarter project involving design reviews, sprint planning, new React components, backend APIs, and extensive QA cycles.

For decades, software engineers have built software the same way. We list the problems the system is supposed to solve, collaborate with product managers and UX designers to define what APIs to build, what storage systems to provision, the business logic and, last but not least, the UI design. This means mapping each possible user interaction with the product (through buttons or menu items) to one or more API requests in the backend. For complex features, this results in front-end systems that take months to build. And despite all this effort, users keep asking for features we never anticipated.

Agentic AI is fundamentally changing this pattern. Instead of building elaborate UIs for every conceivable action and feature, we are now designing systems where LLMs sit at the forefront, orchestrating workflows through protocols like the Model Context Protocol (MCP). This shift is not just about adding AI to existing systems; it is a rethinking of the entire architecture.

## What is MCP and why does it matter?

The Model Context Protocol (MCP) was invented by Justin Spahr-Summers and David Soria Parra, two software engineers at Anthropic. The initial concept emerged in mid-2024, and the protocol was open-sourced in late 2024 to standardize how AI models interact with external systems. Think of it as a universal adapter that lets LLMs communicate with your databases or APIs through a consistent interface.

Before MCP, every integration required custom code. If you wanted your AI agent to access Google Drive, you had to write a custom integration. With MCP, you implement the protocol once, and any MCP-compatible agent can interact with your system. If you start with LLMs from OpenAI and move to Anthropic later on, the MCP architecture you built once still works.

## Comparing architectures: a traditional system design

To understand why MCP matters, let's compare traditional and agentic approaches using a system design problem engineers love to ask about in interviews: building a ticket booking system like Ticketmaster.

The traditional approach requires:

- An interface to search with filters like date, city, and price
- An event details page containing the seating arrangement
- A seat selection UI
- Cart management
- A checkout flow
- An order history page
- A mobile app mirroring all of this functionality

### Traditional stack architecture

In a traditional stack, the UI drives everything. Each feature needs a dedicated component: a search page with filter controls, an event listing grid with pagination, an interactive seat chart, and so on. The UI layer alone might contain hundreds of React components and dozens of API integration points, with each user action wired to its own backend endpoint, as the sketch below illustrates.
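To make that one-action-one-endpoint coupling concrete, here is a rough sketch of a hypothetical Express-style backend (the route names and stub handlers are illustrative, not from a real system):

```typescript
// Hypothetical Express backend: every UI action is wired to its own endpoint.
import express from "express";

const app = express();
app.use(express.json());

// Search page with filters -> dedicated endpoint
app.get("/api/events", async (_req, res) => res.json({ events: [] }));

// Event details page with seat map -> dedicated endpoint
app.get("/api/events/:id/seats", async (_req, res) => res.json({ seats: [] }));

// Seat selection UI -> dedicated endpoint
app.post("/api/reservations", async (_req, res) => res.json({ holdId: "h_1" }));

// Checkout flow -> dedicated endpoint
app.post("/api/orders", async (_req, res) => res.json({ orderId: "o_1" }));

// A new feature like group bookings means a new endpoint,
// *plus* new pages, components, and QA cycles on every client.
app.post("/api/group-bookings", async (_req, res) => res.json({ ok: true }));

app.listen(3000);
```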
Each new feature, say adding group bookings to the system above, requires designing new components and creating new API endpoints: on the optimistic side, four to eight weeks minimum.

### Agentic stack architecture

An agentic design doesn't simply replace the UI with an LLM agent; it makes the system conversational instead of action-oriented. Here's the crucial insight: nobody is building their own LLMs either. You license one from OpenAI, Anthropic, Google, or others via API. These companies have spent billions perfecting these large language models and training them on enormous amounts of data, so you don't need to reinvent the wheel. Your job is to design the system around these existing models.

For our ticket booking system, users just type into the chat interface: "Find me two tickets to see Coldplay in Los Angeles next month, preferably aisle seats under $200 each." The LLM understands the user's intent, determines which tools to invoke from the registry, and executes the calls through MCP. The interface becomes minimal: a simple chat box, a lightweight view showing current selections, and a simple checkout confirmation.

With an MCP-based architecture, to fulfill the query above you would expose tools like `search_events`, `check_availability`, `get_seats`, `reserve_seats`, `process_payments`, and `get_tickets`:

```javascript
// Example: Simple MCP Tool Structure
{
  name: "search_events",
  description: "Search for events by artist, location, or date",
  input_schema: {
    artist: "string",
    city: "string",
    date_range: "string"
  },
  handler: async (params) => {
    // Your business logic here
    return await eventDatabase.query(params);
  }
}
```

Now, when the user types the query into the chat box, the LLM converts it into a chain: `search_events` (Coldplay shows, next month) → `check_availability` (2 adjacent aisle seats) → `filter_by_price` (under $200 each), and presents the options to the user in the chat box itself.

## Key design patterns

With agentic systems, the traditional approach to system design gives way to these key patterns.

### Capability-driven design

Instead of asking "What buttons should this page have?", engineers ask "What capabilities should the system expose?"

Rather than building separate UIs for searching events, filtering by price, selecting seats, and processing payments, you expose capabilities as MCP tools, and the agent composes them based on user intent. The chain above, from `search_events` to `filter_by_price`, is a key example.
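What makes that composition work is a small agent loop: the model proposes tool calls, the system executes them, and the results are fed back until the model can answer. A minimal TypeScript sketch, assuming a generic chat-completions-style API (`callModel` and the stub tool handlers are hypothetical):

```typescript
// A tool call proposed by the model, and the model's possible replies.
type ToolCall = { name: string; arguments: Record<string, unknown> };
type ModelReply = { toolCalls?: ToolCall[]; text?: string };

// Stand-in for your LLM provider's chat API (OpenAI, Anthropic, etc.).
declare function callModel(
  history: unknown[],
  toolNames: string[],
): Promise<ModelReply>;

// Registry of capabilities exposed as tools (stub implementations here).
const tools: Record<string, (args: unknown) => Promise<unknown>> = {
  search_events: async () => [{ id: "evt_1", artist: "Coldplay" }],
  check_availability: async () => [{ seat: "12A", aisle: true }],
  filter_by_price: async () => [{ seat: "12A", price: 180 }],
};

// The loop: let the model call tools until it produces a final answer.
async function runAgent(userMessage: string): Promise<string> {
  const history: unknown[] = [{ role: "user", content: userMessage }];
  for (;;) {
    const reply = await callModel(history, Object.keys(tools));
    if (!reply.toolCalls?.length) return reply.text ?? "";
    for (const call of reply.toolCalls) {
      const result = await tools[call.name](call.arguments);
      history.push({ role: "tool", name: call.name, content: result });
    }
  }
}
```

Note that nothing in the loop hard-codes the order of calls; the model decides the sequence from the conversation so far.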
### Context management rather than state management

Traditional applications like the one we discussed above maintain state through URL parameters, session cookies, local storage, and complex Redux stores. Users navigate through multiple pages, and the system preserves their selections and filters across those pages. Agentic systems replace this state management with context management through conversation history.

The LLM inherently maintains the full context of the user's intent. In the example above, the LLM remembers:

- The user is looking for 2 tickets
- The budget is under $200 each
- The user prefers aisle seats
- The user is interested in events next month

When the user subsequently says "Actually, I am fine with spending up to $300 each," the agent updates one parameter while preserving all other context. There is no new filter to apply on a search page.

### Tool chaining

This is one of the most powerful aspects of agentic design. LLMs chain multiple tools together to accomplish complex tasks based on user intent, not through predetermined UI workflows. Consider this request: "Find tickets to any popular concert in the next two weeks. If Coldplay is performing, I prefer that, otherwise any popular pop band would do."

Agent execution might look like: `search_events` (next two weeks) → no Coldplay shows found → `search_events` (popular bands, next two weeks) → Guns N' Roses returned → `check_availability` → present options.

In traditional design, this workflow does not exist. The user would have to manually check whether Coldplay is playing and, if not, reset the filters to find Guns N' Roses and book. In an agentic system, tool chaining makes such complex tasks possible. This dynamic composition is what separates agentic systems from traditional architectures: capabilities can be combined in ways you never explicitly programmed.

### Progressive enhancement

Engineers now focus on APIs, not on the LLMs themselves. The advantage is that as model providers improve their LLMs, your system automatically gets better at understanding user intent and chaining tools.

Releasing new features also no longer means big-bang launches. Engineers incrementally add MCP tools, and the LLM quietly surfaces the new functionality by handling more tasks. Add a `check_weather_forecast` tool, and suddenly users can ask "Will it rain during the concert?" without you building a weather UI. Add `get_parking_info`, and the agent proactively suggests parking options. Each tool you add multiplies the possible workflows, as the sketch below shows.
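To illustrate how small that increment can be, here is a sketch of registering the weather tool on an MCP server. The API shapes follow the `@modelcontextprotocol/sdk` TypeScript package, but treat this as an approximation and verify against the current SDK docs; the `weatherApi` service is hypothetical.

```typescript
// Sketch: shipping one new capability as a single MCP tool registration.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical weather service used by the tool handler.
declare const weatherApi: {
  forecast(venue: string, date: string): Promise<unknown>;
};

const server = new McpServer({ name: "ticketing", version: "1.0.0" });

// No new pages, routes, or components: any MCP-compatible agent can
// start answering "Will it rain during the concert?" immediately.
server.tool(
  "check_weather_forecast",
  "Get the weather forecast for a venue on a given date",
  { venue: z.string(), date: z.string() },
  async ({ venue, date }) => {
    const forecast = await weatherApi.forecast(venue, date);
    return { content: [{ type: "text", text: JSON.stringify(forecast) }] };
  },
);

await server.connect(new StdioServerTransport());
```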
## When to use agentic design

Not every system needs agentic design; it is a tool for specific contexts. Based on my experience, here's when to consider this approach:

- The system integrates many external services or data sources
- System requirements change frequently
- Users have unpredictable workflows
- Natural language provides genuine value over visual interfaces

On the flip side, these scenarios are a bad fit for agentic design:

- User interactions are limited and repetitive
- Low latency is a must
- Users prefer explicit control over actions

## The future is conversational

After a decade in this industry, I have seen how much engineering effort goes into building and maintaining complex UIs. The agentic paradigm offers something fundamentally different. The shift from UI-first to agent-first design is more than a technical evolution; it is a rethinking of how humans interact with software in general.

Instead of forcing users to learn our interfaces and navigate predetermined workflows, engineers are now building systems that understand user intent expressed in natural language and expose capabilities dynamically. This is transformative. Instead of maintaining massive front-end codebases that take months to update, you can iterate weekly, exposing new capabilities organically and letting the agent compose them intelligently.

The future of software isn't about building better button-based UIs; it's about designing better conversational experiences around powerful capabilities.