I have spent the last eighteen months working as a digital janitor. My job title says "Software Engineer," but my day-to-day reality has been far less glamorous. I’ve been building glue. Not the industrial-strength adhesive that holds skyscrapers together. I’m talking about the digital equivalent of duct tape, used to bind one proprietary AI API to another proprietary data source, wrapped in a framework that changes its syntax every three weeks.

It has been exhausting. It has been wasteful. It has been the primary bottleneck for anyone trying to ship actual production AI systems rather than just cool demos on Twitter.

We have been building agents in a fragmented reality. If you wanted your agent to talk to a Postgres database, you wrote a custom integration. If you wanted it to switch to Notion, you wrote another. If you wanted to swap the underlying model from GPT-4 to Claude because the pricing made more sense, you practically had to rewrite the orchestration layer because the function calling schemas were slightly different.

We were not building agents. We were building translators.

That changed on December 9th. The announcement was dry. It involved foundations and governance boards. It lacked the cinematic flair of a demo video showing a robot folding laundry. But for those of us who actually ship code? It was the most exciting news of the year.

The Agentic AI Foundation has arrived. And with it, the era of proprietary agent glue is effectively over.

## Why Are We Still Hardcoding API Calls?

To understand the magnitude of this shift, we have to look at the mess we are currently wading through.

The prevailing orthodoxy in GenAI development has been one of "walled garden innovation." Every major model provider, every framework builder, and every cloud giant looked at the problem of Agentic AI and decided they needed to own the entire stack.

The logic seemed sound on paper. If you own the model, the orchestration, and the tool definitions, you can optimise performance. You can ensure security. You can charge for every step of the chain.

So we ended up with a landscape that looked like the Tower of Babel. OpenAI had its specific way of defining tools and functions. Anthropic had a slightly different JSON schema. LangChain had its abstractions. LlamaIndex had theirs. Microsoft’s Semantic Kernel did it another way.

Here is what this looks like in practice. This is code I wrote six months ago. It is "legacy" now.

### The Old Way: The Integration Trap

```python
# The "Glue" Anti-Pattern
# We are tightly coupling the model provider (OpenAI)
# to the specific tool implementation (Database)
import openai
import json
from pydantic import BaseModel

# 1. Define the schema specifically for OpenAI's format
class SQLQuery(BaseModel):
    query: str
    rationale: str

openai_tool_definition = {
    "type": "function",
    "function": {
        "name": "execute_sql",
        "description": "Run a SQL query against the prod DB",
        "parameters": SQLQuery.model_json_schema(),
    },
}

# 2. Hardcode the logic
def execute_sql(query):
    # Imagine 50 lines of connection logic here
    pass

# 3. The Orchestration Loop
# If we want to switch to Anthropic later, we have to rewrite
# this entire block because their tool-use syntax differs.
response = openai.chat.completions.create(
    model="gpt-4",
    messages=[...],
    tools=[openai_tool_definition],
)
```
If you are a developer, looking at that code should make you itch. It works. It ships. But it is brittle. If I want to change the database, I have to update the tool definition. If I want to change the model provider, I have to rewrite the schema generation. If the model provider changes their API version (which happens constantly), my production system breaks.

This is technical debt the moment you write it.

We were building distinct silos of intelligence that could not speak to one another. We were creating a world where an agent built on the OpenAI stack would look at a tool definition written for an Anthropic agent and see nothing but gibberish.

The result was friction. Massive, expensive, soul-crushing friction.

## The Model Context Protocol (MCP): A USB Port for Intelligence

The launch of the Agentic AI Foundation (AAIF) is the industry admitting that the "land grab" phase of infrastructure is over. Hosted by the Linux Foundation, it includes Amazon, Google, Microsoft, OpenAI, and Anthropic. These companies usually agree on nothing. They fight over talent. They fight over market share. Yet here they are, aligning on a single set of open standards.

Why? Because they realised that if the plumbing doesn't work, nobody buys the water.

The crown jewel of this foundation is the Model Context Protocol (MCP), donated by Anthropic.

I have been skeptical of "universal protocols" in the past. Usually, they are abstract academic exercises. But MCP is different. It is practical. It works right now.

Think of MCP as a "Language Server Protocol" (LSP) for AI. Before LSP, if you wanted to build a code editor (like VS Code or Vim) that supported Python, you had to write your own Python parser and autocomplete engine. LSP standardised the communication. Now, the Python team writes a "Language Server," and any editor that speaks LSP can instantly understand Python.

MCP does the exact same thing for AI tools. In the MCP world, I don't write a "Google Drive integration for Claude." I write an "MCP Server for Google Drive."

### The New Way: The MCP Server

Here is what the code looks like now. This is a TypeScript example using the MCP SDK. Notice the difference in philosophy.

```typescript
/**
 * The MCP Pattern
 * We are not writing for a specific model.
 * We are writing a server that broadcasts capability.
 */
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// 1. Create the server
const server = new McpServer({
  name: "database-server",
  version: "1.0.0",
});

// 2. Register the tool resource
// We define the tool ONCE.
// Any MCP-compliant client (Claude, Goose, Custom) can use this.
server.tool(
  "execute_sql",
  {
    query: z.string().describe("The SQL query to run"),
    rationale: z.string().describe("Why you are running this"),
  },
  async ({ query, rationale }) => {
    // The implementation logic lives here, isolated from the LLM.
    console.error(`Executing query: ${query} because ${rationale}`);
    return {
      content: [{ type: "text", text: "Query Result: [Row 1, Row 2...]" }],
    };
  },
);

// 3. Connect via Standard IO
// The AI connects to this process via stdio.
// No HTTP overhead required for local tools.
const transport = new StdioServerTransport();
await server.connect(transport);
```

**Why is this better?**

- **Decoupling:** The server doesn't know it's talking to Claude, GPT-4, or a local Llama model. It just exposes a capability.
- **Standardisation:** The schema (the `zod` definition) is automatically converted into the format the model understands by the MCP Client.
- **Portability:** I can take this `database-server`, wrap it in a Docker container, and deploy it anywhere. Any agent with access to that container can use the database.

The model asks: "What tools do you have?" The MCP server replies: "I can list files, read files, and execute SQL." The model says: "Execute SQL."

The protocol handles the handshake. The protocol handles the security context. I don't have to rewrite the glue code ever again.
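To make that handshake concrete, here is a minimal sketch of the other side of the wire: a client connecting to the `database-server` above over stdio, discovering its tools, and calling one. The import paths and method names follow the MCP TypeScript SDK as I understand it at the time of writing, and the `node ./build/database-server.js` command is a placeholder for wherever your compiled server actually lives.

```typescript
/**
 * A minimal sketch of the client side of the handshake.
 * Any MCP host (a desktop app, an IDE, your own harness) does
 * roughly this under the hood: connect, discover, call.
 */
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// 1. Spawn the server as a child process and talk to it over stdio.
//    The command below is a placeholder path, not part of any spec.
const transport = new StdioClientTransport({
  command: "node",
  args: ["./build/database-server.js"],
});

const client = new Client({ name: "example-host", version: "1.0.0" });
await client.connect(transport);

// 2. "What tools do you have?"
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name)); // e.g. ["execute_sql"]

// 3. "Execute SQL." The arguments mirror the zod schema the server declared.
const result = await client.callTool({
  name: "execute_sql",
  arguments: {
    query: "SELECT id FROM users LIMIT 10",
    rationale: "Sampling rows for a data quality check",
  },
});

console.log(result.content);
```

In practice you rarely write this loop by hand; the host application runs it for you. The point is that the same three calls work against any MCP server, no matter which model sits on top.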
## Is AGENTS.md Just a Glorified README?

Alongside MCP, OpenAI contributed `AGENTS.md`.

At first glance, this looks trivial. It is effectively a markdown file in your repository that describes the codebase. You might ask: "Isn't this just a README?"

No. A README is for humans. `AGENTS.md` is for robots.

When you point an AI agent at a codebase today, how does it know what the code does? How does it know the conventions? How does it know that we use `snake_case` for variables but `CamelCase` for classes?

Until now, we stuffed this into the system prompt. We pasted giant blocks of text into the chat window. We engaged in "Prompt Engineering," which is often a polite term for guessing which magic words will make the stochastic parrot behave.

`AGENTS.md` standardises this context injection.

### Context as Code

This allows us to move from "Prompt Engineering" to Context Architecture.

Instead of treating agent instructions as ephemeral text pasted into a chat UI, we treat them as code. They live in the repo. They are version controlled. They are subject to Pull Requests.

**Example: `AGENTS.md` Structure**

```markdown
# Project Context
This repository contains the backend for the Payments Service.

## Architectural Constraints
- ALL database access must go through the Repository pattern.
- NEVER write raw SQL in the controllers.
- We use feature flags for all new payment gateways.

## Common Tasks
### Add a New Payment Gateway
1. Create a class in `src/gateways` implementing `IGateway`.
2. Register the key in `config/services.yaml`.
3. Add a migration for the transaction log.

## Known Issues
- The `Refund` class is deprecated. Use `TransactionReversal` instead.
```

When an MCP-compliant agent (like the newly open-sourced `goose` from Block) enters this repository, it reads this file first. It effectively "onboards" itself.

If I change the architectural constraints, I update `AGENTS.md`. The agent's behaviour changes immediately. No prompt tuning required. This creates a direct, version-controlled link between your documentation and the agent's behaviour.

This seems trivial until you try to automate code maintenance at scale. Without a standard, every agent guesses. With a standard, the agent knows.
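What does "standardised context injection" look like mechanically? Nothing exotic. Here is a hedged sketch of what a host might do before the agent touches a single file: read `AGENTS.md` from the repo root and fold it into the system context. The function names (`buildSystemContext`, `callModel`) are mine, purely illustrative, and not part of any spec.

```typescript
/**
 * A rough sketch of context injection, not an official mechanism:
 * the host reads AGENTS.md from the repo root and prepends it to
 * the system context before the agent sees the codebase.
 */
import { readFile } from "node:fs/promises";
import { join } from "node:path";

async function buildSystemContext(repoRoot: string): Promise<string> {
  let projectContext = "";
  try {
    projectContext = await readFile(join(repoRoot, "AGENTS.md"), "utf-8");
  } catch {
    // No AGENTS.md? The agent falls back to guessing, like it does today.
  }

  return [
    "You are a coding agent working in this repository.",
    projectContext && "Follow the project context below exactly:",
    projectContext,
  ]
    .filter(Boolean)
    .join("\n\n");
}

// Usage: the instructions travel with the repo, not with the prompt.
const systemContext = await buildSystemContext(process.cwd());
// await callModel({ system: systemContext, messages: [...] }); // hypothetical client
```

Change the constraints, open a Pull Request, merge, and every agent that onboards from that repo picks up the new rules on its next run.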
## The Architecture of Trust

We need to talk about security. This is where the skeptics usually check out, and rightly so. Giving an autonomous agent access to your production database or your Slack workspace is terrifying.

In the "Old Way" (the custom integration), security was often an afterthought. You pasted an API key into an `.env` file and prayed that the model didn't get hit with a prompt injection attack.

MCP introduces a structural change to security. It moves us to a Client-Host-Server model.

- **The Host:** The application running the agent (e.g., the Claude Desktop App, or a local IDE).
- **The Client:** The agent logic itself.
- **The Server:** The tool (e.g., the Postgres MCP server).

The Host acts as the firewall. It controls the consent layer. When the Client tries to call `execute_sql` on the Server, the Host intercepts the call. It can prompt the user: "The Agent wants to run 'DROP TABLE users'. Allow?"

This is not possible when the tool logic is embedded inside the agent's code. By separating the tool (Server) from the brain (Client) via a protocol, we create a distinct boundary where we can enforce policy. It turns the "Human in the Loop" from a buzzword into a distinct architectural layer.
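What does that boundary look like in code? Below is a hedged sketch of a host-side gate: every tool call is proxied through a policy check, and destructive calls stop for explicit user consent. None of this is mandated by the MCP spec, and the `isDestructive` heuristic is deliberately crude; real hosts ship their own consent UIs and policies. It simply shows where policy can now live: between the brain and the tool.

```typescript
/**
 * A sketch of a host-side consent layer, under the assumption that the
 * host proxies every tool call between Client (agent) and Server (tool).
 * Nothing here is part of the MCP spec; it shows where policy can sit.
 */
import * as readline from "node:readline/promises";
import { stdin, stdout } from "node:process";

type ToolCall = { name: string; arguments: Record<string, unknown> };

// A deliberately crude heuristic. Real policy engines would be richer:
// allowlists per server, read-only modes, audit logs, and so on.
function isDestructive(call: ToolCall): boolean {
  const text = JSON.stringify(call.arguments).toUpperCase();
  return ["DROP ", "DELETE ", "TRUNCATE "].some((kw) => text.includes(kw));
}

async function askUser(question: string): Promise<boolean> {
  const rl = readline.createInterface({ input: stdin, output: stdout });
  const answer = await rl.question(`${question} [y/N] `);
  rl.close();
  return answer.trim().toLowerCase() === "y";
}

// The host wraps the real dispatch (e.g. client.callTool) with a gate.
async function gatedCall(
  call: ToolCall,
  dispatch: (call: ToolCall) => Promise<unknown>,
): Promise<unknown> {
  if (isDestructive(call)) {
    const allowed = await askUser(
      `The Agent wants to run ${call.name} with ${JSON.stringify(call.arguments)}. Allow?`,
    );
    if (!allowed) {
      return { content: [{ type: "text", text: "Call denied by user." }] };
    }
  }
  return dispatch(call);
}
```

Because the protocol makes the call explicit and structured, the gate has something concrete to inspect. With hand-rolled glue code, the "tool call" is just another string buried inside your own process.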
## What This Actually Means

The launch of the AAIF is the industry signaling the end of the "wild west" phase of GenAI. We are moving from hand-crafted, artisanal agents to industrial-grade infrastructure. We are trading the excitement of the "new release" for the comfort of the "stable protocol."

For the theorists, this might be boring. Standardisation is never as sexy as revolution. They want to talk about AGI and consciousness. But for the builders? This means we can stop reinventing the wheel every Tuesday.

This creates a new ecosystem. We are going to see a shift in how SaaS products market themselves. Previously, they touted their "API." Soon, they will tout their "MCP Server." "Does it integrate with Claude?" will be replaced by "Is it MCP compliant?"

This opens up a massive market for developers. There is now a clear, standard way to build plugins for the *entire* AI ecosystem. You write the MCP server once, and it works with every agent framework that adopts the standard.

If you are currently writing a complex, custom integration framework for your internal tools that is tightly coupled to a specific LLM provider's SDK... stop. You are building technical debt.

Investigate MCP. Look at how you can expose your internal APIs as MCP servers. This future-proofs your work. Today you might be using GPT-4 via Azure. Tomorrow you might be using a fine-tuned Llama 4 on-prem. If your tools speak MCP, the migration is trivial.

## TL;DR For The Scrollers

- **Proprietary glue is dead.** Stop writing custom integrations for specific LLMs. It’s a waste of time.
- **MCP is the USB port for AI.** It standardises how models talk to tools (databases, file systems, APIs).
- **Write servers, not scripts.** Build "MCP Servers" that expose capabilities, rather than scripts that wrap API calls.
- **Context is Code.** Use `AGENTS.md` to document your codebase for robots, treating instructions like version-controlled code.
- **Security needs boundaries.** MCP separates the tool from the model, allowing for a proper consent layer (Host) in the middle.

Read the complete technical breakdown →

Edward Burton ships production AI systems and writes about the stuff that actually works. Skeptic of hype. Builder of things. Production > Demos. Always. More at tyingshoelaces.com

What proprietary integration are you most excited to delete from your codebase?