Kevan Dodhia’s Builder Journey to Creating the New Policy Layer for AI Agents

Written by jonstojanjournalist | Published 2025/10/21
Tech Story Tags: alter-yc-s25 | agent-authorization | kevan-dodhia | ai-security-compliance | zero-trust-for-ai-agents | compute.ai-acquisition | distributed-systems-security | good-company

TL;DR: Kevan Dodhia, former Compute.ai co-founder, is redefining AI security with Alter, an agent authorization platform that enforces real-time, fine-grained access control for AI agents. Built on his distributed systems expertise, Alter applies zero-trust principles, ephemeral credentials, and auditable policies, making autonomous agents safe and compliant for enterprise deployment.

Kevan Dodhia’s career has traced the arc of modern enterprise computing, from building high‑performance distributed SQL engines to pioneering a new category of AI security. As technical co-founder of Compute.ai, he and his team built a compute engine 5x faster than EMR Spark to serve data analytics in highly regulated environments. After Compute.ai’s 2025 acquisition by Terizza, Dodhia turned his attention to a fresh problem: how to make autonomous AI agents safe and compliant in production. The result is Alter (YC S25), an identity and access control platform for AI agents that embodies Dodhia’s experience with distributed systems, SQL compute engines, and regulated deployment.


In his previous role, Dodhia learned first‑hand how critical auditability and compliance are for financial and government customers. Compute.ai’s clients included the London Stock Exchange (LSEG), where every data query and job had to meet stringent governance rules. “We built a compute engine 5x faster than EMR Spark and sold into highly regulated enterprises like LSEG,” Dodhia said. That experience, scaling low‑latency queries across clusters while satisfying auditors, set the stage for Alter’s architecture.


The Alter platform breaks down policy intent into machine‑enforceable controls, using distributed enforcement points in the data plane. Just as Dodhia architected Compute.ai’s SQL engine for speed and reliability, Alter’s control layer must be highly performant. Every agent API call is intercepted and checked before proceeding, so latency is a key trade‑off. Dodhia said that the system verifies identity and checks every parameter against policy in real time, a process that inevitably adds a small delay but is essential to zero trust for AI agents.


Alter effectively creates a new security category: agent authorization. The platform lives between an AI agent and any external tool or database, authenticating each request with strong identity and enforcing fine‑grained policies. The term agent authorization captures this idea: just as user sessions are subject to identity checks and permissions, AI agents get an equivalent check. The YC launch materials describe Alter’s approach succinctly: it wraps every tool call in strong authentication, fine-grained authorization, and real-time guardrails. In practice, that means an LLM agent cannot issue a data query or execute a transaction without Alter’s approval. Alter issues ephemeral credentials for each agent action, scoped tokens that expire in seconds, so no long-lived secrets are left floating around.
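The pattern is easier to see in miniature. The sketch below is not Alter’s code; it is a minimal Python illustration of the agent-authorization flow the launch materials describe, using hypothetical names (`authorize_tool_call`, `mint_ephemeral_token`, the `POLICIES` table): authenticate the agent, evaluate policy for the specific tool and parameters, and only then mint a short-lived credential for that single action.

```python
import secrets
import time

# Hypothetical in-memory policy table: which tools each agent role may call.
POLICIES = {
    "support-agent": {"crm.read_customer", "tickets.create"},
}

TOKEN_TTL_SECONDS = 30  # ephemeral credential lifetime


def mint_ephemeral_token(agent_id: str, tool: str) -> dict:
    """Issue a narrowly scoped, short-lived credential for one tool call."""
    return {
        "token": secrets.token_urlsafe(32),
        "agent": agent_id,
        "scope": tool,
        "expires_at": time.time() + TOKEN_TTL_SECONDS,
    }


def authorize_tool_call(agent_id: str, role: str, tool: str, params: dict) -> dict:
    """Authenticate, check policy, and return a credential or a denial."""
    allowed = POLICIES.get(role, set())
    if tool not in allowed:
        return {"decision": "deny", "reason": f"{role} may not call {tool}"}
    # A production system would also validate params against the policy here.
    return {"decision": "allow", "credential": mint_ephemeral_token(agent_id, tool)}


if __name__ == "__main__":
    print(authorize_tool_call("agent-42", "support-agent", "crm.read_customer", {"id": 7}))
    print(authorize_tool_call("agent-42", "support-agent", "billing.refund", {"amount": 500}))
```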


Under the hood, Alter converts policy intent into these low-level controls. For example, a business user might write a policy assigning agents high-level roles and functions. Alter’s compiler translates this into row- and column‑level filters on database queries, and even into checks on prompt parameters. This approach delivers fine-grained policy for LLMs: the system doesn’t just say yes or no, but enforces what data and actions are allowed at the level of individual rows or fields. In Dodhia’s words, the system is designed to prevent risky operations, such as database deletions or unintended transactions, by applying policy-based safeguards.
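As a rough illustration of that compilation step (again, not Alter’s implementation, and every name here is hypothetical), a high-level role grant could be lowered into column projections and row predicates applied to whatever query the agent asks for:

```python
# A hypothetical high-level policy, as a business user might express it.
POLICY = {
    "role": "support-agent",
    "table": "customers",
    "allowed_columns": ["id", "name", "plan"],   # column-level filter
    "row_filter": "region = 'EU'",               # row-level filter
}


def compile_query(policy: dict, requested_columns: list[str]) -> str:
    """Lower the high-level policy into a SQL statement the agent is allowed to run."""
    cols = [c for c in requested_columns if c in policy["allowed_columns"]]
    if not cols:
        raise PermissionError("no requested column is permitted by policy")
    return (
        f"SELECT {', '.join(cols)} "
        f"FROM {policy['table']} "
        f"WHERE {policy['row_filter']}"
    )


# The agent asked for an email column it is not entitled to; it is silently dropped.
print(compile_query(POLICY, ["id", "name", "email"]))
# -> SELECT id, name FROM customers WHERE region = 'EU'
```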


A core promise of Alter is blocking dangerous agent commands in real time. Since AI agents can fabricate or misinterpret outputs, as Dodhia noted, enterprises worry about rogue behavior. Alter addresses this by checking every request against the policy engine: access attempts that exceed defined permissions or transaction limits are automatically restricted, so harmful commands are far less likely to reach production and exposure to data or financial errors stays bounded. Each API call uses an ephemeral token with narrowly scoped privileges; once used, that credential expires. The effect is a system with no long-lived secrets, no blind spots, and no surprises in audit.
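A minimal sketch of that kind of guardrail, assuming a hypothetical rule set rather than Alter’s actual policy engine, might reject destructive SQL verbs and transactions above a configured limit before the request ever reaches the tool:

```python
import re

# Hypothetical rules: block destructive SQL and cap payment amounts.
DESTRUCTIVE_SQL = re.compile(r"^\s*(drop|delete|truncate|alter)\b", re.IGNORECASE)
MAX_TRANSACTION_USD = 1_000


def guardrail(action: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed agent action."""
    if action["type"] == "sql" and DESTRUCTIVE_SQL.match(action["statement"]):
        return False, "destructive SQL statements are blocked by policy"
    if action["type"] == "payment" and action["amount_usd"] > MAX_TRANSACTION_USD:
        return False, f"amount exceeds the {MAX_TRANSACTION_USD} USD transaction limit"
    return True, "within policy"


print(guardrail({"type": "sql", "statement": "DELETE FROM orders"}))   # denied
print(guardrail({"type": "payment", "amount_usd": 250}))               # allowed
```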


This design reflects Dodhia’s distributed systems pedigree: like a fast query engine splitting work across shards, Alter’s enforcement is distributed. Every tool (databases, APIs, cloud services) connects through the control layer. The platform supports the Model Context Protocol (MCP) and native integrations, with agent‑to‑agent (A2A) support coming soon. The choice to be vendor‑neutral is intentional. Dodhia and his co-founder stress the need for neutral, vendor‑agnostic infrastructure: security that works whether a company uses AWS, GCP, or on‑prem systems. This stems from serving compliance buyers who dread being locked into one cloud’s mechanisms. By keeping the control plane generic, Alter lets customers adopt AI agents without rewriting all their access policies.


Of course, intercepting every agent action comes with trade‑offs. Authentication and policy evaluation add latency. The team also found that keeping policies only as code was too developer‑centric, so they invested in a policy UI geared toward security teams and business users, presenting policies simply enough that non-technical stakeholders can define them too. One lesson Dodhia highlights: the users of these policies are often auditors or compliance officers, not engineers, which requires the UX to be unambiguous and verifiable. Real-time enforcement also demands efficiency; a policy check must finish in milliseconds. Alter mitigates the impact by using low-level languages and incremental evaluation where possible. Nevertheless, the system’s strict checks mean some sacrifice in raw throughput, a compromise Dodhia acknowledges as necessary for AI access control in high-stakes settings.
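One common way to keep such checks inside a millisecond budget, sketched below with hypothetical names and no claim that this is how Alter implements incremental evaluation, is to memoize recent decisions for identical (agent, tool, parameters) tuples so only novel requests pay the full evaluation cost:

```python
import hashlib
import json
import time

CACHE_TTL = 2.0  # seconds a cached decision stays valid
_decision_cache: dict[str, tuple[float, bool]] = {}


def _key(agent: str, tool: str, params: dict) -> str:
    payload = json.dumps({"a": agent, "t": tool, "p": params}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()


def full_evaluation(agent: str, tool: str, params: dict) -> bool:
    """Stand-in for the expensive policy walk; always the source of truth."""
    return tool != "db.drop_table"


def check(agent: str, tool: str, params: dict) -> bool:
    """Serve a cached decision when an identical request was evaluated recently."""
    k = _key(agent, tool, params)
    hit = _decision_cache.get(k)
    if hit and time.time() - hit[0] < CACHE_TTL:
        return hit[1]
    decision = full_evaluation(agent, tool, params)
    _decision_cache[k] = (time.time(), decision)
    return decision


print(check("agent-7", "crm.read_customer", {"id": 3}))  # full evaluation
print(check("agent-7", "crm.read_customer", {"id": 3}))  # served from cache
```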


Integration complexity is another challenge. Alter must sit alongside potentially dozens of tools and agent frameworks. Dodhia’s past experience helped him: at Compute.ai, he built connectors to common data stores under tight service‑level requirements. Similarly, Alter provides connectors and SDKs so that existing agent platforms (OpenAI, Anthropic, etc.) can call into Alter’s gateway. The hope is to make Alter mostly transparent once configured – ideally, a frictionless layer.
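In practice, such a connector can be as thin as a decorator around the tool functions an agent framework already exposes. The sketch below is a hypothetical SDK shape, not Alter’s published API: the decorator routes every invocation through a gateway check first and raises if the request is denied.

```python
from functools import wraps


def gateway_check(agent_id: str, tool: str, kwargs: dict) -> bool:
    """Stand-in for a call to the authorization gateway; the single decision point."""
    return tool != "payments.refund" or kwargs.get("amount_usd", 0) <= 100


def guarded_tool(tool_name: str, agent_id: str):
    """Wrap an existing tool function so every invocation is authorized first."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(**kwargs):
            if not gateway_check(agent_id, tool_name, kwargs):
                raise PermissionError(f"{tool_name} denied for {agent_id}")
            return fn(**kwargs)
        return wrapper
    return decorator


@guarded_tool("payments.refund", agent_id="agent-12")
def refund(amount_usd: float, order_id: str) -> str:
    return f"refunded {amount_usd} USD on {order_id}"


print(refund(amount_usd=40, order_id="A-1001"))   # allowed
# refund(amount_usd=900, order_id="A-1002")       # would raise PermissionError
```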


A recurring theme in Dodhia’s story is compliance. Serve the financial sector, and you learn that audit trails and built‑in security controls are non-negotiable. At Compute.ai, he had to prove every job’s provenance; at Alter, he baked auditability in from day one. The platform logs every agent request and decision, surfacing it in a CISO‑ready dashboard. For example, Alter can report “Agent X asked a database for customer records with parameter Y at 3:14 pm, and was denied under policy rule Z.” This transparency is a major selling point. Dodhia noted that compliance buyers expect evidence of least privilege and of policies running automatically in the background, essentially proof that no rule was violated. With built‑in audit trails, teams can pass SOC 2, HIPAA, or GDPR reviews without weeks of manual evidence gathering.
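An audit entry like that one is straightforward to represent. The record below is a hypothetical shape chosen to mirror the example, not a documented Alter schema; the point is that every field an auditor needs (who, what, which parameters, which rule, what decision, when) is captured at the moment of enforcement:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class AuditRecord:
    agent_id: str
    tool: str
    parameters: dict
    decision: str        # "allow" or "deny"
    policy_rule: str     # the rule that produced the decision
    timestamp: str


record = AuditRecord(
    agent_id="agent-X",
    tool="db.query_customer_records",
    parameters={"filter": "parameter Y"},
    decision="deny",
    policy_rule="rule Z",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# Emit as JSON so it can be shipped to a dashboard or an evidence store.
print(json.dumps(asdict(record), indent=2))
```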


Several lessons emerge from Dodhia’s emphasis on compliance. First, policies must be auditable by design. That influenced Alter to avoid “magic” AI solutions; every access decision is deterministic and recordable. Second, security for AI agents can’t be an afterthought. “When you run in regulated environments,” Dodhia said, “you can’t bolt on security at the end, it has to be core to the architecture.” That’s why Alter was built from scratch as a zero‑trust platform for AI: its very name and design are about removing implicit trust. And third, flexibility matters. Enterprises often have heterogeneous tech stacks, so Alter’s vendor-neutral approach (e.g., supporting any cloud or on‑prem tools) ensures customers aren’t forced to replace infrastructure just to add agent controls.


Kevan Dodhia’s move from distributed compute engines to agent security platforms illustrates how deep engineering experience can address emerging AI risks. Alter is both a technical and conceptual leap: it compiles intuitive policy into low-level controls, applying the rigor of database access control to AI agents. By rejecting long-lived credentials and enforcing zero-trust for AI agents, it prevents a single mishap from escalating.


The result aligns with Dodhia’s goal: making AI agents safe for production by constraining them to the minimum access and duration they require. His journey underscores that in security, and especially in compliance‑driven environments, architecture and human needs must be in sync. As Dodhia puts it, Alter is about enabling teams to move fast on AI agent initiatives while staying fully compliant. In practice, this means building security that codifies policy intent, respects non‑technical users, and gives auditors exactly what they need: evidence that no agent ever misbehaved.


Written by jonstojanjournalist | Jon Stojan is a professional writer based in Wisconsin committed to delivering diverse and exceptional content.
Published by HackerNoon on 2025/10/21