Enterprises do not need a single, universal AI bot. They need domain-native agent systems that understand retail baskets and promotions, bank deposits and risk flags, SKU velocity and supplier OTIF, network cells and outage cohorts, patient journeys and protocol deviations. That is the center of Praveen Satyanarayana’s vision at Tredence: custom workflows, ontologies, knowledge graphs, tools, and metrics for each domain, carried by two non-negotiables that apply everywhere.

- **Ontology-first grounding.** Business language resolves to canonical metrics, permitted joins, units, and lineage, and the system locks scope across metric, time, segment, and geography before it spends any compute.
- **Reliability over autonomy.** Multi-turn reasoning runs through a verification scaffold with dual judges: one LLM critic for structure and clarity, and one gold data layer for numerical truth.

For more than a decade, companies have poured money into warehouses and dashboards while decision latency stayed stubborn. The last-mile problem is not a visualization gap. It is a reasoning gap. Closing it requires systems that clarify intent, form and test hypotheses, map claims to governed data, and return decision-ready recommendations with an auditable trace. That is the point of Milky Way, the agentic decision system Praveen and team have built for descriptive and diagnostic analytics. It treats enterprise reality as it is: overloaded terms, partial data, brittle joins, and audit demands.

> “In retail, an agent that cannot speak basket, UPC, promo, and store-week has no business writing SQL.” — Praveen Satyanarayana

> “We build constellations of agents, not a mascot bot. Each one knows its domain, its tools, and its guardrails.” — Praveen Satyanarayana

## What makes this different

Agentic AI is software that chooses actions and uses tools to pursue a goal within guardrails. Praveen’s contribution is to make that idea measurable and governable for analytics. The architecture is simple to state and strict to implement:

- **Ground the language.** Map terms to entities, metrics, synonyms, lineage, and admissible join paths. Refuse ambiguity.
- **Scaffold the execution.** Compile plans into guarded tool calls with timeouts, retries, circuit breakers, and SQL structure checks on schema variants.
- **Treat hypotheses as objects.** Generate competing explanations, bind each to fields, joins, transforms, tests, and visuals, then rank by prior likelihood, cost to validate, and expected information gain (see the sketch after this list).
- **Judge twice.** Let a critic model score clarity and coverage while a gold store verifies numbers, joins, filters, and statistical claims.
- **Deliver a decision narrative.** Provide tables, figures, confidence, and links to the full trace for audit.
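To make the hypothesis-as-object idea concrete, here is a minimal sketch in Python. The field names and the ranking rule are illustrative assumptions, not Milky Way’s implementation; the article only states that hypotheses carry their data bindings and are ranked by prior likelihood, cost to validate, and expected information gain.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A candidate explanation bound to the data needed to test it."""
    claim: str                   # e.g. "Promo overlap cannibalized basket size"
    fields: list[str]            # columns the test needs
    joins: list[str]             # admissible join paths from the ontology
    tests: list[str]             # statistical tests to run
    prior: float                 # prior likelihood, 0..1
    cost_to_validate: float      # estimated query/compute cost
    expected_info_gain: float    # how much a result narrows the search

def rank(hypotheses: list[Hypothesis]) -> list[Hypothesis]:
    # One plausible scoring rule: favor likely, informative, cheap-to-test
    # hypotheses. The exact trade-off is a design choice, not from the article.
    def score(h: Hypothesis) -> float:
        return h.prior * h.expected_info_gain / max(h.cost_to_validate, 1e-6)
    return sorted(hypotheses, key=score, reverse=True)

candidates = [
    Hypothesis("Promo overlap cut basket size", ["store_week", "promo_id"],
               ["sales->promo_calendar"], ["diff_in_diff"], 0.5, 2.0, 0.8),
    Hypothesis("Stockouts suppressed UPC velocity", ["upc", "on_hand"],
               ["sales->inventory"], ["cohort_compare"], 0.3, 1.0, 0.6),
]
for h in rank(candidates):
    print(h.claim)
```

The point of the pattern is that a hypothesis is data, not prose: once it carries its own fields, joins, and tests, the planner can budget and order validation work explicitly.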
## Why now

Error compounding is unforgiving. Modest per-step error rates collapse end-to-end reliability in multi-step workflows: at 98 percent per-step reliability, a 20-step workflow completes correctly only about 67 percent of the time (0.98^20 ≈ 0.67). That is why bounded steps, verification, and human gates matter. Conversation length also drives token cost and latency, so practical systems favor short-state tasks with explicit checkpoints. Milky Way decomposes work into verifiable sub-plans and keeps context tight.

> “Short, verified steps beat long, clever chats.” — Praveen Satyanarayana

## A crisp domain-native playbook

The system does not ship one template. It ships domain packs that include an ontology and knowledge graph, a vetted tool set, a starter library of hypotheses, and acceptance metrics. Retail, BFSI, Supply Chain, Telecom, Healthcare, and Travel all use the same backbone but install different packs. Joins and lineage differ by domain, so reliability must be defined locally and enforced centrally.

> “The fastest way to lose trust is to answer quickly with the wrong join/data.” — Praveen Satyanarayana

## Knowledge graph and ontology operations

The ontology and knowledge graph act as the contract between language and data. They encode entity relationships, metric lineage, join admissibility, synonyms, and policy tags. They also carry path costs and quality labels so planners prefer short, reliable routes. Operations on this layer include:

- **Drift monitors.** Detect schema changes, definition shifts, unit mismatches, and relationship breaks.
- **Adapters and curricula.** Provide domain adapters for new tables and curate curriculum tasks that harden weak spots.
- **Synonym and alias management.** Maintain a compact term store supported by embeddings for recall and by hard rules for precision.
- **Join validators.** Run preflight checks and structural SQL tests on hidden schema variants before execution (a sketch follows this list).
- **Lineage transparency.** Record tables, joins, filters, and aggregation rules in a trace that is explorable by role.
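A join validator can be sketched in the same spirit. Assuming the ontology exposes admissible join paths as a simple edge map, a preflight check refuses any plan whose joins fall outside the graph before a single query runs. The names here (`ADMISSIBLE_JOINS`, `preflight_join_check`) are hypothetical, not the product’s API.

```python
# Minimal join-admissibility preflight. The ontology is modeled as pairs of
# tables mapped to their sanctioned join key; anything not in the map is
# refused before execution. Shapes and names are assumptions for illustration.

ADMISSIBLE_JOINS = {
    ("sales", "stores"): "store_id",
    ("sales", "promo_calendar"): "store_week",
    ("sales", "products"): "upc",
}

def preflight_join_check(plan_joins: list[tuple[str, str]]) -> list[str]:
    """Return a list of violations; an empty list means the plan may run."""
    violations = []
    for left, right in plan_joins:
        key = (ADMISSIBLE_JOINS.get((left, right))
               or ADMISSIBLE_JOINS.get((right, left)))
        if key is None:
            violations.append(f"join {left} -> {right} is not in the ontology")
    return violations

plan = [("sales", "promo_calendar"), ("sales", "suppliers")]
problems = preflight_join_check(plan)
if problems:
    # Refuse ambiguity: escalate instead of guessing a join path.
    raise ValueError("; ".join(problems))
```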
## Custom evaluations and rubrics

Generic leaderboards do not measure enterprise reliability. Milky Way uses custom rubrics and acceptance tests that turn behavior into signals for learning and for go-live gates:

- **Framing and guardrail signals.** Clarify count, scope-lock precision, missing-information requests, task-type detection, and interrupt or override availability.
- **Ontology alignment signals.** Field-mapping accuracy against a gold shortlist, join validity rate on the ontology graph, aggregation-rule adherence to lineage, and escalation latency when required data is absent.
- **Plan and execution signals.** Plan completeness, statistical test appropriateness, SQL structural correctness, execution success ratio, and exploratory depth across distributions, cohorts, outliers, and controls.
- **Insight signals.** Causal attribution confidence, actionability lead time, persona fit for executive and analyst consumption, and a trace transparency index.
- **Learning signals.** Role-shaped rewards that credit clarifiers for scope-lock improvements, mappers for field accuracy and join validity, executors for structural correctness, and reporters for persona fit and transparency, with a team bonus for on-time closure above confidence thresholds.

These evaluations run offline on synthetic tasks that mirror real schemas and run online as shadow or gated flows.

## How multi-turn reasoning actually runs

Clarification converges to scope-lock with minimal burden on the user. The hypothesis engine seeds candidates from a domain library and from retrieval over prior cases, and marks them as coexisting or competing. The mapper binds each hypothesis to fields and joins and produces a factor map. The executor runs SQL and tests under timeouts and circuit breakers and tracks exploratory depth. The critic and gold judges iterate on narrative quality and numeric truth. The reporter assembles role-specific narratives with evidence, confidence, and next actions. Every stage emits metrics that feed both evaluation and reinforcement learning.

## Reliability and economics by design

The scaffold captures tool signatures, side-effect policies, and costs. Tools return structured feedback that includes success, partial success, a sample, and cost. Destructive operations are gated. Memory is episodic and semantic rather than an endless transcript. Stateless tools are preferred where possible; stateful agents use retrieval and short contexts to control token cost. A minimal sketch of this guarded execution pattern follows.
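As a rough illustration of that scaffold, the sketch below wraps a tool call with bounded retries, a structured result carrying success, partial success, a sample, and cost, and a simple circuit breaker. All names are assumptions for illustration; a production version would enforce the timeout preemptively (for example with `concurrent.futures`) rather than checking elapsed time after the fact.

```python
import time
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ToolResult:
    """Structured feedback: success, partial success, a sample, and cost."""
    success: bool
    partial: bool
    sample: Any
    cost: float
    error: str | None = None

class CircuitBreaker:
    """Stop calling a tool after too many consecutive failures."""
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.max_failures

def guarded_call(tool: Callable[[], Any], breaker: CircuitBreaker,
                 retries: int = 2, timeout_s: float = 10.0) -> ToolResult:
    if breaker.open:
        return ToolResult(False, False, None, 0.0, "circuit open")
    for attempt in range(retries + 1):
        start = time.monotonic()
        try:
            out = tool()
            # Simplification: a real scaffold would cancel the call at the
            # deadline instead of measuring after completion.
            if time.monotonic() - start > timeout_s:
                raise TimeoutError("tool exceeded its time budget")
            breaker.failures = 0
            return ToolResult(True, False, out, time.monotonic() - start)
        except Exception as exc:
            breaker.failures += 1
            if breaker.open or attempt == retries:
                return ToolResult(False, False, None,
                                  time.monotonic() - start, str(exc))
    return ToolResult(False, False, None, 0.0, "unreachable")
```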
## Adoption that earns trust

Teams begin with human-in-the-loop, where analysts validate scope-lock and first recommendations. They progress to human-on-the-loop, where routine paths auto-run and exceptions require review. They then authorize selective autonomy for narrow, high-confidence workflows with rollback and full audit. The sequence builds confidence without pausing impact.

## Open work, stated plainly

Ontology and graph upkeep carry real cost. Drift detection and domain curricula are ongoing. Reward gaming is possible and must be checked with cross-rubric audits and surprise variants. Synthetic-to-real gaps persist and benefit from targeted shadow runs on live incidents. Credit assignment in long traces is noisy, so role-shaped rewards and team bonuses improve stability.

## Why this vision is credible

Praveen’s approach combines agentic orchestration, tool use, retrieval, and learning from signals, then anchors them to enterprise constraints. The stance is opinionated where it must be, with ontology gates and a gold judge, and modular where it should be, with swappable tools and domain adapters. If the last mile is about making analysis useful on time and under control, this is a path that holds up in production and scales by design.

> “A narrative is only as strong as its trace. We ship the trace and the answers.” — Praveen Satyanarayana

## References

- Oracle, What Is Agentic AI?, 2025.
- Gartner, Top Strategic Technology Trends for 2025: Agentic AI, 2024.
- Google DeepMind, Introducing Gemini 2.0 for the Agentic Era, 2024.
- Utkarsh Kanwat, Why I Am Betting Against AI Agents in 2025, 2025.
- Navin Chaddha, AI-First Professional Services: The Great Equalizer Is Coming, 2025.
- Industry coverage on agent rollouts and enterprise adoption, 2025.

This story was distributed as a release by Kashvi Pandey under HackerNoon’s Business Blogging Program.