Modern applications are becoming more dynamic, more intelligent, and more real-time. Dashboards refresh with incoming telemetry. Monitoring systems respond to shifting baselines. Agents make decisions in context, not in isolation. Each depends on the same foundational requirement: the ability to unify live events with deep historical state.

Yet the data remains fragmented. Operational systems, built on Postgres, handle ingestion and serving. Analytical systems, built on the lakehouse, handle enrichment and modeling. Connecting them means stitching together streams, pipelines, and custom jobs, each introducing latency, fragility, and cost. The result is a patchwork of systems that struggles to deliver the full picture, let alone do so in real time.

This fragmentation doesn't just slow teams down; it limits what developers can build. You can't deliver real-time dashboards with historical depth, or ground agents in fresh operational context, when the data is split by design. This architectural divide is no longer sustainable.

Tiger Lake bridges that divide. Now in public beta, it introduces a new data loop between Postgres and the lakehouse: continuous, bidirectional, and deeply integrated. It simplifies the stack, preserves open formats, and brings operational and analytical context into the same system.

## Introducing Tiger Lake: Real-Time Data, Full-Context Systems

Tiger Lake eliminates the need for external pipelines, complex orchestration frameworks, and proprietary middleware. It is built directly into Tiger Cloud and integrated with Tiger Postgres, our production-grade Postgres engine for transactional, analytical, and agentic workloads.
The architecture uses open standards from end to end:

- Apache Iceberg tables stored in Amazon S3 Tables for lakehouse integration
- Continuous replication from Postgres tables or hypertables into Iceberg
- Streaming ingestion back into Postgres for low-latency serving and operations
- Query pushdown from Postgres to Iceberg for efficient rollups

These capabilities come built in. What previously required Flink jobs, DAG schedulers, and custom glue now works natively. Streaming behavior and schema compatibility are designed into the system from the start.

To understand how Tiger Lake reshapes data architecture, it helps to revisit the medallion model and consider how it evolves when real-time context becomes a core design principle. You can think of it as an operational medallion architecture:

- **Bronze:** Raw data lands in Iceberg-backed S3.
- **Silver:** Cleaned and validated data is replicated to Postgres.
- **Gold:** Aggregates are computed in Postgres for real-time serving, then streamed back to Iceberg for feature analysis.

Traditional Bronze–Silver–Gold workflows were built for batch systems. Tiger Lake enables a continuous flow where enrichment and serving happen in real time. This shift transforms an overly complex pipeline into a simpler, dynamic real-time data loop. Context and data move freely between systems.
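As a sketch of the Gold layer, a continuous aggregate (a Tiger Postgres / TimescaleDB feature) can compute the serving-side rollup over a replicated table. The `events` table and its columns here are illustrative assumptions, not part of any documented schema:

```sql
-- Hypothetical Silver-layer table replicated into Postgres:
--   events(ts timestamptz, device_id int, value double precision)
-- Gold layer: a continuous aggregate for real-time serving; with
-- table sync enabled, its results can also stream back to Iceberg.
CREATE MATERIALIZED VIEW events_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', ts) AS hour,
       device_id,
       avg(value) AS avg_value
FROM events
GROUP BY hour, device_id;
```

The aggregate refreshes incrementally as new rows arrive, which is what lets the Gold layer stay current without a batch scheduler.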
Operational and analytical layers stay connected without redundant jobs or duplicated infrastructure. All data remains native, up to date, and queryable with standard SQL.

Tiger Lake supports a single write path that powers real-time applications, dashboards, and the lakehouse, using the architecture that best fits the developer. Users can write data to Postgres and have the appropriate data and rollups automatically synced to their lakehouse; conversely, users already feeding raw data into the lakehouse can automatically bring it to Postgres for operational serving. Now applications can reason across the now and the then, without orchestration code or synchronization overhead.

> "We stitched together Kafka, Flink, and custom code to stream data from Postgres to Iceberg. It worked, but it was fragile and high-maintenance," said Kevin Otten, Director of Technical Architecture at Speedcast. "Tiger Lake replaces all of that with native infrastructure. It's the architecture we wish we had from day one."

## From Architecture to Outcomes

Tiger Lake enables real-time systems that were previously too complex to operate or too expensive to build.

### Customer-facing dashboards

Dashboards can now combine live metrics with historical aggregates in a single query. There is no need for dual stacks or stale insights. Tiger Lake supports high-throughput ingestion at production scale, powering pipelines that visualize billions of rows in real time. Everything lives in one system, continuously updated and instantly queryable.
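A single dashboard query of this kind might look like the following sketch. The `metrics` hypertable and the `metrics_daily` rollup are hypothetical names, assuming live telemetry lands in `metrics` and a historical daily rollup is maintained alongside it:

```sql
-- Live per-minute averages for the last hour, unioned with
-- 30 days of historical daily rollups, in one query.
SELECT time_bucket('1 minute', ts) AS bucket,
       avg(value)                  AS avg_value
FROM metrics
WHERE ts > now() - interval '1 hour'
GROUP BY bucket

UNION ALL

SELECT day        AS bucket,
       avg_value
FROM metrics_daily
WHERE day > now() - interval '30 days'
ORDER BY bucket;
```

The point is that both time horizons are served from one system with standard SQL, with no second stack to query and reconcile.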
> "With Tiger Lake, we finally unified our real-time and historical data," said Maxwell Carritt, Lead IoT Engineer at Pfeifer & Langen. "Now we seamlessly stream from Tiger Postgres into Iceberg, giving our analysts the power to explore, model, and act on data across S3, Athena, and TigerData."

### Monitoring systems

With a single source of truth and a continuous data loop, alerting becomes faster and more reliable. Engineers can run one SQL query to inspect fresh telemetry and historical incidents together, improving triage speed, reducing false positives, and staying focused on what matters. Simplifying the data plane also improves system resilience. Tiger Lake lets monitoring systems operate on the same live operational backbone, where Iceberg provides historical depth and Tiger Postgres delivers low-latency access.

### Agents

Tiger Lake makes grounding possible without additional infrastructure. Developers can embed recent user activity and long-term interaction history directly inside Postgres. There is no need for orchestration, vector drift management, or custom AI pipelines.

Imagine a support agent receiving a new inquiry. The large body of historical support cases remains in Iceberg, while Tiger Lake automatically creates chunks and vector embeddings in Postgres. Vector search against the operational database can now answer AI chat questions quickly, while embeddings stay fresh and up to date without complex orchestration pipelines. In doing so, Tiger Lake is also a key building block in what we call Agentic Postgres, a Postgres foundation for intelligent systems that learn, decide, and act.
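The grounding lookup itself can be a plain pgvector query. This sketch assumes a hypothetical `support_chunks` table kept fresh by the sync, with an `embedding` vector column, and an already-computed embedding for the incoming inquiry passed in as a parameter:

```sql
-- Retrieve the five historical support chunks most similar to the
-- new inquiry. `<=>` is pgvector's cosine-distance operator.
SELECT case_id,
       chunk_text,
       embedding <=> $1 AS cosine_distance
FROM support_chunks
ORDER BY embedding <=> $1
LIMIT 5;
```

Because the embeddings live in the operational database, this lookup runs at serving latency while the full case history stays in Iceberg.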
> "With Tiger Lake, we believe TigerData is setting a strong foundation for turning Postgres into the operational engine of the open lakehouse for applications," said Ken Yoshioka, CTO, Lumia Health. "It allows us the flexibility to grow our biotech startup quickly with infrastructure designed for both analytics and agentic AI."

Companies like Speedcast, Lumia Health, and Pfeifer & Langen are already building full-context, real-time analytical systems with Tiger Lake. These architectures power industrial telemetry, agentic workflows, and real-time operations, all from a unified, continuously streaming platform.

## Available in Public Beta on Tiger Cloud

Tiger Lake is available now in public beta on Tiger Cloud, our managed platform for real-time applications and analytical systems. It supports continuous streaming from Tiger Postgres to Iceberg-backed Amazon S3 Tables using open formats.

### Coming soon: Round-trip intelligence

- **Later this summer:** Query Iceberg catalogs directly from within Postgres. Explore, join, and reason across lakehouse and operational data using SQL.
- **Fall 2025:** Full round-trip workflows: ingest into Postgres, enrich in Iceberg, and stream results back automatically. This lets developers move from event to analysis to action in one architecture.
## How to set up Tiger Lake

Getting started is simple. No complex orchestration or manual integrations:

1. Create a bucket for Iceberg-compatible S3 tables.
2. Provide ARN permissions to Tiger Cloud.
3. Enable table sync in Tiger Postgres:

```sql
ALTER TABLE my_hypertable SET ( tigerlake.iceberg_sync = true );
```

## The Future of Data Architecture Is Real-Time, Contextual, and Open

Tiger Lake introduces a new kind of architecture. It is continuous by design, scalable by default, and optimized for applications that need full context and complete data in real time. Operational data flows into the lakehouse for enrichment and modeling. Enriched insights flow back into Postgres for low-latency serving. Applications and agents complete the loop, responding with precision and speed.

We believe this is the foundation for what comes next:

- Systems that unify operational use cases and internal analytics
- Architectures that reduce complexity instead of compounding it
- Workloads that are not just reactive but grounded in understanding

You should not have to choose between context and simplicity. You should not have to patch together systems that were never designed to work together. And you should not have to replatform to evolve.

Together with next-generation storage architecture and our Postgres-native AI tooling, Tiger Lake forms the backbone of Agentic Postgres. This is a foundation built for intelligent workloads that learn, simulate, and act. We'll share more soon.
Try it today on Tiger Cloud, and check out the Tiger Lake docs to get started.

— Mike