
Everything Apache Iceberg-Related Announced This Year

by Alex Merced, December 13th, 2024

Too Long; Didn't Read

Numerous announcements throughout the year have solidified Apache Iceberg's position as the industry standard for modern data lakehouse architectures.

Another year has passed, and 2024 has been an eventful one for the Apache Iceberg table format. Numerous announcements throughout the year have solidified Apache Iceberg's position as the industry standard for modern data lakehouse architectures.


Here are some of the highlights from 2024:

  • Dremio announced the private preview of the Hybrid Iceberg Catalog, extending governance and table maintenance capabilities for both on-premises and cloud environments, building on the cloud catalog's general availability from previous years.
  • Snowflake announced Polaris Catalog, then partnered with Dremio, AWS, Google, and Microsoft to donate it to the Apache Software Foundation.
  • Upsolver introduced native Iceberg support, including table maintenance for streamed data landing in Iceberg tables.
  • Confluent unveiled several features aimed at enhancing Iceberg integrations.
  • Databricks acquired Tabular, a startup founded by Apache Iceberg creators Ryan Blue, Daniel Weeks, and Jason Reid.
  • AWS announced specialized S3 table bucket types for native Apache Iceberg support.
  • BigQuery added native Iceberg table support.
  • Microsoft Fabric introduced "Iceberg Links," enabling seamless access to Iceberg tables within its environment.


These advancements, along with many other companies and open-source technologies expanding their support for Iceberg, have made 2024 a remarkable year for the Apache Iceberg ecosystem.


Looking ahead, there is much to be excited about for Iceberg in 2025, as detailed in this blog.


With these developments in mind, it's the perfect time to reflect on how to architect an Apache Iceberg lakehouse. This guide aims to help you design a lakehouse that takes full advantage of Iceberg's capabilities and the latest industry innovations.

Why an Apache Iceberg Lakehouse?

Before we dive into the how, let’s take a moment to reflect on the why. A lakehouse leverages open table formats like Iceberg, Delta Lake, Hudi, and Paimon to create data warehouse-like tables directly on your data lake. The key advantage of these tables is that they provide the transactional guarantees of a traditional data warehouse without requiring data duplication across platforms or teams.


This value proposition is a major reason to consider Apache Iceberg in particular. In a world where different teams rely on different tools, Iceberg stands out with the largest ecosystem of tools for reading, writing, and—most importantly—managing Iceberg tables.


Additionally, recent advancements in portable governance through catalog technologies amplify the benefits of adopting Iceberg. Features like hidden partitioning and partition evolution further enhance Iceberg’s appeal by maximizing flexibility and simplifying partition management. These qualities ensure that you can optimize your data lakehouse architecture for both performance and cost.
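
To make hidden partitioning and partition evolution concrete, here is a minimal sketch using Spark SQL from Python. The `demo` catalog name, warehouse path, and package versions are assumptions for illustration, not values from this post.

```python
# A minimal sketch, not a production setup: the "demo" catalog name, warehouse
# path, and package versions below are assumptions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-hidden-partitioning")
    # Pull in the Iceberg runtime and enable its SQL extensions; match the
    # versions to your Spark and Iceberg installs.
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.6.1")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

spark.sql("CREATE NAMESPACE IF NOT EXISTS demo.db")

# Hidden partitioning: partition by a transform of a column. Queries filter on
# event_ts as usual; Iceberg handles partition pruning behind the scenes.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.db.events (
        id BIGINT,
        event_ts TIMESTAMP,
        payload STRING
    )
    USING iceberg
    PARTITIONED BY (days(event_ts))
""")

# Partition evolution: change the spec without rewriting existing data files.
# New writes use the new layout; old files keep the old one.
spark.sql("ALTER TABLE demo.db.events ADD PARTITION FIELD bucket(16, id)")
spark.sql("ALTER TABLE demo.db.events DROP PARTITION FIELD days(event_ts)")
```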

Pre-Architecture Audit

Before we begin architecting your Apache Iceberg Lakehouse, it’s essential to perform a self-audit to clearly define your requirements. Document answers to the following questions:


  1. Where is my data currently? Understanding where your data resides—whether on-premises, in the cloud, or across multiple locations—helps you plan for migration, integration, and governance challenges.
  2. Which of my data is accessed most by different teams? Identifying the most frequently accessed datasets ensures you prioritize optimizing performance for these critical assets.
  3. Which of my data is the highest cost generator? Knowing which datasets drive the highest costs allows you to focus on cost-saving strategies, such as tiered storage or optimizing query performance.
  4. Which data platforms will I still need if I standardize on Iceberg? This helps you assess which existing systems can coexist with Iceberg and which ones may need to be retired or reconfigured.
  5. What are the SLAs I need to meet? Service-level agreements (SLAs) dictate the performance, availability, and recovery time objectives your architecture must support.
  6. What tools are accessing my data, and which of those are non-negotiables? Understanding the tools your teams rely on—especially non-negotiable ones—ensures that the ecosystem around your Iceberg lakehouse remains compatible and functional.
  7. What are my regulatory barriers? Compliance with industry regulations or organizational policies must be factored into your architecture to avoid potential risks.


By answering these questions, you can determine which platforms align with your needs and identify the components required to generate, track, consume, and maintain your Apache Iceberg data effectively.

The Components of an Apache Iceberg Lakehouse

When moving to an Apache Iceberg lakehouse, certain fundamentals are a given—most notably that your data will be stored as Parquet files with Iceberg metadata. However, building a functional lakehouse requires several additional components to be carefully planned and implemented.

Key Components of an Apache Iceberg Lakehouse

  1. Storage

    Where will your data be stored? The choice of storage system (e.g., cloud object storage like AWS S3 or on-premises systems) impacts cost, scalability, and performance.

  2. Catalog

    How will your tables be tracked and governed? A catalog, such as Nessie, Hive, or AWS Glue, is critical for managing metadata, enabling versioning, and supporting governance.

  3. Ingestion

    What tools will you use to write data to your Iceberg tables? Ingestion tools (e.g., Apache Spark, Flink, Kafka Connect) ensure data is efficiently loaded into Iceberg tables in the required format.

  4. Integration

    How will you work with Iceberg tables alongside other data? Integration tools (e.g., Dremio, Trino, or Presto) allow you to query and combine Iceberg tables with other datasets and build a semantic layer that defines common business metrics.

  5. Consumption

    What tools will you use to extract value from the data? Whether for training machine learning models, generating BI dashboards, or conducting ad hoc analytics, consumption tools (e.g., Tableau, Power BI, dbt) ensure data is accessible for end-users and teams.

Next Steps

In this guide, we’ll explore each of these components in detail and provide guidance on how to evaluate and select the best options for your specific use case.

Storage: Building the Foundation of Your Iceberg Lakehouse

Choosing the right storage solution is critical to the success of your Apache Iceberg lakehouse. Your decision will impact performance, scalability, cost, and compliance. Below, we’ll explore the considerations for selecting cloud, on-premises, or hybrid storage, compare cloud vendors, and evaluate alternative solutions.

Reasons to Choose Cloud, On-Premises, or Hybrid Storage

  • Cloud Storage: Cloud storage offers scalability, cost efficiency, and managed services. It’s ideal for businesses prioritizing flexibility, global accessibility, and reduced operational overhead. Examples include AWS S3, Google Cloud Storage, and Azure Data Lake Storage (ADLS).
  • On-Premises Storage: On-premises solutions provide greater control over data and are often preferred for compliance, security, or latency-sensitive workloads. These solutions require significant investment in hardware and maintenance.
  • Hybrid Storage: Hybrid storage combines the benefits of both worlds. You can use on-premises storage for sensitive or high-frequency data while leveraging the cloud for archival, burst workloads, or global access.

Considerations When Choosing a Cloud Vendor

When selecting a cloud provider, consider the following:

  1. Integration with Your Tech Stack: Ensure the vendor works seamlessly with your compute and analytics tools (e.g., Apache Spark, Dremio).
  2. Cost Structure: Evaluate storage costs, retrieval fees, and data transfer costs. Some providers, like AWS, offer tiered storage options to optimize costs for infrequent data access.
  3. Global Availability and Latency: If your organization operates globally, consider a provider with a robust network of regions to minimize latency.
  4. Ecosystem Services: Consider additional services like data lakes, ML tools, or managed databases provided by the vendor.

Considerations for Alternative Storage Solutions

In addition to cloud and traditional on-prem options, there are specialized storage systems to consider:

  • NetApp StorageGrid: Optimized for object storage with S3 compatibility and strong data lifecycle management.
  • VAST Data: Designed for high-performance workloads, leveraging technologies like NVMe over Fabrics.
  • MinIO: An open-source, high-performance object storage system compatible with S3 APIs, ideal for hybrid environments (a configuration sketch follows this list).
  • Pure Storage: Offers scalable, all-flash solutions for high-throughput workloads.
  • Dell EMC: Provides a range of storage solutions for diverse enterprise needs.
  • Nutanix: Combines hyper-converged infrastructure with scalable object storage.
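
To ground the S3-compatible options above, here is a hedged PyIceberg sketch that points at a MinIO-style endpoint. The catalog is a lightweight local SQL catalog used only to keep the example self-contained, and every URI, bucket, and credential is a placeholder.

```python
# A minimal sketch: a local SQLite-backed catalog for table metadata, with data
# files written to an S3-compatible endpoint such as MinIO. All values below
# are placeholders for your own environment.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "minio_lab",
    **{
        "type": "sql",
        "uri": "sqlite:////tmp/iceberg_catalog.db",      # local catalog database
        "warehouse": "s3://lakehouse/warehouse",         # bucket on the object store
        "s3.endpoint": "http://localhost:9000",          # MinIO (or other S3-compatible) endpoint
        "s3.access-key-id": "minioadmin",
        "s3.secret-access-key": "minioadmin",
    },
)

# Quick connectivity check: list the namespaces the catalog knows about.
print(catalog.list_namespaces())
```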

Questions to Ask Yourself When Deciding on Storage

  1. What are my performance requirements? Determine the latency, throughput, and IOPS needs of your workloads.
  2. What is my budget? Consider initial costs, ongoing costs, and scalability.
  3. What are my compliance and security needs? Identify regulatory requirements and whether you need fine-grained access controls or encryption.
  4. How frequently will I access my data? Choose between high-performance tiers and cost-effective archival solutions based on access patterns.
  5. Do I need scalability and flexibility? Assess whether your workloads will grow significantly or require frequent adjustments.
  6. What are my geographic and redundancy needs? Decide if data needs to be replicated across regions or stored locally for compliance.


Selecting the right storage for your Iceberg lakehouse is a foundational step. By thoroughly evaluating your needs and the available options, you can ensure a storage solution that aligns with your performance, cost, and governance requirements.

Catalog: Managing Your Iceberg Tables

A lakehouse catalog is essential for tracking your Apache Iceberg tables and ensuring consistent access to the latest metadata across tools and teams. The catalog serves as a centralized registry, enabling seamless governance and collaboration.

Types of Iceberg Lakehouse Catalogs

Iceberg lakehouse catalogs come in two main flavors:


  1. Self-Managed Catalogs

    With a self-managed catalog, you deploy and maintain your own catalog system. Examples include Nessie, Hive, Polaris, Lakekeeper, and Gravitino. While this approach requires operational effort to maintain the deployment, it provides portability of your tables and governance capabilities.


  2. Managed Catalogs

    Managed catalogs are provided as a service, offering the same benefits of portability and governance while eliminating the overhead of maintaining the deployment. Examples include Dremio Catalog and Snowflake's Open Catalog, which are managed versions of Polaris.

Importance of the Iceberg REST Catalog Specification

A key consideration when selecting a catalog is whether it supports the Iceberg REST Catalog Spec. This specification ensures compatibility with the broader Iceberg ecosystem, providing assurance that your lakehouse can integrate seamlessly with other Iceberg tools.
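
As a quick illustration of why the REST spec matters, here is a minimal PyIceberg sketch. The endpoint URI, credential, and table identifier are assumptions, but the same client code should work against any catalog that implements the spec.

```python
# A minimal sketch of connecting to an Iceberg REST catalog; URI, credential,
# and table names are placeholders.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "prod",
    **{
        "type": "rest",
        "uri": "https://catalog.example.com/api/catalog",  # hypothetical REST endpoint
        "credential": "client-id:client-secret",           # if the server uses OAuth2 client credentials
    },
)

# Because the protocol is standardized, swapping catalog providers is a
# configuration change rather than a code change.
table = catalog.load_table("analytics.orders")              # hypothetical namespace.table
print(table.schema())
```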

Catalogs Supporting the REST Spec:

  • Polaris
  • Gravitino
  • Unity Catalog
  • Lakekeeper
  • Nessie

Catalogs Without REST Spec Support (Yet):

  • Hive
  • JDBC
  • AWS Glue
  • BigQuery

Choosing the Right Catalog

Here are some considerations to guide your choice:

  • If you have on-prem data: Dremio Catalog is the only managed catalog offering that allows on-prem tables to coexist with cloud tables.
  • If you are already a Snowflake user: Snowflake's Open Catalog offers an easy path to adopting Iceberg, allowing you to leverage Iceberg while staying within the Snowflake ecosystem.
  • If you use Databricks with Delta Lake: Unity Catalog’s Uniform feature allows you to maintain an Iceberg copy of your Delta Lake table metadata, enabling compatibility with the Iceberg ecosystem.
  • If you are heavily invested in the AWS ecosystem: AWS Glue provides excellent interoperability within AWS. However, its lack of REST Catalog support may limit its usability outside the AWS ecosystem.


Selecting the right catalog is critical for ensuring your Iceberg lakehouse operates efficiently and integrates well with your existing tools. By understanding the differences between self-managed and managed catalogs, as well as the importance of REST Catalog support, you can make an informed decision that meets your needs for portability, governance, and compatibility.


Ingesting Data into Iceberg: Managing the Flow of Data

Ingesting data into Apache Iceberg tables is a critical step in building a functional lakehouse. The tools and strategies you choose will depend on your infrastructure, data workflows, and resource constraints. Let’s explore the key options and considerations for data ingestion.

Managing Your Own Ingestion Clusters

For those who prefer complete control, managing your own ingestion clusters offers flexibility and customization. This approach allows you to handle both batch and streaming data using tools like:


  • Apache Spark: Ideal for large-scale batch processing and ETL workflows.

  • Apache Kafka or Apache Flink: Excellent choices for real-time streaming data ingestion.


While these tools provide robust capabilities, they require significant effort to deploy, monitor, and maintain.
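
For a sense of what self-managed ingestion looks like, here is a hedged Spark sketch covering a batch append and a Kafka-fed streaming write. It reuses the assumed `demo` catalog and `demo.db.events` table from the earlier partitioning sketch; the source path, topic, and broker address are placeholders, and the Kafka connector package must also be on the classpath.

```python
# A minimal sketch, assuming `spark` is a session already configured with an
# Iceberg catalog named "demo" and an existing demo.db.events table.

# --- Batch: clean raw files and append them to the Iceberg table ---
batch_df = (
    spark.read.json("s3://raw-bucket/events/2024-12-13/")    # hypothetical landing path
    .selectExpr("CAST(id AS BIGINT) AS id",
                "CAST(event_ts AS TIMESTAMP) AS event_ts",
                "payload")
)
batch_df.writeTo("demo.db.events").append()

# --- Streaming: continuously land Kafka records into the same table ---
stream_df = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")        # hypothetical brokers
    .option("subscribe", "events")
    .load()
    .selectExpr("CAST(key AS BIGINT) AS id",
                "timestamp AS event_ts",
                "CAST(value AS STRING) AS payload")
)

query = (
    stream_df.writeStream
    .format("iceberg")
    .outputMode("append")
    .option("checkpointLocation", "s3://raw-bucket/checkpoints/events/")
    .toTable("demo.db.events")
)
# query.awaitTermination()  # block here if running as a standalone job
```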

Leveraging Managed Services for Ingestion

If operational overhead is a concern, managed services can streamline the ingestion process. These services handle much of the complexity, offering ease of use and scalability:


  • Batch Ingestion Tools: Examples include Fivetran, Airbyte, AWS Glue, and ETleap. These tools are well-suited for scheduled ETL tasks and periodic data loads.
  • Streaming Ingestion Tools: Examples include Upsolver, Delta Stream, Estuary, Confluent, and Decodable, which are optimized for real-time data processing and ingestion.

Questions to Ask When Selecting Ingestion Tools

To narrow down your options and define your hard requirements, consider the following questions:

  1. What is the nature of your data workflow? Determine if your use case primarily involves batch processing, streaming data, or a combination of both.
  2. What is your tolerance for operational complexity? Decide whether you want to manage your own clusters or prefer managed services to reduce overhead.
  3. What are your performance and scalability requirements? Assess whether your ingestion tool can handle the volume, velocity, and variety of your data.
  4. How critical is real-time processing? If near-instantaneous data updates are crucial, prioritize streaming tools over batch processing solutions.
  5. What is your existing tech stack? Consider tools that integrate well with your current infrastructure, such as cloud services, catalogs, or BI tools.
  6. What is your budget? Balance cost considerations between self-managed clusters (higher operational costs) and managed services (subscription-based pricing).


Choosing the right ingestion strategy is essential for ensuring your Iceberg lakehouse runs smoothly. By weighing the trade-offs between managing your own ingestion clusters and leveraging managed services, and by asking the right questions, you can design an ingestion pipeline that aligns with your performance, cost, and operational goals.


Data Integration: Bridging the Gap for a Unified Lakehouse Experience

Not all your data will migrate to Apache Iceberg immediately—or ever. Moving existing workloads to Iceberg requires thoughtful planning and a phased approach. However, you can still deliver the "Iceberg Lakehouse experience" to your end-users upfront, even if not all your data resides in Iceberg. This is where data integration, data virtualization, or a unified lakehouse platform like Dremio becomes invaluable.


Why Dremio for Data Integration?

  1. Unified Access Across Data Sources

    Dremio allows you to connect and query all your data sources in one place. Even if your datasets haven’t yet migrated to Iceberg, you can combine them with Iceberg tables seamlessly. Dremio’s fast query engine ensures performant analytics, regardless of where your data resides.


  2. Built-In Semantic Layer for Consistency

    Dremio includes a built-in semantic layer to define commonly used datasets across teams. This layer ensures consistent and accurate data usage for your entire organization. Since the semantic layer is based on SQL views, transitioning data from its original source to an Iceberg table is seamless—simply update the SQL definition of the views. Your end-users won’t even notice the change, yet they’ll immediately benefit from the migration (a generic sketch of this view swap follows this list).


  3. Performance Boost with Iceberg-Based Reflections

    Dremio’s Reflections feature accelerates queries on your data. When your data is natively in Iceberg, reflections are refreshed incrementally and updated automatically when the underlying dataset changes. This results in faster query performance and reduced maintenance effort. Learn more about reflections in this blog post.
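
The view-swap pattern described in the second point is not specific to any one engine. Below is a generic sketch using Spark SQL temporary views to illustrate the idea; the `legacy_db.orders` source and `demo.db.orders` Iceberg table are hypothetical, and in Dremio the same swap happens in the views that make up the semantic layer.

```python
# A generic illustration of repointing a view from a legacy source to an Iceberg
# table without changing anything for consumers. Temporary views are used here
# only to keep the sketch self-contained; in practice this would be a governed,
# persistent view in your semantic layer.

# Before migration: the view reads from a legacy source.
spark.sql("""
    CREATE OR REPLACE TEMPORARY VIEW analytics_orders AS
    SELECT order_id, customer_id, amount
    FROM legacy_db.orders
""")

# After migration: same view name, now backed by the Iceberg table. Dashboards
# and notebooks that query analytics_orders keep working unchanged.
spark.sql("""
    CREATE OR REPLACE TEMPORARY VIEW analytics_orders AS
    SELECT order_id, customer_id, amount
    FROM demo.db.orders
""")
```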

Delivering the Lakehouse Experience

As more of your data lands in Iceberg, Dremio enables you to seamlessly integrate it into a governed semantic layer. This layer supports a wide range of data consumers, including BI tools, notebooks, and reporting platforms, ensuring all teams can access and use the data they need effectively.


By leveraging Dremio, you can bridge the gap between legacy data systems and your Iceberg lakehouse, providing a consistent and performant data experience while migrating to Iceberg at a pace that works for your organization.

Consumers: Empowering Teams with Accessible Data

Once your data is stored, integrated, and organized in your Iceberg lakehouse, the final step is ensuring it can be consumed effectively by your teams. Data consumers rely on various tools for analytics, reporting, visualization, and machine learning. A robust lakehouse architecture ensures that all these tools can access the data they need, even if they don’t natively support Apache Iceberg.

Types of Data Consumers and Their Tools

  1. Python Notebooks

    Python notebooks, such as Jupyter, Google Colab, or VS Code Notebooks, are widely used by data scientists and analysts for exploratory data analysis, data visualization, and machine learning. These notebooks leverage libraries like Pandas, PyArrow, and Dask to process data from Iceberg tables, often via a platform like Dremio for seamless access (see the sketch after this list).


  2. BI Tools

    Business intelligence tools like Tableau, Power BI, and Looker are used to create interactive dashboards and reports. While these tools may not natively support Iceberg, Dremio acts as a bridge, providing direct access to Iceberg tables and unifying them with other datasets through its semantic layer.


  3. Reporting Tools

    Tools such as Crystal Reports, Microsoft Excel, and Google Sheets are commonly used for generating structured reports. Dremio's integration capabilities make it easy for reporting tools to query Iceberg tables alongside other data sources.


  4. Machine Learning Platforms

    Platforms like Databricks, SageMaker, or Azure ML require efficient access to large datasets for training models. With Dremio, these platforms can query Iceberg tables directly or through unified views, simplifying data preparation workflows.


  5. Ad Hoc Querying Tools

    Tools like DBeaver, SQL Workbench, or even command-line utilities are popular among engineers and analysts for quick SQL-based data exploration. These tools can connect to Dremio to query Iceberg tables without additional configuration.
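
For notebook users specifically (the first item above), here is a minimal PyIceberg-and-pandas sketch of reading an Iceberg table directly. The catalog URI, table identifier, and column names are placeholders, and in many setups this query would instead be routed through Dremio or another engine.

```python
# A minimal sketch of notebook-style consumption straight from an Iceberg table;
# all names and the endpoint below are placeholders.
from pyiceberg.catalog import load_catalog
from pyiceberg.expressions import GreaterThanOrEqual

catalog = load_catalog(
    "prod",
    **{"type": "rest", "uri": "https://catalog.example.com/api/catalog"},
)
table = catalog.load_table("analytics.orders")

# Push the filter and column projection down into the table scan, then hand the
# result to pandas for exploration, charting, or feature engineering.
df = (
    table.scan(
        row_filter=GreaterThanOrEqual("order_date", "2024-01-01"),
        selected_fields=("order_id", "customer_id", "amount"),
    )
    .to_pandas()
)
print(df.head())
```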

Dremio as the Integration Layer

Most platforms, even if they don’t have native Iceberg capabilities, can leverage Dremio to access Iceberg tables alongside other datasets. Here’s how Dremio enhances the consumer experience:


  • Unified Data Access: Dremio’s ability to virtualize data from multiple sources means that end-users don’t need to know where the data resides. Whether it’s Iceberg tables or legacy systems, all datasets can be queried together.
  • Semantic Layer: Dremio’s semantic layer defines business metrics and datasets, ensuring consistent definitions across all tools and teams. Users querying data via BI tools or Python notebooks can rely on the same, agreed-upon metrics.
  • Performance Optimization: Dremio’s Reflections accelerate queries, providing near-instant response times for dashboards, reports, and interactive analyses, even with large Iceberg datasets.


By enabling data consumers with tools they already know and use, your Iceberg lakehouse can become a powerful, accessible platform for delivering insights and driving decisions. Leveraging Dremio ensures that even tools without native Iceberg support can fully participate in your data ecosystem, helping you maximize the value of your Iceberg lakehouse.

Conclusion: Your Journey to a Seamless Iceberg Lakehouse

Architecting an Iceberg Lakehouse is not just about adopting a new technology; it’s about transforming how your organization stores, governs, integrates, and consumes data. This guide has walked you through the essential components—from storage and catalogs to ingestion, integration, and consumption—highlighting the importance of thoughtful planning and the tools available to support your journey.


Apache Iceberg’s open table format, with its unique features like hidden partitioning, partition evolution, and broad ecosystem support, provides a solid foundation for a modern data lakehouse. By leveraging tools like Dremio for integration and query acceleration, you can deliver the "Iceberg Lakehouse experience" to your teams immediately, even as you transition existing workloads over time.


As 2025 unfolds, the Apache Iceberg ecosystem will continue to grow, bringing new innovations and opportunities to refine your architecture further. By taking a structured approach and selecting the right tools for your needs, you can build a flexible, performant, and cost-efficient lakehouse that empowers your organization to make data-driven decisions at scale.

Let this guide be the starting point for your Iceberg Lakehouse journey—designed for today and ready for the future.