
Data Lakehouses: The New Data Storage Model

by Zac Amos · May 13th, 2022

Too Long; Didn't Read

As a combination of the major models that came before — data lakes and warehouses — data lakehouses aim to offer flexibility, cost-efficiency, and support for complex data storage and analysis. Lakehouses store data in its raw, unstructured format, like data lakes, but they also use a single storage layer that offers indexing, caching, metadata, and compaction support. Data lakehouses offer the best of both worlds: flexibility, optimized performance, easier security, and cost-effective storage.


Data is the world’s most valuable resource. However, like any other resource, it doesn’t provide any value on its own. Businesses have to use it properly to benefit from it, and that means choosing the best way to store it.


The world creates, captures, copies, and consumes dozens of zettabytes of data a year. Handling that much data, especially with the necessary organization for in-depth analytics, is a complicated task. Data scientists have turned to data warehouses and lakes to help, but now a new model has emerged: the data lakehouse.

A Brief History of Data Architecture

To understand data lakehouses, it helps to understand the models that came before them. As companies’ data grew in volume and complexity, they quickly realized they needed to organize it to find what they needed faster and more accurately. Data warehouses were the solution.


Warehouses standardize and format incoming data to store it in a consolidated, structured environment. This eliminates redundancies, makes it easier to find and analyze data, and improves data insights by offering a more reliable single source of truth. However, as more complex processes like machine learning gained steam, warehouses’ limitations, chiefly their rigid schemas and poor fit for unstructured data, became more concerning.


Data lakes emerged shortly after Apache Hadoop became a top-level Apache project in 2008, offering a more flexible option. Unlike warehouses, lakes store data in its original, unstructured format, making it cheaper and easier to scale up and store vast amounts of information. However, this flexibility and affordability come at the cost of warehouses’ speed and reliability.

The Data Lakehouse

Data lakehouses offer the best of both worlds. They aim to offer the flexibility, cost-efficiency, and support for complex analysis processes of data lakes while maintaining warehouses’ organizational benefits.


At first glance, lakehouses more closely resemble data lakes. Like lakes, they ingest data in its raw format, storing structured, semi-structured, and unstructured information in a single, vast repository.


Unlike data lakes, however, lakehouses add a single storage layer with indexing, caching, metadata, and compaction support, which data passes through before reaching specific end uses.


When businesses need to use their data, it flows from the storage layer, through the compute layer for organization, and then through open APIs to meet various use cases. As a result, lakehouses satisfy the need for both flexibility and organization.
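
To make this concrete, here is a minimal sketch of that flow using Apache Spark with Delta Lake, one popular open-source lakehouse storage layer. The bucket, paths, and schema are illustrative, not drawn from the article:

```python
# Minimal lakehouse flow with PySpark + Delta Lake (requires the
# delta-spark package). All paths and names below are illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("lakehouse-demo")
    # Register Delta Lake's SQL extension and catalog.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# 1. Land raw data in the lake as-is (here, JSON clickstream events).
raw = spark.read.json("s3://my-bucket/raw/events/")

# 2. Write it through the storage layer: Delta adds a transaction log,
#    file-level metadata, and statistics on top of open Parquet files.
raw.write.format("delta").mode("append").save("s3://my-bucket/lakehouse/events")

# 3. Downstream consumers query the same table through open APIs.
events = spark.read.format("delta").load("s3://my-bucket/lakehouse/events")
events.createOrReplaceTempView("events")
spark.sql("SELECT count(*) AS event_count FROM events").show()
```

Other open lakehouse layers, such as Apache Iceberg and Apache Hudi, follow the same pattern: open files on cheap object storage, plus a metadata layer that query engines read through.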

Benefits of Data Lakehouses

Right now, more than 90% of enterprise data sits in data lakes. However, that could change soon as more organizations realize the advantages of data lakehouses over these older architectures. Here are some of the most significant benefits.

Cost-Effectiveness

Data warehouses can quickly become expensive to implement, as it takes considerable time and computing resources to organize large data sets before storing them. Lakes offer a more affordable alternative, but they lose warehouses’ visibility and reliability. Lakehouses stand as the most cost-effective solution.


Since the data remains in a lake during storage, lakehouses offer the low-cost scalability of conventional lakes. When businesses need their data, the lakehouse will then run it through organizational tools to provide the necessary visibility and consistency. Organizations no longer have to sacrifice performance for affordability.


Because of this organizational layer, lakehouses cost more to implement than lakes. Still, they’re not as expensive as warehouses, and their reliability can lead to process improvements that make up for the extra expense. All in all, they offer the best balance between performance and cost.

Optimized Performance

Data lakehouses also benefit from the performance advantages of both lakes and warehouses. Warehouses enable far faster and, at times, more accurate analysis because of their standardization and organization. Lakes, on the other hand, enable more advanced analytics processes.


Data lakehouses provide for both and include several helpful optimization features. The compute layer offers support like caching, data skipping, and clustering to refine data as needed for the specific use case at hand. Since data doesn’t pass through this organizational layer until businesses use it, the optimization method can be matched to each end use.
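
For instance, Delta Lake (used in the earlier sketch) exposes compaction and clustering as explicit commands. A brief sketch; the column being clustered on is illustrative:

```python
# OPTIMIZE compacts small files, and ZORDER BY co-locates related rows so
# queries filtering on user_id can skip irrelevant files entirely, using
# the per-file statistics Delta keeps in its transaction log.
spark.sql(
    "OPTIMIZE delta.`s3://my-bucket/lakehouse/events` ZORDER BY (user_id)"
)
```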


Many organizations try to balance the benefits of lakes and warehouses by using a mix of both, but this creates redundancies. Lakehouses provide a combination of their benefits while keeping a single repository, eliminating redundancy. As a result, they outperform hybrid structures, too.

Flexibility

Similarly, data lakehouses offer a more flexible approach to data architecture. Lakehouses use open formats like Parquet and ORC, as well as open APIs using languages like SQL, R, and Python. This makes them interoperable with many other apps, integrations, and processes.
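
That interoperability is straightforward to demonstrate. Assuming the illustrative table from the earlier sketches, the delta-rs deltalake package can read it into pandas without a Spark cluster at all:

```python
# Open formats mean open access: deltalake (the delta-rs Python bindings)
# reads the same table the Spark job wrote, no warehouse engine required.
from deltalake import DeltaTable

table = DeltaTable("s3://my-bucket/lakehouse/events")
df = table.to_pandas()  # the schema comes from the table's own metadata
print(df.dtypes)
```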


Warehouses are ideal for business intelligence (BI) applications, while lakes are better suited to direct access to large datasets for processes like machine learning. Since lakehouses feature a data lake and an organizational layer, they can meet the specific needs of both. Whatever types of applications businesses run their data through, the lakehouse can support them.


In a recent study, 68% of survey respondents believed lakehouses offered the best all-around solution after experts weighed in on the pros and cons of each model. Interestingly, just 37% felt that way before the discussion. Once people understand how lakehouses work, it becomes apparent that they’re the most flexible data storage solution.

Easier Security and Governance

Security and regulatory compliance are growing concerns for any data operation within a business. In fact, security and visibility concerns account for the four largest barriers to cloud migration that companies face today. Data lakehouses’ organization and flexibility make it easier to adapt to these changing security and compliance needs.


Lakehouses’ compute layer can apply auditing and security mechanisms across an entire data lake, meeting stringent needs despite rising volumes of unstructured data. Similarly, their support for transactions with Atomicity, Consistency, Isolation, and Durability (ACID) guarantees ensures data integrity to meet regulatory requirements.
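
For example, an upsert in Delta Lake either commits in full or leaves the table untouched; the source view and key column below are illustrative:

```python
# An ACID upsert via MERGE: the operation commits atomically, and
# concurrent readers never observe a half-applied update. "updates" is
# assumed to be a registered temp view with a matching schema.
spark.sql("""
    MERGE INTO delta.`s3://my-bucket/lakehouse/events` AS target
    USING updates AS source
    ON target.event_id = source.event_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```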


Since data lakehouses offer more visibility and control over vast repositories, they make it easier to find and fix anomalies. The compute layer also makes it harder for poor-quality or poisoned data to influence end uses.
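
One concrete mechanism for this, again assuming the Delta Lake table from the earlier sketches, is a CHECK constraint, which rejects any write that violates it:

```python
# Writes containing rows that fail the constraint are rejected outright,
# so malformed or poisoned records never reach downstream consumers.
# The constraint name and column are illustrative.
spark.sql("""
    ALTER TABLE delta.`s3://my-bucket/lakehouse/events`
    ADD CONSTRAINT valid_event_time CHECK (event_time IS NOT NULL)
""")
```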

Data Architecture Is Evolving

Data lakes have effectively replaced warehouses, but these architectures are becoming outdated, too. Just as lakes helped meet evolving flexibility and cost needs, lakehouses will help meet modern businesses’ security, control, and reliability requirements.


Data lakehouses are still new, so it will likely take some time before they see widespread adoption. Despite their novelty, early signs are promising. These new models could provide businesses with the best of both worlds for data storage.