If you’ve ever watched a seemingly simple dashboard grind your database to a halt, you already understand why materialized views exist.
In one of the payment systems I worked on, we had exactly this problem. Product wanted rich analytics for transactions: totals per day, per user, per currency, per status, filtered by date ranges and channels. Each widget on the dashboard ran heavy aggregations over millions of rows. It worked at first. Then traffic grew, and suddenly a single dashboard load looked like a DDoS on our database.
Materialized views were a big part of how we fixed that.
In this article, I’ll walk through what materialized views are, when they help, how to use them (with PostgreSQL as the example), and the trade‑offs you need to understand before adding them to your architecture.
What Is a Materialized View?
A view in SQL is basically a saved query. Think of it as a virtual table whose data is computed on the fly whenever you select from it.
A materialized view is different: it actually stores the result of the query on disk. Instead of recomputing the query every time, the database reads from the precomputed data and returns results much faster.
So:
- Normal view: compute on read
- Materialized view: compute on write (or refresh), read fast
Under the hood, a materialized view behaves like a table that the database manages for you. You define it with a SELECT; the database runs that query and stores the result. When your underlying data changes, the materialized view does not automatically update. You have to refresh it.
That makes materialized views great for:
- Expensive aggregations over large tables
- Dashboards and reports that tolerate slightly stale data
- Precomputing data for downstream services
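To make the difference concrete, here is the same aggregation defined both ways (a minimal sketch; the view names are illustrative, and the transactions table is the one introduced below):

```sql
-- A normal view: nothing is stored, the SELECT runs on every read.
CREATE VIEW user_totals_live AS
SELECT user_id, SUM(amount) AS total
FROM transactions
GROUP BY user_id;

-- A materialized view: the SELECT runs once and the result is stored on disk.
CREATE MATERIALIZED VIEW user_totals_cached AS
SELECT user_id, SUM(amount) AS total
FROM transactions
GROUP BY user_id;

-- Identical to query, very different cost profiles: the first recomputes
-- the aggregation on every read, the second reads precomputed rows.
SELECT * FROM user_totals_live;
SELECT * FROM user_totals_cached;
```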
The Classic Problem: Analytics on a Hot Table
Let’s say you have a simple transactions table for a fintech product:
```sql
CREATE TABLE transactions (
    id         BIGSERIAL PRIMARY KEY,
    user_id    BIGINT NOT NULL,
    amount     NUMERIC(18,2) NOT NULL,
    currency   TEXT NOT NULL,
    status     TEXT NOT NULL,   -- 'pending', 'completed', 'failed', etc.
    channel    TEXT NOT NULL,   -- 'card', 'bank_transfer', 'wallet', etc.
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```
You want a dashboard that shows:
- Total volume per day
- Count of transactions per day, per status
- Maybe per currency or per channel too
The naive way is to query the raw table every time:
```sql
SELECT
    date_trunc('day', created_at) AS day,
    status,
    COUNT(*) AS tx_count,
    SUM(amount) AS total_amount
FROM transactions
WHERE created_at >= NOW() - INTERVAL '30 days'
GROUP BY 1, 2
ORDER BY 1, 2;
```
On day one, with a few thousand rows, this is fine. But as your table grows into tens or hundreds of millions of rows, this aggregation becomes expensive:
- It reads a lot of data
- It performs grouping and aggregation on the fly
- It competes with your OLTP workload (inserts, updates, queries serving users)
You can throw indexes and caching at it, but at some point, the cost of recomputing this every time stops making sense—especially if your dashboard doesn’t need up‑to‑the‑second accuracy.
That’s where materialized views shine.
Creating a Materialized View in PostgreSQL
Here’s how you could turn that aggregation into a materialized view.
```sql
CREATE MATERIALIZED VIEW daily_transaction_stats AS
SELECT
    date_trunc('day', created_at) AS day,
    status,
    COUNT(*) AS tx_count,
    SUM(amount) AS total_amount
FROM transactions
GROUP BY 1, 2;
```
The database will:
- Run that aggregation once.
- Store the result in a physical structure.
- Allow you to query it like a table:
```sql
SELECT *
FROM daily_transaction_stats
WHERE day >= NOW() - INTERVAL '30 days'
ORDER BY day, status;
```
Now your dashboard is querying a much smaller, precomputed dataset. Reads are fast and consistent. All the heavy work moved to the refresh step instead of every dashboard load.
Refreshing a Materialized View
The trade‑off, of course, is freshness.
By default, the materialized view doesn’t update when transactions changes. If you want fresh data, you have to tell Postgres to recompute it:
```sql
REFRESH MATERIALIZED VIEW daily_transaction_stats;
```
This will:
- Re-run the underlying SELECT
- Replace the stored data with the new result
A plain REFRESH MATERIALIZED VIEW is blocking: it takes an exclusive lock that prevents reads while the refresh runs. Since PostgreSQL 9.4, you can add CONCURRENTLY to avoid blocking reads, at the cost of some restrictions (e.g., the view needs a unique index):
```sql
CREATE UNIQUE INDEX daily_transaction_stats_day_status_idx
    ON daily_transaction_stats (day, status);
```
Then:
```sql
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_transaction_stats;
```
This way:
- Readers can still query the old data while the new data is being computed.
- Once the refresh finishes, PostgreSQL swaps the old with the new.
You usually trigger refreshes from:
- A cron job
- A scheduler in your application
- A background worker or job queue
For example, you might refresh every 5 minutes or every hour, depending on how fresh your data needs to be.
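As one concrete option: if the pg_cron extension is installed on your Postgres instance (an assumption; the job name is made up), the schedule can live inside the database itself:

```sql
-- Refresh every 5 minutes via pg_cron, if the extension is available.
SELECT cron.schedule(
    'refresh-daily-tx-stats',   -- job name
    '*/5 * * * *',              -- standard cron syntax: every 5 minutes
    'REFRESH MATERIALIZED VIEW CONCURRENTLY daily_transaction_stats'
);
```

Without pg_cron, an ordinary cron job running psql with the same REFRESH statement works just as well.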
Designing Materialized Views for Real Systems
Once you understand the basic mechanics, the real work is designing materialized views that fit your system’s patterns.
Some considerations:
1. How Fresh Do You Need the Data?
If your business users are okay with “data as of a few minutes ago,” you can refresh every 5–10 minutes and keep load predictable.
If you need “near‑real‑time” numbers, materialized views might still work, but:
- You’ll refresh more often (more load)
- You may want to complement them with incremental approaches (more complexity)
For things like daily revenue, weekly active users, or fraud analytics that don’t need second‑level freshness, materialized views are a good fit.
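One lightweight pattern for making staleness visible (this side table is a convention I'm assuming, not something Postgres provides): record each refresh, so dashboards can display "data as of …" and monitoring can alert if refreshes stop running:

```sql
CREATE TABLE matview_refresh_log (
    view_name    TEXT PRIMARY KEY,
    refreshed_at TIMESTAMPTZ NOT NULL
);

-- Run these together from whatever triggers the refresh:
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_transaction_stats;

INSERT INTO matview_refresh_log (view_name, refreshed_at)
VALUES ('daily_transaction_stats', NOW())
ON CONFLICT (view_name) DO UPDATE SET refreshed_at = EXCLUDED.refreshed_at;
```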
2. Full Refresh vs. Incremental Refresh
The simple way is a full refresh: recompute everything from scratch. That’s what REFRESH MATERIALIZED VIEW does.
For large datasets, full refresh can become slow and expensive. At that point, teams often:
- Partition their data by time (e.g., one materialized view per month or per day).
- Use more advanced tools (e.g., incremental materialized views via triggers, ETL pipelines, or external systems like dbt, Kafka + stream processors, etc.).
- Pre‑aggregate data into intermediate tables and then build materialized views on top.
For many systems, though, a simple full refresh is enough—especially if you’re just getting started and want quick wins.
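A sketch of the time-partitioning idea (view names and the fixed date ranges are illustrative): closed periods are computed once and never touched again, and only the current period gets refreshed on a schedule:

```sql
-- A closed month: computed once, never refreshed again.
CREATE MATERIALIZED VIEW daily_tx_stats_2024_05 AS
SELECT date_trunc('day', created_at) AS day, status,
       COUNT(*) AS tx_count, SUM(amount) AS total_amount
FROM transactions
WHERE created_at >= '2024-05-01' AND created_at < '2024-06-01'
GROUP BY 1, 2;

-- The current month: the only view on the refresh schedule.
CREATE MATERIALIZED VIEW daily_tx_stats_current AS
SELECT date_trunc('day', created_at) AS day, status,
       COUNT(*) AS tx_count, SUM(amount) AS total_amount
FROM transactions
WHERE created_at >= '2024-06-01'
GROUP BY 1, 2;

-- A plain view stitches the pieces back together for readers.
CREATE VIEW daily_transaction_stats_all AS
SELECT * FROM daily_tx_stats_2024_05
UNION ALL
SELECT * FROM daily_tx_stats_current;
```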
3. Indexing the Materialized View
Remember: a materialized view is still a table under the hood, so you can index it.
If your dashboard filters by day and status, index those columns:
```sql
CREATE INDEX daily_transaction_stats_day_idx
    ON daily_transaction_stats (day);

CREATE INDEX daily_transaction_stats_status_idx
    ON daily_transaction_stats (status);
```
If you want to use REFRESH MATERIALIZED VIEW CONCURRENTLY, you’ll need a unique index that identifies each row:

```sql
CREATE UNIQUE INDEX daily_transaction_stats_unique_idx
    ON daily_transaction_stats (day, status);
```
Indexes will make queries against the materialized view faster, especially as the view grows.
4. Ownership and Access
Materialized views can be a shared interface between teams:
- Backend services read from them.
- Data analysts use them as starting points in BI tools.
- Other microservices consume them for precomputed data.
Because they’re database objects, you can control access and permissions. That makes them a nice contract: “If you need daily transaction stats, use daily_transaction_stats and don’t hit transactions directly for aggregations.”
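Enforcing that contract is mostly a permissions exercise (the role name here is made up for illustration): grant read access on the view while keeping the raw table locked down:

```sql
-- Analysts and dashboard services read the precomputed stats...
GRANT SELECT ON daily_transaction_stats TO analytics_reader;

-- ...but have no direct access to the raw transactions table.
REVOKE ALL ON transactions FROM analytics_reader;
```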
Using Materialized Views From Your Application
From the application side, materialized views are just tables. You query them the same way.
In a Go service using a SQL driver or ORM, it looks like any other query:
```go
import (
	"context"
	"database/sql"
	"time"
)

type DailyStat struct {
	Day         time.Time
	Status      string
	TxCount     int64
	TotalAmount float64 // NUMERIC scanned into float64; use a decimal type if exact cents matter
}

type Repo struct {
	db *sql.DB
}

func (r *Repo) GetDailyStats(ctx context.Context, from, to time.Time) ([]DailyStat, error) {
	rows, err := r.db.QueryContext(ctx, `
		SELECT day, status, tx_count, total_amount
		FROM daily_transaction_stats
		WHERE day BETWEEN $1 AND $2
		ORDER BY day, status
	`, from, to)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var stats []DailyStat
	for rows.Next() {
		var s DailyStat
		if err := rows.Scan(&s.Day, &s.Status, &s.TxCount, &s.TotalAmount); err != nil {
			return nil, err
		}
		stats = append(stats, s)
	}
	return stats, rows.Err()
}
```
The only extra logic you need is on the refresh side: making sure something (cron, worker, etc.) runs REFRESH MATERIALIZED VIEW often enough.
When You Should Not Use a Materialized View
Materialized views are powerful, but they’re not free. Some cases where they might not be a good fit:
- You need real‑time data with strict freshness guarantees: If your product requires second‑by‑second accuracy, like a trading order book display, a precomputed snapshot that’s a few minutes old might be unacceptable.
- Your data changes constantly and you’d need to refresh too often: If refreshing every minute is still not enough, the overhead might be worse than just hitting an optimized query or using a streaming approach.
- Your query patterns change frequently: If your analytics questions are still evolving every week, locking into a particular materialized view schema can make you move slower.
- You already have a dedicated analytics pipeline: If you’re using a warehouse (BigQuery, Snowflake, etc.) and ETL pipelines, sometimes it’s better to move heavy analytics entirely off the primary database instead of pushing it further with materialized views.
In those cases, consider materialized views as one tool among many—alongside caching, read replicas, background ETL, or streaming.
A Mental Model: OLTP, OLAP, and Precomputed State
A simple way to think about materialized views is through the OLTP vs. OLAP lens.
- Your main database is great at OLTP: many small reads and writes, low latency, strict consistency.
- Analytics is often OLAP: large scans, aggregations, grouping, heavy reads.
Materialized views let you carve out a small OLAP‑like island inside your OLTP database by precomputing heavy queries into a compact, query‑friendly structure.
Instead of forcing your OLTP system to behave like a data warehouse on every request, you:
- Compute heavy stuff periodically.
- Store it in a materialized view.
- Serve queries from that precomputed state.
It’s a pragmatic middle ground when:
- You can’t or don’t want to introduce a full analytics stack yet.
- You want faster dashboards and reports without major architecture changes.
Practical Tips Before You Add Materialized Views
Before you ship your first materialized view to production, here are a few practical tips:
- Start with one or two high‑impact, simple materialized views. Pick a dashboard or report that’s obviously heavy and frequently used.
- Measure the before and after. Track query time, database load, and how long the refresh takes.
- Decide and document the freshness SLA. “These numbers are updated every 5 minutes.” Make that clear to stakeholders.
- Put the refresh logic somewhere reliable. Cron job, scheduled job in your queue, or whatever fits your stack, but it shouldn’t be manual.
- Revisit your design as your data grows. What works at 10 million rows might not work at 1 billion. Be ready to iterate.
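For the "measure the before and after" step, EXPLAIN ANALYZE on both shapes of the query gives you concrete numbers to compare:

```sql
-- Before: the raw aggregation the dashboard used to run.
EXPLAIN ANALYZE
SELECT date_trunc('day', created_at) AS day, status,
       COUNT(*) AS tx_count, SUM(amount) AS total_amount
FROM transactions
WHERE created_at >= NOW() - INTERVAL '30 days'
GROUP BY 1, 2;

-- After: the same question answered from the materialized view.
EXPLAIN ANALYZE
SELECT day, status, tx_count, total_amount
FROM daily_transaction_stats
WHERE day >= NOW() - INTERVAL '30 days';
```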
Conclusion
Materialized views won’t magically fix every performance problem, but in systems where heavy aggregations are competing with your core workload, they can be a lifesaver.
They give you:
- Fast reads for complex queries
- Predictable write/refresh costs
- A clean interface for dashboards and analytics consumers
The trade‑off is freshness and complexity around refresh and indexing. If you understand and embrace that trade‑off, materialized views can turn your slow, noisy analytics queries into a predictable background job—while your users enjoy fast, responsive dashboards.
If you’re currently fighting slow reporting queries on a hot production database, a small, well‑designed materialized view might be the simplest big win you can ship next.
