Why Eve will be perfect for realtime apps

Written by lironshapira | Published 2016/12/04
Tech Story Tags: javascript | meteor | eve | programming | programming-languages

Welcome to Part VI of my VI-part series about Eve, an exciting and fascinating new programming language.

The connected-client era

In his JavaScript State of the Union talk from August 2015, Meteor’s Geoff Schmidt used the term “connected client” to describe our modern app architectures:

info.meteor.com/blog/javascript-state-of-the-union-with-meteor

Connected-client apps have multiple clients with varying amounts of CPU and storage resources, all talking to the same service in the cloud.

Schmidt points out that we expect our connected client apps to be realtime. We used to expect a web app’s UI to be static until manually refreshed, but now we expect it to be dynamic by default.

Problem: Today’s realtime databases are inadequate

It’s painful to make a connected client app today because there’s no way to continuously watch complex queries. I’m referring to the bottom-right quadrant in this table:

Today’s databases

A traditional database query is a snapshot — it tells you the value of a query at the time you ask for it, and that’s it. But if you’re building a modern realtime connected-client app, a snapshot query doesn’t match your needs. You need a continuous query.

Today, most programmers are using home-rolled solutions to get realtime functionality in their apps. In order to update your UI in realtime, you might have your server poll a SQL database, and then use websockets to send update messages to the client via some protocol you made up.
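To make that home-rolled approach concrete, here's a minimal sketch of the server side, assuming the ws npm package and a hypothetical db.query helper; the schema and the message "protocol" are made up:

    // Poll a SQL database every second and push changes to clients over a websocket.
    const WebSocket = require('ws');
    const wss = new WebSocket.Server({ port: 8080 });

    let lastCount = null;

    setInterval(async () => {
      // Snapshot query, re-run on an interval.
      const [row] = await db.query(
        'SELECT COUNT(*) AS n FROM notifications WHERE user_id = ? AND read = 0',
        ['liron']
      );
      if (row.n !== lastCount) {
        lastCount = row.n;
        // Our made-up protocol: broadcast the new count to every connected client.
        wss.clients.forEach(client =>
          client.send(JSON.stringify({ type: 'notificationCount', value: row.n }))
        );
      }
    }, 1000);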

But there are also a handful of realtime databases on the market, which is why simple continuous queries are doable and the bottom-left box is green. The most popular realtime databases are:

  • Firebase: lets you monitor changes for key lookups and simple scans
  • RethinkDB: provides changefeeds on certain ReQL queries
  • Meteor: lets you monitor arbitrary MongoDB queries
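
To give a feel for what "simple" means here, these are rough sketches of continuous queries using the circa-2016 APIs of two of them; the paths, collection names, and UI helpers (updateBadge, updateUI, removeFromUI) are made up:

    // Firebase: watch a single key.
    firebase.database().ref('users/liron/notificationCount')
      .on('value', snapshot => updateBadge(snapshot.val()));

    // Meteor: observe a simple MongoDB selector.
    Messages.find({ conversationId: 'c1' }).observeChanges({
      added(id, fields) { updateUI(id, fields); },
      changed(id, fields) { updateUI(id, fields); },
      removed(id) { removeFromUI(id); }
    });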

These databases are great for simple queries, but not for complex queries.

Before we get into their limitations, I want to thank realtime databases for even existing. They deserve tons of credit for implementing database-level continuous queries. Connected-client apps desperately need that abstraction layer; they shouldn’t be rolling their own realtime-ness.

Querying for notification counts

In “data denormalization is broken”, I gave an example of a complex query that a messaging app might use to compute how many notifications a user has. Here it is, written in ReQL:
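
(The original post embedded the query as an image. Here's a sketch of roughly what it looks like; the table names, indexes, and fields below are my assumptions, not the article's exact code.)

    // Count the conversations that have at least one message newer than the
    // user's last-seen timestamp for that conversation.
    r.table('memberships')
      .getAll('liron', { index: 'userId' })
      .filter(membership =>
        r.table('messages')
          .getAll(membership('conversationId'), { index: 'conversationId' })
          .filter(message => message('timestamp').gt(membership('lastSeenTimestamp')))
          .count()
          .gt(0)
      )
      .count()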

Imagine the user has our messaging app open and they have one unread notification. Then suddenly they receive a message from a new conversation, which should bring their notification count up to two.

This is extremely simple, conceptually. All we want to do is continuously pipe the value of the notification-count query into the little red number in the UI.

Since this is RethinkDB’s query language, you might think we can get a changefeed just by tacking on a .changes() to the query. But we can’t, because changefeeds on multi-table queries are not supported.
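
For contrast, here's roughly where the line falls (a sketch, with the same made-up table names as above):

    // Supported: a changefeed on a single-table filter.
    r.table('messages')
      .filter({ conversationId: 'c1' })
      .changes()

    // Not supported (at the time of writing): tacking .changes() onto the
    // multi-table notification-count query above just throws an error.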

Current limitations

Currently, RethinkDB and Firebase severely limit which queries you can watch, which rules out complex queries like the notification-count example above. Meteor lets you watch any MongoDB query, but only the simple ones are fast.

Today’s realtime databases

In Meteor, the divide between simple and complex is determined by whether it can use oplog tailing rather than poll-and-diff:

  • oplog tailing: Watches the MongoDB oplog to deduce how the results of your queries have changed.
  • poll-and-diff: Reruns your query every few seconds, diffs the results against the previous run, and tells you about any changes.

If you’re just looking up a document by its id, Meteor can deduce that the result of your query only changes when that specific document changes. Similarly, if your query is just a simple filter, it’s straightforward to deduce whether an oplog entry can change the queried records.
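
In Meteor terms, the divide looks roughly like this (a sketch with made-up collection names):

    // Can use oplog tailing: an id lookup or a simple field-equality selector.
    Messages.find({ _id: messageId });
    Messages.find({ conversationId: 'c1', read: false });

    // Falls back to poll-and-diff: selector features the oplog driver can't
    // handle, such as $where, or options like skip.
    Messages.find(
      { $where: 'this.timestamp > this.lastSeen' },
      { sort: { timestamp: -1 }, skip: 20, limit: 10 }
    );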

At the moment, tuning Meteor for scalability basically means avoiding complex queries.

Current workarounds

How can we program our messaging app’s client to continuously monitor notification counts? We can’t simply hand a complex query to a realtime database and watch it, even though that’s the right abstraction for what we’re trying to do. The only solutions possible with today’s technology are workarounds.

The two main workarounds are polling and denormalization. Polling draws on the feasibility of complex snapshot queries, while denormalization draws on the feasibility of simple continuous queries.

Polling solutions are simple to code up. The obvious downside is that resource usage scales with polling frequency: 100ms polling is 10x as expensive as 1-second polling.

Denormalization solutions can potentially be a lot more time-efficient, but they require some extra space. More importantly, they’re problematic to code up.
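
As a sketch of the denormalization workaround (Meteor/MongoDB-style, with made-up collection names and a hypothetical conversationIsUnreadFor helper): precompute the count on every relevant write, so the client only has to watch a trivial continuous query.

    // Write path: decide whether this message turns a previously-read
    // conversation into an unread one, and keep the precomputed counter in sync.
    function sendMessage(message) {
      const alreadyUnread = conversationIsUnreadFor(
        message.recipientId, message.conversationId
      );

      Messages.insert(message);

      if (!alreadyUnread) {
        Users.update(message.recipientId, { $inc: { notificationCount: 1 } });
      }
    }

    // Read path: the client can now watch a simple continuous query.
    Users.find(userId, { fields: { notificationCount: 1 } });

The catch is that every other code path that can affect unread-ness (marking a conversation as read, deleting a message, editing a timestamp) also has to keep the counter in sync, which is exactly where this approach gets error-prone.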

Meteor deserves some credit for baking the polling workaround into the realtime database abstraction layer. It lets us dream that one day we’ll have technology to make continuous complex queries efficient, and then our application-layer Meteor code won’t need to change at all.

Eve’s solution

In Part II, I mentioned that Eve enables sophisticated CQRS architectures, and promised to show a more elaborate example. Here goes…

Remember that query for a user’s notification counts, the one we wish we could monitor in realtime?

Here’s the equivalent query in Eve:
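
(The original post showed the Eve code as an image. The sketch below is just my guess at its rough shape in Eve's 2016 block syntax; the record tags and attribute names are assumptions, not the article's exact code.)

    search
      [#membership user: "Liron" conversation last-seen]
      [#message conversation timestamp]
      timestamp > last-seen
      unread = count[given: conversation]

    bind @view
      [#value value: "Liron has {{unread}} unread rooms"]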

What’s cool is that the green “Liron has 2 unread rooms” line is perfectly sensitive to its logical dependencies. In the demo video (embedded in the original post), you can see how changing the timestamp of a message instantly changes the number of conversation-rooms that are considered to be unread.

If you’ve used MobX, you may have been disappointed that there’s nothing like MobX for your data layer. That’s what Eve’s bind is.
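
For comparison, here's the kind of reactivity MobX gives you for in-memory state (a small sketch):

    const { observable, computed, autorun } = require('mobx');

    const state = observable({ unreadRooms: 1 });
    const label = computed(() => `Liron has ${state.unreadRooms} unread rooms`);

    // Re-runs automatically whenever a dependency changes -- like Eve's bind,
    // but only for local variables, not for data living in a database.
    autorun(() => console.log(label.get()));

    state.unreadRooms = 2;  // logs "Liron has 2 unread rooms"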

And it’s important to note: Instead of binding that 2 to the @view database for debugging, we could just as easily have bound it to a special database that Eve syncs to the client.

Eve is simultaneously pushing the envelope for CQRS, for denormalization, and for query complexity of realtime databases. Personally, I’m really stoked!

Layerless queries

“Realtime data” sounds like a fancy nice-to-have when you’re talking about monitoring a database-layer query. But when we’re doing an application-layer computation on local data, then of course we expect to work with the latest variable values.

Eve’s vision is for application-layer code to access query values out of the database the same way it accesses local variable values. It’s all part of unifying your programming stack.

And once Eve blurs those layers, perhaps the whole concept of “layers of your stack” will dissolve, leaving only layerless queries.

Artist’s rendering of Quicksort as optimized by the Eve runtime

When you write a layerless query (i.e. a standard Eve search), Eve’s runtime might perform a remote query to pull down the data just-in-time, or have it predictively pulled down to the client ahead of time. It might depend on factors like your hardware and your network connectivity.

And layerless queries won’t just abstract away data layers, they’ll also abstract away computation layers. Sometimes the only reason we have a server is because it’s doing a heavy computation. With Eve, we might declaratively define the algorithm and leave it to the runtime to decide if it’ll run on the client or somewhere outside of it.

Conclusion

Throughout the series, rather than teach Eve in detail, my goal has been to point out areas in the status quo of mainstream programming that seem ripe for improvement. I hope you agree Eve holds the promise to improve life in these areas.


Published by HackerNoon on 2016/12/04