
Cancelation without Breaking a Promise


Drew Tipson (@dtipson)

Reflecting on what was so tricky about cancelable Promises, embracing functional purity as a solution

So… “cancelable Promises” have been, well, canceled. And so the JavaScript community is left once again wondering how newer, Promise-based APIs like fetch can possibly move forward. Why has this been such a blocker? And what can we do?

Working with the future in the present

Let’s start with a quick recap of the problem space. We could talk about the “event loop” in Javascript here… but for our purposes let’s just reflect on the fact that all the code in our applications executes in a specific order of operation: inside-to-outside, first line to last. Anything that takes time to accomplish “blocks” the rest of the application until it’s complete.

Obviously, this makes activating and responding to truly asynchronous actions a bit tricky. The first and most natural pattern that can help handle this in user-level code is callbacks: you pass one function another function to be called with a result once some other interface declares that a result is available. There’s nothing inherently asynchronous about this pattern: it’s just basic higher-order functions at play:
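For instance, a minimal (and entirely synchronous) sketch, with names of my own choosing:

```javascript
// A higher-order function: it accepts a callback and decides
// when to call it with a result. Nothing async is involved yet.
const deliverResult = callback => callback(5);

deliverResult(x => console.log(x)); // logs 5, synchronously
```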

It’s worth noting that actual non-blocking asynchronous behavior is almost impossible to create or imagine in JavaScript without using special, language-level functions that provide support for it. So let’s quickly think through one of the simplest ones that the language (at least in a browser) offers: setTimeout.

// setTimeout :: (a -> b) -> Int -> a -> TimeoutId
setTimeout(x=>console.log(x), 700, 5);

In this case, we just specify a callback function, an amount of time to wait before calling it (in milliseconds), and, optionally, the value to call the callback with. Given all this, the language will wait the appropriate amount of time and then call the function with the value.

That’s it. Basic. There’s no polyfill for non-blocking setTimeout that I know of: either the language can do that (that is: wait, without blocking the execution of any subsequent lines of code) or it can’t.

But one structural limitation with setTimeout is the “dead-end” nature of its API: the side-effecting callback is not particularly extensible. That is, if you want to extend and chain further computations onto the result of the callback, you can’t do so after defining the core callback operation itself. You’d have to just get everything you want to have happen ready and all packaged up inside the callback in the first place, before handing it off to setTimeout to be executed. Now, the callback could be made to trigger other functions that are already defined in an outer scope… or even to schedule them to run later using another setTimeout. But the fact remains that the actual return value returned from the callback doesn’t really GO anywhere:

setTimeout(x=>x, 700, 5);

Our callback function there, x=>x, in practice returns a 5 as its result, but it’s basically pointless: the callback is defined synchronously (in the present) but it runs asynchronously (in the future). And so there’s no place for the “5” to go: nothing else is or can be defined for it to feed into. Nothing “else” exists in its future timeframe to listen to it.

Can we introduce this concept of “future listening”? Certainly! And that’s how we get Promises. Promises are like a representation of a value that will exist in the future… and thus provide you with an intelligible way to code against those missing values in the present. They do this by sort of “boxing up” a future event: containing an eventual explosion, and state-fully storing its result.

Thus, when you construct a Promise, you define a function that internally executes something like a setTimeout operation but then also specifies how to capture that result by calling one of two specialized functional handlers: resolve or reject. Like this:
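A sketch of that shape (the 700ms delay and the value 5 are just placeholders):

```javascript
const eventualFive = new Promise((resolve, reject) => {
  // kick off the async effect, and capture its eventual result
  // by handing it to the special `resolve` handler
  setTimeout(x => resolve(x), 700, 5);
});

// "future listening": chain onto the eventual value
eventualFive.then(x => console.log(x)); // eventually logs 5
```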

Hopefully I don’t need to go into the API of Promises too much. Instead, let’s call out a few key things that will quickly become relevant to our discussion of cancelation:

  1. The constructor function (the one that’s passed the special resolve/reject arguments that control the Promise’s internal state) doesn’t actually return anything (so, implicitly, it returns undefined). This is normal for Promises, just as the callback’s return value was for setTimeout. Even if it does return something, it doesn’t matter: new Promise returns, well, a Promise, not whatever arbitrary result the constructor might have returned.
  2. As we said, the constructor function is executed immediately.
  3. If you immediately chain a .then() method onto a promise, the result is a new, derivative Promise whose eventual state depends on the outcome of the first.
  4. This second, derivative Promise has no (and really, should not have any) hook back “into,” or control over, the original promise we created using the constructor! It simply derives its own “resolve” execution from the outcome of the former one. This channel of communication has room for only one thing, and in one direction: the value passed into the original resolve callback function, which is then farmed out to whatever function is attached to the promise using .then().

I lied a tiny bit in #4 of course: there IS another channel of communication: reject/.catch(). But rejected Promises don’t really change the key details of the story that much: if an operation rejects, that just feeds a value into the reject handler of the next .catch() operation, same as resolve did. And, same as resolve, it’s a function called with just a single value. There’s not a lot of room for extra signals in that pipeline! And that’s actually a very good thing, because we’re already dealing with a complex construct that we’re trying to keep simple & concrete.

Forestalling the future

But now we get to the meat of the matter: cancelation. What if we decide at some point that we don’t care about the original operation while we’re still in the middle of it? We’d expect two critical things to happen:

  1. whatever operation was asynchronously generating a result should stop, freeing up any resources it was consuming
  2. none of the side-effects that depend on that result (or were, at least, waiting until it arrived) should ever run

This seems deceptively simple, right?

And working with just a bare setTimeout, it is actually pretty easy. While it might be an interface for creating asynchronous effects, setTimeout does return something immediately as soon as it’s called: a browser session-unique id. And if you’ve ever worked with setTimeout, you know that you can simply use that id to cancel the entire operation before it completes:
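A quick sketch:

```javascript
// setTimeout hands back an id immediately, in the present...
const cancelId = setTimeout(x => console.log(x), 700, 5);

// ...so we can cancel the whole operation before it ever fires:
clearTimeout(cancelId); // the callback will now never run
```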

But let’s note something very important here about scope: the cancelId exists in the outer scope. Which is to say: it exists in the present. The x=>console.log(x) function, on the other hand, sort of exists in the future. What’s accessible in the present is probably still accessible in the future, and it’s thus possible to sneak the cancelation id into the callback function by using a mutable external variable, like this:
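Something like this (note the mutable outer `let`):

```javascript
// a mutable reference created in the present...
let cancelId;

cancelId = setTimeout(x => {
  // ...is still visible here, in the future. But by now,
  // clearing the timeout is pointless: it has already fired!
  clearTimeout(cancelId);
  console.log(x);
}, 700, 5);
```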

But hopefully you can see that doing this is sort of pointless: once the inner callback runs, it’s too late to cancel the timeout. It’s already run! Thus, the ideal time and place for you to get access to any cancelation controls is really the exact same as the moment you choose to execute a cancelable operation… and in the exact same scope. setTimeout gets this right.

Now, the resources involved in making a setTimeout call are relatively minimal, so saving CPU cycles isn’t as big a deal there as just stopping any side effects. But other asynchronous APIs can be incredibly resource intensive. A network request for a big file. Or even an async file read operation. If our program or user decides that we don’t need to finish those operations, then optional cancelation offers a pretty huge possible performance win.

Of all the asynchronous APIs in the mix, there are two flavors: those that already use (i.e. return) Promises, and special snowflakes like setTimeout. In the former group, we have things like the prospective new fetch API, which is blocked precisely because nobody is quite sure how to shoehorn cancelation into its Promise-based API. Promises, and their syntax-sugary friends async/await, seemed like a very promising abstraction for clean, synchronously defined, non-blocking async code.

But let’s take a quick look at the latter, supposedly old-school group, where we also have things like XMLHttpRequest and FileReader. With most of these interfaces, you first create an object that can dispatch an asynchronous action, define (or are given) some handlers and hooks (including ones for cancelation) on it, and then finally execute the action with all those callback-y things baked in:
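A sketch of that flow with XMLHttpRequest (a browser API; the function and handler names here are my own):

```javascript
// 1. create a control object that can dispatch the async action,
// 2. attach handlers and hooks to it before anything runs,
// 3. then finally fire it off, with all the callbacks baked in.
function getWithAbort(url, onLoad, onError) {
  const xhr = new XMLHttpRequest();
  xhr.addEventListener('load', () => onLoad(xhr.responseText));
  xhr.addEventListener('error', onError);
  xhr.open('GET', url);
  xhr.send();
  // we still hold the in-scope control object afterwards,
  // so cancelation is trivial:
  return () => xhr.abort();
}
```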

Critically, you fire off this execution step using that original control object: one that you have an in-scope reference to. That’s what we said we wanted! Great.

Well… that’s exactly what you do NOT have in the case of Promises! You can, of course, wrap those special APIs inside of a Promise. But with those special snowflake APIs, setTimeout or a WebWorker event channel or whatever: if you use them to create a new Promise, then those control references exist inside the scope of the Promise constructor function and are hooked up to, and only to, the INNER surface of the Promise. But as we’ve seen, nothing other than the resolved or rejected values will ever come back out again! You have no way back “into” the operation you’re running, and thus no real control over it.

Now, we could play the same trick we played with setTimeout: we can first define a variable in the outer scope and then just re-assign it inside the constructor, retaining access to it afterwards. This is basically what many (non-native) implementations of simple “cancelable” promises do. And in fact, this approach is sort of the genesis of the (now also retracted?) cancelation Token proposal.
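In sketch form:

```javascript
let cancel; // mutable reference defined in the present

const eventualFive = new Promise(resolve => {
  const id = setTimeout(resolve, 700, 5);
  // smuggle a control function back out of the constructor's scope:
  cancel = () => clearTimeout(id);
});

// later, from the outer scope:
cancel(); // the effect stops... but eventualFive (and everything
          // derived from it) now just sits pending, forever
```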

That’s a bit of a one-off, so most implementations tend to wrap the entire construct in more layers so that they can return a unique cancelation method on the Promise object, as well as shims to make sure the extension-y weirdness gets carried through successive .then().catch() etc. chains. Or they model the token as a Promise itself, complete with an optional cancelation reason.

But if that already sounds like a tremendous amount of headache and overhead for a fairly rare operation, you’re not wrong. And we haven’t even gotten into the incredibly confusing question of what happens to the inherited state of derived Promises.

Derived and Dependent State

As we said, Promises are in some sense first-class values that you can pass around and attach additional .then() handlers to at any time. But when an original effect is canceled, how do we model that down through a chain of stateful dependencies? We can’t let any of their success handlers get called (causing unwanted side-effects, based on values that we now don’t have and never will have). But what if the same Promise is used several times with different effects: do we model cancelation as an error (propagating it everywhere)?

I mean, cancelation isn’t really an error, nor should every affected function need to know how to catch it: they just need to not run at all. So then… do we just never let any of these derived Promises resolve… ever? Do we build some way to tell them that they’re never going to get a value OR an error (so that we at least can chain on some special cleanup method like .finally() to handle that case)?

There are lots of possible ways to answer those questions… and that’s actually the problem! None are particularly natural or intuitive: you just have to sort of pick one and live with the downsides. Libraries like Bluebird handle the “multiple dependencies” problem, for instance, by basically statefully keeping track of every consumer attached before an original promise is canceled. The original effect is canceled only if all the attached handlers cancel, throwing an error for any handlers that are attached after the cancelation occurs.

It can work, but it’s still a pretty ugly, race-y, somewhat mind-bending system.

But if you even slightly agree with me that Promise cancelation gets pretty gross, then let’s consider why this is all such a problem in the first place: because of statefulness itself. That is, Promises are not just descriptions of future events: they’re little state machines that we treat, and even sometimes think of, as values, even though they’re really sort of value containers that you can map over. It sort of works, most of the time. And it’s deceptively exciting: we can store and pass references to values around even before they exist!

But it makes a lot less sense when it leads us to talk about a canceled value. Like a time-traveler that murders their own grandparents, a canceled value, by all rights, should never have existed in the first place. In fact, we shouldn’t have a reference to it and thus we never could have/should have attached all these derived states to it in the first place! After all, if you travel back in time and prevent your grandparents from having kids, it’s not just you that shouldn’t exist, your grandchildren shouldn’t either.

And yet, that’s exactly the sort of bizarre construct we’re stuck with when using Promises as an abstraction for async operations!

So, shoot. Are we going to be stuck with this mess when fetch and other Promise-based APIs roll out? Probably! But did it/does it have to be this way?

Well, no. So remember how we bemoaned the fact that the callback in setTimeout is not, per se, composable (or at least, once you’ve described a setTimeout, you can’t chain any further behavior onto the callback)? Solving this with Promises was one possible solution. But it also introduced this “future value” abstraction that, as we’ve seen, is deeply troubling.

There is another possible solution though: a form of chain-ability not by using linked states, but instead by implementing “lazy” operations. We’re talking of course, about the functional type known variously as a Future or a Task.

There are lots of great Task/Future libraries out there, but because the core concept is so fundamentally simple, let’s code up a Task right here and now:
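A minimal sketch:

```javascript
// A Task just stores a function of two handlers, unexecuted,
// and exposes it later as .fork. That's the whole trick.
const Task = fork => ({fork});
```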

Yep, that’s the core of the functional alternative to Promises, all in just a couple characters of code. And yet, its constructor usage is astonishingly similar to Promises!
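For example (repeating the one-line Task for self-containment; the timing and value are arbitrary):

```javascript
const Task = fork => ({fork}); // the minimal Task type

// this looks almost exactly like the Promise constructor...
const laterFive = Task((reject, resolve) =>
  setTimeout(x => resolve(x), 700, 5)
);
// ...except that, crucially, nothing whatsoever has executed yet
```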

Now, that won’t actually do anything, of course: we have to run .fork() by passing it two handlers that match the ones we defined in the constructor.

Note that the cancelation interface is exposed right out of the box without any extra work here: calling .fork very naturally returns the setTimeout id token that we can then call clearTimeout on if we ever want to:
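Putting it together (a sketch):

```javascript
const Task = fork => ({fork});

const laterFive = Task((reject, resolve) =>
  setTimeout(x => resolve(x), 700, 5)
);

// .fork executes the constructor, wiring our handlers in
// as its reject/resolve arguments, and hands back whatever
// the constructor returned: here, the raw setTimeout id.
const cancelId = laterFive.fork(
  e => console.error(e), // onFailure
  x => console.log(x)    // onSuccess
);

clearTimeout(cancelId); // canceled: the success handler never runs
```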

Pretty neat, no?

If we wanted to be more generic, we could just require that all Task constructors return a generic cancelation function directly, meaning we just define and return it from the constructor, standardizing cancelation usage regardless of the specific details of how some browser-level async operation can be canceled:
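For instance:

```javascript
const Task = fork => ({fork});

// convention: every constructor returns a zero-argument
// cancelation function, whatever the underlying API is
const laterFive = Task((reject, resolve) => {
  const id = setTimeout(x => resolve(x), 700, 5);
  return () => clearTimeout(id);
});

// now cancelation looks the same for every Task:
const cancel = laterFive.fork(console.error, console.log);
cancel();
```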

In any case, the point here is that Tasks are not fancy “future values” in the same way that Promises are: they’re just descriptions of future computations. And as such, they have another key feature missing from Promises: functional purity. That is, defining a Task has no side-effects in the way that defining a Promise does. Every Task constructor returns the same thing: a Task holding that same constructor, still unexecuted. Nor does a Task “keep track” of whether “it” is “pending, resolved, or rejected.” Tasks are never any of those things because they’re just descriptions.

Thus, there’s never any such thing as a “canceled” Task that we have to worry about in the first place. You can’t cancel a Task, by definition, because it’s an operation that hasn’t run yet! Once you do run it by calling fork, you’re no longer really dealing with a Task (the defined Task itself is immutable/reusable/extendable): all that’s left is the operation, its eventual side effects, and whatever controls you’ve left yourself to alter it in the meantime. Functional types tend to “get out of your way” like this once they’ve served their effect-ful purpose.

What’s additionally great about all this, as I tried to celebrate in my original article on Tasks, is that the .fork method isn’t some special magic, and the reject/resolve arguments to a Task’s constructor aren’t some weird internal interface the way they are with Promises. fork is literally just a portal right back into the constructor definition itself: the onFailure/onSuccess callbacks you attach using .fork() are literally the reject/resolve arguments passed to the Task’s constructor!

But what about composing subsequent operations onto the constructor’s result? Easy:
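A map method composes a pure transformation onto the eventual result, still without executing anything (a sketch building on the minimal Task):

```javascript
const Task = fork => ({
  fork,
  // compose a pure function onto the eventual success value
  map: f => Task((reject, resolve) =>
    fork(reject, x => resolve(f(x)))
  ),
});

const laterFive = Task((reject, resolve) =>
  setTimeout(x => resolve(x), 700, 5)
);

const laterTen = laterFive.map(x => x * 2);

// the original setTimeout id still comes back out at the end:
const cancelId = laterTen.fork(console.error, console.log);
clearTimeout(cancelId);
```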

Adding operations onto the rejected or resolved branch of the Task constructor likewise has no side-effects: it simply describes a referentially transparent transformation from any one value to another. We didn’t even have to do anything special to allow the original setTimeout id to still come back out at the end!

Pretty much everything you can do with a Promise you can also do with a Task, and actually much more (though most of the other operations, like the monadic transform from a value into a new Task that we’re used to .then() handling implicitly, take more code and have to handle more edge cases in their internal implementation than is worth going into here). But critically, the cancelation interface can be exposed right where we need it. All we had to give up was this weird concept of first-order “future” values that was probably more trouble than it was worth in the first place.

In any case, whatever cancelable interface we ultimately end up with for Promise-based apis like fetch will probably work out ok, and let us basically do what we need to do… but there’s no getting around how problematic it’s likely to be to work with and reason about.

Personally, knowing that an alternative pattern exists (and that I’ll just be able to wrap a Task around it all and escape having to deal with it), is quite a relief!

And of course, there’s always Observables!

Addendum: zalgo considerations

One upshot of the straightforward nature of Tasks is that the “zalgo” danger, which forced ES6 Promises to bake a strict sync/async distinction into the spec, doesn’t exist with our Tasks as they stand. There’s no mandatory setImmediate/nextTick requirement making sure that Task effects always “run” asynchronously to avoid confusion about the order of things. Our whole purpose here seemed to be figuring out how to do asynchronous operations, but Tasks eliminate the distinction entirely. They do this, again, by separating the description of an effect (and any transformations of its result) from the actual moment that it’s executed and told how to dispose of itself.

Because of this, Tasks can be very naturally used to model either synchronous callbacks (functional dependencies) or asynchronous ones (time-based dependencies). That’s because it’s the final .fork operation that explicitly controls the execution timing (execute this side-effect-causing operation starting right…. now!).

Tasks work just as seamlessly across sync or async effects in the same way that our original callback method did. If you didn’t even care about cancelation you could have just returned and used a synchronous value directly from the constructor function, after all.

But this does mean that you should be careful with any value-generating side-effect that could run either synchronously or asynchronously (like a network request with synchronous caching or memoization). Either be extremely careful about when you execute it (any other synchronous code after it could run before OR after it without a nextTick guard), or just force a setImmediate onto the effect yourself.
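One way to force that, sketched against our minimal Task (the helper name is mine):

```javascript
const Task = fork => ({fork});

// wrap any Task so its handlers always run on a later tick,
// even if the underlying effect resolves synchronously
const alwaysAsync = task => Task((reject, resolve) =>
  task.fork(
    e => setTimeout(reject, 0, e),
    x => setTimeout(resolve, 0, x)
  )
);

// even a synchronously-resolving Task now resolves later:
const syncFive = Task((reject, resolve) => resolve(5));
alwaysAsync(syncFive).fork(console.error, console.log); // logs 5 on a later tick
```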

You have more power here… but also more responsibility. The “zalgo” problem occurs when side-effects aren’t always sequenced properly (and so could run in an unexpected order).

This isn’t a bad thing, really: once you have more unified control over the ordering of ALL side-effects, you can chain them all explicitly in a specific control flow instead of just writing them out as a bunch of lines that may or may not run in “happenstance” order.

