Asynchronous Runtimes: A Primer for the Perplexed Developer

by Oleg Efimov, April 4th, 2024

Designed for developers at the intersection of curiosity and professional growth, this primer aims to illuminate how asynchronous programming can elevate application performance, scalability, and responsiveness. By focusing on the universal principles of non-blocking operations and event loops, we venture beyond specific technologies like AsyncIO, Node.js, or Go.


This exploration is crafted for software developers eager to grasp the efficiencies driving modern software without the complexity of technical jargon.


Think of a single thread handling requests as a single-lane road in a busy city: each request is a car, and when there's only one lane, cars line up, causing delays. But what if our city (our service) could manage traffic more cleverly? That's where the magic of an asynchronous runtime comes into play. It's like adding an intelligent traffic system to our city, guiding cars through multiple lanes and intersections without waiting.


This system keeps traffic flowing smoothly, ensuring no car (or task) waits too long, which is precisely how an asynchronous runtime keeps our software running efficiently.


Let's consider an example of a service that completes I/O operations synchronously. For clarity, these operations are shown outside the main flow of execution in the diagram:
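
To make this concrete, here is a minimal Node.js sketch of such a synchronous handler; the data file, port, and response body are placeholders chosen for illustration:

const fs = require('node:fs');
const http = require('node:http');

// While readFileSync runs, the single thread is blocked, and every
// other incoming request waits in line, like cars on a one-lane road.
const server = http.createServer(function (req, res) {
  const data = fs.readFileSync('./data.json', 'utf8'); // blocking I/O
  res.end('Loaded ' + data.length + ' bytes');         // respond after the wait
});

server.listen(8080);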

Blocking I/O during your service's startup phase might be acceptable, but it is best avoided when processing external requests. The following diagram illustrates how service efficiency improves with the adoption of non-blocking I/O operations:
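
In code, the same handler becomes non-blocking by switching to the callback-based read; this sketch makes the same placeholder assumptions as above:

const fs = require('node:fs');
const http = require('node:http');

// fs.readFile hands the read to the runtime's I/O machinery and returns
// immediately, so the event loop stays free to accept other requests
// while this one is in flight.
const server = http.createServer(function (req, res) {
  fs.readFile('./data.json', 'utf8', function (err, data) {
    if (err) {
      res.statusCode = 500;
      res.end('read failed');
      return;
    }
    res.end('Loaded ' + data.length + ' bytes');
  });
});

server.listen(8080);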

These examples highlight scenarios where the server's performance gains are greatest. Nevertheless, the advantage of non-blocking operations persists at any frequency of incoming requests. Even in less-than-ideal conditions, performance still benefits from delegating I/O to dedicated threads within the processing workflow.


The total time required to process a request (from the client's initial request to the final response, as depicted by the blue number on the right) will invariably decrease, provided there are sufficient threads to handle all requests. In the worst-case scenario, this duration will not surpass that of synchronous processing methods.

We then encounter a practice that might seem counterintuitive to many developers. When I/O operations constitute a significant portion of request processing time, optimizing other code segments may yield only slight improvement. The time needed to fetch data from a cache can be comparable to the time spent on business logic and template rendering.


Employing caching or an in-process database can make data retrieval fast relative to the rest of the request processing.
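
As a sketch of that idea, a plain in-process map in front of a slower data source is often enough; fetchFromDatabase here is a hypothetical asynchronous lookup, not a real API:

const cache = new Map();

// fetchFromDatabase is a hypothetical async data source standing in
// for any slow external fetch.
async function getData(key) {
  if (cache.has(key)) {
    return cache.get(key); // served from memory, no I/O wait
  }
  const value = await fetchFromDatabase(key); // slow path: real I/O
  cache.set(key, value);
  return value;
}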


To segment code effectively and facilitate callback execution, one can instruct the runtime to defer the continuation of the work, giving other queued callbacks a chance to run first. Below is an illustrative example of how this concept is applied:

// func1 and func2 are assumed to be ordinary synchronous,
// CPU-bound functions defined elsewhere, as is `content` below.

// Blocking callbacks: each callback runs immediately, so both
// computations complete in one uninterrupted stretch on the thread.
function func1_cb(str, cb) {
  const res = func1(str);
  cb(res);
}

function func2_cb(str, cb) {
  const res = func2(str);
  cb(res);
}

// Non-blocking callbacks: process.nextTick() defers each callback,
// splitting the work into chunks so other queued callbacks can be
// interleaved between them.
function func1_cb(str, cb) {
  const res = func1(str);
  process.nextTick(function () {
    cb(res);
  });
}

function func2_cb(str, cb) {
  const res = func2(str);
  process.nextTick(function () {
    cb(res);
  });
}

// usage example: func2 starts on a later tick, after func1's
// result has been handed to its callback
func1_cb(content, function (str) {
  func2_cb(str, function (result) {
    // work with result
  });
});

By splitting the calculation into two parts this way, the overall processing time for nearly simultaneous request arrivals remains the same. However, the response to the first request is delayed:

This scenario represents the least favorable outcome of the “next tick” strategy. As the initial example shows, the method is benign for infrequent requests. Should requests arrive at a moderate pace, the technique improves processing speed by allowing new requests to be accepted and non-blocking operations to be initiated during the breaks in execution. This effectively reduces both the total processing time and the average time per request:
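
Here is a runnable sketch of that interleaving: two simulated requests are each split into stages with process.nextTick, and their stages alternate instead of one request monopolizing the thread. (In Node.js, the nextTick queue is drained before the event loop resumes, so to also yield to pending I/O callbacks, setImmediate can be used in the same pattern.)

// Two simulated requests, each split into three stages.
function handleRequest(id, stagesLeft) {
  if (stagesLeft === 0) {
    console.log('request ' + id + ': done');
    return;
  }
  console.log('request ' + id + ': stage ' + stagesLeft);
  process.nextTick(function () {
    handleRequest(id, stagesLeft - 1);
  });
}

handleRequest('A', 3);
handleRequest('B', 3);
// Output alternates between A and B: each deferred stage lets the
// other request's queued stage run first.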

In conclusion, adopting non-blocking I/O is crucial for enhancing application performance and beneficial in environments with both sparse and heavy incoming request volumes. Additionally, effectively sequencing the execution flow — illustrated by concepts similar to the "next tick" technique — significantly improves server efficiency. Embracing these asynchronous programming practices offers clear advantages over traditional, synchronous methods.