mostlyjason

Founder at Dev Spotlight

Scale Your Microservices with an Easy Message Queue on Redis

If you’re a microservices developer considering communication protocols, choosing an event-driven architecture might just help you rest a little easier at night. With the right design, event-driven architecture can help you create apps that are decoupled and asynchronous, giving you the major benefits of an app that is both performant and easily scalable.
However, once you’ve chosen event-driven, you still have several crucial design decisions to make. (See this article on event-driven architectures and options.) And one of the first, and most important, decisions is whether to use message queues or streams. 
In message queues, a sender places a message targeted to a recipient into a queue. The message is held in the queue until the recipient retrieves it, at which time the message is deleted.
Similarly, in streams, senders place messages into a stream and recipients listen for messages. However, messages in streams are not targeted to a specific recipient, but rather are available to all interested recipients. Recipients can even consume multiple messages at the same time, and can play back a series of messages through the stream’s history.
In this article, we’ll narrow our focus to message queues. We’ll create and deploy a simple, quick-to-stand-up message queue using Heroku, Redis, and RSMQ. Then we’ll look at how our system works, what it can do, and some of its advantages.

Why Message Queues Are Helpful

Message queues can be thought of as the original event-driven architecture. They drove the adoption of early event-driven designs and are still in use today. In these message queue designs, a client (or other component) traditionally creates a message when some action happens, then sends that message to a queue, targeted to a specific recipient.
The recipient, which has been sitting idle waiting for work, receives (or retrieves) the message from the queue, processes it, and does some unit of work. When the recipient is done with its work, it deletes the message from the queue.
This traditional path is exactly what our example below will do. It’s a simple setup, but by placing a queue between the producer and consumer of the event, we introduce a level of decoupling that allows us to build, deploy, update, test, and scale those two components independently.
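The produce-receive-delete cycle described above can be sketched with a toy in-memory queue. This is for illustration only (RSMQ, used later in this article, backs the real queue with Redis and adds visibility timeouts, delays, and persistence):

```javascript
// A toy in-memory queue illustrating the send -> receive -> delete lifecycle.
// Illustration only: RSMQ provides the real, Redis-backed version of this.
class ToyQueue {
  constructor() {
    this.messages = new Map(); // insertion order doubles as FIFO order
    this.nextId = 1;
  }
  send(body) {
    const id = String(this.nextId++);
    this.messages.set(id, body);
    return id;
  }
  receive() {
    // peek at the oldest message without removing it
    const first = this.messages.entries().next();
    if (first.done) return null;
    const [id, body] = first.value;
    return { id, body };
  }
  delete(id) {
    return this.messages.delete(id);
  }
}

const q = new ToyQueue();
q.send("application submitted"); // producer drops a message into the queue
const msg = q.receive();         // consumer picks it up...
// ...does its unit of work, then deletes the message
q.delete(msg.id);
console.log(q.receive());        // null: the queue is empty again
```

Note that the producer and consumer only share the queue itself; neither needs to know the other exists, which is exactly the decoupling described above.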
This decoupling not only makes coding and DevOps easier (since our components can remain ignorant of one another), but also makes our app much easier to scale up and down. We also reduce the workload on the web dynos, which lets us respond to clients faster and allows our web dynos to process more requests per second. This isn't just good for the business, but it's great for user experience as well.

Our Example App

Let's create a simple example app to demonstrate how a message queue works. We’ll create a system where users can submit a generic application through a website. This is a simple project you can use just to learn, as a real-world use case, or as a starting point for a more complicated project. We’re going to set up and deploy our simple yet powerful message queue using Heroku, Redis, Node.js, and RSMQ. This is a great stack that can get us to an event-driven architecture quickly.
Heroku, Redis, and RSMQ—A Great Combination for Event-Driven
Heroku, with its one-click deployments and “behind-the-scenes” scaling, and Redis, an in-memory data store and message broker, are an excellent pair for quickly deploying systems that allow us to focus on business logic, not infrastructure. We can quickly and easily provision a Redis add-on on Heroku that will scale as needed and hide the implementation details we don’t want to worry about.
RSMQ is a simple open-source message queue built on top of Redis that is easy to deploy. RSMQ has several nice features: it’s lightweight (just 500 lines of JavaScript), it’s fast (10,000+ messages per second), and it guarantees delivery of a message to exactly one recipient.
We’ll also follow the “Worker Dynos, Background Jobs, and Queuing” pattern, which is recommended by Heroku and will give us our desired decoupling and scalability. Using this pattern, we’ll deploy a web client (the browser in the below diagram) that handles the user input and sends requests to the backend, a server (web process) that runs the queue, and a set of workers (background service) that pull messages from the queue and do the actual work. We’ll deploy the client/server as a web dyno, and the worker as a worker dyno.
Let’s Get Started
Once you’ve created your Heroku account and installed the Heroku CLI, you can create and deploy the project easily using the CLI. All of the source code needed to run this example is available on GitHub.
$ git clone https://github.com/devspotlight/example-message-queue.git
$ cd example-message-queue
$ heroku create
$ heroku addons:create heroku-redis
$ git push heroku master
$ heroku ps:scale worker=1
$ heroku open
If you need help with this step, here are a few good resources:
System Overview
Our system is made up of three pieces: the client web app, the server, and the worker. Because we are so cleanly decoupled, both the server and worker processes are easy to scale up and down as the need arises.
The Client
Our client web app is deployed as part of our web dyno. The UI isn’t really the focus of this article, so we’ve built just a simple page with one link. Clicking the link posts a generic message to the server.
The Web Server
The web server is a simple Express server that delivers the web client. It also creates the queue on startup (if the queue doesn’t already exist), receives new messages from the client, and adds new messages to the queue.
Here is the key piece of code that configures the variables for the queue:
let rsmq = new RedisSMQ({
        host: REDIS_HOST,
        port: REDIS_PORT,
        ns: NAMESPACE,
        password: REDIS_PASSWORD
    });
and sets up the queue when the server first runs:
rsmq.createQueue({qname: QUEUENAME}, (err) => {
   if (err) {
        // "queueExists" just means a previous run already created the queue;
        // any other error is a real problem
        if (err.name !== "queueExists") {
            console.error(err);
            return;
        }
        console.log("The queue exists. That's OK.");
        return;
   }
   console.log("queue created");
});
When a client posts a message, the server adds it to the message queue like this:
app.post('/job', async (req, res) => {
   console.log("sending message");
   rsmq.sendMessage({
        qname: QUEUENAME,
        message: `Hello World at ${new Date().toISOString()}`,
        delay: 0
   }, (err) => {
        if (err) {
            console.error(err);
            res.sendStatus(500);
            return;
        }
        console.log("pushed new message into queue");
        res.sendStatus(200);
   });
});
The Worker
The worker, which fittingly is deployed as a worker dyno, polls the queue for new messages, then pulls those new messages from the queue and processes them.
We’ve chosen the simplest option here: the code reads the message, processes it, then manually deletes it from the queue. Note that there are more powerful options available in RSMQ, such as “pop”, which reads and deletes a message from the queue in a single call, and a “real-time” mode for pub/sub capabilities.
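For comparison, here is a minimal sketch of the “pop” alternative using RSMQ's popMessage. The `popOne` helper name is ours; it accepts any RSMQ-style client so the logic can be exercised without a live Redis connection:

```javascript
// Pop a single message: popMessage reads and deletes in one call, so there is
// no separate deleteMessage step. `rsmq` is any RSMQ-compatible client.
function popOne(rsmq, qname, done) {
  rsmq.popMessage({ qname }, (err, resp) => {
    if (err) return done(err);
    if (resp.id) {
      // the message is already removed from the queue; just process it
      return done(null, resp.message);
    }
    done(null, null); // the queue was empty
  });
}
```

The trade-off: with “pop”, a message that fails mid-processing is already gone from the queue, whereas the receive-then-delete pattern below keeps it available until the worker explicitly deletes it.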
rsmq.receiveMessage({ qname: QUEUENAME }, (err, resp) => {
   if (err) {
      console.error(err);
      return;
   }
   if (resp.id) {
      console.log("Hey I got the message you sent me!");
      // do lots of processing here
      // when we are done we can delete the message from the queue
      rsmq.deleteMessage({ qname: QUEUENAME, id: resp.id }, (err) => {
         if (err) {
            console.error(err);
            return;
         }
         console.log("deleted message with id", resp.id);
      });
   } else {
      console.log("no message in queue");
   }
});
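The receiveMessage call above handles a single check of the queue; the worker keeps it running with a simple polling loop. A minimal sketch (the interval value and function names are illustrative, not from the example repo):

```javascript
// Run a queue check immediately, then repeat it on a fixed interval.
const POLL_INTERVAL_MS = 1000; // assumption: a 1-second poll; tune for your workload

function startWorker(checkQueue, intervalMs = POLL_INTERVAL_MS) {
  checkQueue();                               // check once right away
  return setInterval(checkQueue, intervalMs); // then keep checking on an interval
}

// e.g. startWorker(() => rsmq.receiveMessage({ qname: QUEUENAME }, handleMessage));
```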
We could easily fire up multiple worker processes using Throng, if needed. Here’s a good example of a setup similar to ours that uses this library.
Note: When you deploy the worker dyno, be sure to scale the worker processes under the “Resources” tab in the Heroku Dashboard to at least one dyno so that your workers will run, if you haven’t already done so in the CLI.

Running the Example

When we deploy and start our dynos, we see our server firing up, our queue being deployed, and our worker checking for new messages.
And when we click the link on the client, we can see the server push the message onto the queue, and then the worker grab the message, process it, and delete it.
We’ve built a quick-to-stand-up but powerful message queue with our example. We’ve built a system that separates our components, so that they are unaware of one another and are easy to build, test, deploy, and scale independently. This is a great start to a solid, event-driven architecture.
If you haven’t already, read the other articles in our series on microservices including best practices for an event-driven architecture and how stream processing makes your event-driven architecture better.

