Benchmarking Amazon SNS & SQS For Inter-Process Communication In A Microservices Architecture

by Aymen (@eon01), January 19th, 2017

This is part I of a series of practical posts that I am writing to help developers and architects understand and build microservices.

I have written other stories in the same context; here are some links:

Note that I am using Python, SNS and SQS for this example.

Amazon SNS and SQS are used in this blog post because they are easy to configure, but you may find several open source alternatives.

The results in this benchmark are relative to the Internet connection I am using and my laptop's capabilities.

My connection has a bandwidth of 9.73 Mbps download / 11.09 Mbps upload.

My laptop has 4 CPUs and 7.5 GB of memory.

Amazon Simple Notification Service

It is a fully managed push notification service that lets you send individual messages or fan out messages to large numbers of recipients. SNS can be used to send push notifications to mobile device users and email recipients, or to send messages to other services (like SQS).
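To give a rough idea of the API, here is a minimal sketch using boto (the library used later in this post) that creates a topic and subscribes an email recipient to it; the email address is a hypothetical placeholder:

import boto.sns

# Connect to SNS (eu-west-1 is the region used throughout this post)
c = boto.sns.connect_to_region("eu-west-1")

# Create (or get) a topic; the ARN comes back in a nested response dict
topic = c.create_topic("test")
topic_arn = topic["CreateTopicResponse"]["CreateTopicResult"]["TopicArn"]

# Fan out: subscribe an email recipient (hypothetical address) to the topic
c.subscribe(topic_arn, "email", "ops@example.com")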

Amazon Simple Queue Service

SQS is a fully managed message queuing service that can transmit any volume of data without losing messages. It can also act as a FIFO (first-in, first-out) queue.
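Similarly, here is a minimal boto sketch of creating a queue and writing to and reading from it directly (the queue name matches the one used later in this post; the 30-second visibility timeout is just an example value):

import boto.sqs
from boto.sqs.message import Message

# Connect to SQS in the same region
conn = boto.sqs.connect_to_region("eu-west-1")

# Create the queue (idempotent) with a 30-second visibility timeout
queue = conn.create_queue("test", 30)

# Write a message directly to the queue
m = Message()
m.set_body("Hello from SQS")
queue.write(m)

# Read it back and delete it so it is not redelivered
messages = queue.get_messages()
if messages:
    print messages[0].get_body()
    conn.delete_message(queue, messages[0])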

Unix Philosophy & Microservice Based Software

I believe that microservices are changing the way we think about software and DevOps. I am not saying that they are the solution to every problem you have, but splitting the complexity of your stack across programs that work together (the Unix philosophy) can solve many of them.

Adding a containerization layer to your microservices architecture using Docker or rkt, together with an orchestration tool (Swarm, Kubernetes, etc.), will really simplify the whole process from development to operations: it helps you manage the networking and improves your stack's performance, scalability and self-healing.

I, like many of you, adopt the philosophy of a single process per container. This is also what I assume for this tutorial, so inter-process communication here means communication between microservice containers, where each microservice runs a single isolated process that does one thing well and communicates with other processes (the Unix philosophy).

The Unix philosophy was documented by Doug McIlroy in The Bell System Technical Journal in 1978:

  1. Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new “features”.
  2. Expect the output of every program to become the input to another, as yet unknown, program. Don’t clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don’t insist on interactive input.
  3. Design and build software, even operating systems, to be tried early, ideally within weeks. Don’t hesitate to throw away the clumsy parts and rebuild them.
  4. Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you’ve finished using them.

From experience, I believe that developing software with the Unix philosophy in mind will save you a lot of headaches.

Don’t forget to check my training Practical AWS

A Common Architecture For Message Based Microservices

As you can see, a publisher sends a message to SNS with a preselected topic; depending on that topic, SNS dispatches the message to the SQS queue(s) subscribed to it.

A Simplified Architecture

In this tutorial, we will code a prototype using Python and Boto.
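Before the publisher and the consumer below can exchange messages, the SQS queue has to be subscribed to the SNS topic. That one-time wiring is assumed to be already in place in the rest of the post; a minimal sketch of it with boto (assuming the topic and the queue are both named test, as used throughout) could look like this:

import boto.sns, boto.sqs

# Connect to both services in the same region
sns = boto.sns.connect_to_region("eu-west-1")
sqs = boto.sqs.connect_to_region("eu-west-1")

# The topic the publisher writes to and the queue the consumer reads from
topic_arn = sns.create_topic("test")["CreateTopicResponse"]["CreateTopicResult"]["TopicArn"]
queue = sqs.create_queue("test")

# Subscribe the queue to the topic; boto should also set the queue policy
# that allows SNS to deliver messages to it
sns.subscribe_sqs_queue(topic_arn, queue)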

Building The Publisher

Start by creating a virtual environment and installing Boto (the Python interface to Amazon Web Services):

virtualenv sns_test

cd sns_test

source bin/activate

pip install boto

This is the publisher code:


import boto.sns, time, json, logging
from datetime import datetime

logging.basicConfig(filename="sns-publish.log", level=logging.DEBUG)

# Connect to SNS
c = boto.sns.connect_to_region("eu-west-1")

sns_topic_arn = "arn:aws:sns:eu-west-1:953414735923:test"

# Send 100 messages
for x in range(100):

    body = "Message Body From Microservice Number" + str(x)
    subject = "Message Subject Number" + str(x)

    publication = c.publish(sns_topic_arn, body, subject=subject + str(x))

    # Choosing what we want to print to the terminal
    m = json.loads(json.dumps(publication, ensure_ascii=False))

    message_id = m["PublishResponse"]["PublishResult"]["MessageId"]

    print str(datetime.now()) + " : " + subject + " : " + body + " : " + message_id

    time.sleep(5)

This code will print the sent message id with the time.

Building The Consumer

Start by creating a virtual environment and installing Boto (the Python interface to Amazon Web Services):

virtualenv sqs_test

cd sqs_test

source bin/activate

pip install boto

This is the consumer code:


import boto.sqs, time, json
from datetime import datetime

# Connect to SQS
conn = boto.sqs.connect_to_region("eu-west-1",
                                  aws_access_key_id='xxxxxx',
                                  aws_secret_access_key='xxxxxx')

# Choose the used queue
queue = conn.get_queue('test')

# While true, read the queue; if the program has read everything, it will wait (pass)
while 1:
    try:
        result_set = queue.get_messages()

        message = result_set[0]
        message_body = message.get_body()
        m = json.loads(message_body)

        subject = m["Subject"][:-1]  # drop the trailing character appended to the subject by the publisher
        body = m["Message"]
        message_id = m["MessageId"]

        print str(datetime.now()) + " : " + subject + " : " + body + " : " + message_id

        conn.delete_message(queue, message)

    except IndexError:
        pass

This code will print the received message id with the time.

Testing Our Prototype

I split my terminal into two windows, started the subscriber script first, then the publisher just after:

How Fast Is SNS+SQS?

In order to see how much time the transportation of a text message takes, we are going to use timestamps in both scripts. This is the simplified flow diagram:

Publisher -> SNS -> SQS -> Subscriber

We are sending a message of almost 75B (a small message for the moment, just for testing purposes).

In order to be more precise and make better measurements of the response time, I changed the two programs:

pub.py


import boto.sns, time, json, logging
from datetime import datetime

logging.basicConfig(filename="sns-publish.log", level=logging.DEBUG)

c = boto.sns.connect_to_region("eu-west-1")
sns_topic_arn = "arn:aws:sns:eu-west-1:953414735923:test"

for x in range(100):
    body = "Message Body From Microservice Number" + str(x)
    subject = "Message Subject Number" + str(x)
    publication = c.publish(sns_topic_arn, body, subject=subject + str(x))
    print str(time.time())

    time.sleep(1)

sub.py


import boto.sqs, time, json
from datetime import datetime

conn = boto.sqs.connect_to_region("eu-west-1",
                                  aws_access_key_id='xxxxxxxxxxx',
                                  aws_secret_access_key='xxxx')
queue = conn.get_queue('test')

x = 0

while 1:
    try:
        result_set = queue.get_messages()
        if result_set != []:
            print str(time.time())
            x += 1
            message = result_set[0]
            conn.delete_message(queue, message)
    except IndexError:
        pass

The difference between the reception time and the sending time is then calculated for each size of data sent.

For every size, 20 serial requests are sent.
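One way to compute these differences from the output of the two scripts is sketched below. It assumes the output of pub.py and sub.py was redirected to pub.log and sub.log (hypothetical file names), one epoch timestamp per line, and that messages arrive in the same order they were sent:

# Minimal sketch: per-message latency from the two timestamp logs
sent = [float(line) for line in open("pub.log")]
received = [float(line) for line in open("sub.log")]

deltas = [r - s for s, r in zip(sent, received)]

print "min: %.3fs  max: %.3fs  avg: %.3fs" % (
    min(deltas), max(deltas), sum(deltas) / len(deltas))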

Here are the different data sizes sent from SNS to SQS:

  • 75B
  • 700B
  • 1.65KB
  • 3.3KB
  • 6.6KB
  • 26.38KB

The message received by SQS will not be the same size as the data sent, since additional data and metadata are delivered along with it.

Sent Message Size Read By SQS
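For reference, what the consumer reads from SQS is the SNS notification envelope wrapping the published body, which is where the extra size comes from. Its typical shape, written here as a Python dict with illustrative values (the MessageId, Timestamp, Signature and URLs are hypothetical placeholders):

# Representative SNS notification envelope as delivered to an SQS queue
notification = {
    "Type": "Notification",
    "MessageId": "a1b2c3d4-...",
    "TopicArn": "arn:aws:sns:eu-west-1:953414735923:test",
    "Subject": "Message Subject Number0",
    "Message": "Message Body From Microservice Number0",
    "Timestamp": "2017-01-19T10:00:00.000Z",
    "SignatureVersion": "1",
    "Signature": "...",
    "SigningCertURL": "...",
    "UnsubscribeURL": "...",
}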

I stopped the benchmark at 26.38KB because there is a limit:

With Amazon SNS and Amazon SQS, you now have the ability to send large payload messages that are up to 256KB (262,144 bytes) in size. To send large payloads (messages between 64KB and 256KB), you must use an AWS SDK that supports AWS Signature Version 4 (SigV4) signing.

SNS->SQS Response Time as a Function of Sent Data Size

The response time does not change as the data size increases, which is a good thing.

Let’s see the average response time as a function of the data size:

For reasonable data sizes, the whole process of sending data to SNS, having SNS dispatch it to SQS, plus the time my Python program takes to read the data, is between 0.5 and 0.9 seconds.

During this benchmark, almost 1,000 messages were sent. I noticed that all of them were delivered; no messages were lost.

Number Of Sent Messages

SNS/SQS In Multiple Regions

Apart from my Internet connection and laptop configuration, the speed depends on how you manage your SNS/SQS regions. (I have been using the Dublin region from Paris.)

One of the ways to optimise message transportation between microservices (or between publishers and subscribers) using this messaging stack is to keep your microservices and your SNS/SQS resources in the same region.

If your publishers are not in the same region (say you have multiple publishers: one in Asia, another in Africa and a third in the US), the best thing to do is to create multiple SNS topics and SQS queues, each in a different region.
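A minimal sketch of that idea: each publisher connects to the region closest to it. The location variable and the region mapping are hypothetical, and each region is assumed to host its own test topic with its own subscribed queue:

import boto.sns

# Hypothetical mapping from a publisher's location to its nearest SNS region
NEAREST_REGION = {
    "asia": "ap-southeast-1",
    "africa": "eu-west-1",
    "us": "us-east-1",
}

publisher_location = "asia"  # hypothetical: set per deployment

c = boto.sns.connect_to_region(NEAREST_REGION[publisher_location])
topic_arn = c.create_topic("test")["CreateTopicResponse"]["CreateTopicResult"]["TopicArn"]
c.publish(topic_arn, "Message Body", subject="Message Subject")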

Connect Deeper

Microservices are changing how we make software, but one of their drawbacks is the networking part, which can sometimes be complex, and messaging is directly impacted by networking problems. Using SNS/SQS and a pub/sub model seems to be a good solution for building an inter-service messaging middleware. The publisher/subscriber scripts that I used are not really optimised for load and speed; they are kept in their most basic form.

If this article resonated with you, please join thousands of passionate DevOps engineers, developers and IT experts from all over the world and subscribe to DevOpsLinks.

Don’t forget to check my training Practical AWS

You can find me on Twitter, Clarity or my website, and you can also check my books: SaltStack For DevOps and Painless Docker.

If you liked this post, please recommend and share it to your followers.