Limiting your API requests: the right way

by Alexander Bareyko, February 20th, 2019

Hello everyone. My name is Alexander and I'm a Javascript developer. Today I want to tell you a story about my attempts to find zen while building a server application that will satisfy every API in the world.

Prologue

It all started in June 2015, when Telegram announced their new bot platform with its API. I was a full-stack Javascript and PHP developer working in a small web studio, and my whole job was to rapidly develop landing pages on that stack. The idea of having a working robot right in your messenger was mind-blowing; I had developed something similar for ICQ back in 2010. The only remaining question was an idea for the future bot. What would it do? How would I develop it? So I decided to write baneksbot.

I started reading the Bot API docs every day, trying to understand how to use them. I decided to write a small PHP bot with a MySQL database, which would simply poll a VK group for new posts, broadcast them to all subscribed users, and show the most liked posts per day/week/month/all time. It was released in July 2015.

In 2016 I decided to fully rewrite this bot in Node.js. Instead of MySQL I chose MongoDB for post storage and ElasticSearch for fast searching. Instead of long polling I started using webhooks. So baneksbot v2.0 was released. I had something like 20,000 users subscribed to new posts, so I quickly ran into Bot API restrictions and started getting HTTP 429 errors instead of Telegram responses. It was October 2016, and that was when I realised I needed to somehow limit my bot's requests and make it slower.

Rates/Limits in a nutshell

As you may know, a lot of REST API services have so-called rate limiting to prevent DoS attacks and server overload. Some have soft rules, where you can cross their limits for a short period of time, and some have strict rules, where you immediately get HTTP 429 as a response, together with a timeout after which you can give your request another try.
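For the strict kind, the recovery protocol is usually the same everywhere: back off for the period the server tells you, then try again. Here is a minimal sketch, assuming axios and the standard HTTP Retry-After header (Telegram, as we will see later, reports the timeout in the response body instead):

const axios = require('axios');

const delay = interval => new Promise(resolve => setTimeout(resolve, interval));

// On HTTP 429, wait for the server-provided timeout and retry once it has passed.
const requestWithRetry = async url => {
  try {
    return await axios(url);
  } catch (error) {
    if (error.response && error.response.status === 429) {
      const wait = Number(error.response.headers['retry-after'] || 1);
      await delay(wait * 1000);
      return requestWithRetry(url);
    }
    throw error;
  }
};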

In my case Telegram had soft but firm rate-limiting rules, which I repeatedly ignored, so lots of my messages were never sent to users. I decided to look for solutions and approaches to following these rules.

The easiest way is to just set a timeout and then send the message:

const axios = require('axios');

const delay = interval => new Promise(resolve => setTimeout(resolve, interval));

const sendMessage = async params => {
  await delay(1000);
  return axios(params);
};

Pros:

  • Easy as hell
  • Works like a charm

Cons:

  • Hard to manage (see the sketch below)
  • Impossible to configure individually
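The "hard to manage" part shows up as soon as you try to broadcast. A sketch of the problem, assuming a users collection and the sendMessage wrapper above:

// Every call pays the same fixed 1-second delay, no matter which chat it
// targets: a broadcast to 20,000 users takes roughly 20,000 seconds,
// even though the API allows far more throughput overall.
const broadcast = async users => {
  for (const user of users) {
    await sendMessage(`/sendMessage?chat_id=${user.id}&text=hi`);
  }
};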

So this is basically the main approach of people who have just moved from PHP to Node.js and are trying to write something that won't work too fast. But obviously Node.js is much more powerful when it comes to asynchronous work, so we need something more elegant. And to make it work, we need a queue.

Request queues

Actually, there are a lot of request queues on npmjs.com. Some of them are pretty good, some of them are not. I started trying them out to see whether they worked properly for my use cases. I used a library called request for making HTTP requests easily; after require('http').request it was like a breath of fresh air: you have promises, you can use streams, you can pass it user-friendly URLs, etc. So my first choice was request-rate-limiter. It can be easily configured and used everywhere. Actually, it's a perfect library for 95% of use cases.

const RateLimiter = require('request-rate-limiter');

const limiter = new RateLimiter(120); // 120 requests per minute

const sendMessage = params => limiter.request(params);

sendMessage('/sendMessage?text=hi')
  .then(response => {
    console.log('hello!', response);
  })
  .catch(err => {
    console.log('oh my', err);
  });

Pros:

  • It will not send more requests than the API allows
  • It has a built-in queue, so you can easily just drop requests there and wait for the response

Cons:

  • Only one rule can be configured per instance
  • So you can't have an overall queue that makes your application follow several rules globally (see the sketch below)
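In other words, the best you can do is create one limiter per rule, and nothing ties them together. A sketch of the problem, with the limits taken from the Telegram rules below:

const RateLimiter = require('request-rate-limiter');

// One limiter per rule...
const individualLimiter = new RateLimiter(60); // 60 per minute = 1 message per second
const groupLimiter = new RateLimiter(20);      // 20 messages per minute

// ...but each instance only counts its own requests, so together they can
// still burst past an overall ceiling: there is no shared budget between
// the two queues, and no way to prioritise one over the other.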

As you can see, this is an almost perfect library for queuing requests. But let's read the Telegram Bot API docs more carefully. There are several rules:

  • You can send 1 message per second to individual chats
  • You can send 20 messages per minute to groups/channels
  • But at the same time you cannot send more than 30 messages per second overall.

You may think that if I set the rate/limit to 1/3 (one message per 3 seconds, which satisfies 20 messages per 60 seconds), everybody will be happy. But let's think about those numbers once again.

Wait for 3 seconds before the next request

Sounds scary, huh? But it's true. If you want to follow all these rules with a single rate, you cannot send more often than that. I was disappointed. I wanted to send those posts from VK as fast as I could, in order to deliver top content to my users right after it appeared on VK. I had even promised that my bot would send you a new post no later than one minute after it was published. So I decided to develop my own queue: with blackjack and… rules.
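To put numbers on it, using the roughly 20,000 subscribers mentioned earlier:

// One message every 3 seconds keeps every rule satisfied, but:
const subscribers = 20000;
const secondsPerMessage = 3;                                      // 20 messages / 60 seconds
const hoursPerBroadcast = subscribers * secondsPerMessage / 3600; // ≈ 16.7 hours

// Whereas the overall ceiling of 30 messages per second alone would allow:
const minutesPerBroadcast = subscribers / 30 / 60;                // ≈ 11 minutes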

Smart queue

I started developing this queue in January 2017. I created the base concept and wrote the first prototype a week later. The main concepts of this queue were:

  • There should be a queue which will store requests and execute them in the right order
  • There should be an ability to set multiple rules for different requests
  • There should be a priority for rules, so less prioritised requests can wait a little and give way to more important ones
  • Even if I write these rules perfectly, there should be a plan B for retrying a request without extra pain and getting the response in the same promise

That's how I created my first public npm library, called smart-request-balancer. It has been easily following these rules and keeping my bot API-safe for almost two years. Let me explain how it works:

First of all, you require the queue,

const SmartQueue = require('smart-request-balancer');

then you initialise it

const queue = new SmartQueue(config);

and use it. Nothing special!

const sendMessage = (params, user_id, rule) => queue.request(retry => axios(params)
  .then(response => response.data)
  .catch(error => {
    if (error.response.status === 429) {
      return retry(error.response.data.parameters.retry_after);
    }

    throw error;
  }), user_id, rule);

sendMessage('/sendMessage?text=hi', user_id, rule)
  .then(response => {
    console.log('hello!', response);
  })
  .catch(err => {
    console.log('oh my', err);
  });

You may ask: what is error.response.data.parameters.retry_after? user_id? rule??? Let me explain.

For this particular example we created a sendMessage function which basically makes an axios request but also takes two more parameters: user_id and rule. Here user_id is just a unique key for the user, under which requests are stored in the queue. For example, if you want to send 30 messages to user 1 and 50 messages to user 2, there will be two queues, and they will not wait for each other but work independently. One message per second per user, remember? But at the same time they must not break the overall Telegram rule: no more than 30 messages per second globally!
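For instance (the chat ids here are illustrative, and 'individual' is one of the rules from the config shown below):

// Each unique key gets its own sub-queue, so these two users are throttled
// independently (one message per second each)...
queue.request(() => axios('/sendMessage?chat_id=1&text=hi'), '1', 'individual');
queue.request(() => axios('/sendMessage?chat_id=2&text=hi'), '2', 'individual');

// ...while the shared overall rule still caps both queues together
// at 30 messages per second.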

Still don’t understand? Let me show you how to configure this queue:

const config = {
  rules: {
    individual: {
      rate: 1,
      limit: 1,
      priority: 1
    },
    group: {
      rate: 20,
      limit: 60,
      priority: 1
    },
    broadcast: {
      rate: 30,
      limit: 1,
      priority: 2
    }
  },
  overall: {
    rate: 30,
    limit: 1
  }
};

As you can see, we have 3 rules: for individuals, for groups/channels and for broadcasting, plus an overall rule. The individual and group rules have a higher priority than broadcast, which means that when my bot is idle and wants to broadcast messages, it will happily broadcast them, but as soon as somebody sends a command it will respond immediately and then continue broadcasting. And the bot will never exceed the overall limit of 30 messages per second, so it won't be ignored by the API.
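In practice that looks something like this (a sketch: the subscribers collection and the bot event handler are assumptions, and sendMessage is the wrapper defined above):

// A long-running broadcast goes through the low-priority "broadcast" rule...
for (const user of subscribers) {
  sendMessage(`/sendMessage?chat_id=${user.id}&text=new post!`, user.id, 'broadcast');
}

// ...so when a command arrives mid-broadcast, its higher-priority
// "individual" request jumps ahead in the queue and gets answered first.
bot.on('command', msg =>
  sendMessage(`/sendMessage?chat_id=${msg.chat.id}&text=pong`, msg.chat.id, 'individual'));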

So let's go back to our example. What is retry? retry is a special function which you should call if you accidentally hit the limit (in our case, got a 429) or for some reason want to repeat your request. When you call this function with some interval, the request will be added back to the queue after that interval, and it will resolve exactly the same promise. Pretty neat, huh?

But wait, what is rule? It is exactly one of the rules we provided in the config.

How it works

Let’s summarise this algorithm:

  1. You make a request with queue.request
  2. SmartQueue adds this request to the queue for <key, rule>
  3. If the request is first in the overall queue, it is executed; otherwise it waits for its turn
  4. The request is resolved, and its queue and the overall queue are heated
  5. Empty queues are removed
  6. Return to step 3 until there are no requests left in the queue

Easy and powerful. Nothing more.
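To make those steps concrete, here is a heavily simplified, single-rule sketch of the same idea. It is not the actual smart-request-balancer implementation: there are no priorities and no per-rule configuration, just the queue-per-key bookkeeping and the pacing:

class TinyQueue {
  constructor(rate, limit) {
    this.rate = rate;         // requests allowed...
    this.limit = limit;       // ...per this many seconds
    this.queues = new Map();  // key -> array of pending requests
    this.heated = false;
  }

  request(fn, key) {
    return new Promise((resolve, reject) => {
      if (!this.queues.has(key)) this.queues.set(key, []);
      this.queues.get(key).push({ fn, resolve, reject }); // step 2: add to the <key> queue
      this.execute();
    });
  }

  async execute() {
    if (this.heated) return; // step 3: not our turn yet
    const next = this.queues.entries().next().value;
    if (!next) return;       // step 6: no requests left

    this.heated = true;
    const [key, queue] = next;
    const { fn, resolve, reject } = queue.shift();
    try {
      resolve(await fn());   // steps 3-4: execute and resolve the request
    } catch (error) {
      reject(error);
    }
    if (!queue.length) this.queues.delete(key); // step 5: remove the empty queue

    // step 4: "heat" the queue, i.e. block it until the rate window has passed
    setTimeout(() => {
      this.heated = false;
      this.execute();        // step 6: back to step 3
    }, (this.limit / this.rate) * 1000);
  }
}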

Epilogue

This queue has been working successfully inside several of my bots ever since, and I have almost forgotten about all this broadcasting stuff and started enjoying my life. For the last two years I have been polishing this library and preparing it for publishing. And now it's ready to bring peaceful queues to the people.

In conclusion, here are some useful links I used while writing this article:

All contributions are welcome. Thank you for your patience and attention. And may the force be with you! Goodbye.