Quick Node.js HTTP limiter proxy

Published: 18/12/2018 16:26

Introduction

Nowadays we tend to put load balancers and reverse proxies in front of web applications as a first line of defense. This is even the norm in most container-based environments.

While these can offer complex filtering possibilities, we do not always have the control we'd like to have over them. Maybe we're using a load balancer container package that we can't easily change, maybe we'd rather not touch the front line due to possible downtime and unplanned consequences, or maybe the problem we're dealing with is temporary.

We could of course deploy a new version of our app with better rate limiting, but not everybody has super fast development cycles. Rate limiting also specifically requires some sort of global store at application level, and backend languages like PHP do not keep any global context by themselves: you have to implement something for it, whether it's Redis, a static file somewhere, a database, or any other resident process that can retain data between requests.

This came up as a problem for one of our customers and we had to quickly find a possible solution to provide rate limiting to an existing application.

Choice of technology

We picked Node.js for the simplicity of processing requests and for the pre-existing http-proxy package.

Another advantage of this approach is its data flow and memory model.

Not only are we using Node.js streams to process requests, which are extremely memory-efficient, but we're also taking advantage of the asynchronous nature of Node.js without having to program any concurrency ourselves.

While Node.js itself is single-threaded, any blocking action is asynchronous: the main process (the event loop) can continue while the blocking task is handed off to something else, either the OS (mostly for network and socket operations) or a thread pool (for filesystem operations); these are just examples and implementations can vary.

Once a blocking task is finished, the third party (usually the OS or a Node.js worker thread) queues a callback for the main process: the code to run now that this specific blocking operation is done.

For programmers unfamiliar with JavaScript this can be disorienting, and deeply nested callbacks can create the intricate situation often referred to as callback hell. We'll however avoid that, because our solution has to be very simple by design. Anything more complex is better done at application level.

The result is that we only have one application thread, and no race conditions are possible because only one callback can run at a time; it also makes the whole thing extremely memory-efficient.
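As a tiny illustration of that guarantee, a shared in-memory counter can be updated from any callback without locks (the names here are just illustrative, not the project's actual code):

```javascript
// Shared in-memory store; safe to mutate from any callback
// because only one callback ever runs at a time.
const keys = {};

function countRequest(key) {
  // No lock needed: this read-modify-write can never be
  // interleaved with another callback's.
  keys[key] = (keys[key] || 0) + 1;
  return keys[key];
}
```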

Such a model has become very popular for web servers since the advent of Nginx, which uses a similar event loop system (so does Apache with its event MPM).

Unfortunately this also means we're stuck with a single process that is unable to create processing threads.

Using more than one process and load-balancing between them would be possible, but then we'd have to find some way to share the context between the Node processes. This is actually quite doable, as the Node child process API offers easy event-driven ways to implement interprocess communication, but it is outside the scope of our simple patch-in solution.

For our use case, making sure the addition of the proxy would have very low impact on production machines was also important, both memory and CPU-wise.

It does require extra care not to block the event loop, something you may have read about before in a web browser context. In short, anything the main Node process has to do (including blocking-operation callbacks) has to be as quick and short as possible.

As a closing note, I should mention that technologies like Rust and Go offer solutions to create very effective async-IO single processes (with threading possibilities too) and would probably be your best pick for the absolute fastest solution to this use case, at the cost of compiling your code and adding language features that may not be as built-in as they are in JavaScript.

The script

Alongside the creation of our blog section, we recently created a GitHub account to share some of what we do.

You'll find the project in its generic state in the following GitHub repository.

Setting it up

First, you'll need to get a copy of the repository, or clone it using git.

Assuming you already have Node.js installed, run the following inside the project directory to install the dependencies (actually, the single dependency):

npm install

You should now have a look at config.js, which has a few important properties to consider:

  • maxRequests - This is the maximum number of requests allowed within the timeframe specified through timeframeMs; it's set to a very low value in the repository so you can easily see the effects.
  • target - The backend web server to proxy requests to. If you need to simulate one you can use the Python HTTP server, the PHP dev server or even the http-server Node package.

Check out the project README.md file if you run into any issues.

Making it fit your needs

Open limiter-proxy.js and have a look at the following code block:

// Register the server handler.
const server = http.createServer((req, res) => {

  // If using a reverse proxy before this one,
  // make sure to have it fill in the
  // X-Forwarded-For header.
  const ip = req.headers['x-forwarded-for'] || req.connection.remoteAddress;

  if (!config.ipWhitelist.includes(ip)) {
    // Determine the "key" using the request object.
    // const key = ...
    // To make an IP blocking proxy as an example,
    // we're using the IP address as key:
    const key = ip;
    if (isUserBlocked(key, ip)) {
      blockedResponse(res);
      // Stop here so a blocked request is not proxied anyway.
      return;
    }
  }

  proxy.web(req, res);
});

What we're doing here is creating a server and giving it an inline function to handle requests.

The line:

proxy.web(req, res);

will cause the request to be transferred to the target server (along with all the headers, unchanged) and the response to be piped back to the client, so that line basically means the client is allowed to make the request as normal.

The line:

blockedResponse(res);

immediately responds to the client with an HTTP error (the error code and plain text message are configurable in config.js).

Now what we want is to find something in the request that uniquely identifies the clients you want to block (their IP address in the example code).

You should replace:

const key = ip;

with whatever suits your needs. You have access to the request body by streaming its data (which, again, doesn't block the event loop, even if the request body is huge), and the "req" object exposes all the headers, the request method (POST, GET, etc.) and more.
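For example, if your clients authenticate with an API token sent in a header, a key function might look like the sketch below. The header name and the key format are just illustrations, not part of the project.

```javascript
// Key on an API token header instead of the client IP,
// falling back to the IP when the header is absent.
// (The x-api-token header name is hypothetical.)
function requestKey(req, ip) {
  const token = req.headers['x-api-token'];
  return token ? 'token:' + token : 'ip:' + ip;
}
```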

Running the script

We run it using the forever npm package but you can use anything, including systemd on newer Linux systems.

To test the script from the project directory, you can either run limiter-proxy.js with node or use the included npm start script.
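If you go the systemd route instead of forever, a minimal unit file could look like the following; the paths and names are assumptions to adapt to your setup.

```ini
# /etc/systemd/system/limiter-proxy.service (path assumed)
[Unit]
Description=Node.js HTTP limiter proxy
After=network.target

[Service]
# Adjust the node binary path and project location.
ExecStart=/usr/bin/node /opt/limiter-proxy/limiter-proxy.js
WorkingDirectory=/opt/limiter-proxy
Restart=always

[Install]
WantedBy=multi-user.target
```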

Cleaning up

This version of the script doesn't perform any actual cleanup on the keys object.

If you register a lot of unique keys in it, it could become quite big, although you'd have to go really overboard to run into performance issues from that, since lookups in a JavaScript object are hash-based.

Still, for long-running production use it could be wise to add some kind of cleanup procedure for the keys object. Just don't make it too expensive (spread the work out if possible), because it will block the event loop and thus the processing of requests.
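A minimal sketch of such a cleanup, assuming each keys entry carries a windowStart timestamp (the exact shape in the project may differ):

```javascript
const keys = {};          // key -> { count, windowStart } (shape assumed)
const timeframeMs = 10000;

// Drop entries whose rate-limiting window has expired.
// The sweep does no I/O, so each run stays short and
// barely holds up the event loop.
function pruneKeys(now) {
  for (const k of Object.keys(keys)) {
    if (now - keys[k].windowStart > timeframeMs) {
      delete keys[k];
    }
  }
}

// Sweep once per window; unref() so the timer alone
// doesn't keep the process alive.
setInterval(() => pruneKeys(Date.now()), timeframeMs).unref();
```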
