Hack-Proof Your Backend: Exploring Rate Limiters

Enhancing your Backend Security...


Understanding the Threats: DDoS and Brute-Force Attacks

DDoS (Distributed Denial of Service) and Brute-Force attacks are two common yet significant threats that can wreak havoc on backend systems. In this section, we will delve into the nature of these attacks and their potential consequences, and highlight real-life examples to illustrate their impact.

What are DDoS attacks?

DDoS attacks are orchestrated attempts to overwhelm a server or network infrastructure with a flood of incoming traffic. The objective is to exhaust the available resources, including bandwidth, processing power, and memory.

By consuming these resources, the attackers cause the targeted system to become unresponsive or crash altogether, leaving it incapable of handling legitimate user requests and resulting in service disruption or complete unavailability.

Example:
Imagine a scenario where an e-commerce website experiences a massive surge in traffic due to a DDoS attack. As a result, genuine users are unable to access the website, leading to lost revenue and a damaged reputation for the company.

What are Brute-Force attacks?

Brute-Force attacks, on the other hand, involve repeated attempts to guess passwords or access credentials by systematically trying different combinations. This method relies on the assumption that some users may have weak or easily guessable passwords.

Real-Life Example:
Consider a scenario where an online banking application experiences a Brute-Force attack on user accounts. The attackers use automated scripts to try thousands of combinations until they successfully gain unauthorized access to an account, putting sensitive financial information at risk.

These examples highlight the severity and real-world implications of DDoS and Brute-Force attacks.

As backend developers, it is crucial to be aware of these threats and take necessary precautions to protect our applications and user data. Let's now explore how rate limiters can help mitigate the risks associated with these attacks.


Introducing Rate Limiters: A Shield Against Malicious Traffic

In the battle against malicious traffic and abuse, rate limiters emerge as a powerful weapon. These proactive defense mechanisms help protect backend systems from various threats, including DDoS attacks and brute-force attempts.

Let's dive into the world of rate limiters and discover how they can fortify our applications.

๐Ÿ›ก๏ธ Rate limiters are like vigilant guards, controlling the flow of incoming requests. They set limits on the number of requests allowed within a specific timeframe. This prevents abuse, protects system resources, and maintains optimal performance. Think of them as the gatekeepers of a well-behaved and efficient system.

💪 Protecting APIs: Limiting the number of requests per minute or hour helps prevent API abuse and ensures fair access for all users, keeping your API responsive and available for legitimate traffic.

🔒 Securing Authentication Endpoints: Rate limiters also play a crucial role in preventing brute-force attacks by limiting the number of login attempts within a given timeframe, making it significantly harder for malicious actors to crack passwords.

🔑 Tool: One popular tool in the realm of rate limiting is rate-limiter-flexible. This powerful library offers a comprehensive set of features to enforce rate-limiting rules in Node.js and TypeScript applications.

With the rate-limiter-flexible npm package, you can define custom rate limit rules based on IP addresses, user identities, or specific API endpoints. The tool allows you to set limits on the number of requests per minute, hour, or day, giving you fine-grained control over your application's traffic.
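As a quick preview, here is a minimal sketch of an hourly, per-user limit (the numbers, the keyPrefix, and the user-ID key are illustrative assumptions, not values prescribed by the library; the full setup is covered step by step below):

const { RateLimiterMemory } = require('rate-limiter-flexible');

// Illustrative: allow up to 100 requests per user per hour
const hourlyUserLimiter = new RateLimiterMemory({
  keyPrefix: 'user_hourly', // namespace for this limiter's keys
  points: 100,              // requests allowed per window
  duration: 60 * 60,        // window length in seconds (1 hour)
});

// In a route handler, consume one point for the authenticated user (hypothetical req.user.id):
// hourlyUserLimiter.consume(req.user.id).then(() => { /* allowed */ }).catch(() => { /* limited */ });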


Implementation in Node.js Applications

🚀 Ready to supercharge your Node.js or TypeScript backend with the robust rate-limiting capabilities of rate-limiter-flexible?

Let's embark on a step-by-step journey to integrate this powerful tool into your application and unleash its protective powers.

Step 1:

Ensure that you have Node.js and npm (Node Package Manager) installed on your system. Open your project directory and run the following command to install rate-limiter-flexible:

npm install rate-limiter-flexible

Step 2:

Now that Rate-Limiter-Flexible is part of your project's dependencies, it's time to configure it to suit your specific needs. First, import the library into your application:

const { RateLimiterMemory } = require('rate-limiter-flexible');

RateLimiterMemory is an in-memory rate limiter: it operates solely in the application's memory, where it stores keys and values for the distinct users making requests to the server.

Step 3:

Create an instance of the rate limiter by defining the limits and parameters according to your desired criteria. Here's an example of creating a rate limiter that limits requests per IP address:

const rateLimiter = new RateLimiterMemory({
  points: 10, // Number of requests allowed
  duration: 1, // Timeframe (in seconds) for the limit
  blockDuration: 60 * 5, // Block a user from making further requests for 5 minutes once the limit is exceeded
});

The rateLimiter instance exposes configuration properties such as points, duration, and blockDuration, and provides methods such as consume, get, set, block, and delete.

Step 4:

Now comes the exciting part! It's time to apply the rate-limiting logic to your endpoints or routes. You can use middleware functions to seamlessly integrate Rate-Limiter-Flexible into your existing routing system. Here's an example of how to apply rate limiting to a specific endpoint:

app.use('/api/limited-endpoint', (req, res, next) => {
  rateLimiter.consume(req.ip) // Replace with desired identifier, such as user ID or API key
    .then((rateLimiterRes) => {
      // Allow the request to proceed
      next();
    })
    .catch((rateLimiterRes) => {
      // Request exceeds the rate limit
      res.status(429).json({ error: 'Too Many Requests' });
    });
});

Now let's break down the code:

  • Initially, every IP address making a request to the endpoint /api/limited-endpoint is assigned 10 points in total, which is the maximum number of requests that IP address can make within the configured duration.

  • rateLimiter.consume() is a function that deducts points from an IP address's allowance every time that address requests a particular API endpoint.

    Example:
    So if a user makes a request to /api/limited-endpoint for the first time, then after that request-response cycle completes, the number of points consumed by that user will be 1 and the remaining points will be 9.

    The more requests you make, the more points are deducted.

    But why deduct 1 point at a time? That is simply the default value.
    - consume(req.ip, 2) : deducts 2 points every time the user makes a request
    - consume(req.ip) : omitting the second argument deducts the default of 1 point

  • In consume(req.ip), req.ip is the key that identifies a specific user; it is stored in memory with the consumed points as its value.

  • rateLimiter.consume(req.ip) returns a Promise; whether the Promise is resolved or rejected, it yields this RateLimiterRes object:

RateLimiterRes = {
    msBeforeNext: 250, // Number of milliseconds before next action can be done
    remainingPoints: 0, // Number of remaining points in current duration 
    consumedPoints: 5, // Number of consumed points in current duration 
    isFirstInDuration: false, // action is first in current duration 
}

The Promise is resolved as long as the consumed points stay within the limit (the 10 points originally assigned in this example).

The Promise is rejected once the consumed points exceed the points allowed.
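Because the RateLimiterRes object is available in both branches, you can also surface this information to the client. Here is a minimal sketch of the same middleware with a Retry-After header added (an optional enhancement, not part of the snippet above):

app.use('/api/limited-endpoint', (req, res, next) => {
  rateLimiter.consume(req.ip)
    .then(() => next())
    .catch((rateLimiterRes) => {
      // Tell the client how many seconds to wait before retrying
      const retryAfterSeconds = Math.ceil(rateLimiterRes.msBeforeNext / 1000);
      res.set('Retry-After', String(retryAfterSeconds));
      res.status(429).json({ error: 'Too Many Requests', retryAfterSeconds });
    });
});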

Step 5:

Customize and enhance: Rate-Limiter-Flexible offers a plethora of options for customization. You can define limits based on different criteria such as IP addresses, user identities, or even custom parameters specific to your application's requirements. Explore the documentation to unleash the full potential of rate limiting in your backend.

🔧 With these steps, you've successfully integrated Rate-Limiter-Flexible into your Node.js or TypeScript application. It's now primed to safeguard your backend against excessive requests and potential abuse.


Rate-Limiting Approaches

โš–๏ธ When it comes to rate limiting, there's a balancing act between strict and lenient approaches:

🔒 Strict Rate Limiting: Setting low limits and imposing tight restrictions offers maximum protection against abuse, but it may also result in some legitimate users being affected by rate limits. Consider this approach when security is a top priority.

🔓 Lenient Rate Limiting: Allowing higher limits and providing more flexibility can ensure smoother user experiences, but it may also increase the risk of abuse or overload on system resources. Use this approach when maintaining usability is crucial.

Remember, the best rate-limiting strategy depends on your specific application needs and the level of protection required. It's important to strike the right balance to ensure security without compromising usability.
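As a rough illustration of this trade-off, here is a sketch of the same limiter tuned both ways (the numbers are illustrative assumptions, not recommendations):

const { RateLimiterMemory } = require('rate-limiter-flexible');

// Strict: 5 requests per minute, long block once exceeded
const strictLimiter = new RateLimiterMemory({
  points: 5,
  duration: 60,
  blockDuration: 60 * 15, // block for 15 minutes
});

// Lenient: 300 requests per minute, short block once exceeded
const lenientLimiter = new RateLimiterMemory({
  points: 300,
  duration: 60,
  blockDuration: 30, // block for 30 seconds
});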


Redis vs. Memory

Since RateLimiterMemory stores data only in memory, it does not offer persistence. This means that if the application restarts or crashes, the rate-limiting data will be lost, and the rate limits will reset.

RateLimiterRedis, on the other hand, stores its data in a Redis database, so even if the application crashes or restarts, the data about each user's consumed points is preserved.

Setup for Redis-based rate-limiter:

const { RateLimiterRedis } = require('rate-limiter-flexible');

const redisClient = createRedisClient(); // write your own logic here for creating a Redis client
const maxConsecutiveFailsByUsername = 5; // e.g. allow 5 consecutive failed attempts before blocking

const rateLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  points: maxConsecutiveFailsByUsername,
  duration: 60 * 60 * 3, // count failures for three hours since the first fail
  blockDuration: 60 * 15, // block for 15 minutes
});

Now, instead of memory, the key-value information about a specific user is stored in Redis, so the data persists even across application crashes or restarts.
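The setup above leaves the creation of redisClient up to you. One common option is the ioredis package (an assumption here; any Redis client supported by rate-limiter-flexible works):

const Redis = require('ioredis');

// One way to create the client passed as storeClient above
// (adjust host, port, and credentials for your environment)
const redisClient = new Redis({
  host: '127.0.0.1',
  port: 6379,
  enableOfflineQueue: false, // reject commands immediately if Redis is unreachable
});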


Multi-purpose Rate Limiters

If you want to use rate limiters for different purposes, such as one rate limiter for authentication/authorization and another for CRUD operations or other tasks, you can use key prefixes to distinguish between them.

Example:

//* General Rate Limiter for CRUD operations

const totalAttemptsByIP = 150;
// create a rate limiter
const rateLimiter = new RateLimiterRedis({
    storeClient: redisClient,
    keyPrefix: 'ip_attempts_per_day',
    points: totalAttemptsByIP, // total no. of requests allowed for a specific route
    duration: 60 * 5, // duration in seconds allowed for the requests
    blockDuration: 60 * 10
});

//*************************************************************

const maxWrongAttemptsPerDay = 30;
// Rate Limiter for Brute Force attacks
const limiterSlowBruteByIP = new RateLimiterRedis({
    storeClient: redisClient,
    keyPrefix: 'login_attempts_ip_per_day',
    points: maxWrongAttemptsPerDay,
    duration: 60 * 5, // 5 mins
    blockDuration: 60 * 60, // Block for 1 hour once the limit is exceeded
});

I have created two rate limiter instances with different purposes; they can be distinguished by their keyPrefixes: ip_attempts_per_day and login_attempts_ip_per_day.

Keys under these prefixes are stored in the Redis database, so a particular user's consumed points for general requests and for auth requests are tracked separately.
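To see how the two limiters work together, here is a sketch of a login route that charges the general limiter on every request and the brute-force limiter only on failed logins (checkCredentials is a hypothetical helper standing in for your own authentication logic):

app.post('/login', async (req, res) => {
  try {
    // Every request to this route costs one point from the general limiter
    await rateLimiter.consume(req.ip);
  } catch (rlRes) {
    return res.status(429).json({ error: 'Too Many Requests' });
  }

  const valid = await checkCredentials(req.body.username, req.body.password); // hypothetical helper
  if (valid) {
    return res.json({ message: 'Logged in' });
  }

  try {
    // Only failed logins count against the brute-force limiter
    await limiterSlowBruteByIP.consume(req.ip);
    return res.status(401).json({ error: 'Invalid credentials' });
  } catch (rlRes) {
    return res.status(429).json({ error: 'Too many failed login attempts' });
  }
});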


Most commonly used rateLimiter methods

  • consume(key): This method consumes a point for a specific user identified by the key. It returns a Promise that resolves with a RateLimiterRes object if the request is allowed and rejects if the rate limit has been exceeded.

  • get(key): This method retrieves the current state of the rate limiter for a specific user identified by the key.
    It returns a Promise that resolves to a RateLimiterRes object containing information such as the points consumed so far, the points remaining within the current time window, and the time until the rate limit resets (see the sketch after this list). If the key does not exist, the method resolves to null.

  • block(key, duration): This method blocks a specific user identified by the key for a specified duration. It prevents the user from making any further requests during the blocked period.

  • delete(key): This method removes the rate limiter entry for a specific user identified by the key. It can be used to clear the rate limit state for a user or to remove a user from rate limiting.

  • set(key, points, duration): This method allows you to manually set the points recorded for a specific user identified by the key. It takes three parameters:

    • key: The unique identifier for the user.

    • points: The number of consumed points to set for this key.

    • duration: The time window during which this value applies, specified in seconds.
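Here is a short sketch tying get, block, and delete together (userKey and the scenario are illustrative assumptions):

async function inspectAndManage(userKey) {
  // Read the current state for this key (resolves to null if no entry exists yet)
  const state = await rateLimiter.get(userKey);
  if (state) {
    console.log(`consumed: ${state.consumedPoints}, remaining: ${state.remainingPoints}`);
  }

  // Manually block this key for 10 minutes (duration in seconds)
  await rateLimiter.block(userKey, 60 * 10);

  // Later, clear the entry entirely so the key starts fresh
  await rateLimiter.delete(userKey);
}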


One Last Thing...

How do you get access to values like blockDuration, duration, …?

Example:

const rateLimiter = new RateLimiterRedis({
    storeClient: redisClient,
    keyPrefix: 'ip_attempts_per_day',
    points: totalAttemptsByIP, // total no. of requests allowed for a specific route
    duration: 60 * 5, // duration in seconds allowed for the requests
    blockDuration: 60 * 10
});

const block_duration = rateLimiter.blockDuration;
const duration = rateLimiter.duration;
const points = rateLimiter.points;
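Per-key values such as consumedPoints, on the other hand, are not properties of the limiter instance; they come from the RateLimiterRes object returned by consume() or get(). For example (the IP address is illustrative):

rateLimiter.consume('203.0.113.7')
  .then((res) => {
    // Per-key state lives on the RateLimiterRes, not on the limiter instance
    console.log(res.consumedPoints, res.remainingPoints, res.msBeforeNext);
  })
  .catch(() => {
    // Limit exceeded for this key
  });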

Conclusion

Build on these code snippets to get more ideas about using rate limiters in authentication flows and Express middleware.

If you have learnt something from this blog, make sure to like and follow me for more content on backend development :). Leave a comment if you have any doubts.

Thanks for reading!
