Implementing Request Throttling in Node.js with node-cache

Request throttling is essential for controlling the rate of incoming requests, especially in high-traffic applications where excessive requests can strain servers, increase costs, and negatively impact user experience. By using node-cache, we can create a simple, in-memory solution for throttling in Node.js. node-cache allows us to set request limits, track request counts, and enforce time-based restrictions without the need for a distributed caching system.

In this guide, we’ll implement basic and advanced request throttling techniques in Node.js using node-cache, covering rate-limiting strategies, custom TTL configurations, and best practices for managing API rate limits.

Request Throttling Architecture Overview

graph TB
    subgraph "Client Requests"
        CLIENT1[Client A<br/>IP: 192.168.1.100]
        CLIENT2[Client B<br/>IP: 192.168.1.101]
        CLIENT3[Client C<br/>IP: 192.168.1.102]
        MALICIOUS[🚨 Malicious Client<br/>High Request Rate]
    end
    
    subgraph "Throttling Layer"
        LB[Load Balancer<br/>Initial Request Distribution]
        THROTTLE[Throttling Middleware<br/>Rate Limit Enforcement]
        CACHE[node-cache<br/>In-Memory Request Tracking]
    end
    
    subgraph "Cache Storage Structure"
        KEY1[throttle:192.168.1.100<br/>Count: 8/10, TTL: 45s]
        KEY2[throttle:192.168.1.101<br/>Count: 3/10, TTL: 60s]
        KEY3[throttle:192.168.1.102<br/>Count: 10/10, TTL: 30s]
        KEY4[throttle:malicious.ip<br/>Count: 25/10, TTL: 60s]
    end
    
    subgraph "Application Layer"
        APP[Express Application<br/>Protected Routes]
        API1[GET /api/users]
        API2[POST /api/data]
        API3[PUT /api/profile]
    end
    
    subgraph "Response Handling"
        SUCCESS[✅ 200 OK<br/>Request Processed]
        THROTTLED[❌ 429 Too Many Requests<br/>Rate Limit Exceeded]
        RETRY[Retry-After Header<br/>Time Until Reset]
    end
    
    CLIENT1 --> LB
    CLIENT2 --> LB
    CLIENT3 --> LB
    MALICIOUS --> LB
    
    LB --> THROTTLE
    THROTTLE --> CACHE
    
    CACHE --> KEY1
    CACHE --> KEY2
    CACHE --> KEY3
    CACHE --> KEY4
    
    THROTTLE -->|Count < Limit| APP
    THROTTLE -->|Count >= Limit| THROTTLED
    
    APP --> API1
    APP --> API2
    APP --> API3
    
    API1 --> SUCCESS
    API2 --> SUCCESS
    API3 --> SUCCESS
    
    THROTTLED --> RETRY
    
    style CLIENT1 fill:#e8f5e8
    style CLIENT2 fill:#e8f5e8
    style CLIENT3 fill:#fff3e0
    style MALICIOUS fill:#ffebee
    style THROTTLE fill:#e1f5fe
    style SUCCESS fill:#e8f5e8
    style THROTTLED fill:#ffebee

Why Use node-cache for Request Throttling?

node-cache is ideal for implementing request throttling in applications that don’t require distributed caching because:

  1. In-Memory Storage: Data is stored in-memory, making it fast to read and write.
  2. TTL Support: node-cache supports time-to-live (TTL) configurations, allowing you to enforce request limits over specific timeframes.
  3. Lightweight Setup: With minimal setup, node-cache provides an efficient solution for limiting requests in small- to medium-scale applications.

Note: For distributed systems, consider using Redis or Memcached, which can handle request throttling across multiple nodes.


Setting Up node-cache for Throttling in Node.js

Step 1: Install node-cache

If you haven’t already, install node-cache in your project.

npm install node-cache

Step 2: Initialize node-cache with Configuration

Set up node-cache in a cache.js file. We’ll use this cache instance to track request counts for each user or IP address.

cache.js

// @filename: cache.js
const NodeCache = require('node-cache')

// checkperiod controls how often expired keys are purged (and how promptly
// 'expired' events fire); node-cache's default is 600 seconds.
const cache = new NodeCache({ checkperiod: 60 })

module.exports = cache

Request Throttling Process Flow

sequenceDiagram
    participant Client
    participant Express as Express Server
    participant Middleware as Throttling Middleware
    participant Cache as node-cache
    participant App as Application Logic
    
    Note over Client,App: First Request from New Client
    Client->>Express: GET /api/data
    Express->>Middleware: Process request
    Middleware->>Cache: Get throttle:192.168.1.100
    Cache-->>Middleware: null (key not found)
    Middleware->>Cache: Set throttle:192.168.1.100 = 1, TTL = 60s
    Cache-->>Middleware: ✅ Key set successfully
    Middleware->>App: Allow request (count: 1/10)
    App-->>Express: 200 OK with data
    Express-->>Client: Response + Headers
    
    Note over Client,App: Subsequent Requests Within Limit
    Client->>Express: GET /api/data (2nd request)
    Express->>Middleware: Process request
    Middleware->>Cache: Get throttle:192.168.1.100
    Cache-->>Middleware: count = 1
    Middleware->>Cache: Set throttle:192.168.1.100 = 2 (keep remaining TTL)
    Cache-->>Middleware: ✅ Updated successfully
    Middleware->>App: Allow request (count: 2/10)
    App-->>Express: 200 OK with data
    Express-->>Client: Response + Rate Limit Headers
    
    Note over Client,App: Request Exceeding Limit
    Client->>Express: GET /api/data (11th request)
    Express->>Middleware: Process request
    Middleware->>Cache: Get throttle:192.168.1.100
    Cache-->>Middleware: count = 10
    Middleware->>Middleware: Check: 10 >= 10 (limit)
    Middleware->>Express: ❌ Block request
    Express-->>Client: 429 Too Many Requests<br/>Retry-After: 45<br/>X-RateLimit-Remaining: 0
    
    Note over Client,App: After TTL Expiration
    Cache->>Cache: TTL expires, key deleted
    Client->>Express: GET /api/data (after reset)
    Express->>Middleware: Process request
    Middleware->>Cache: Get throttle:192.168.1.100
    Cache-->>Middleware: null (key expired)
    Middleware->>Cache: Set throttle:192.168.1.100 = 1, TTL = 60s
    Middleware->>App: Allow request (count: 1/10)
    App-->>Express: 200 OK with data

Implementing Basic Request Throttling

In basic throttling, we limit the number of requests a user can make within a fixed time period. For example, we might allow a user to make 10 requests per minute.

Step 1: Set Up Throttling Logic

Create a middleware function to track request counts and enforce the rate limit. Here, we’ll use the user’s IP address as a unique identifier for tracking.

throttle.js

// @filename: throttle.js
const cache = require('./cache')

const requestThrottle = (limit, duration) => (req, res, next) => {
  const userKey = `throttle:${req.ip}` // Use IP address as a unique key
  const requestCount = cache.get(userKey) || 0

  if (requestCount >= limit) {
    return res
      .status(429)
      .json({ message: 'Too many requests. Please try again later.' })
  }

  if (requestCount === 0) {
    // First request in the window: start the count with the full TTL
    cache.set(userKey, 1, duration)
  } else {
    // Preserve the remaining TTL so each request does not restart the window
    const remainingSec = (cache.getTtl(userKey) - Date.now()) / 1000
    cache.set(userKey, requestCount + 1, remainingSec)
  }
  next()
}

module.exports = requestThrottle

Step 2: Apply the Throttling Middleware in Express

In your main application file, use this middleware to throttle specific routes or all routes.

server.js

// @filename: server.js
const express = require('express')
const requestThrottle = require('./throttle')

const app = express()
const port = 3000

// Apply throttling: 10 requests per minute (60 seconds)
app.use(requestThrottle(10, 60))

app.get('/', (req, res) => {
  res.send('Welcome to the home page!')
})

app.listen(port, () => {
  console.log(`Server running on port ${port}`)
})

In this setup:

  • The middleware limits each IP address to 10 requests per 60 seconds.
  • If the limit is exceeded, it returns a 429 Too Many Requests response.
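The 429 response becomes far more actionable when it carries standard rate-limit headers. The helper below is a sketch, not part of node-cache itself (the function name and exact header set are illustrative); it derives the headers from the current count, the configured limit, and the key's expiry timestamp, which node-cache exposes via `cache.getTtl(key)`:

```javascript
// Build rate-limit headers from the current request count, the configured
// limit, and the key's expiry timestamp in milliseconds (as returned by
// cache.getTtl(userKey) in node-cache).
const rateLimitHeaders = (count, limit, expiresAtMs, nowMs = Date.now()) => {
  const secondsLeft = Math.max(0, Math.ceil((expiresAtMs - nowMs) / 1000))
  return {
    'X-RateLimit-Limit': String(limit),
    'X-RateLimit-Remaining': String(Math.max(0, limit - count)),
    'Retry-After': String(secondsLeft),
  }
}
```

Inside the middleware you could then call `res.set(rateLimitHeaders(requestCount, limit, cache.getTtl(userKey)))` before responding, so clients know exactly when to retry.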

Advanced Throttling with Dynamic Limits and Custom TTL

For more flexibility, you can set dynamic limits based on user roles or endpoints. Additionally, you can configure custom TTLs for different request types.

Step 1: Dynamic Throttling Logic

Modify the middleware to accept a callback function that returns a limit and duration based on request details, such as user role or route.

advancedThrottle.js

// @filename: advancedThrottle.js
const cache = require('./cache')

const advancedThrottle = (getLimitDuration) => (req, res, next) => {
  const { limit, duration } = getLimitDuration(req) // Get limit and duration dynamically
  const userKey = `throttle:${req.ip}:${req.originalUrl}` // Unique key for each IP and endpoint
  const requestCount = cache.get(userKey) || 0

  if (requestCount >= limit) {
    return res
      .status(429)
      .json({ message: 'Too many requests. Please try again later.' })
  }

  if (requestCount === 0) {
    // First request in the window: start the count with the full TTL
    cache.set(userKey, 1, duration)
  } else {
    // Preserve the remaining TTL so each request does not restart the window
    const remainingSec = (cache.getTtl(userKey) - Date.now()) / 1000
    cache.set(userKey, requestCount + 1, remainingSec)
  }
  next()
}

module.exports = advancedThrottle

Step 2: Applying Advanced Throttling

Define a function that determines rate limits based on request parameters (e.g., endpoints or user roles) and use it with the advancedThrottle middleware.

server.js

// @filename: server.js
const express = require('express')
const advancedThrottle = require('./advancedThrottle')

const app = express()
const port = 3000

// Define dynamic rate limits based on endpoint and IP
const getLimitDuration = (req) => {
  if (req.originalUrl === '/api/data') {
    return { limit: 5, duration: 30 } // 5 requests every 30 seconds
  }
  return { limit: 10, duration: 60 } // Default rate limit
}

// Apply advanced throttling
app.use(advancedThrottle(getLimitDuration))

app.get('/api/data', (req, res) => {
  res.send('Data retrieved successfully')
})

app.get('/api/profile', (req, res) => {
  res.send('User profile data')
})

app.listen(port, () => {
  console.log(`Server running on port ${port}`)
})

In this setup:

  • /api/data has a stricter limit of 5 requests per 30 seconds.
  • Other endpoints follow a default limit of 10 requests per minute.

Explanation

  • Dynamic Limits: getLimitDuration allows each endpoint to have custom rate limits and durations.
  • IP and Endpoint-Based Throttling: By including the endpoint in the key, limits are enforced per user IP and endpoint.
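The same callback can also branch on user role. The sketch below assumes an upstream authentication middleware has already attached a user object to the request; `req.user`, the role names, and the specific limits are all hypothetical:

```javascript
// Hypothetical role-aware limits; assumes an earlier auth middleware has
// attached a user object (with a role property) to the request.
const getLimitDuration = (req) => {
  const role = req.user ? req.user.role : 'anonymous'
  if (role === 'admin') return { limit: 100, duration: 60 }
  if (role === 'premium') return { limit: 50, duration: 60 }
  return { limit: 10, duration: 60 } // default for anonymous traffic
}
```

It plugs into the middleware exactly as before: `app.use(advancedThrottle(getLimitDuration))`.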

Monitoring Throttling with Events

Use node-cache events to monitor request throttling activity, tracking when keys are set, deleted, or expired.

throttleEvents.js

// @filename: throttleEvents.js
const cache = require('./cache')

cache.on('set', (key, value) => {
  if (key.startsWith('throttle:')) {
    console.log(`Throttling set for ${key}: ${value}`)
  }
})

cache.on('expired', (key, value) => {
  if (key.startsWith('throttle:')) {
    console.log(`Throttle limit expired for ${key}`)
  }
})

Require this file once in your main application (for example, require('./throttleEvents') in server.js) to start logging throttling events.


Best Practices for Throttling with node-cache

  1. Choose Appropriate Limits: Set limits based on user roles, endpoint types, and typical usage patterns.
  2. Use Unique Keys: Include user identifiers or IP addresses in cache keys to ensure individual tracking.
  3. Monitor Cache Usage: Track set, delete, and expired events to monitor throttling activity and identify high-traffic users or endpoints.
  4. Set Reasonable Expirations: Ensure TTLs match the frequency of access to avoid excessive throttling or overuse of resources.
  5. Graceful Error Handling: Return meaningful error messages or retry-after headers to inform users of retry opportunities.

Throttling Strategies Comparison

graph TB
    subgraph "Fixed Window Throttling"
        FW_TIME[Time Windows: 0-60s, 60-120s, 120-180s]
        FW_LIMIT[Limit: 10 requests per window]
        FW_ISSUE[❌ Burst Issue:<br/>10 requests at 59s<br/>+ 10 requests at 61s<br/>= 20 requests in 2 seconds]
        FW_IMPL[Simple counter reset<br/>Easy to implement<br/>Memory efficient]
    end
    
    subgraph "Sliding Window Throttling"
        SW_TIME[Continuous sliding window<br/>Always 60s from current time]
        SW_LIMIT[Limit: 10 requests per 60s period]
        SW_BENEFIT[✅ Smooth rate limiting<br/>No burst at window boundaries<br/>More accurate control]
        SW_IMPL[Track request timestamps<br/>Complex implementation<br/>Higher memory usage]
    end
    
    subgraph "Token Bucket Algorithm"
        TB_TOKENS[Token bucket with capacity<br/>Refill rate: 10 tokens/minute]
        TB_LIMIT[Consume 1 token per request<br/>Allow burst if tokens available]
        TB_BENEFIT[✅ Handles traffic bursts<br/>Flexible rate control<br/>Industry standard]
        TB_IMPL[Token counter + refill logic<br/>Moderate complexity<br/>Good performance]
    end
    
    subgraph "Performance Comparison"
        PERF1[Fixed Window: O(1)<br/>Memory: Low<br/>Accuracy: Low]
        PERF2[Sliding Window: O(n)<br/>Memory: High<br/>Accuracy: High]
        PERF3[Token Bucket: O(1)<br/>Memory: Low<br/>Accuracy: Medium]
    end
    
    subgraph "Use Case Recommendations"
        USE1[Fixed Window:<br/>Simple APIs<br/>Low traffic<br/>Basic protection]
        USE2[Sliding Window:<br/>Strict rate limiting<br/>High-value APIs<br/>Premium services]
        USE3[Token Bucket:<br/>Public APIs<br/>Variable traffic<br/>Production systems]
    end
    
    FW_TIME --> FW_LIMIT
    FW_LIMIT --> FW_ISSUE
    FW_ISSUE --> FW_IMPL
    
    SW_TIME --> SW_LIMIT
    SW_LIMIT --> SW_BENEFIT
    SW_BENEFIT --> SW_IMPL
    
    TB_TOKENS --> TB_LIMIT
    TB_LIMIT --> TB_BENEFIT
    TB_BENEFIT --> TB_IMPL
    
    FW_IMPL --> PERF1
    SW_IMPL --> PERF2
    TB_IMPL --> PERF3
    
    PERF1 --> USE1
    PERF2 --> USE2
    PERF3 --> USE3
    
    style FW_ISSUE fill:#ffebee
    style SW_BENEFIT fill:#e8f5e8
    style TB_BENEFIT fill:#e8f5e8
    style USE2 fill:#fff3e0
    style USE3 fill:#e1f5fe
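Of the three strategies compared above, this guide implements fixed and sliding windows; a token bucket is also straightforward. The class below is a self-contained sketch for illustration only — it keeps its state in plain fields rather than node-cache, and all names are ours, not from any library:

```javascript
// Minimal token bucket: holds up to `capacity` tokens, refilled
// continuously at `refillPerSec` tokens per second.
class TokenBucket {
  constructor(capacity, refillPerSec, now = Date.now()) {
    this.capacity = capacity
    this.refillPerSec = refillPerSec
    this.tokens = capacity // start full so an initial burst is allowed
    this.lastRefill = now
  }

  // Consume one token if available; returns true when the request is allowed.
  tryConsume(now = Date.now()) {
    const elapsedSec = (now - this.lastRefill) / 1000
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSec * this.refillPerSec
    )
    this.lastRefill = now
    if (this.tokens >= 1) {
      this.tokens -= 1
      return true
    }
    return false
  }
}
```

A per-client middleware could keep one bucket per IP (for example, in a Map keyed like the cache keys above) and return 429 whenever tryConsume() comes back false.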

Advanced Use Case: Sliding Window Throttling

Sliding window throttling smooths out request bursts by evaluating each request against a rolling time window rather than fixed intervals. This prevents the boundary problem of fixed windows, where a client can send a double burst by straddling two adjacent windows.

Example: Implementing Sliding Window Throttling

To implement a sliding window, store request timestamps in an array and count requests within a defined time window.

slidingThrottle.js

// @filename: slidingThrottle.js
const cache = require('./cache')

const slidingThrottle = (limit, windowMs) => (req, res, next) => {
  const key = `throttle:${req.ip}`
  const now = Date.now()
  const timestamps = cache.get(key) || []

  // Remove timestamps outside the window
  const recentRequests = timestamps.filter(
    (timestamp) => now - timestamp < windowMs
  )

  if (recentRequests.length >= limit) {
    return res
      .status(429)
      .json({ message: 'Too many requests. Please try again later.' })
  }

  // Add the current timestamp and update the cache
  recentRequests.push(now)
  cache.set(key, recentRequests, windowMs / 1000) // TTL in seconds

  next()
}

module.exports = slidingThrottle

Applying Sliding Window Throttling

Use the sliding window throttle middleware in your application.

server.js

// @filename: server.js
const express = require('express')
const slidingThrottle = require('./slidingThrottle')

const app = express()
const port = 3000

// Allow 5 requests per 10-second sliding window
app.use(slidingThrottle(5, 10000))

app.get('/', (req, res) => {
  res.send('Welcome!')
})

app.listen(port, () => {
  console.log(`Server running on port ${port}`)
})

In this example:

  • Users can make at most 5 requests in any rolling 10-second span, so requests clustered near a window boundary can no longer exceed the limit.

Conclusion

Implementing request throttling with node-cache in Node.js provides a straightforward, efficient way to control API usage and protect your server from excessive requests. By setting custom limits, exploring dynamic throttling, and using sliding windows, you can create a robust rate-limiting system that ensures fair usage, maintains performance, and delivers a smooth user experience.

Integrate these techniques in your Node.js applications to manage traffic effectively and protect your resources from high demand, all while maintaining application responsiveness.
