
Rate Limiting Fun

Have you ever thought about how you'd implement rate limiting in your APIs? In this post, Ayende knocks out some code to play with the ideas behind an implementation.

I was talking with a few of the guys at work, and the concept of rate limiting came up. In particular, being able to limit the number of operations a particular user can do against the system. The actual scenario isn't really important, but the idea kept bouncing in my head, so I sat down and wrote a quick test case.

Nitpicker corner: This is scratchpad code; it isn't production worthy, tested, or validated.

The idea is probably best explained in code, like so:

// Allow up to 10 acquisitions; callers wait up to 3 seconds for a permit.
private readonly SemaphoreSlim _rateLimit = new SemaphoreSlim(10);

public async Task HandlePath(HttpContext context, string method, string path)
{
    if (await _rateLimit.WaitAsync(3000) == false)
    {
        context.Response.StatusCode = 429; // Too many requests
        return;
    }

    // actually process the request
}

Basically, we define a semaphore with the maximum number of operations that we want to allow, and we wait on the semaphore when we start handling an operation.

However, there is nothing that actually releases the semaphore. Here we get into design choices.

We can release the semaphore when the request is over, which effectively gives us rate limiting in terms of concurrent requests.
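For example, a minimal sketch of that variant (the method name here is mine, and it assumes the same _rateLimit field as above) releases the permit in a finally block, so the limit applies to requests that are in flight rather than to requests per unit of time:

public async Task HandlePathConcurrent(HttpContext context, string method, string path)
{
    if (await _rateLimit.WaitAsync(3000) == false)
    {
        context.Response.StatusCode = 429; // Too many requests
        return;
    }

    try
    {
        // actually process the request
    }
    finally
    {
        // Give the permit back once this request is done, so at most
        // 10 requests are in flight at any given time.
        _rateLimit.Release();
    }
}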

The more interesting approach from my perspective was to use this:

// Once a second, top the semaphore back up to its full capacity (10),
// releasing however many permits were consumed during the last tick.
_timer = new Timer(state =>
{
    var consumed = 10 - _rateLimit.CurrentCount;
    if (consumed == 0)
        return;
    _rateLimit.Release(consumed);
}, null, 1000, 1000);

Using this approach, we are actually limited to 10 requests a second.

And yes, this actually allows more concurrent requests than the previous option, because if a request takes more than one second, the timer's tick will release its permit while it is still running.
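Putting the two pieces together, a rough self-contained sketch (the class name and the disposal wiring are my own additions, not part of the original code) might look like this:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class RateLimitedHandler : IDisposable
{
    private const int MaxPerSecond = 10;
    private readonly SemaphoreSlim _rateLimit = new SemaphoreSlim(MaxPerSecond);
    private readonly Timer _timer;

    public RateLimitedHandler()
    {
        // Every second, release however many permits were consumed
        // during the last tick, refilling the semaphore to capacity.
        _timer = new Timer(state =>
        {
            var consumed = MaxPerSecond - _rateLimit.CurrentCount;
            if (consumed > 0)
                _rateLimit.Release(consumed);
        }, null, 1000, 1000);
    }

    public async Task HandlePath(HttpContext context, string method, string path)
    {
        if (await _rateLimit.WaitAsync(3000) == false)
        {
            context.Response.StatusCode = 429; // Too many requests
            return;
        }

        // actually process the request
    }

    public void Dispose() => _timer.Dispose();
}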

I tested this with gobench, and it confirmed that we are serving exactly 10 requests per second.


