This repository has been archived by the owner on Nov 6, 2019. It is now read-only.

Feature Request: Rate Limiting #65

Open
athenawisdoms opened this issue Dec 12, 2015 · 2 comments

Comments

@athenawisdoms

Since rate limiting and throttling are very commonly needed with job queues, it would be great to have this feature in monq, or at least a helper package that works with monq.

From my perspective, there is not much benefit to throttling on the producer side, so a rate-limiting feature on the worker would be ideal for most use cases.
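For illustration, worker-side rate limiting is often implemented with a token bucket: the worker only pulls the next job when a token is available. This is a minimal sketch of that idea, not part of monq's API; the `TokenBucket` class and its parameters are hypothetical.

```javascript
// Hypothetical token-bucket limiter a worker could consult before
// dequeuing its next job. Not part of monq -- just a sketch.
class TokenBucket {
  constructor(ratePerSec, burst) {
    this.capacity = burst;     // max tokens that can accumulate
    this.tokens = burst;       // start full
    this.ratePerSec = ratePerSec;
    this.last = Date.now();
  }

  // Returns true (and consumes one token) if a job may run now.
  tryRemove() {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.ratePerSec
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A worker loop would call `tryRemove()` before processing and sleep briefly when it returns false, which caps throughput per worker (though not across multiple worker processes).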

Thank you!

@scttnlsn
Owner

Are you imagining a rate limit per individual worker? Or per queue (i.e., all workers reading from a single queue cannot process more than N jobs/sec)?

@Zaephor

Zaephor commented Jan 30, 2016

I currently have a use case where rate limiting or concurrency control per queue (regardless of worker count) would be extremely beneficial.

My use case:
I have a large collection of identical API endpoints that I need to poll data from, and I've created a separate queue and worker for each one. The endpoints aren't running on the best hardware, and they often start 502'ing at me if I perform more than one request every two seconds. The problem is that, since I know my web application will get hammered, I've designed it assuming multiple cloned servers may run in parallel behind a load balancer. Each instance of the application will then be watching the job-queue mongo collection, and one instance's next task is not blocked while another instance is still performing the current task.

I think the easiest solution is to create an additional collection that maintains a mutex state for each queue. Each time a worker detects a job, it checks the mutex to see whether it is permitted to execute. If it is, the worker generates a unique token (which could even be preconfigured or generated at application start) to represent itself and places its token in the mutex entry, preventing anyone else from working on that queue. Additionally, a timeout could be configured in case that instance has been deleted or has crashed, so that other workers can take over the queue again. I'd offer a PR for this solution, but I'm still struggling my way through nodejs at the moment.
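The token-plus-timeout scheme above can be sketched in a few lines. This in-memory `QueueLease` class is hypothetical and only illustrates the claim/expiry semantics; in a real deployment against monq's mongo collection, the same check-and-claim step would need to be a single atomic operation (e.g. an upsert via `findOneAndUpdate`) so two instances cannot both win the lease.

```javascript
// Hypothetical in-memory sketch of the per-queue lease described above:
// a worker claims a queue with a unique token, and the claim expires
// after ttlMs so other workers can take over if the holder crashes.
class QueueLease {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.locks = new Map(); // queueName -> { token, expires }
  }

  // Try to claim `queue` for `token`. Succeeds if the queue is free,
  // already held by this token, or the previous lease has expired.
  acquire(queue, token, now = Date.now()) {
    const lock = this.locks.get(queue);
    if (!lock || lock.token === token || lock.expires <= now) {
      this.locks.set(queue, { token, expires: now + this.ttlMs });
      return true;
    }
    return false;
  }

  // Release only if we still hold the lease (avoid releasing a lease
  // that expired and was re-claimed by another worker).
  release(queue, token) {
    const lock = this.locks.get(queue);
    if (lock && lock.token === token) this.locks.delete(queue);
  }
}
```

Each worker would call `acquire` before processing a job from the queue and `release` afterwards; the TTL handles the crashed-instance case the comment raises.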

