
[ID-98] Rate limits #139

Merged

petyos merged 50 commits into develop from 98-feature-rate-limits-part-2 on Jan 12, 2023

Conversation

@petyos commented Jan 9, 2023 (Collaborator)

Description

Resolves #98

Here are the highlights of this PR:

  • Introduced a priority queue for sending notifications.
  • A notification can be sent immediately or at a scheduled time in the future.
  • To send a message at a specific time in the future, add a "time" field when calling the create-messages API. If the "time" field is omitted, the message is due now and is sent as soon as possible.
  • You can also set a priority for the message, from 1 to 10, where 1 is the highest priority.
  • The sending order is determined first by the "time" field: the notifications with the earliest time are sent first. Notifications with equal times are ordered by the "priority" field as a tie-breaker (see the sketch after this list).
  • The queue is designed to handle a large number of notifications by processing them in small batches. The batch size is configured in the database; 50 is an acceptable value.
  • Once a service starts processing the priority queue, it locks the queue until it finishes, then unlocks it again. This prevents multiple service instances from sending notifications simultaneously.
  • Removed the data-manipulation code for multi-tenancy and the recipients refactoring, as it is no longer needed.
  • Reordered the code in the web driver adapter.
  • Removed model.InputMessage and introduced a tool-generated struct at the web driver level for the create-messages APIs.
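
For illustration, here is a minimal, self-contained sketch of the ordering rule described above (earliest "time" first, lower "priority" value as the tie-breaker). The queueItem type and its field names are assumptions made for the example, not the actual structs from this PR:

package main

import (
	"fmt"
	"sort"
	"time"
)

// queueItem is a hypothetical stand-in for a queued notification.
type queueItem struct {
	Time     time.Time // scheduled send time; "now" if the field was omitted
	Priority int       // 1..10, where 1 is the highest priority
}

// sortQueue applies the rule from the description: earliest time first,
// and for equal times the lower priority value wins.
func sortQueue(items []queueItem) {
	sort.Slice(items, func(i, j int) bool {
		if !items[i].Time.Equal(items[j].Time) {
			return items[i].Time.Before(items[j].Time)
		}
		return items[i].Priority < items[j].Priority
	})
}

func main() {
	now := time.Now()
	items := []queueItem{
		{Time: now.Add(time.Hour), Priority: 1},
		{Time: now, Priority: 5},
		{Time: now, Priority: 2},
	}
	sortQueue(items)
	for _, it := range items {
		fmt.Printf("%s priority=%d\n", it.Time.Format(time.RFC3339), it.Priority)
	}
	// output order: now/priority 2, now/priority 5, now+1h/priority 1
}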

Review Time Estimate

Please give your idea of how soon this pull request needs to be reviewed by selecting one of the options below. This can be based on the criticality of the issue at hand and/or other relevant factors.

  • Immediately
  • Within a week
  • When possible

Type of changes

Please select a relevant option:

  • Bug fix (non-breaking change which fixes an issue).
  • New feature (non-breaking change which adds functionality).
  • Breaking change (fix or feature that would cause existing functionality to not work as expected).
  • Other (any other change that does not fall into one of the above categories).

@petyos petyos requested a review from shurwit January 9, 2023 07:54
@petyos petyos requested a review from mdryankov as a code owner January 9, 2023 07:54
@petyos commented Jan 10, 2023 (Collaborator, Author)

When you test the changes, you first need to:

  • create a collection "queue" in your database
  • import the following record into it:
{
  "_id": "1",
  "status": "ready",
  "process_items_count": 50
}
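
If you would rather seed this record from code than import it by hand, here is a minimal sketch using the official MongoDB Go driver. The connection URI and the database name ("notifications") are assumptions; adjust them to your local setup:

package main

import (
	"context"
	"log"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()

	// connection URI is an assumption; point it at your local instance
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	// seed the single queue record the service expects
	_, err = client.Database("notifications").Collection("queue").InsertOne(ctx, bson.M{
		"_id":                 "1",
		"status":              "ready",
		"process_items_count": 50,
	})
	if err != nil {
		log.Fatal(err)
	}
}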

@mdryankov left a comment (Collaborator)

LGTM

@shurwit left a comment (Collaborator)

Thanks @petyos, this is really great work! I think this implementation is really well structured and easy to follow. I did leave a couple of comments below talking about some edge cases that may come up if a thread or instance crashes. I don't believe that either one is critically urgent enough to stop this from being merged, but let me know if you have any questions or thoughts on them. Thanks again!

Comment on lines +167 to +171
if queue.Status != "ready" {
q.logger.Infof("the queue is not ready but %s", queue.Status)
queueAvailable = false
return nil
}

One thing to consider here is what happens if the thread dies while processing the queue. In this case, it seems to me that the queue would be stuck in an unavailable state and would never automatically recover. The way I handled a similar locking situation on the Groups BB in the past was to assign a timestamp when the lock is set and check for potential timeout scenarios. To do this, if the status is not "ready" we would also check if the timestamp is older than some configurable timeout period, and if it is we would override the status and begin processing the queue anyway. Let me know if you have any questions or any other thoughts on how to handle this case.
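
To make the suggestion concrete, here is a minimal, self-contained sketch of the timestamp-based override described above. The queueRecord shape, the LockedAt field, and the 5-minute timeout are assumptions for illustration, not code from this PR or the Groups BB:

package main

import (
	"fmt"
	"time"
)

// queueRecord mirrors the queue document; the LockedAt field is an
// assumption and would be set whenever the lock is taken.
type queueRecord struct {
	Status   string
	LockedAt time.Time
}

// canProcess reports whether a worker may proceed: either the queue is
// ready, or the existing lock is older than the timeout and is overridden.
func canProcess(queue queueRecord, lockTimeout time.Duration) bool {
	if queue.Status == "ready" {
		return true
	}
	// If the lock is stale, assume the previous holder died and reclaim it.
	return time.Since(queue.LockedAt) > lockTimeout
}

func main() {
	stale := queueRecord{Status: "processing", LockedAt: time.Now().Add(-10 * time.Minute)}
	fmt.Println(canProcess(stale, 5*time.Minute)) // true: the lock timed out
}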

func (q queueLogic) start() {
q.logger.Info("queueLogic start")

q.processQueue()

I think there may be an issue here if the queue is locked when the instance first starts. In this case, it looks to me like the instance will never attempt to process the queue unless the specific instance receives a new message through the API. In normal functioning I think that is ok, but if some of the instances crash I think we could end up in a state where no instances are actively processing the timed items in the queue. One potential solution could be to periodically retry processing the queue until it is able to acquire the lock.
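
As a rough illustration of the periodic-retry idea, here is a self-contained sketch. The one-minute interval and the simplified queueLogic stand-in are assumptions, not the service's actual types:

package main

import (
	"fmt"
	"time"
)

// queueLogic is a simplified stand-in for the service's queue logic.
type queueLogic struct{}

func (q queueLogic) processQueue() {
	// would try to acquire the lock and process due notifications
	fmt.Println("attempting to process queue at", time.Now().Format(time.RFC3339))
}

func (q queueLogic) start() {
	// Retrying on a timer means a queue left locked by a crashed instance
	// is eventually picked up once the lock can be acquired again.
	go func() {
		ticker := time.NewTicker(time.Minute)
		defer ticker.Stop()
		for range ticker.C {
			q.processQueue()
		}
	}()

	q.processQueue()
}

func main() {
	queueLogic{}.start()
	select {} // keep the demo process alive
}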

@petyos commented Jan 12, 2023 (Collaborator, Author)

Hi @mdryankov, @shurwit, thanks for your feedback. I am merging it. @shurwit, I've opened a separate task to cover your feedback here: #140

@petyos petyos merged commit 8499d82 into develop Jan 12, 2023
@petyos petyos deleted the 98-feature-rate-limits-part-2 branch January 12, 2023 07:33