
yield by each x delivered bytes instead of each y msg/frame #863

Merged: 10 commits into main, Dec 12, 2024

Conversation

viktorerlingsson (Member)

WHAT is this pull request doing?

Changes read_loop and deliver_loop to Fiber.yield based on the number of delivered bytes instead of the number of messages/frames. This should keep LavinMQ from freezing when handling large volumes of large messages.
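A minimal sketch of the idea, for illustration only (the constant name, threshold value, and method signature are assumptions, not the merged code):

```crystal
# Illustrative sketch: yield based on delivered bytes, not message count.
YIELD_EACH_DELIVERED_BYTES = 131_072_i64 # hypothetical threshold

def deliver_loop(messages : Array(Bytes), io : IO)
  bytes_since_yield = 0_i64
  messages.each do |msg|
    io.write(msg)                  # deliver one message/frame
    bytes_since_yield += msg.size  # large messages advance the counter faster
    if bytes_since_yield >= YIELD_EACH_DELIVERED_BYTES
      Fiber.yield                  # let other fibers run
      bytes_since_yield = 0_i64
    end
  end
end
```

Yielding every N messages lets a handful of multi-megabyte messages hog the scheduler between yields; counting bytes keeps the time between yields roughly constant regardless of message size.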

Performance-wise it seems pretty similar to main with small messages, and is much more stable with larger messages.

| `lavinmqperf` run | Avg publish, this branch | Avg consume, this branch | Avg publish, main | Avg consume, main |
|---|---|---|---|---|
| `throughput` | 802665.3 msgs/s | 798787.9 msgs/s | 792352.1 msgs/s | 793866.8 msgs/s |
| `throughput -s 100000` | 7346.7 msgs/s | 5862.9 msgs/s | 6883.7 msgs/s | 131.7 msgs/s |
| `throughput -s 1000000` | 738.6 msgs/s | 738.3 msgs/s | 542.9 msgs/s | 4.3 msgs/s |

HOW can this pull request be tested?

Existing specs should cover this pretty well.

@viktorerlingsson viktorerlingsson requested a review from a team as a code owner November 27, 2024 14:29
Review threads (now outdated and resolved) on src/lavinmq/amqp/client.cr, src/lavinmq/amqp/consumer.cr, and src/lavinmq/config.cr.
Co-authored-by: Carl Hörberg <[email protected]>

carlhoerberg commented Dec 4, 2024

Or skip my suggested type change; it would only become a problem if someone sent a frame larger than 2 GiB (which a rogue client potentially could do, but then, we don't do wrap-around checks, so nothing would even crash).
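For illustration, this is what the 2 GiB limit means for an Int32 byte counter in Crystal (a standalone sketch, not LavinMQ code):

```crystal
# Int32::MAX is 2_147_483_647 bytes (~2 GiB), so an Int32 counter can
# overflow on a single huge frame. In Crystal, plain `+` is
# overflow-checked and raises, while `&+` silently wraps.
counter = Int32::MAX
puts counter      # => 2147483647
puts counter &+ 1 # => -2147483648 (silent wrap-around)
# counter + 1     # would raise OverflowError
```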

carlhoerberg (Member) left a comment


Trying to write good docs for these settings makes me think that these probably shouldn't be user-configurable settings at all. We only want them to be configurable so we can easily test out new settings internally. The long-term solution is either a fixed value, not something we expect users to change, or better yet, waiting for the better MT implementation in Crystal that forces fibers to "yield" after 10ms or something.
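For context, the two options being weighed could look roughly like this (a hypothetical sketch; the names and defaults are assumptions, not the code under review):

```crystal
# Option A: a fixed internal constant, not user-configurable.
YIELD_EACH_DELIVERED_BYTES = 131_072_i64

# Option B: a documented, user-visible setting (hypothetical name).
class Config
  property yield_each_delivered_bytes : Int64 = 131_072_i64
end
```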

Review thread (now outdated and resolved) on src/lavinmq/config.cr.
viktorerlingsson and others added 2 commits December 5, 2024 09:53
Co-authored-by: Carl Hörberg <[email protected]>
viktorerlingsson (Member, Author)

> Trying to write good docs for these settings makes me think that these probably shouldn't be user-configurable settings at all. We only want them to be configurable so we can easily test out new settings internally. The long-term solution is either a fixed value, not something we expect users to change, or better yet, waiting for the better MT implementation in Crystal that forces fibers to "yield" after 10ms or something.

Yeah, agreed for the most part. In most cases these values shouldn't need to be changed. There might be a case for changing them in extreme situations, but it can also cause some "harm" if users set the values without understanding them.

@spuun since you suggested making the values configurable, WDYT?


spuun commented Dec 5, 2024

> > Trying to write good docs for these settings makes me think that these probably shouldn't be user-configurable settings at all. We only want them to be configurable so we can easily test out new settings internally. The long-term solution is either a fixed value, not something we expect users to change, or better yet, waiting for the better MT implementation in Crystal that forces fibers to "yield" after 10ms or something.
>
> Yeah, agreed for the most part. In most cases these values shouldn't need to be changed. There might be a case for changing them in extreme situations, but it can also cause some "harm" if users set the values without understanding them.
>
> @spuun since you suggested making the values configurable, WDYT?

Yeah, hm. I think it could be nice to keep them until MT lands, to be able to tweak individual instances. "Hide" them as ENVs instead? Or just document them? Put them in an [experimental] section?

It would also be nice to avoid having to do releases just to tweak values like this.
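For illustration, the "hide them as ENVs" idea could look like this (the variable name is an assumption):

```crystal
# Read an undocumented environment variable, falling back to a default.
yield_bytes = ENV.fetch("LAVINMQ_YIELD_EACH_DELIVERED_BYTES", "131072").to_i64
```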


carlhoerberg commented Dec 5, 2024 via email

@carlhoerberg carlhoerberg merged commit 20ebdb0 into main Dec 12, 2024
22 of 25 checks passed
@carlhoerberg carlhoerberg deleted the yield_on_delivered_bytes branch December 12, 2024 16:26
kickster97 pushed a commit that referenced this pull request Dec 16, 2024
Changes read_loop and deliver_loop to Fiber.yield based on the number of delivered bytes instead of the number of messages/frames. This should keep LavinMQ from freezing when handling large volumes of large messages.

Performance-wise it seems pretty similar to main with small messages, and is much more stable with larger messages.

---------

Co-authored-by: Carl Hörberg <[email protected]>