
Replace BTreeMap with BTreeSet and remove id from Entry #123

Merged
merged 1 commit on May 16, 2023

Conversation

robatipoor
Contributor

This PR relates to the linked issue.

@carllerche
Member

That is simpler. The original code used a u64 to avoid cloning the key. While perf isn't a top concern, I am trying to balance "real world" concerns with simplicity. Would you mind using redis-benchmark to compare the change with before?

redis-benchmark -n 1000000 -c 20 -t set,get

You can get it from the Redis source.

If it is a minimal impact, we can move forward.
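For context, a minimal sketch of the two layouts being weighed here. All field and type names are assumptions for illustration, not copied from the mini-redis source: the "before" shape indexes expirations by a monotonically increasing u64 id so the String key never has to be cloned into the index, while the "after" shape drops the id and stores the key itself in a BTreeSet.

```rust
use std::collections::{BTreeMap, BTreeSet};
use std::time::Instant;

// Before: expirations keyed by (when, unique id), so the String key is not
// cloned into the index; each entry carries its id so it can be removed.
#[allow(dead_code)]
struct StateBefore {
    entries: BTreeMap<String, EntryBefore>,
    expirations: BTreeMap<(Instant, u64), String>,
    next_id: u64,
}
#[allow(dead_code)]
struct EntryBefore {
    id: u64,
    data: Vec<u8>,
    expires_at: Option<Instant>,
}

// After: a BTreeSet of (when, key) pairs; the id bookkeeping disappears at
// the cost of cloning the key into the expiration index.
struct StateAfter {
    entries: BTreeMap<String, EntryAfter>,
    expirations: BTreeSet<(Instant, String)>,
}
struct EntryAfter {
    data: Vec<u8>,
    expires_at: Option<Instant>,
}

fn main() {
    let mut state = StateAfter {
        entries: BTreeMap::new(),
        expirations: BTreeSet::new(),
    };
    let when = Instant::now();
    state.entries.insert(
        "foo".to_string(),
        EntryAfter { data: b"bar".to_vec(), expires_at: Some(when) },
    );
    // The key is cloned once into the expiration index.
    state.expirations.insert((when, "foo".to_string()));
    assert_eq!(state.expirations.len(), 1);
}
```

The (Instant, String) tuple keeps the set ordered by expiration time first, so the soonest-expiring entry is still the first element, same as with the (Instant, u64) map key.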

@robatipoor
Contributor Author

robatipoor commented May 14, 2023

Benchmark output of the original code:

====== SET ======
1000000 requests completed in 10.28 seconds
20 parallel clients
3 bytes payload
keep alive: 1
multi-thread: no

Latency by percentile distribution:
0.000% <= 0.031 milliseconds (cumulative count 3)
50.000% <= 0.095 milliseconds (cumulative count 513995)
75.000% <= 0.135 milliseconds (cumulative count 764645)
87.500% <= 0.151 milliseconds (cumulative count 875034)
93.750% <= 0.167 milliseconds (cumulative count 946376)
96.875% <= 0.183 milliseconds (cumulative count 975298)
98.438% <= 0.199 milliseconds (cumulative count 986336)
99.219% <= 0.239 milliseconds (cumulative count 992340)
99.609% <= 0.311 milliseconds (cumulative count 996168)
99.805% <= 0.367 milliseconds (cumulative count 998281)
99.902% <= 0.415 milliseconds (cumulative count 999125)
99.951% <= 0.463 milliseconds (cumulative count 999556)
99.976% <= 0.503 milliseconds (cumulative count 999770)
99.988% <= 0.535 milliseconds (cumulative count 999889)
99.994% <= 0.567 milliseconds (cumulative count 999940)
99.997% <= 0.647 milliseconds (cumulative count 999970)
99.998% <= 0.743 milliseconds (cumulative count 999985)
99.999% <= 1.015 milliseconds (cumulative count 999993)
100.000% <= 1.031 milliseconds (cumulative count 1000000)
100.000% <= 1.031 milliseconds (cumulative count 1000000)

Cumulative distribution of latencies:
57.513% <= 0.103 milliseconds (cumulative count 575128)
98.881% <= 0.207 milliseconds (cumulative count 988807)
99.579% <= 0.303 milliseconds (cumulative count 995786)
99.902% <= 0.407 milliseconds (cumulative count 999023)
99.977% <= 0.503 milliseconds (cumulative count 999770)
99.996% <= 0.607 milliseconds (cumulative count 999964)
99.997% <= 0.703 milliseconds (cumulative count 999975)
99.999% <= 0.807 milliseconds (cumulative count 999986)
99.999% <= 1.007 milliseconds (cumulative count 999990)
100.000% <= 1.103 milliseconds (cumulative count 1000000)

Summary:
throughput summary: 97304.66 requests per second
latency summary (msec):
avg min p50 p95 p99 max
0.111 0.024 0.095 0.175 0.215 1.031
====== GET ======
1000000 requests completed in 10.57 seconds
20 parallel clients
3 bytes payload
keep alive: 1
multi-thread: no

Latency by percentile distribution:
0.000% <= 0.023 milliseconds (cumulative count 4)
50.000% <= 0.103 milliseconds (cumulative count 526584)
75.000% <= 0.143 milliseconds (cumulative count 783891)
87.500% <= 0.159 milliseconds (cumulative count 909256)
93.750% <= 0.167 milliseconds (cumulative count 939836)
96.875% <= 0.183 milliseconds (cumulative count 972594)
98.438% <= 0.199 milliseconds (cumulative count 985083)
99.219% <= 0.247 milliseconds (cumulative count 992254)
99.609% <= 0.327 milliseconds (cumulative count 996534)
99.805% <= 0.367 milliseconds (cumulative count 998210)
99.902% <= 0.407 milliseconds (cumulative count 999055)
99.951% <= 0.447 milliseconds (cumulative count 999556)
99.976% <= 0.479 milliseconds (cumulative count 999780)
99.988% <= 0.511 milliseconds (cumulative count 999882)
99.994% <= 0.575 milliseconds (cumulative count 999941)
99.997% <= 0.623 milliseconds (cumulative count 999974)
99.998% <= 1.423 milliseconds (cumulative count 999987)
99.999% <= 1.439 milliseconds (cumulative count 999993)
100.000% <= 1.455 milliseconds (cumulative count 999999)
100.000% <= 1.463 milliseconds (cumulative count 1000000)
100.000% <= 1.463 milliseconds (cumulative count 1000000)

Cumulative distribution of latencies:
52.658% <= 0.103 milliseconds (cumulative count 526584)
98.782% <= 0.207 milliseconds (cumulative count 987818)
99.529% <= 0.303 milliseconds (cumulative count 995289)
99.906% <= 0.407 milliseconds (cumulative count 999055)
99.986% <= 0.503 milliseconds (cumulative count 999859)
99.997% <= 0.607 milliseconds (cumulative count 999967)
99.997% <= 0.703 milliseconds (cumulative count 999974)
99.998% <= 0.903 milliseconds (cumulative count 999978)
99.998% <= 1.007 milliseconds (cumulative count 999980)
99.998% <= 1.407 milliseconds (cumulative count 999981)
100.000% <= 1.503 milliseconds (cumulative count 1000000)

Summary:
throughput summary: 94634.23 requests per second
latency summary (msec):
avg min p50 p95 p99 max
0.114 0.016 0.103 0.175 0.223 1.463

Benchmark output of the new changes:

====== SET ======
1000000 requests completed in 10.36 seconds
20 parallel clients
3 bytes payload
keep alive: 1
multi-thread: no

Latency by percentile distribution:
0.000% <= 0.031 milliseconds (cumulative count 1)
50.000% <= 0.095 milliseconds (cumulative count 514838)
75.000% <= 0.143 milliseconds (cumulative count 774220)
87.500% <= 0.159 milliseconds (cumulative count 919582)
93.750% <= 0.167 milliseconds (cumulative count 946326)
96.875% <= 0.183 milliseconds (cumulative count 974959)
98.438% <= 0.199 milliseconds (cumulative count 986290)
99.219% <= 0.239 milliseconds (cumulative count 992593)
99.609% <= 0.319 milliseconds (cumulative count 996166)
99.805% <= 0.367 milliseconds (cumulative count 998144)
99.902% <= 0.415 milliseconds (cumulative count 999108)
99.951% <= 0.455 milliseconds (cumulative count 999541)
99.976% <= 0.487 milliseconds (cumulative count 999768)
99.988% <= 0.527 milliseconds (cumulative count 999881)
99.994% <= 0.559 milliseconds (cumulative count 999939)
99.997% <= 0.591 milliseconds (cumulative count 999975)
99.998% <= 0.615 milliseconds (cumulative count 999985)
99.999% <= 0.647 milliseconds (cumulative count 999993)
100.000% <= 0.671 milliseconds (cumulative count 999997)
100.000% <= 0.679 milliseconds (cumulative count 1000000)
100.000% <= 0.679 milliseconds (cumulative count 1000000)

Cumulative distribution of latencies:
56.221% <= 0.103 milliseconds (cumulative count 562211)
98.886% <= 0.207 milliseconds (cumulative count 988863)
99.543% <= 0.303 milliseconds (cumulative count 995425)
99.897% <= 0.407 milliseconds (cumulative count 998973)
99.982% <= 0.503 milliseconds (cumulative count 999818)
99.998% <= 0.607 milliseconds (cumulative count 999981)
100.000% <= 0.703 milliseconds (cumulative count 1000000)

Summary:
throughput summary: 96562.38 requests per second
latency summary (msec):
avg min p50 p95 p99 max
0.112 0.024 0.095 0.175 0.215 0.679
====== GET ======
1000000 requests completed in 10.87 seconds
20 parallel clients
3 bytes payload
keep alive: 1
multi-thread: no

Latency by percentile distribution:
0.000% <= 0.015 milliseconds (cumulative count 1)
50.000% <= 0.111 milliseconds (cumulative count 524900)
75.000% <= 0.151 milliseconds (cumulative count 844807)
87.500% <= 0.159 milliseconds (cumulative count 905860)
93.750% <= 0.167 milliseconds (cumulative count 938882)
96.875% <= 0.183 milliseconds (cumulative count 973035)
98.438% <= 0.199 milliseconds (cumulative count 985586)
99.219% <= 0.255 milliseconds (cumulative count 992546)
99.609% <= 0.319 milliseconds (cumulative count 996113)
99.805% <= 0.367 milliseconds (cumulative count 998200)
99.902% <= 0.407 milliseconds (cumulative count 999127)
99.951% <= 0.439 milliseconds (cumulative count 999535)
99.976% <= 0.479 milliseconds (cumulative count 999782)
99.988% <= 0.503 milliseconds (cumulative count 999912)
99.994% <= 0.527 milliseconds (cumulative count 999939)
99.997% <= 0.575 milliseconds (cumulative count 999970)
99.998% <= 0.615 milliseconds (cumulative count 999985)
99.999% <= 0.703 milliseconds (cumulative count 999993)
100.000% <= 0.727 milliseconds (cumulative count 999998)
100.000% <= 0.735 milliseconds (cumulative count 1000000)
100.000% <= 0.735 milliseconds (cumulative count 1000000)

Cumulative distribution of latencies:
46.765% <= 0.103 milliseconds (cumulative count 467649)
98.814% <= 0.207 milliseconds (cumulative count 988139)
99.519% <= 0.303 milliseconds (cumulative count 995189)
99.913% <= 0.407 milliseconds (cumulative count 999127)
99.991% <= 0.503 milliseconds (cumulative count 999912)
99.998% <= 0.607 milliseconds (cumulative count 999984)
99.999% <= 0.703 milliseconds (cumulative count 999993)
100.000% <= 0.807 milliseconds (cumulative count 1000000)

Summary:
throughput summary: 91996.32 requests per second
latency summary (msec):
avg min p50 p95 p99 max
0.118 0.008 0.111 0.175 0.223 0.735
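Comparing the two throughput summaries above, the change costs roughly 0.8% on SET and 2.8% on GET. A quick sketch of that arithmetic (the four figures are taken directly from the benchmark output):

```rust
fn main() {
    // (operation, requests/sec before, requests/sec after), from the
    // redis-benchmark summaries above.
    let runs = [
        ("SET", 97304.66_f64, 96562.38_f64),
        ("GET", 94634.23_f64, 91996.32_f64),
    ];
    for (op, before, after) in runs {
        let delta_pct = (after - before) / before * 100.0;
        println!("{op}: {before:.2} -> {after:.2} req/s ({delta_pct:+.2}%)");
    }
}
```

Both deltas are within typical run-to-run noise for redis-benchmark on a shared machine, which supports treating the impact as minimal.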

@carllerche (Member) left a comment

Good enough for me, thanks. I'll bias toward simplifying the code.

@carllerche carllerche merged commit df98a94 into tokio-rs:master May 16, 2023