v1.9.1 #105

Closed
wants to merge 1 commit into from

Conversation

notgull (Member) commented Mar 13, 2024

Signed-off-by: John Nunley <[email protected]>
taiki-e (Collaborator) commented Mar 13, 2024

Have all the known regressions introduced in 1.9.0 (#103) been fixed? (Otherwise, we will just end up yanking it again.)

notgull (Member, Author) commented Mar 13, 2024

It was reported that this fixes the regression in Bevy (albeit at a performance cost, which I find acceptable in the short term). In addition, the zbus tests seem to pass when I add this to its Cargo.toml:

[patch.crates-io]
async-executor = { git = "https://github.com/smol-rs/async-executor" }
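
For anyone reproducing that check, a minimal sketch, assuming a local zbus checkout with the [patch.crates-io] entry above added to its Cargo.toml; the cargo tree step is included only to confirm that the override actually resolved to the git source rather than crates.io.

# Sketch: run the zbus tests against the patched async-executor.
cargo update -p async-executor   # re-resolve so the [patch] override takes effect
cargo tree -p async-executor     # the listed source should be the git URL, not crates.io
cargo test                       # run the zbus test suite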

taiki-e (Collaborator) commented Mar 13, 2024

Thanks for the clarification. If the regression has been fixed, I'm okay with this.

albeit at a performance cost, which I find acceptable in the short term

The performance degradation compared to 1.9.0, which has the bug, is not a problem. However, if performance is degraded compared to 1.8.0, that is a problem. Is there a performance comparison to 1.8.0?

notgull (Member, Author) commented Mar 13, 2024

     Running benches/executor.rs (/hard_data/cargo_target/release/deps/executor-45ad0a619e1329b0)
executor::create        time:   [1.2568 µs 1.2581 µs 1.2593 µs]
                        change: [+18.276% +18.451% +18.630%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 6 outliers among 100 measurements (6.00%)
  6 (6.00%) high mild

single_thread/executor::spawn_one
                        time:   [5.2113 µs 5.3943 µs 5.5192 µs]
                        change: [+2.9134% +7.8629% +12.546%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
  10 (10.00%) low severe
  2 (2.00%) high mild
single_thread/executor::spawn_many_local
                        time:   [11.058 ms 11.397 ms 11.687 ms]
                        change: [-0.7056% +3.3126% +7.2993%] (p = 0.10 > 0.05)
                        No change in performance detected.
Found 9 outliers among 100 measurements (9.00%)
  7 (7.00%) low severe
  1 (1.00%) low mild
  1 (1.00%) high mild
single_thread/executor::spawn_recursively
                        time:   [33.515 ms 33.661 ms 33.815 ms]
                        change: [-1.6246% -0.8815% -0.1761%] (p = 0.02 < 0.05)
                        Change within noise threshold.
Found 8 outliers among 100 measurements (8.00%)
  6 (6.00%) high mild
  2 (2.00%) high severe
single_thread/executor::yield_now
                        time:   [3.4690 ms 3.4755 ms 3.4820 ms]
                        change: [-11.581% -11.366% -11.136%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 9 outliers among 100 measurements (9.00%)
  1 (1.00%) low severe
  1 (1.00%) low mild
  7 (7.00%) high mild

multi_thread/executor::spawn_one
                        time:   [5.1600 µs 5.3962 µs 5.6124 µs]
                        change: [+37.545% +45.859% +55.042%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 6 outliers among 100 measurements (6.00%)
  2 (2.00%) low mild
  4 (4.00%) high mild
Benchmarking multi_thread/executor::spawn_many_local: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 6.5s, or reduce sample count to 70.
multi_thread/executor::spawn_many_local
                        time:   [55.573 ms 55.717 ms 55.867 ms]
                        change: [+159.71% +169.78% +180.36%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 1 outliers among 100 measurements (1.00%)
  1 (1.00%) high mild
Benchmarking multi_thread/executor::spawn_recursively: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 11.9s, or reduce sample count to 40.
multi_thread/executor::spawn_recursively
                        time:   [118.49 ms 118.99 ms 119.48 ms]
                        change: [-8.9313% -8.4757% -8.0163%] (p = 0.00 < 0.05)
                        Performance has improved.
Benchmarking multi_thread/executor::yield_now: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 8.0s, enable flat sampling, or reduce sample count to 50.
multi_thread/executor::yield_now
                        time:   [1.5913 ms 1.5986 ms 1.6060 ms]
                        change: [-90.748% -90.676% -90.603%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 9 outliers among 100 measurements (9.00%)
  1 (1.00%) low severe
  3 (3.00%) low mild
  3 (3.00%) high mild
  2 (2.00%) high severe

Slight regressions in the single-threaded tests, but significant improvements in multi-threaded tests.
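
These numbers come from criterion, which by default reports the change against whichever run was saved last. A minimal sketch of how the comparison against 1.8.0 could be pinned explicitly, assuming the benches are criterion-based (criterion supports named baselines), that a v1.8.0 tag exists, and that benches/executor.rs is present on both revisions; <candidate> is a placeholder for the 1.9.1 candidate branch.

# Sketch: record 1.8.0 as a named baseline, then compare the candidate against it.
git checkout v1.8.0                                    # assumes a v1.8.0 tag exists
cargo bench --bench executor -- --save-baseline v1.8.0
git checkout <candidate>                               # placeholder: the 1.9.1 candidate branch
cargo bench --bench executor -- --baseline v1.8.0      # reported changes are now relative to 1.8.0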

zeenix (Member) commented Mar 13, 2024

Slight regressions in the single-threaded tests, but significant improvements in multi-threaded tests.

This one looks pretty significant:

multi_thread/executor::spawn_many_local
                        time:   [55.573 ms 55.717 ms 55.867 ms]
                        change: [+159.71% +169.78% +180.36%] (p = 0.00 < 0.05)
                        Performance has regressed.

🤔

james7132 (Contributor) commented Mar 14, 2024

#104 (comment)

The contention issues from the new implementation create major performance regressions on Bevy's end. The tasks that spawn Bevy's systems are taking 10x longer, and average end-to-end frame times are seeing double-digit percentage regressions. Given that 1.9.1 will be semver-compatible, this will retroactively affect virtually every Bevy version upon running cargo update.

notgull closed this Mar 15, 2024