benchmarks: add initial benchmarking support. #126
Conversation
The sources for the benchmark modules are here: https://github.com/jedisct1/libsodium/tree/master/test/default
LGTM! Some questions regarding the benchmark:
- What areas does this set of wasm modules test? CPU, memory, etc.
- Does this set of wasm modules require WASI at all?
let waiter = Wait::new(tx);
w.wait(&waiter).unwrap();

let res = match rx.recv_timeout(Duration::from_secs(600)) {
Ten minutes of timeout seems too long to me...
That's right. I reduced the timeout to one minute. That should be plenty too...
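For illustration, here is a minimal, self-contained sketch of the same wait-with-timeout pattern using std::sync::mpsc; the channel wiring below is a hypothetical stand-in for the shim's Wait helper, not the PR's actual code:

use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Hypothetical stand-in: the real code hands `tx` to a `Wait` helper
    // that fires when the container exits.
    thread::spawn(move || {
        tx.send(0u32).expect("receiver dropped");
    });

    // One minute is plenty when each benchmark finishes in under
    // five seconds (reduced from the original 600 seconds).
    match rx.recv_timeout(Duration::from_secs(60)) {
        Ok(status) => println!("container exited with status {status}"),
        Err(e) => panic!("timed out waiting for container exit: {e}"),
    }
}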
Depends on the test. For example, the
No, these tests don't use WASI.
LGTM!
I can merge this in once all CI pipelines have passed.
There are some linting issues, and also a conflict in Cargo.lock.
Add a submodule, "webassembly-benchmarks". The benchmark contains a set of wasm files from libsodium. Only the benchmarks which are fast to run (those that take under 5 seconds to complete on a desktop computer using WasmEdge) were selected.
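For anyone checking out the branch locally, pulling in the new submodule follows standard git usage (this is general git, not a command added by the PR):
# git submodule update --init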
The benchmark suite initializes the Wasm runtime, runs the benchmark, and cleans up the container. This is done for both the WasmEdge and Wasmtime shim crates. Note that the results for the two runtimes shouldn't really be compared directly right now, because the Wasmtime shim hasn't been converted to youki/libcontainer-rs yet, and that changes the performance due to the extended namespacing. What this does help with is seeing the performance impact of any given change: for example, completing the Youki integration, using a seccomp filter, or setting cgroup options.
The benchmarking library used is Criterion. Run the benchmarks like this:
# cargo bench
If you have the gnuplot binary installed, Criterion will generate an HTML report with graphs in target/criterion/report/index.html. Running the suite takes a bit over ten minutes on my desktop.
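As a rough illustration of the shape such a benchmark takes with Criterion, here is a hedged sketch; the bench name, module path, and run_wasm_module() helper are hypothetical stand-ins for the shim's actual engine setup, module execution, and container cleanup:

use criterion::{criterion_group, criterion_main, Criterion};

// Hypothetical helper: initialize the Wasm runtime, run the module at
// `path`, then tear the container down.
fn run_wasm_module(path: &str) {
    let _ = path;
}

fn libsodium_benches(c: &mut Criterion) {
    c.bench_function("libsodium/auth", |b| {
        b.iter(|| run_wasm_module("webassembly-benchmarks/auth.wasm"))
    });
}

criterion_group!(benches, libsodium_benches);
criterion_main!(benches);

Criterion handles warm-up, sampling, and statistical analysis itself, which is why the benchmark body only needs to express a single run.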
This PR adds two new direct dependencies: Criterion (Apache-2.0/MIT) as a dev-dependency and webassembly-benchmarks (ISC) as a submodule.
Note that this PR is just the first step on the benchmarking path -- it only does fairly low-level benchmarking using a limited set of tests. Later on we can add a comparison with normal containers (by compiling and running the libsodium benchmarks as regular processes with the native Youki executor), add or remove Wasm benchmark modules, and improve the test setup. We could, for example, try running hundreds of containers at once to see how "dense" containers on a node would perform.
This is related to #97 (and should fix the second point in the list).