
Backend performance #57

Open
andrea-pasquale opened this issue Dec 13, 2024 · 4 comments

@andrea-pasquale

I ran a benchmark to check how fast the backends currently available in qiboml are. I simply executed a QFT for different numbers of qubits.

[plot: test_qiboml.png — QFT execution time vs. number of qubits for each backend]

It is interesting that JAX performed relatively badly for low numbers of qubits, but at around 20 qubits it seems to match the performance of the other backends.
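One plausible explanation (an assumption, not something verified here) is one-time tracing/compilation overhead, which JIT-based backends such as JAX pay before the first execution. A toy sketch of the effect, where `run_circuit` is purely illustrative and not the qiboml API:

```python
from timeit import timeit

# Toy model of a backend with a one-time setup cost, loosely mimicking
# JIT tracing/compilation. `run_circuit` is purely illustrative.
state = {"compiled": False}

def run_circuit():
    if not state["compiled"]:
        sum(i * i for i in range(200_000))  # simulated one-time compilation
        state["compiled"] = True
    return sum(range(1_000))  # simulated cheap steady-state execution

first = timeit("run_circuit()", globals=globals(), number=1)          # pays setup
steady = timeit("run_circuit()", globals=globals(), number=10) / 10   # amortized
print(first > steady)
```

For small circuits such a one-time cost would dominate the total, while for 20+ qubits the execution itself dominates, which would explain the curves converging.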

Here is the code used for the benchmark.
from timeit import timeit

from qibo import models, set_backend

benchmarks = {}
QUBITS = list(range(1, 22))

# (backend, platform) pairs; platform=None selects the backend's default
BACKENDS = [
    ("numpy", None),
    ("qiboml", "pytorch"),
    ("qiboml", "jax"),
    ("qiboml", "tensorflow"),
]

for backend, platform in BACKENDS:
    label = f"{backend}-{platform}"
    benchmarks[label] = []
    set_backend(backend=backend, platform=platform)

    for qubits in QUBITS:
        qft = models.QFT(qubits)
        print(f"Executing on {label} with {qubits} qubits.")
        benchmarks[label].append(timeit("qft()", globals=locals(), number=10))

import matplotlib.pyplot as plt

plt.scatter(QUBITS, benchmarks["numpy-None"], label="numpy")
plt.scatter(QUBITS, benchmarks["qiboml-pytorch"], label="pytorch")
plt.scatter(QUBITS, benchmarks["qiboml-jax"], label="jax")
plt.scatter(QUBITS, benchmarks["qiboml-tensorflow"], label="tensorflow")
plt.yscale("log")
plt.legend()
plt.savefig("test_qiboml.png")

Whenever I have time, I can start looking at the JAX backend to see if we can improve its performance.

@renatomello
Collaborator

They all seem almost two orders of magnitude slower than in the original qibo paper. Why is that?

@andrea-pasquale
Author

It is probably related to the fact that the benchmarks were run on different CPUs. I obtained this data by running on my laptop. I will try to repeat the benchmark on the cluster.

@andrea-pasquale
Author

Here is the updated plot executed on the cluster for 20 repetitions with error bars.
[plot: test_qiboml_new — updated benchmark on the cluster, 20 repetitions with error bars]
I'm currently trying to investigate the region beyond 20 qubits to see if JAX can outperform some of the other backends.
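For reference, per-point error bars like these can be produced with `timeit.repeat`, which returns one timing per repetition. A minimal sketch, with a hypothetical `workload` standing in for the `qft()` call from the benchmark script:

```python
from statistics import mean, stdev
from timeit import repeat

# Hypothetical stand-in for the circuit execution; in the benchmark
# script this would be qft().
def workload():
    return sum(i * i for i in range(1_000))

REPS = 20  # independent timing samples, as in the updated plot
samples = repeat("workload()", globals=globals(), number=10, repeat=REPS)

avg = mean(samples)
err = stdev(samples) / REPS ** 0.5  # standard error of the mean
print(f"{avg:.2e} +/- {err:.2e} s per 10 calls")
```

The resulting `avg`/`err` pairs per qubit count can then be fed to `plt.errorbar` in place of `plt.scatter`.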

@alecandido
Member

alecandido commented Dec 18, 2024

Thanks for the interesting result!

@andrea-pasquale whenever you can, would you try rerunning with many more repetitions (e.g. 200)?
Just to see the errors shrink a bit: under a strict statistical interpretation many of the options are largely compatible, though I expect the central values to be more stable than the error bars suggest (judging from the stability of the trends).
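As context for how much 200 repetitions would help: the standard error of the mean scales as 1/sqrt(n), so going from 20 to 200 samples should shrink the bars by roughly sqrt(10) ≈ 3.2. A quick illustration with simulated (not measured) timings:

```python
import math
import random

random.seed(0)

def sem(samples):
    """Standard error of the mean."""
    n = len(samples)
    m = sum(samples) / n
    var = sum((x - m) ** 2 for x in samples) / (n - 1)
    return math.sqrt(var / n)

# Simulated timing noise; the numbers are arbitrary, only the scaling matters.
timings = [random.gauss(1.0, 0.1) for _ in range(200)]

print(sem(timings[:20]))  # error bar with 20 repetitions
print(sem(timings))       # roughly sqrt(10) times smaller with 200
```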

Otherwise, if you update/share your script and lock file (or confirm you're just using Qiboml's), I could even dispatch it myself, of course.

> I'm currently trying to investigate the region beyond 20 qubits to see if jax can outperform some of the other backends.

That's also certainly interesting, since the current plot is just stopping on the intersection. Thanks again :)
