From ee9a88354c25e803039db68fc4407ceeaa1af773 Mon Sep 17 00:00:00 2001
From: Pedro Alves <1581332+pdroalves@users.noreply.github.com>
Date: Wed, 15 Jan 2025 13:07:36 -0300
Subject: [PATCH] chore(docs): Remove mention of NVLink

NVLink is no longer needed in the CUDA backend.
---
 tfhe/docs/guides/run_on_gpu.md | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/tfhe/docs/guides/run_on_gpu.md b/tfhe/docs/guides/run_on_gpu.md
index f916c3e69f..af441ba6d1 100644
--- a/tfhe/docs/guides/run_on_gpu.md
+++ b/tfhe/docs/guides/run_on_gpu.md
@@ -164,13 +164,7 @@ All operations follow the same syntax than the one described in [here](../gettin
 
 ## Multi-GPU support
 
-TFHE-rs supports platforms with multiple GPUs with some restrictions at the moment:
-the platform should have NVLink support, and only GPUs that have peer access to GPU 0 via NVLink
-will be used for the computation.
-Depending on the platform, this can restrict the number of GPUs used to perform the computation.
-
-There is **nothing to change in the code to execute on multiple GPUs**, when
-they are available and have peer access to GPU 0 via NVLink. To keep the API as user-friendly as possible, the configuration is automatically set, i.e., the user has no fine-grained control over the number of GPUs to be used.
+TFHE-rs supports platforms with multiple GPUs. There is **nothing to change in the code to execute on such platforms**. To keep the API as user-friendly as possible, the configuration is automatically set, i.e., the user has no fine-grained control over the number of GPUs to be used.
 
 ## Benchmark
 Please refer to the [GPU benchmarks](../getting_started/benchmarks/gpu_benchmarks.md) for detailed performance benchmark results.