PR #21213: [GPU] Fix mutex locking of a cuDNN handle.
Imported from GitHub PR #21213

The CudnnHandle object containing a mutex has to stay alive while the cudnnHandle_t it guards is in use. This brings this use in sync with the other uses in this file. There is no evidence that this has caused failures so far; the change rather preempts a potential problem, so no test is added.
Copybara import of the project:

--
0472972 by Ilia Sergachev <[email protected]>:

[GPU] Fix mutex locking of a cuDNN handle.

The CudnnHandle object containing a mutex has to stay alive while the
cudnnHandle_t it guards is in use. This brings this use in sync with
the other uses in this file. There is no evidence that this has caused
failures so far; the change rather preempts a potential problem, so no
test is added.

Merging this change closes #21213

COPYBARA_INTEGRATE_REVIEW=#21213 from openxla:fix_cudnn_locking 0472972
PiperOrigin-RevId: 714275180
sergachev authored and Google-ML-Automation committed Jan 11, 2025
1 parent c0a7469 commit f03d1f6
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions xla/stream_executor/cuda/cuda_dnn.cc
@@ -8570,9 +8570,9 @@ absl::Status CudnnGraph::Execute(Stream& stream,
 
   const CudnnSupport& dnn_support =
       static_cast<CudnnSupport&>(*stream.parent()->AsDnn());
-  RETURN_IF_CUDNN_FRONTEND_ERROR(graph_.execute(
-      dnn_support.cudnn_->GetHandle(stream.parent(), &stream).handle(),
-      tensor_to_ptr_map, workspace.opaque()));
+  auto cudnn = dnn_support.cudnn_->GetHandle(stream.parent(), &stream);
+  RETURN_IF_CUDNN_FRONTEND_ERROR(
+      graph_.execute(cudnn.handle(), tensor_to_ptr_map, workspace.opaque()));
   return absl::OkStatus();
 }

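To make the locking pattern concrete, here is a minimal, self-contained C++ sketch of the idiom the commit switches to. LockedHandle, GetHandle, Use, cudnn_mu, and raw_handle are hypothetical stand-ins, not the actual XLA types: the point is that binding the RAII wrapper to a named local keeps its mutex lock held for the rest of the scope, whereas a temporary releases it at the end of the full expression.

// Illustrative sketch only; LockedHandle/GetHandle/Use are hypothetical
// stand-ins for XLA's CudnnHandle, CudnnAccess::GetHandle and a cuDNN call.
#include <mutex>

struct RawHandle {};  // stands in for cudnnHandle_t

class LockedHandle {
 public:
  LockedHandle(std::mutex& mu, RawHandle raw) : lock_(mu), raw_(raw) {}
  RawHandle handle() const { return raw_; }

 private:
  std::unique_lock<std::mutex> lock_;  // released when this object is destroyed
  RawHandle raw_;
};

std::mutex cudnn_mu;
RawHandle raw_handle;

LockedHandle GetHandle() { return LockedHandle(cudnn_mu, raw_handle); }

void Use(RawHandle) {}  // stands in for graph_.execute(...)

void Example() {
  // Temporary form: the LockedHandle (and its lock) lives only until the end
  // of this full expression; nothing after it is guarded.
  Use(GetHandle().handle());

  // Named form, as in the commit: the lock stays held for the rest of the
  // scope, so every later use of cudnn.handle() is guarded.
  auto cudnn = GetHandle();
  Use(cudnn.handle());
}

Note that even in the temporary form the lock is still held for the duration of the single call it appears in (temporaries live to the end of the full expression), which is consistent with the commit message's observation that no failures were seen; the named form mainly makes the guarded region explicit and consistent with the other call sites in cuda_dnn.cc.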
