change torch.npu.is_available() to is_npu_available in precision.py
Nicorgi committed Jan 7, 2025
1 parent b9b4fe9 commit fb6c480
Showing 1 changed file with 1 addition and 1 deletion.
torchtune/training/precision.py: 1 addition & 1 deletion
```diff
@@ -33,7 +33,7 @@ def _set_float32_precision(precision: str = "high") -> None:
     Args:
         precision (str): The setting to determine which datatypes to use for matrix multiplication and convolution operations.
     """
-    if not torch.cuda.is_available() or not torch.npu.is_available():  # Not relevant for non-CUDA devices
+    if not torch.cuda.is_available() or not is_npu_available:  # Not relevant for non-CUDA devices
         return
     # set precision for matrix multiplications
     torch.set_float32_matmul_precision(precision)
```
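For context on why the swap matters: the `torch.npu` namespace only exists after the optional `torch_npu` (Ascend NPU) backend has been imported, so calling `torch.npu.is_available()` on a CUDA-only or CPU-only install raises an `AttributeError` instead of returning `False`. An import-guarded flag avoids that. Below is a minimal sketch of such a guard, assuming this helper's name and shape for illustration; torchtune's actual `is_npu_available` lives in its utils module and may differ in detail:

```python
# Sketch of an import-guarded NPU availability flag (an illustrative
# assumption, not torchtune's exact implementation).
import torch

try:
    import torch_npu  # noqa: F401  # optional Ascend NPU backend; registers torch.npu
    is_npu_available = torch.npu.is_available()
except ImportError:
    # torch_npu not installed: torch.npu does not exist, so report no NPU
    is_npu_available = False
```

Because the flag is evaluated once at import time, the new condition references `is_npu_available` as a plain boolean rather than calling it, which is why the added line drops the parentheses.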
