The external BLAS/LAPACK libraries available on lassen (cuSOLVER, cuBLAS, ESSL, etc.) do not append an underscore to symbol names. This results in linker errors in GROMACS. It would be good to have a configure option in GROMACS to account for this behavior:
>> ../../lib/libgromacs_mpi.so.9.0.0: undefined reference to `sscal_'
>> ../../lib/libgromacs_mpi.so.9.0.0: undefined reference to `dlanst_'
>> ../../lib/libgromacs_mpi.so.9.0.0: undefined reference to `sswap_'
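The mismatch comes from Fortran name mangling: gfortran-built callers reference symbols like `sscal_`, while the vendor libraries export plain `sscal`. A minimal sketch of the translation such a configure option would have to select (the function name and flag below are hypothetical, not a GROMACS API):

```python
def blas_symbol(routine: str, trailing_underscore: bool = True) -> str:
    """Map a BLAS/LAPACK routine name to its linker-visible symbol.

    gfortran-compiled code references names like "sscal_", while some
    vendor libraries (e.g. ESSL on ppc64le) export plain "sscal".
    A configure-time check would pick the right convention.
    """
    return routine.lower() + ("_" if trailing_underscore else "")

# gfortran-style convention (what GROMACS currently expects):
print(blas_symbol("sscal"))                              # sscal_
# underscore-free vendor libraries such as ESSL:
print(blas_symbol("sscal", trailing_underscore=False))   # sscal
```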
Moreover, OpenBLAS does not build on lassen (clang assembler error):
Error: invalid switch -mpower10
Error: unrecognized option -mpower10
clang: error: assembler command failed with exit code 1 (use -v to see invocation)
As a result, we are forced to use GROMACS' internal BLAS/LAPACK implementation on lassen (ppc64le). This is enforced in the Spack package.py for GROMACS.
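A sketch of how that fallback could be expressed, written as a plain helper that mirrors the shape of a Spack `cmake_args` hook (the helper itself is illustrative; `GMX_EXTERNAL_BLAS`/`GMX_EXTERNAL_LAPACK` are GROMACS' actual CMake switches):

```python
def gromacs_blas_args(target_family: str) -> list:
    """Illustrative version of the logic enforced in package.py:
    on ppc64le, disable external BLAS/LAPACK so GROMACS falls back
    to its bundled implementation (external libraries are unusable
    there due to the symbol-name mismatch and the OpenBLAS
    assembler failure)."""
    if target_family == "ppc64le":
        return ["-DGMX_EXTERNAL_BLAS=OFF", "-DGMX_EXTERNAL_LAPACK=OFF"]
    return []
```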
Using rocBLAS/rocSOLVER for BLAS/LAPACK on tioga results in a segfault for the ROCm experiment (the Intel MKL library works).
Enabling GPU-aware MPI on tioga (ROCm mode) requires setting CXX_EXE_LINKER_FLAGS for GROMACS in package.py. However, this breaks the CUDA build on lassen. It would be desirable to have a config option in GROMACS to pass in extra compiler/linker flags.
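One possible shape for such an option, sketched as a helper that assembles the CMake command line (the `extra_link_flags` parameter is a hypothetical knob; `CMAKE_EXE_LINKER_FLAGS` is a standard CMake variable):

```python
def gromacs_link_args(extra_link_flags: str = "") -> list:
    """Pass user-supplied linker flags through to CMake only when given.

    On tioga (ROCm) this could carry the GPU-aware-MPI link line,
    while on lassen (CUDA) it would simply stay empty, avoiding the
    build breakage described above.
    """
    if extra_link_flags:
        return ["-DCMAKE_EXE_LINKER_FLAGS={0}".format(extra_link_flags)]
    return []
```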
Bonded interactions do not work on the GPU (it's possible the input file used for testing does not have any bonded interactions). Is there a different input file/problem available for testing bonded interactions?
The mapping of cores and GPUs to ranks is not well understood. Additional performance optimizations need to be investigated.
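As a starting point for that investigation, a simple block mapping of node-local ranks to GPUs might look like the following (a generic sketch of a common affinity scheme, not GROMACS' actual logic):

```python
def gpu_for_rank(local_rank: int, ranks_per_node: int, gpus_per_node: int) -> int:
    """Block-assign node-local MPI ranks to GPUs: with 8 ranks and
    4 GPUs per node, ranks 0-1 share GPU 0, ranks 2-3 share GPU 1,
    and so on. Binding each rank's cores near its GPU's NUMA domain
    is the usual next refinement."""
    assert 0 <= local_rank < ranks_per_node
    return local_rank * gpus_per_node // ranks_per_node
```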
On lassen, v2024 does not build for the CUDA variant (v2023.3 works fine)
point 4: indeed, for the "water" case GROMACS won't accept "-bonded gpu", only "-bonded auto"; I will contribute new cases that include a bonded workload and are also more modern than those in ramble.
point 5: I can clarify / help implement that; we might want a separate issue for it?
point 6: please share details: is it a GROMACS build system issue?