Add apptainer files #10
Conversation
Thanks @Edgar-21 - this seems useful. I have a few suggestions and clarifying questions.
```
From: /software/chtc/containers/ubuntu/22.04/openmpi-4.1.6_gcc-11.3.0.sif

%arguments
    HDF5_VERSION=hdf5_1_14_3
```
Seems like this should be defined in a way that is useful below? It's currently unused and the version is hard-coded.
Added args that allow controlling the HDF5 version by providing a URL to `wget` from.
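A minimal sketch of what that could look like in the def file, using Apptainer's `%arguments`/`--build-arg` templating (the argument name `HDF5_URL` and the layout here are assumptions for illustration, not the PR's actual diff):

```
%arguments
    HDF5_URL=https://support.hdfgroup.org/releases/hdf5/v1_14/v1_14_3/downloads/hdf5-1.14.3.tar.gz

%post
    # fetch and unpack whichever HDF5 tarball was supplied at build time
    wget {{ HDF5_URL }}
    tar -xvf $(basename {{ HDF5_URL }})
```

The default can then be overridden at build time with `apptainer build --build-arg HDF5_URL=<tarball URL> image.sif image.def`.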
```
libpng-dev \
libnetcdf-dev \
```
indentation
```
wget https://support.hdfgroup.org/releases/hdf5/v1_14/v1_14_3/downloads/hdf5-1.14.3.tar.gz
tar -xvf hdf5-1.14.3.tar.gz
```
See the above comment on using a variable for this version.
```
MOAB_VERSION=5.3.0
DOUBLE_DOWN_VERSION=v1.0.0
EMBREE_VERSION=v3.12.2
DAGMC_VERSION=v3.2.1
```
Why not newer versions? v3.2.3 has been around for a while and v3.2.4 just dropped.
I'll try with the latest supported versions of MOAB, DAGMC, and Double-Down and see, but this build process is fairly fragile.
Working with DAGMC 3.2.4, MOAB 5.5.1, and Double-Down 1.1.0.
…and Embree 4.3.3.
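Based on the versions reported above, the updated pins would presumably look something like this (the `v` prefixes mirror the original snippet and are a guess at the actual commit):

```
MOAB_VERSION=5.5.1
DOUBLE_DOWN_VERSION=v1.1.0
EMBREE_VERSION=v4.3.3
DAGMC_VERSION=v3.2.4
```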
```
apptainer build \
    --bind $TMPDIR:/tmp \
    "${@:2}" \
```
What does this do?
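(For context: `"${@:2}"` is bash slicing of the positional parameters; it expands to every script argument from the second onward, presumably to forward extra flags to `apptainer build`. A quick demonstration:)

```bash
#!/bin/bash
# args.sh -- shows what "${@:2}" expands to
echo "first arg: $1"
echo "the rest:  ${@:2}"
# $ ./args.sh a b c
# first arg: a
# the rest:  b c
```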
Nothing; it has been removed.
```
#SBATCH --error=job.%J.err
#SBATCH --output=job.%J.out

export JOB_TMP_PATH=/local/$USER/${SLURM_JOB_ID}
```
This seems like the only line that is CHTC-specific, because of the path, right?
Yes, I think so? I've not used other clusters, so I'm not sure what might change.
Maybe we can add a comment noting that this particular path is correct for CHTC HPC, so folks know where to change it if they want to try this elsewhere.
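Something along these lines, perhaps (the wording is only a sketch of the suggested comment, and the scratch-path convention should be checked against the CHTC docs):

```bash
# NOTE: /local/$USER is node-local scratch on UW-Madison CHTC HPC;
# adjust this prefix to your cluster's local scratch location when porting.
export JOB_TMP_PATH=/local/$USER/${SLURM_JOB_ID}
```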
```
@@ -0,0 +1,148 @@
Bootstrap: localimage
From: /software/chtc/containers/ubuntu/22.04/openmpi-4.1.6_gcc-11.3.0.sif
```
Does CHTC publish the def file for this image? This is the only line that makes this CHTC-specific, right?
The .def file is available in that same directory. I think this image is designed to have its OpenMPI built in a compatible way (presumably with the host MPI stack), and it somehow also interfaces with Spack; to what end, I am not sure.
Just trying to think about how to make this portable to other systems, or at least note where others would need to be careful when porting. I wonder if it's possible to summarize what this image contains in a comment.
Added a few comments noting where things are UW HPC-specific, plus a brief explanation of the base image as I understand it.
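Such a header might read roughly like this (the description is paraphrased from this thread, not taken from the actual commit):

```
# Base image provided by CHTC on UW-Madison HPC: Ubuntu 22.04 with
# OpenMPI 4.1.6 and GCC 11.3.0. Its .def file lives in the same directory.
# Porting note: replace this with a base image whose MPI matches your host.
Bootstrap: localimage
From: /software/chtc/containers/ubuntu/22.04/openmpi-4.1.6_gcc-11.3.0.sif
```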
A few comments and suggestions. Open to discussion if you don't like all of them.
```
#!/bin/bash
#SBATCH --partition=pre           # default "shared", if not specified
#SBATCH --time=0-24:00:00         # run time in days-hh:mm:ss
#SBATCH --nodes=10                # number of nodes requested
#SBATCH --ntasks-per-node=32      # cpus per node (by default, "ntasks"="cpus")
#SBATCH --mem=30000
```
Might be good to mention the upper limits for things like `time`, `nodes`, `ntasks-per-node`, and `mem`. I know these are probably on the CHTC website, but it would be nice to have them in the script so you don't have to stop what you're doing to look them up.
I think I'd prefer to avoid duplicating this information; like you mentioned, it's on the CHTC website. I'll remove the existing comments for consistency.
Fair enough. Especially if these values change in the future, we'd want to make sure the info is accurate. Probably best just to direct people to the CHTC page. Can you add the link to the README.md in this repo?
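The README addition could be as small as something like this (the link target is left as a placeholder here; the exact CHTC docs URL should be filled in):

```
## Resource limits

Current partition, walltime, node, and memory limits are documented in the
CHTC HPC guide: <link to the CHTC HPC documentation>
```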
```
module load openmpi
# image_path is the path to the Apptainer .sif file you wish to use
```
Suggested change:

```diff
- module load openmpi
- # image_path is the path to the Apptainer .sif file you wish to use
+ # load required modules here
+ module load openmpi
+ # image_path is the path to the Apptainer .def or .sif file you wish to use
```
This is an example of submitting a job; I don't think a .def is applicable here.
You can submit jobs to build containers :)
```
#SBATCH --partition=pre
#SBATCH --time=0-04:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --mem-per-cpu=4000
```
Should we maybe request a bit more power to build this quicker?
Good call.
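Putting both points together, the build job might end up looking roughly like this (resource values and file names are illustrative guesses, not the final committed script):

```bash
#!/bin/bash
#SBATCH --partition=pre
#SBATCH --time=0-04:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16      # bumped from 8 to speed up the build
#SBATCH --mem-per-cpu=4000

# build the .sif image from its definition file on a compute node
apptainer build my_image.sif my_image.def
```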
Co-authored-by: Lewis Gross <[email protected]>
I think @lewisgross1296 should merge when he decides this is ready.
Confirmed to be working on UW HPC.