Merge pull request #263 from RaulPPelaez/examples
Update Examples
RaulPPelaez authored Jan 30, 2024
2 parents 9108514 + 7ae5286 commit af64cdb
Showing 8 changed files with 15 additions and 10 deletions.
1 change: 1 addition & 0 deletions examples/ET-ANI1.yaml
@@ -57,3 +57,4 @@ weight_decay: 0.0
box_vecs: null
charge: false
spin: false
vector_cutoff: true
1 change: 1 addition & 0 deletions examples/ET-MD17.yaml
@@ -58,3 +58,4 @@ weight_decay: 0.0
box_vecs: null
charge: false
spin: false
vector_cutoff: true
2 changes: 1 addition & 1 deletion examples/ET-QM9.yaml
@@ -59,4 +59,4 @@ box_vecs: null
precision: 32
charge: false
spin: false

vector_cutoff: true
3 changes: 2 additions & 1 deletion examples/ET-SPICE.yaml
@@ -8,7 +8,7 @@ cutoff_lower: 0.0
cutoff_upper: 10.0
dataset: SPICE
dataset_arg:
version: 1.1.1
version: 1.1.4
dataset_root: data
derivative: true
distance_influence: both
@@ -58,3 +58,4 @@ weight_decay: 0.0
box_vecs: null
charge: false
spin: false
vector_cutoff: true
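In these configs, the entries under `dataset_arg` are forwarded to the dataset class named by `dataset`. A minimal sketch of what the updated SPICE setting amounts to, assuming `torchmdnet.datasets.SPICE` accepts the same keywords that appear under `dataset_arg` (check the constructor in your installed version):

```python
# Sketch only: assumes torchmdnet.datasets.SPICE takes a `version` keyword
# matching dataset_arg in the YAML, and a `root` matching dataset_root.
# Instantiating it will download/process the dataset on first use.
from torchmdnet.datasets import SPICE

dataset = SPICE(root="data", version="1.1.4")  # version bumped from 1.1.1
print(len(dataset))  # number of conformations available for training
```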
10 changes: 6 additions & 4 deletions examples/README.md
@@ -1,18 +1,20 @@
# Examples

## Training
We provide three example config files for the ET for training on QM9, MD17 and ANI1 respectively. To train on a QM9 target other than `energy_U0`, change the parameter `dataset_arg` in the QM9 config file. Changing the MD17 molecule to train on works analogously. To train an ET from scratch you can use the following code from the torchmd-net directory:

You can reproduce any of the trainings in this folder using the torchmd-train utility:

```bash
CUDA_VISIBLE_DEVICES=0,1 torchmd-train --conf examples/ET-{QM9,MD17,ANI1}.yaml
torchmd-train --conf [file].yaml
```
Use the `CUDA_VISIBLE_DEVICES` environment variable to select which and how many GPUs you want to train on. The example above selects GPUs with indices 0 and 1. The training code will want to save checkpoints and config files in a directory called `logs/`, which you can change either in the config .yaml file or as an additional command line argument: `--log-dir path/to/log-dir`.
The training code will want to save checkpoints and config files in a directory called `logs/`, which you can change either in the config .yaml file or as an additional command line argument: `--log-dir path/to/log-dir`.

## Loading checkpoints
You can access several pretrained checkpoint files under the following URLs:
- equivariant Transformer pretrained on QM9 (U0): http://pub.htmd.org/et-qm9.zip
- equivariant Transformer pretrained on MD17 (aspirin): http://pub.htmd.org/et-md17.zip
- equivariant Transformer pretrained on ANI1: http://pub.htmd.org/et-ani1.zip
- invariant Transformer pretrained on ANI1: http://pub.htmd.org/t-ani1.zip


The checkpoints can be loaded using the `load_model` function in TorchMD-Net. Additional model arguments (e.g. turning on force prediction on top of energies) for inference can also be passed to the function. See the following example code for loading an ET pretrained on the ANI1 dataset:
```python
# ... (remaining example code unchanged, collapsed in this diff view)
```
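The example code referenced above is collapsed in this view. Purely as a hedged sketch of how such a call might look, assuming the `torchmdnet.models.model.load_model` import path and an illustrative checkpoint filename:

```python
# Hedged sketch, not the README's exact code: load a pretrained checkpoint and
# enable force prediction (derivative of the energy) at inference time.
from torchmdnet.models.model import load_model

checkpoint = "et-ani1/epoch=359-val_loss=0.0004.ckpt"  # illustrative path only
model = load_model(checkpoint, derivative=True)
```

The filename is a placeholder; substitute whatever file the downloaded archive actually unpacks to.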
6 changes: 3 additions & 3 deletions examples/TensorNet-SPICE.yaml
@@ -7,7 +7,7 @@ cutoff_lower: 0.0
cutoff_upper: 10.0
dataset: SPICE
dataset_arg:
version: 1.1.3
version: 1.1.4
max_gradient: 50.94
dataset_root: ~/data
derivative: true
@@ -18,9 +18,9 @@ embed_files: null
embedding_dimension: 128
energy_files: null
equivariance_invariance_group: O(3)
y_weight: 0.5
y_weight: 1.0
force_files: null
neg_dy_weight: 0.5
neg_dy_weight: 10.0
gradient_clipping: 100.0
inference_batch_size: 16
load_model: null
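The `y_weight`/`neg_dy_weight` change rebalances the training loss between energies (`y`) and forces (`neg_dy`). An illustrative sketch of the weighting, not the repository's exact loss code:

```python
import torch

def weighted_loss(pred_y, y, pred_neg_dy, neg_dy, y_weight=1.0, neg_dy_weight=10.0):
    """Illustrative only: combine energy and force errors with the config weights."""
    mse = torch.nn.functional.mse_loss
    # With the updated TensorNet-SPICE.yaml values, the force term is weighted
    # 10x the energy term (previously both weights were 0.5).
    return y_weight * mse(pred_y, y) + neg_dy_weight * mse(pred_neg_dy, neg_dy)
```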
Binary file modified tests/expected.pkl
2 changes: 1 addition & 1 deletion tests/test_model.py
@@ -196,7 +196,7 @@ def test_forward_output(model_name, output_model, overwrite_reference=False):

@mark.parametrize("model_name", models.__all_models__)
def test_gradients(model_name):
pl.seed_everything(1234)
pl.seed_everything(12345)
precision = 64
output_model = "Scalar"
# create model and sample batch
