Question about how to evaluate a trained model #12
Comments
Hi, the gist is that, once you have a fully trained model, you can use it in self-consistent calculations with PySCF to "evaluate some other molecules". The fastest/easiest way to do so would be to put all the molecules you want to compute into a .xyz or .traj file and use […] As to the […] Let me know if that answers your question!
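The suggestion above (collect the molecules to evaluate into a single .xyz or .traj file) can be sketched without extra dependencies. The snippet below hand-writes a minimal multi-frame .xyz file; the molecule names and coordinates are illustrative placeholders, and in practice a tool such as ASE's `ase.io.write` is the more convenient way to produce either format.

```python
# Write a minimal multi-frame .xyz file holding the molecules to evaluate.
# Geometries below are illustrative placeholders, not reference data.
molecules = [
    ("water", [("O", 0.0, 0.0, 0.119),
               ("H", 0.0, 0.763, -0.477),
               ("H", 0.0, -0.763, -0.477)]),
    ("hydrogen", [("H", 0.0, 0.0, 0.0),
                  ("H", 0.0, 0.0, 0.74)]),
]

with open("testing.xyz", "w") as f:
    for name, atoms in molecules:
        # Each frame: atom count, comment line, then one "symbol x y z" per atom.
        f.write(f"{len(atoms)}\n{name}\n")
        for sym, x, y, z in atoms:
            f.write(f"{sym} {x:.3f} {y:.3f} {z:.3f}\n")
```

With ASE installed, the equivalent would be `ase.io.write("testing.traj", atoms_list)` on a list of `Atoms` objects.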
Hi, to answer your questions: […] This should clarify the ordering.
Hi, I recently started trying this repo and found it really cool!

I have managed to run the example in `examples/example_scripts/train_model/` on some data and would like to use the final model to evaluate some other molecules. I know that the `neuralxc sc ...` command can do the testing if I provide a `testing.traj`. However, I'd like to use the `neuralxc eval ...` command so that I don't have to re-train the same model. The `--hdf5` argument requires the path to the hdf5 file, the baseline data, and the reference data. I assume the last one refers to a `testing.traj` like the one used with `neuralxc sc ...` in the example. However, I'm not sure what the first two files refer to or how to obtain them, and I couldn't find an example in the repo. Could you please give some advice or examples?

Moreover, I'm wondering how to set `n_max` and `l_max` as mentioned in the paper. I can't seem to find these options in the `hyperparameters.json` or the `basis.json` file.
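For context on the `n_max`/`l_max` question, a basis configuration file of this kind typically maps those limits onto per-element keys. The sketch below is purely hypothetical: the key names (`n`, `l`, `r_o`) and values are assumptions made for illustration, not taken from the repository, so the project's actual `basis.json` schema should be checked against its documentation.

```python
import json

# Hypothetical basis configuration. The key names ("n", "l", "r_o") and all
# values are assumptions for illustration only -- verify them against the
# project's documented basis.json schema before use.
basis = {
    "O": {"n": 4, "l": 3, "r_o": 2.0},  # n ~ n_max radial fns, l ~ l_max
    "H": {"n": 4, "l": 2, "r_o": 1.5},  # r_o ~ radial cutoff (guess)
}

with open("basis.json", "w") as f:
    json.dump(basis, f, indent=2)
```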