Ask about the issue of units during the training process #43
Hello! My question concerns the units used during training, in particular in the test config and in the loss, data_process, and schnet modules.
First of all, the model parameters differ from nablaDFT's SchNet. To reproduce SchNet trained on the tiny data split for the structures (ST) test, use this command: `python run.py --config schnet_test.yaml`

schnet_test.yaml:

```yaml
# Global variables
name: SchNet
dataset_name: dataset_test_structures
max_steps: 1000000
job_type: test
pretrained: SchNet_train_tiny  # name of pretrained split or 'null'
ckpt_path: null  # path to checkpoint for training resume or test run
# Datamodule parameters
root: ./datasets/nablaDFT/${.job_type}
batch_size: 32
num_workers: 8
# Devices
devices: [0]
# configs
defaults:
  - _self_
  - datamodule: nablaDFT_ase_test.yaml  # dataset config
  - model: schnet.yaml  # model config
  - callbacks: callbacks_spk.yaml  # pl callbacks config
  - loggers: wandb.yaml  # pl loggers config
  - trainer: test.yaml  # trainer config
# need this to set working dir as current dir
hydra:
  output_subdir: null
  run:
    dir: .
original_work_dir: ${hydra:runtime.cwd}
seed: 23
```
Thanks for your explanation, I understand. I have another question: I used PygNablaDFT to read the dataset test_2k_conformers_v2_formation_energy_w_forces_test and saved it into a .pt file. Are the energy values in this dataset in Hartree units?
All energy values in the energy databases are in Hartree.
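Since errors quoted in Hartree can be hard to interpret at a glance, here is a minimal conversion sketch. The conversion constants are standard CODATA values, not something from the nablaDFT codebase, and the example MAE value is only illustrative:

```python
# Unit conversion helpers. The constants below are standard CODATA
# physical conversion factors, not values from the nablaDFT codebase.
HARTREE_TO_EV = 27.211386245988    # 1 Hartree in electronvolts
HARTREE_TO_KCAL_MOL = 627.509474   # 1 Hartree in kcal/mol

def hartree_to_ev(e_hartree: float) -> float:
    """Convert an energy from Hartree to eV."""
    return e_hartree * HARTREE_TO_EV

def hartree_to_kcal_mol(e_hartree: float) -> float:
    """Convert an energy from Hartree to kcal/mol."""
    return e_hartree * HARTREE_TO_KCAL_MOL

# Illustrative example: an MAE of 1.17e-2 Hartree in familiar units
mae_hartree = 1.17e-2
mae_ev = hartree_to_ev(mae_hartree)            # ~0.318 eV
mae_kcal = hartree_to_kcal_mol(mae_hartree)    # ~7.34 kcal/mol
```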
Thanks for your response! When training with both energy and force targets, is the loss typically computed as energy * 1 + force * 1?
In general, these are hyperparameters of the training pipeline. For instance, we used 1:100 for training SchNet, PaiNN, and GemNet; 1:1 for DimeNet; and 2:100 for EquiformerV2.
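The weighting described above can be sketched as a simple combined objective. This is an illustration of the energy:force ratio idea using per-sample MAE terms, not the actual nablaDFT training code, and the function and weight names are placeholders:

```python
# A minimal sketch of a weighted energy/force loss using MAE terms.
# This illustrates the energy:force ratios mentioned above (e.g.
# 1:100 for SchNet); it is NOT the actual nablaDFT implementation.

def mae(pred, target):
    """Mean absolute error over flat sequences of floats."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def energy_force_loss(e_pred, e_true, f_pred, f_true,
                      w_energy=1.0, w_force=100.0):
    """Combined loss: w_energy * MAE(E) + w_force * MAE(F)."""
    return w_energy * mae(e_pred, e_true) + w_force * mae(f_pred, f_true)

# Example with the 1:100 weighting used for SchNet
loss = energy_force_loss(
    e_pred=[1.0, 2.0], e_true=[1.1, 1.9],
    f_pred=[0.5, -0.5], f_true=[0.4, -0.6],
    w_energy=1.0, w_force=100.0,
)
```

With a large force weight such as 100, the force term dominates the gradient, which is a common choice when accurate forces matter for downstream use (e.g. geometry optimization).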
Thanks for your response! Does this dataset contain force labels? "dataset_train_full": "https://a002dlils-kadurin-nabladft.obs.ru-moscow-1.hc.sbercloud.ru/data/nablaDFTv2/energy_databases/train_full_v2_formation_energy.db"
Please use the updated dataset with force labels: https://a002dlils-kadurin-nabladft.obs.ru-moscow-1.hc.sbercloud.ru/data/nablaDFTv2/energy_databases/train_full_v2_formation_energy/train_full_v2_formation_energy_w_forces_wo_outliers.db
Hello, I would like to ask about energy prediction with SchNet. According to the results you provided, under the ST split of the tiny dataset, is the MAE 1.17 Hartree or 1.17 * 10^(-2) Hartree? I tried to reproduce the result and found a test-set MAE of around 0.75, but I am not sure of the unit. Could you clarify?