0.99.18

@Joao-L-S-Almeida released this 05 Apr 21:53

  • Support for defining Python functions inside symbolic expressions used for training PINNs.
import torch
from math import pi

mu, omega = 1.0, 10.0  # problem constants (illustrative values)

def k1(t: torch.Tensor) -> torch.Tensor:
    return 2*(t - mu)*torch.cos(omega*pi*t)

# The expression we aim at minimizing; k1 resolves to the Python
# function defined above when the expression is evaluated.
f = "D(u, t) - k1(t) + omega*pi*((t - mu)**2)*sin(omega*pi*t)"
  • Option for using extra datasets to train PINN models in addition to the symbolic residuals.
params = {
    "residual": residual,                           # symbolic residual (SymbolicOperator instance)
    "initial_input": np.array([0])[:, None],        # initial-condition input
    "initial_state": u_data[0],                     # initial-condition state
    "extra_input_data": time_extra_train[:, None],  # extra supervised inputs
    "extra_target_data": u_extra_train[:, None],    # extra supervised targets
    "weights_residual": [1],                        # weight for the residual loss term
    "initial_penalty": 1,                           # weight for the initial-condition term
}
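
As an illustration of where the extra arrays above might come from (only the names time_extra_train and u_extra_train are taken from the snippet; everything else here is assumed), a handful of observed samples can be held out as additional supervision:

import numpy as np

rng = np.random.default_rng(0)

# Stand-in observations of u(t); in practice these would come from
# measurements or a reference simulation.
t_grid = np.linspace(0, 2, 200)
u_data = np.cos(10*np.pi*t_grid)

# Hold out a handful of points as the extra supervised dataset.
idx = rng.choice(t_grid.size, size=20, replace=False)
time_extra_train = t_grid[idx]
u_extra_train = u_data[idx]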
  • The extra datasets can be used to enhance the forward PINN approximation or to estimate unknown values for parameters employed in the symbolic expressions, which is usually termed backward (inverse) estimation.
  • Backward problems can be defined as class templates, in which parameters are estimated together with the neural net weights and biases (see this example for further details).
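
The gist of such a template, sketched here in plain PyTorch rather than simulai's actual class (the name and structure below are illustrative assumptions): the unknown coefficient is registered as a torch.nn.Parameter, so any optimizer stepping over model.parameters() updates it together with the network weights.

import torch

class InverseTemplate(torch.nn.Module):  # hypothetical name, for illustration only
    def __init__(self, net: torch.nn.Module):
        super().__init__()
        self.net = net
        # Unknown physical coefficient, estimated jointly with the weights
        self.omega = torch.nn.Parameter(torch.tensor(1.0))

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        return self.net(t)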
  • It is possible to periodically save models during long training workloads by passing a configuration dictionary to the checkpoint_params argument of Optimizer, as seen below:
optimizer = Optimizer(
    "adam",
    params=optimizer_config,
    lr_decay_scheduler_params={
        "name": "ExponentialLR",
        "gamma": 0.9,
        "decay_frequency": 5_000,   # interval between learning-rate decays
    },
    checkpoint_params={
        "save_dir": save_path,            # directory where checkpoints are written
        "name": model_name,               # base name for the checkpoint files
        "template": model,                # model instance used as a saving/restoring template
        "checkpoint_frequency": 10_000,   # interval between saved checkpoints
        "overwrite": False,               # keep every checkpoint instead of overwriting the last one
    },
    summary_writer=True,
)
  • Since PyTorch 2.0 is now supported, a boolean option use_jit was added to the method Optimizer.fit; it invokes the new PyTorch compilation framework (torch.compile) at optimization start-up:
optimizer.fit(
    op=rober_net,
    input_data=input_data,
    n_epochs=n_epochs,
    loss="opirmse",
    params=params,
    device="gpu",
    batch_size=batch_size,
    use_jit=True,
)

which will compile the neural net instances and, for physics-informed applications, also the residual (SymbolicOperator) objects.
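
Conceptually, use_jit=True amounts to something like the following step applied to the trainable modules before the training loop starts (a plain-PyTorch sketch, not simulai's internal code):

import torch

# Illustrative module; in simulai this role is played by the network and,
# for physics-informed setups, the SymbolicOperator residual as well.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

compiled_net = torch.compile(net)  # requires PyTorch >= 2.0

# The compiled module is a drop-in replacement for the original one.
u = compiled_net(torch.rand(16, 1))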

  • Many enhancements and updates have been made to the source code documentation.