
Releases: IBM/simulai

0.99.18

05 Apr 21:53

  • Support for defining Python functions inside symbolic expressions used for training PINNs.
import torch
from math import pi

mu, omega = 1.0, 10.0  # example values; these are problem-specific parameters

def k1(t: torch.Tensor) -> torch.Tensor:
    return 2 * (t - mu) * torch.cos(omega * pi * t)

# The expression we aim to minimize
f = "D(u, t) - k1(t) + omega*pi*((t - mu)**2)*sin(omega*pi*t)"
  • Option for using extra datasets to train PINN models in addition to the symbolic residuals.
params = {
    "residual": residual,                           # SymbolicOperator instance
    "initial_input": np.array([0])[:, None],        # initial-condition coordinates
    "initial_state": u_data[0],                     # initial-condition values
    "extra_input_data": time_extra_train[:, None],  # extra supervised inputs
    "extra_target_data": u_extra_train[:, None],    # extra supervised targets
    "weights_residual": [1],
    "initial_penalty": 1,
}
  • The extra datasets can be used to enhance the forward PINN approximation or to estimate unknown values of parameters employed in the symbolic expressions, which is usually termed backward (inverse) estimation.
  • Backward problems can be defined as class templates, in which parameters are estimated together with the neural net weights and biases (see this example for further details), as sketched below.
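
As a conceptual sketch of what backward estimation amounts to (plain PyTorch, not SimulAI's class-template API): the unknown physical parameter is declared trainable and optimized jointly with the network weights:

# Conceptual sketch only (plain PyTorch, not SimulAI's template API):
# the unknown parameter `omega` is trained jointly with the net weights.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
omega = torch.nn.Parameter(torch.tensor(1.0))  # unknown parameter to estimate

optimizer = torch.optim.Adam(list(net.parameters()) + [omega], lr=1e-3)

t = torch.linspace(0.0, 1.0, 100).reshape(-1, 1).requires_grad_(True)
u = net(t)
du_dt = torch.autograd.grad(u.sum(), t, create_graph=True)[0]

# Residual of a toy ODE, D(u, t) - cos(omega * t) = 0
loss = ((du_dt - torch.cos(omega * t)) ** 2).mean()
loss.backward()
optimizer.step()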
  • It is possible to periodically save models during long training workloads by passing a configuration dictionary to the argument checkpoint_params of Optimizer, as seen below:
optimizer = Optimizer(
    "adam",
    params=optimizer_config,
    lr_decay_scheduler_params={
        "name": "ExponentialLR",
        "gamma": 0.9,
        "decay_frequency": 5_000,
    },
    checkpoint_params={
        "save_dir": save_path,
        "name": model_name,
        "template": model,
        "checkpoint_frequency": 10_000,
        "overwrite": False,
    },
    summary_writer=True,
)
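
With the configuration above, checkpoints are written every 10,000 epochs, and under the reading that decay_frequency means "apply the decay every N epochs" (an assumption on my part), the learning rate follows an exponential staircase:

# Sketch of the schedule implied by the configuration above, assuming
# `decay_frequency` means "multiply the lr by gamma every N epochs":
initial_lr = 1e-3  # example value
gamma, decay_frequency = 0.9, 5_000

for epoch in (0, 5_000, 10_000, 50_000):
    lr = initial_lr * gamma ** (epoch // decay_frequency)
    print(f"epoch {epoch:>6}: lr = {lr:.2e}")
# epoch      0: lr = 1.00e-03
# epoch   5000: lr = 9.00e-04
# epoch  10000: lr = 8.10e-04
# epoch  50000: lr = 3.49e-04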
  • Since PyTorch 2.0 is now supported, a boolean option use_jit was added to the method Optimizer.fit for invoking the new PyTorch JIT compilation framework (torch.compile) at optimization start-up:
optimizer.fit(
    op=rober_net,
    input_data=input_data,
    n_epochs=n_epochs,
    loss="opirmse",
    params=params,
    device="gpu",
    batch_size=batch_size,
    use_jit=True,
)

This flag compiles neural net instances and, for physics-informed applications, the residual (SymbolicOperator) objects as well.
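
For reference, torch.compile is the standard PyTorch 2.0 entry point; in plain PyTorch the flag corresponds to something like:

# Plain-PyTorch equivalent of what use_jit=True triggers internally
# (the wrapping of SimulAI objects is handled by the Optimizer itself):
import torch

model = torch.nn.Linear(10, 10)
compiled_model = torch.compile(model)  # returns an optimized callable
y = compiled_model(torch.randn(4, 10))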

  • Numerous enhancements and updates were made to the source-code documentation.

0.99.16

24 Mar 13:10
  • Fixed a bug in simulai.optimization.Optimizer that caused the number of epochs to be underestimated in mini-batch optimization.
  • Added Runge-Kutta-Fehlberg 7(8), an adaptive Runge-Kutta time-integrator aimed at reducing the accumulated error when dealing with chaotic systems.
  • The MoE pool can receive non-trainable routing/gating algorithms, such as K-Means (a conceptual sketch follows this list). See scalability_tests/scripts/lorenz_63_kmeans_moe.py for more details.
  • Updates in the documentation.
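
As a conceptual sketch of non-trainable routing (plain scikit-learn/NumPy, not the MoE pool's actual API): a fitted K-Means model assigns each input to a cluster, and each cluster is served by a dedicated expert:

# Conceptual sketch of non-trainable K-Means routing; all names here are
# illustrative, not SimulAI's MoE pool interface.
import numpy as np
from sklearn.cluster import KMeans

x = np.random.rand(1000, 3)                 # e.g. Lorenz-63 states
gate = KMeans(n_clusters=4, n_init=10).fit(x)

experts = [lambda z, k=k: z * (k + 1) for k in range(4)]  # placeholder experts

routes = gate.predict(x)                    # non-trainable routing decision
y = np.empty_like(x)
for k, expert in enumerate(experts):
    mask = routes == k
    y[mask] = expert(x[mask])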

0.99.15

10 Mar 13:27
  • Added a sanity check for the loss function, which prevents using the wrong loss function for a given problem. The checker interrupts the process when the choice is improper and indicates the recommended one:
Exception: The loss function used for this case (vaermse) is not the recommended (['rmse', 'wrmse']). Please, redefine it.
  • The automatically configured variational autoencoders (VAE) now have a shallow option, which allows defining shallow bottleneck encoder-decoders (i.e., simple linear operators) for the CNN-VAEs:
l_d = 8  # example latent-space dimension

autoencoder = AutoencoderVariational(
    input_dim=(None, 1, 32, 32),
    latent_dim=l_d,
    activation="relu",
    architecture="cnn",
    shallow=True,
    name="KDV_VAE",
    case="2d",
    devices="gpu",
)
  • Improvements allow simulai.metrics.MinMaxEvaluation to evaluate minimum and maximum values of HDF5 objects lazily (a minimal sketch follows this list), which can be useful for normalization techniques.
  • Fixed a bug that prevented the instantiation of automatically generated MLP VAEs.
  • Updates in the documentation.
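
As a conceptual sketch of the lazy evaluation idea (plain h5py, not MinMaxEvaluation's actual interface): the dataset is scanned in chunks, so it never has to fit in memory:

# Conceptual sketch with plain h5py; file and dataset names are hypothetical.
import h5py
import numpy as np

vmin, vmax = np.inf, -np.inf
with h5py.File("data.h5", "r") as fp:
    dataset = fp["field"]  # hypothetical dataset name
    for start in range(0, dataset.shape[0], 1024):
        chunk = dataset[start : start + 1024]
        vmin = min(vmin, chunk.min())
        vmax = max(vmax, chunk.max())

# vmin/vmax can then drive min-max normalization: (x - vmin) / (vmax - vmin)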

0.99.14

06 Feb 14:00
  • Added templates for automatically instantiating CNN and dense networks using default choices.
  • Using these templates, it is possible to easily create any kind of autoencoder defined in the package, as seen below:
autoencoder = AutoencoderVariational(
    input_dim=(None, 1, 64, 128),
    latent_dim=8,
    activation="tanh",
    architecture="cnn",
    case="2d",
)
  • Fixed bugs related to circular imports.
  • The list of dependencies was revised and reduced to reflect the real needs of the package.
  • Documentation and code coverage were extended.
  • The source code underwent extensive reformatting to better conform to PEP 8.

0.99.13

24 Jan 20:46
  • Corrected import errors from previous releases.
  • Updated documentation.

0.99.8

21 Jan 12:38
  • Consolidation of tests.
  • Removal of useless or obsolete objects.
  • Improvements in documentation.

Updates for enhancing package stability.

17 Jan 19:17
  • Consolidation of tests.
  • TensorBoard support.
  • Corrections in the instantiation of differential operators (Laplacian and Divergence) for PINNs.

0.99.6: Ignoring docs/_build

11 Jan 15:38

Initial public beta release

23 Nov 13:42
Pre-release

This is the very first public release of SimulAI. As we move to stabilize experimental features and add documentation, we will progressively converge to a major version bump, moving from beta to production stage. SimulAI is also available for installation from PyPI:

$ pip install simulai-toolkit