Assume you have two functions:

- a function $g$ that you want to differentiate, but autograd is not an option, and
- a differentiable implementation of its inverse function $f$.

Then you can use the utilities provided in this repository to differentiate $g$.
The classes ImplicitInverseLayer and ImplicitInverseLayerAuto extend PyTorch's Function class. They implement a custom backward pass with gradients computed using the inverse function theorem.
The function implicit_inverse_layer provides a wrapper around them, allowing for more convenient usage.
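In one dimension, the identity behind the backward pass can be written out explicitly (a sketch of the underlying math; the multivariate case replaces the reciprocal with the inverse of the Jacobian of $f$): if $y = g(x)$ and $f = g^{-1}$, the inverse function theorem gives

$$\frac{dg}{dx}(x) = \frac{1}{f'(g(x))} = \frac{1}{f'(y)},$$

so the backward pass only needs the derivative of the differentiable inverse $f$, evaluated at the forward output $y$, to propagate gradients through $g$.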
```python
import torch
from inverse_differentiation_layer import implicit_inverse_layer

batch_size = 4  # any batch size works; this value is only for illustration
data = torch.rand((batch_size, 5), requires_grad=True)

def g(x):
    """g is the inverse of f, but you can't get its gradients with autograd."""
    with torch.no_grad():  # a more realistic example would compute this with another library, e.g., numpy
        return torch.asin(x)

y = implicit_inverse_layer(data, g, forward_func=torch.sin)
print(torch.autograd.grad(torch.sum(y), data)[0])  # gradients w.r.t. data are available, even though g cannot be differentiated with autograd
```
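The gradients can be sanity-checked against the analytic derivative of $g(x) = \arcsin(x)$, which is $1/\sqrt{1 - x^2}$. The sketch below reuses data, g, and the call signature from the example above and simply recomputes y so the autograd graph is fresh; it is a verification aid, not part of the library.

```python
# Recompute y because the earlier call to torch.autograd.grad frees the graph by default.
y = implicit_inverse_layer(data, g, forward_func=torch.sin)
grad_from_layer = torch.autograd.grad(torch.sum(y), data)[0]

# Analytic derivative of asin(x) is 1 / sqrt(1 - x^2); compare element-wise.
expected = 1.0 / torch.sqrt(1.0 - data.detach() ** 2)
print((grad_from_layer - expected).abs().max())  # should be close to zero
```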
To use the code in this repository, you need Python and PyTorch installed. You can install PyTorch using pip:

```
pip install torch
```

After that, use the implicit_inverse_layer function as you would any other torch function.
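For example, the layer can sit inside a larger autograd graph so that gradients reach upstream parameters. Below is a minimal, hypothetical training-step sketch: the model, loss, and data are placeholders, and the call signature is the one shown in the quickstart above.

```python
import torch
from inverse_differentiation_layer import implicit_inverse_layer

def g(x):
    # Non-differentiable computation; could come from another library such as numpy.
    with torch.no_grad():
        return torch.asin(x)

model = torch.nn.Linear(5, 5)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

inputs = torch.rand((8, 5))
targets = torch.rand((8, 5))

pred = torch.sigmoid(model(inputs))  # keep values in (0, 1) so asin is well-defined
pred = implicit_inverse_layer(pred, g, forward_func=torch.sin)
loss = torch.nn.functional.mse_loss(pred, targets)

loss.backward()   # gradients flow through the implicit inverse layer to the model parameters
optimizer.step()
```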
To run the tests in test_implicit_differentiation.py, you can use a test runner like pytest:

```
pytest test_implicit_differentiation.py
```
Contributions are welcome. Please submit a pull request with your changes.
This project is licensed under the MIT License.