Note that the backbone and activations models are not created with keras.Input objects, but with the tensors that originate from keras.Input objects. Under the hood, the layers and weights are shared across these models, so that users can train the full_model and use backbone or activations for feature extraction. The inputs and outputs of the model can be nested structures of tensors as well, and the created models are standard Functional API models that support all the existing APIs.


By subclassing the Model class

In that case, you should define your layers in __init__() and implement the model's forward pass in call().
If you subclass Model, you can optionally have a training argument (boolean) in call(), which you can use to specify different behavior in training and inference:
Once the model is created, you can configure it with losses and metrics via model.compile(), train it with model.fit(), or use it for prediction with model.predict().
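For illustration, a minimal subclassed model might look like this (a sketch of the Keras pattern, not ReLax's own MLModule):

import keras
from keras import layers

class MLP(keras.Model):
    def __init__(self):
        super().__init__()
        self.dense1 = layers.Dense(32, activation="relu")
        self.dropout = layers.Dropout(0.5)
        self.dense2 = layers.Dense(1, activation="sigmoid")

    def call(self, inputs, training=False):
        x = self.dense1(inputs)
        # `training` toggles dropout on during fit() and off during predict().
        x = self.dropout(x, training=training)
        return self.dense2(x)

model = MLP()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])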


With the Sequential class

In addition, keras.Sequential is a special case of model where the model is purely a stack of single-input, single-output layers.
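The same stack can be expressed with keras.Sequential (a minimal sketch mirroring the subclassed MLP above):

import keras
from keras import layers

model = keras.Sequential([
    layers.Dense(32, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])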
Prepare the CF module. This hooks up the data module and its apply functions via the init_apply_fns method (e.g., apply_constraints_fn and compute_reg_loss_fn). Next, it trains the model if cf_module is a ParametricCFModule. Finally, it calls the before_generate_cf method.
Prepare the predictive function for the CF module. If cf_module carries its own pred_fn, that one is used regardless of the pred_fn argument; otherwise, a provided pred_fn is used; if neither is available, we train a model.
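The resolution order can be sketched as follows (illustrative pseudologic, not the library's exact code; the pred_fn attribute check and the MLModule fallback are assumptions):

def _resolve_pred_fn(cf_module, pred_fn, data_module):
    if getattr(cf_module, 'pred_fn', None) is not None:
        return cf_module.pred_fn       # 1. the CF module's own pred_fn takes precedence
    if pred_fn is not None:
        return pred_fn                 # 2. otherwise, use the provided pred_fn
    model = MLModule()                 # 3. otherwise, train a default model
    model.train(data_module)
    return model.pred_fn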
cfnet = CounterNet()
cfnet.train(dm, epochs=1)
# Test case: check that a ParametricCFModule is not trained twice.
# If it were trained twice, the CFs below would differ from `exp.cfs`.
cfs = jax.vmap(cfnet.generate_cf)(dm.xs)
assert cfnet.is_trained == True
exp = generate_cf_explanations(cfnet, dm)
assert np.allclose(einops.rearrange(exp.cfs, 'N 1 K -> N K'), cfs)
Epoch 0: 100%|██████████| 191/191 [00:08<00:00, 22.21batch/s, train/train_loss_1=0.06329722, train/train_loss_2=0.07011371, train/train_loss_3=0.101814255]
DataPreprocessor transforms individual features into numerical representations for the machine learning and recourse-generation workflows. It can be considered a drop-in, jax-friendly replacement for the sklearn.preprocessing module. The supported preprocessing methods include MinMaxScaler and OneHotEncoder.
MinMaxScaler only supports scaling a single feature.
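A quick sketch of the intended single-feature usage (assuming the ReLax scaler mirrors sklearn's fit_transform/inverse_transform API):

import numpy as np

x = np.random.uniform(low=-5., high=5., size=(50, 1))  # one feature, shape (N, 1)
scaler = MinMaxScaler()
x_scaled = scaler.fit_transform(x)                      # values mapped into [0, 1]
assert x_scaled.min() >= 0 and x_scaled.max() <= 1
x_restored = scaler.inverse_transform(x_scaled)
assert np.allclose(x, x_restored, atol=1e-6)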
xs = xs.reshape(50, 2)
scaler = MinMaxScaler()
test_fail(lambda: scaler.fit_transform(xs),
          contains="`MinMaxScaler` only supports array with a single feature")
Convert to a dictionary (or its pytree representation).
def test_set_transformations(transformation, correct_shape):
    T = transformation
    feats_list_2 = deepcopy(feats_list)
    feats_list_2.set_transformations({
        feat: T for feat in cat_feats
    })
    assert feats_list_2.transformed_data.shape == correct_shape
    name = T.name if isinstance(T, Transformation) else T

    for feat in feats_list_2:
        if feat.name in cat_feats:
            assert feat.transformation.name == name
            assert feat.is_categorical
        else:
            assert feat.transformation.name == 'minmax'
            assert feat.is_categorical is False
            assert feat.is_immutable is False

    x = jax.random.uniform(jax.random.PRNGKey(0), shape=(100, correct_shape[-1]))
    _ = feats_list_2.apply_constraints(feats_list_2.transformed_data[:100], x, hard=False)
    _ = feats_list_2.apply_constraints(feats_list_2.transformed_data[:100], x, hard=True)
test_set_transformations('ordinal', (32561, 8))
test_set_transformations('ohe', (32561, 29))
test_set_transformations('gumbel', (32561, 29))
# TODO: [bug] raise an error when set_transformations is called with
# SoftmaxTransformation() or GumbelSoftmaxTransformation()
# instead of "ohe" or "gumbel".
# test_set_transformations(SoftmaxTransformation(), (32561, 29))
# test_set_transformations(GumbelSoftmaxTransformation(), (32561, 29))
# Test transform and inverse_transform
# Convert df to dict[str, np.ndarray]
df_dict = {k: np.array(v).reshape(-1, 1) for k, v in df.iloc[:, :-1].to_dict(orient='list').items()}
# feats_list.transform(df_dict) should be the same as feats_list.transformed_data
transformed_data = feats_list.transform(df_dict)
assert np.equal(feats_list.transformed_data, transformed_data).all()
# feats_list.inverse_transform(transformed_data) should be the same as df_dict
inverse_transformed_data = feats_list.inverse_transform(transformed_data)
pd.testing.assert_frame_equal(
    pd.DataFrame.from_dict({k: v.reshape(-1) for k, v in inverse_transformed_data.items()}),
    pd.DataFrame.from_dict({k: v.reshape(-1) for k, v in df_dict.items()}),
    check_dtype=False, check_exact=False,
)

# Test `to_pandas`
feats_pd = feats_list.to_pandas()
pd.testing.assert_frame_equal(
    feats_pd,
    pd.DataFrame.from_dict({k: v.reshape(-1) for k, v in df_dict.items()}),
    check_dtype=False,
)
# Test save and load
feats_list.save('tmp/data_module/')
feats_list_1 = FeaturesList.load_from_path('tmp/data_module/')
# Remove the tmp folder
shutil.rmtree('tmp/data_module/')
sk_ohe = skp.OneHotEncoder(sparse_output=False)
sk_minmax = skp.MinMaxScaler()

for feat in feats_list:
    if feat.name in cont_feats:
        assert np.allclose(
            sk_minmax.fit_transform(feat.data),
            feat.transformed_data,
        ), f"Failed at {feat.name}."
    else:
        assert np.allclose(
            sk_ohe.fit_transform(feat.data),
            feat.transformed_data,
        ), f"Failed at {feat.name}"
class relax.docs.CustomizedMarkdownRenderer(sym, name=None, title_level=3)
Display documentation for functions, classes, haiku.module, and BaseParser.

CustomizedMarkdownRenderer is the customized markdown renderer for the ReLax documentation site. We can use it to display documentation for functions, classes, haiku.module, and BaseParser.

We can display documentation for functions:
def validate_config(
    configs: Dict | BaseParser,  # A configuration of the model/data.
    config_cls: BaseParser       # The desired configuration class.
) -> BaseParser:
    """Return a valid configuration object."""
    ...

CustomizedMarkdownRenderer(validate_config)
validate_config

validate_config(configs, config_cls)

Return a valid configuration object.

Parameters:

configs (Dict | BaseParser) – A configuration of the model/data.

config_cls (BaseParser) – The desired configuration class.

Returns:

(BaseParser)
We can display documentation for classes:

class VanillaCF:
    """VanillaCF Explanation of the model."""

    def __init__(
        self,
        configs: Dict | BaseParser = None  # A configuration of the model.
    ): ...

    def generate_cf(
        self,
        x: np.ndarray,      # A data point.
        pred_fn: Callable,  # A prediction function.
    ) -> Array:
        """Generate counterfactuals for the given data point."""
        pass

    __ALL__ = ["generate_cf"]

CustomizedMarkdownRenderer(VanillaCF)
ReLax (Recourse Explanation Library in JAX) is an efficient and scalable benchmarking library for recourse and counterfactual explanations, built on top of JAX. By leveraging language primitives such as vectorization, parallelization, and just-in-time compilation in JAX, ReLax offers massive speed improvements in generating individual (or local) explanations for predictions made by machine learning algorithms.

Some of the key features are as follows:

🏃 Fast and scalable recourse generation.

🚀 Accelerated on CPU, GPU, and TPU.

🪓 Comprehensive set of recourse methods implemented for benchmarking.

👐 Customizable API to enable the building of entire modeling and interpretation pipelines for new recourse algorithms.
Installation
pip install jax-relax
# Or install the latest version of `jax-relax`
pip install git+https://github.com/BirkhoffG/jax-relax.git

To further unleash the power of accelerators (i.e., GPU/TPU), we suggest first installing this library via pip install jax-relax, and then following the steps in the official installation guidelines to install the right version for GPU or TPU.
Dive into ReLax
ReLax is a recourse explanation library for explaining (any) JAX-based ML models. We believe it is important to give users the flexibility to choose how to use ReLax. You can

use only the methods implemented in ReLax (as a recourse-methods library);

build a pipeline using ReLax to define the data module, train ML models, and generate CF explanations (for constructing a recourse benchmarking pipeline).
ReLax as a Recourse Explanation Library
We introduce basic use cases of the methods in ReLax for generating recourse explanations. For more advanced usage of the methods in ReLax, see this tutorial.
from relax.methods import VanillaCF
from relax import DataModule, MLModule, generate_cf_explanations, benchmark_cfs
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
import functools as ft
import jax
Next, we fit an MLP model to this data. Note that this model can be any model implemented in JAX. We use the MLModule in ReLax as an example.
model = MLModule()
model.train((train_xs, train_ys), epochs=10, batch_size=64)
Generating recourse explanations is straightforward: simply call generate_cf of an implemented recourse method to generate one recourse explanation, as sketched below.
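A hedged sketch (pred_fn is assumed to be exposed by MLModule, and test_xs to come from the train/test split above; exact attribute names may differ by version):

vcf = VanillaCF()
cf = vcf.generate_cf(test_xs[0], pred_fn=model.pred_fn)  # one CF for one input
# Vectorize over the whole test set with jax.vmap:
cfs = jax.vmap(ft.partial(vcf.generate_cf, pred_fn=model.pred_fn))(test_xs)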
The above example illustrates the use of the decoupled relax.methods to generate recourse explanations. However, this requires users to write boilerplate code for tasks such as data preprocessing, model training, and generating recourse explanations under feature constraints.

ReLax additionally offers a one-liner framework that streamlines this process and helps users build a standardized pipeline for generating recourse explanations. You can benchmark recourse explanations in three lines of code:
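A hedged sketch using the names imported above (load_data and the keyword pred_fn are assumptions based on their use elsewhere in these docs):

data_module = load_data('dummy')
exps = generate_cf_explanations(VanillaCF(), data_module, pred_fn=model.pred_fn)
benchmark_cfs([exps])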
Epoch 0: 100%|██████████| 191/191 [00:01<00:00, 106.57batch/s, train/train_loss=0.08575804]
from relax.ml_model import MLModule

model = MLModule()
model.train(datamodule, batch_size=128, epochs=1)
configs (dict | BaseParser) – A configuration of the model/dataset.

config_cls (BaseParser) – The desired configuration class.

Returns:

(BaseParser)
We define a configuration object (which inherits from BaseParser) to manage training/model/data configurations. validate_configs ensures that the designated configuration object is returned.
For example, we define a configuration object LearningConfigs:

class LearningConfigs(BaseParser):
    lr: float

A configuration can be a LearningConfigs object, or the raw data in a dictionary.
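A hedged usage sketch (both calls are expected to return a LearningConfigs instance):

configs = validate_configs({'lr': 0.01}, LearningConfigs)              # from a raw dict
configs = validate_configs(LearningConfigs(lr=0.01), LearningConfigs)  # already validated
assert configs.lr == 0.01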
cat_arrays (List[List[str]]) – A list of lists, one per categorical feature, containing that feature's category names.

cat_idx (int) – The index at which the categorical features start.

hard (bool, default=False) – If True, return one-hot vectors; if False, return probabilities normalized via softmax.

Returns:

(jnp.ndarray)
A tabular data point is encoded as x = [\underbrace{x_{0}, x_{1}, \ldots, x_{m}}_{\text{cont features}}, \underbrace{x_{m+1}^{c=1}, \ldots, x_{m+p}^{c=1}}_{\text{cat feature (1)}}, \ldots, \underbrace{x_{k-q}^{c=i}, \ldots, x_{k}^{c=i}}_{\text{cat feature (i)}}]

cat_normalize ensures that the generated CF satisfies the categorical constraints, i.e., \sum_j x^{c=i}_j = 1, x^{c=i}_j > 0, \forall c = 1, \ldots, i.
cat_idx is the index of the first categorical feature. In the above example, cat_idx is m+1.

For example, let's define a valid input data point:
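A hedged sketch, assuming the call signature cat_normalize(cf, cat_arrays, cat_idx, hard) and illustrative category names:

import jax.numpy as jnp

cat_arrays = [['male', 'female'], ['low', 'mid', 'high']]
cat_idx = 2  # continuous features occupy columns 0..1; categorical columns start at 2
x = jnp.array([[0.25, 0.75,        # cont features
                1.0, 0.0,          # cat feature (1): one-hot over 2 categories
                0.0, 1.0, 0.0]])   # cat feature (2): one-hot over 3 categories

# A generated CF may drift off the one-hot simplex; cat_normalize projects it back.
cf = x + 0.1
cf_soft = cat_normalize(cf, cat_arrays, cat_idx, hard=False)  # softmax-normalized
cf_hard = cat_normalize(cf, cat_arrays, cat_idx, hard=True)   # strict one-hot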
class relax.methods.clue.Decoder(sizes, output_size, dropout=0.1)
This is the class from which all layers inherit.

A layer is a callable object that takes as input one or more tensors and that outputs one or more tensors. It involves computation, defined in the call() method, and a state (weight variables). State can be created:

in __init__(), for instance via self.add_weight();

in the optional build() method, which is invoked by the first __call__() to the layer, and supplies the shape(s) of the input(s), which may not have been known at initialization time.

Layers are recursively composable: if you assign a Layer instance as an attribute of another Layer, the outer layer will start tracking the weights created by the inner layer. Nested layers should be instantiated in the __init__() method or build() method.

Users will just instantiate a layer and then treat it as a callable.

Args:

trainable: Boolean, whether the layer's variables should be trainable.

name: String name of the layer.

dtype: The dtype of the layer's computations and weights. Can also be a keras.DTypePolicy, which allows the computation and weight dtype to differ. Defaults to None. None means to use keras.config.dtype_policy(), which is a float32 policy unless set to a different value (via keras.config.set_dtype_policy()).

Attributes:

name: The name of the layer (string).

dtype: Dtype of the layer's weights. Alias of layer.variable_dtype.

variable_dtype: Dtype of the layer's weights.

compute_dtype: The dtype of the layer's computations. Layers automatically cast inputs to this dtype, which causes the computations and output to also be in this dtype. When mixed precision is used with a keras.DTypePolicy, this will be different than variable_dtype.

trainable_weights: List of variables to be included in backprop.

non_trainable_weights: List of variables that should not be included in backprop.

weights: The concatenation of the lists trainable_weights and non_trainable_weights (in this order).

trainable: Whether the layer should be trained (boolean), i.e. whether its potentially-trainable weights should be returned as part of layer.trainable_weights.

input_spec: Optional (list of) InputSpec object(s) specifying the constraints on inputs that can be accepted by the layer.
We recommend that descendants of Layer implement the following methods:

__init__(): Defines custom layer attributes, and creates layer weights that do not depend on input shapes, using add_weight(), or other state.

build(self, input_shape): This method can be used to create weights that depend on the shape(s) of the input(s), using add_weight(), or other state. __call__() will automatically build the layer (if it has not been built yet) by calling build().

call(self, *args, **kwargs): Called in __call__ after making sure build() has been called. call() performs the logic of applying the layer to the input arguments. Two reserved keyword arguments you can optionally use in call() are: 1. training (boolean, whether the call is in inference mode or training mode). 2. mask (boolean tensor encoding masked timesteps in the input, used e.g. in RNN layers). A typical signature for this method is call(self, inputs); users can optionally add training and mask if the layer needs them.

get_config(self): Returns a dictionary containing the configuration used to initialize this layer. If the keys differ from the arguments in __init__(), then override from_config(self) as well. This method is used when saving the layer or a model that contains this layer.
Examples:

Here's a basic example: a layer with two variables, w and b, that returns y = w . x + b. It shows how to implement build() and call(). Variables set as attributes of a layer are tracked as weights of the layer (in layer.weights).
class SimpleDense(Layer):
    def __init__(self, units=32):
        super().__init__()
        self.units = units

    # Create the state of the layer (weights)
    def build(self, input_shape):
        self.kernel = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="glorot_uniform",
            trainable=True,
            name="kernel",
        )
        self.bias = self.add_weight(
            shape=(self.units,),
            initializer="zeros",
            trainable=True,
            name="bias",
        )

    # Defines the computation
    def call(self, inputs):
        return ops.matmul(inputs, self.kernel) + self.bias

# Instantiates the layer.
linear_layer = SimpleDense(4)

# This will also call `build(input_shape)` and create the weights.
y = linear_layer(ops.ones((2, 2)))
assert len(linear_layer.weights) == 2

# These weights are trainable, so they're listed in `trainable_weights`:
assert len(linear_layer.trainable_weights) == 2
Besides trainable weights, which are updated via backpropagation during training, layers can also have non-trainable weights. These weights are meant to be updated manually during call(). Here's an example layer that computes the running sum of its inputs:
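A sketch of such a layer, following the pattern above (assumes the same Layer and ops imports as the SimpleDense example):

class ComputeSum(Layer):
    def __init__(self, input_dim):
        super().__init__()
        # A non-trainable weight that accumulates the running sum.
        self.total = self.add_weight(
            shape=(),
            initializer="zeros",
            trainable=False,
            name="total",
        )

    def call(self, inputs):
        self.total.assign(self.total + ops.sum(inputs))
        return self.total

my_sum = ComputeSum(2)
y = my_sum(ops.ones((2, 2)))  # running total is now 4.0
assert my_sum.trainable_weights == []
assert my_sum.non_trainable_weights == [my_sum.total]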
class relax.methods.clue.Encoder(sizes, dropout=0.1)
vae_model = VAEGaussCat()
vae_model.compile(optimizer=keras.optimizers.Adam(0.001), loss=None)
dm = load_data('dummy')
xs, _ = dm['train']
history = vae_model.fit(
    xs, xs,
    batch_size=64,
    epochs=2,
    verbose=0  # Set to 1 for training progress
)
assert history.history['loss'][0] > history.history['loss'][-1]
The first gradient update optimizes for predictive accuracy: \theta^{(1)} = \theta^{(0)} - \nabla_{\theta^{(0)}} (\lambda_1 \cdot \mathcal{L}_1).

The second gradient update optimizes for generating CF explanations: \theta^{(2)}_g = \theta^{(1)}_g - \nabla_{\theta^{(1)}_g} (\lambda_2 \cdot \mathcal{L}_2 + \lambda_3 \cdot \mathcal{L}_3).
This two-stage procedure is chosen because it improves the convergence of the model and the adversarial robustness of the predictor network. The CounterNet paper elaborates on these design choices.
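The two updates can be sketched in JAX as follows (a hedged illustration with stand-in quadratic losses, not the paper's actual objectives; the predictor/generator parameter split is simplified to two arrays):

import jax
import jax.numpy as jnp

def loss_1(pred_p, batch):         # stand-in predictive loss
    return jnp.sum((batch @ pred_p) ** 2)

def loss_2(pred_p, gen_p, batch):  # stand-in CF validity loss
    return jnp.sum((batch @ gen_p) ** 2)

def loss_3(gen_p, batch):          # stand-in CF proximity loss
    return jnp.sum(gen_p ** 2)

def two_stage_step(pred_p, gen_p, batch, lr=0.01, lambdas=(1.0, 0.2, 0.1)):
    l1, l2, l3 = lambdas
    # Stage 1: update for predictive accuracy.
    g_pred = jax.grad(lambda p: l1 * loss_1(p, batch))(pred_p)
    pred_p = pred_p - lr * g_pred
    # Stage 2: update only the CF-generator parameters.
    g_gen = jax.grad(lambda g: l2 * loss_2(pred_p, g, batch) + l3 * loss_3(g, batch))(gen_p)
    gen_p = gen_p - lr * g_gen
    return pred_p, gen_p

pred_p, gen_p = two_stage_step(jnp.ones((4, 1)), jnp.ones((4, 1)), jnp.ones((8, 4)))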
Our JAX-based implementation is ~500X faster than DiCE's PyTorch implementation.
torch_res = dpp_style_torch(cfs_tensor)

318 ms ± 4.24 ms per loop (mean ± std. dev. of 5 runs, 1 loop each)
jax_res = dpp_style_vmap(cfs)

571 µs ± 44.4 µs per loop (mean ± std. dev. of 7 runs, 50 loops each)
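For reference, such a DPP-style diversity term can be vectorized with nested jax.vmap (a hedged sketch following DiCE's formulation, K[i, j] = 1 / (1 + dist(cf_i, cf_j)); dpp_style_vmap's actual implementation may differ):

import jax
import jax.numpy as jnp

def dpp_style_sketch(cfs):
    def sim(x, y):
        # Kernel entry: 1 / (1 + ||cf_i - cf_j||_1)
        return 1.0 / (1.0 + jnp.linalg.norm(x - y, ord=1))
    K = jax.vmap(lambda x: jax.vmap(lambda y: sim(x, y))(cfs))(cfs)
    return jnp.linalg.det(K)

diversity = dpp_style_sketch(jnp.arange(15.0).reshape(5, 3))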