Note that the backbone and activations models are not created with keras.Input objects, but with the tensors that originate from keras.Input objects. Under the hood, the layers and weights will be shared across these models, so that users can train the full_model and use backbone or activations for feature extraction. The inputs and outputs of the model can be nested structures of tensors as well, and the created models are standard Functional API models that support all the existing APIs.
+
+
+
By subclassing the Model class
+
In that case, you should define your layers in __init__() and you should implement the model’s forward pass in call().
If you subclass Model, you can optionally have a training argument (boolean) in call(), which you can use to specify a different behavior in training and inference:
Once the model is created, you can configure the model with losses and metrics with model.compile(), train the model with model.fit(), or use the model to do prediction with model.predict().
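For instance, a minimal sketch of a subclassed model whose call() takes a training flag (the layer sizes and names are illustrative, not part of ReLax):

import keras
from keras import layers

class MyClassifier(keras.Model):
    def __init__(self, num_classes=2):
        super().__init__()
        self.dense1 = layers.Dense(32, activation="relu")
        self.dropout = layers.Dropout(0.5)
        self.dense2 = layers.Dense(num_classes, activation="softmax")

    def call(self, inputs, training=False):
        x = self.dense1(inputs)
        # Dropout is only active when training=True.
        x = self.dropout(x, training=training)
        return self.dense2(x)

model = MyClassifier()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")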
+
+
+
With the Sequential class
+
In addition, keras.Sequential is a special case of model where the model is purely a stack of single-input, single-output layers.
Prepare the CF module. It will hook up the data module and its apply functions via the init_apply_fns method (e.g., apply_constraints_fn and compute_reg_loss_fn). Next, it will train the model if cf_module is a ParametricCFModule. Finally, it will call the before_generate_cf method.
Prepare the predictive function for the CF module. If cf_module defines its own pred_fn, it is used regardless of the pred_fn argument. Otherwise, if pred_fn is provided, it is used. If neither is available, a model is trained to obtain a predictive function.
/tmp/ipykernel_5475/4129963786.py:17: DeprecationWarning: Argument `data` is deprecated. Use `data_module` instead.
+ warnings.warn(
+
+
+
+
cfnet = CounterNet()
cfnet.train(dm, epochs=1)
# Test cases for checking if ParametricCFModule is trained twice.
# If it is trained twice, cfs will be different.
cfs = jax.vmap(cfnet.generate_cf)(dm.xs)
assert cfnet.is_trained == True
exp = generate_cf_explanations(cfnet, dm)
assert np.allclose(einops.rearrange(exp.cfs, 'N 1 K -> N K'), cfs)
+
+
/home/birk/miniconda3/envs/dev/lib/python3.10/site-packages/relax/legacy/ckpt_manager.py:47: UserWarning: `monitor_metrics` is not specified in `CheckpointManager`. No checkpoints will be stored.
+ warnings.warn(
+Epoch 0: 100%|██████████| 191/191 [00:08<00:00, 22.21batch/s, train/train_loss_1=0.06329722, train/train_loss_2=0.07011371, train/train_loss_3=0.101814255]
+/tmp/ipykernel_5475/4129963786.py:17: DeprecationWarning: Argument `data` is deprecated. Use `data_module` instead.
+ warnings.warn(
def test_set_transformations(transformation, correct_shape):
    T = transformation
    feats_list_2 = deepcopy(feats_list)
    feats_list_2.set_transformations({
        feat: T for feat in cat_feats
    })
    assert feats_list_2.transformed_data.shape == correct_shape
    name = T.name if isinstance(T, BaseTransformation) else T

    for feat in feats_list_2:
        if feat.name in cat_feats:
            assert feat.transformation.name == name
            assert feat.is_categorical
        else:
            assert feat.transformation.name == 'minmax'
            assert feat.is_categorical is False
            assert feat.is_immutable is False

    x = jax.random.uniform(jax.random.PRNGKey(0), shape=(100, correct_shape[-1]))
    _ = feats_list_2.apply_constraints(feats_list_2.transformed_data[:100], x, hard=False)
    _ = feats_list_2.apply_constraints(feats_list_2.transformed_data[:100], x, hard=True)
+
+
+
test_set_transformations('ordinal', (32561, 8))
test_set_transformations('ohe', (32561, 29))
test_set_transformations('gumbel', (32561, 29))
# TODO: [bug] raise error when set_transformations is called with
# SoftmaxTransformation() or GumbelSoftmaxTransformation(),
# instead of "ohe" or "gumbel".
test_set_transformations(SoftmaxTransformation(), (32561, 29))
test_set_transformations(GumbelSoftmaxTransformation(), (32561, 29))
+
+
+
# Test transform and inverse_transform
# Convert df to dict[str, np.ndarray]
df_dict = {k: np.array(v).reshape(-1, 1) for k, v in df.iloc[:, :-1].to_dict(orient='list').items()}
# feats_list.transform(df_dict) should be the same as feats_list.transformed_data
transformed_data = feats_list.transform(df_dict)
assert np.equal(feats_list.transformed_data, transformed_data).all()
# feats_list.inverse_transform(transformed_data) should be the same as df_dict
inverse_transformed_data = feats_list.inverse_transform(transformed_data)
pd.testing.assert_frame_equal(
    pd.DataFrame.from_dict({k: v.reshape(-1) for k, v in inverse_transformed_data.items()}),
    pd.DataFrame.from_dict({k: v.reshape(-1) for k, v in df_dict.items()}),
    check_dtype=False, check_exact=False,
)

# Test `to_pandas`
feats_pd = feats_list.to_pandas()
pd.testing.assert_frame_equal(
    feats_pd,
    pd.DataFrame.from_dict({k: v.reshape(-1) for k, v in df_dict.items()}),
    check_dtype=False,
)
+
+
+
# Test save and load
feats_list.save('tmp/data_module/')
feats_list_1 = FeaturesList.load_from_path('tmp/data_module/')
# remove tmp folder
shutil.rmtree('tmp/data_module/')
+
+
+
sk_ohe = skp.OneHotEncoder(sparse_output=False)
sk_minmax = skp.MinMaxScaler()

# for feat in feats_list.features:
for feat in feats_list:
    if feat.name in cont_feats:
        assert np.allclose(
            sk_minmax.fit_transform(feat.data),
            feat.transformed_data,
        ), f"Failed at {feat.name}. "
    else:
        assert np.allclose(
            sk_ohe.fit_transform(feat.data),
            feat.transformed_data,
        ), f"Failed at {feat.name}"
MinMaxScaler only supports scaling a single feature.
+
+
xs = xs.reshape(50, 2)
scaler = MinMaxScaler()
test_fail(lambda: scaler.fit_transform(xs),
          contains="`MinMaxScaler` only supports array with a single feature")
+
+
Convert to a dictionary (or the pytree representations).
class relax.docs.CustomizedMarkdownRenderer(sym, name=None, title_level=3)
+
+
Display documentation for functions, classes, haiku.module, and BaseParser.
+
CustomizedMarkdownRenderer is the customized markdown renderer for the ReLax documentation site. We can use it to display documentation for functions, classes, haiku.module, and BaseParser.
+
We can display documentation for functions:
+
+
def validate_config(
    configs: Dict | BaseParser,  # A configuration of the model/data.
    config_cls: BaseParser  # The desired configuration class.
) -> BaseParser:
    """Return a valid configuration object."""
    ...

CustomizedMarkdownRenderer(validate_config)
+
+
+validate_config
+
+
+
validate_config(configs, config_cls)
+
+
Return a valid configuration object.
+
+
Parameters:
+
+
configs (Dict | BaseParser) – A configuration of the model/data.
+
config_cls (BaseParser) – The desired configuration class.
+
+
+
+
Returns:
+
(BaseParser)
+
+
+
+
We can display documentation for classes:
+
+
class VanillaCF:
    """VanillaCF Explanation of the model."""

    def __init__(
        self,
        configs: Dict | BaseParser = None  # A configuration of the model.
    ): ...

    def generate_cf(
        self,
        x: np.ndarray,  # A data point.
        pred_fn: Callable,  # A prediction function.
    ) -> Array:
        """Generate counterfactuals for the given data point."""
        pass

    __ALL__ = ["generate_cf"]

CustomizedMarkdownRenderer(VanillaCF)
ReLax (Recourse Explanation Library in Jax) is an efficient and scalable benchmarking library for recourse and counterfactual explanations, built on top of jax. By leveraging language primitives such as vectorization, parallelization, and just-in-time compilation in jax, ReLax offers massive speed improvements in generating individual (or local) explanations for predictions made by Machine Learning algorithms.
+
Some of the key features are as follows:
+
+
🏃 Fast and scalable recourse generation.
+
🚀 Accelerated on CPU, GPU, and TPU.
+
🪓 Comprehensive set of recourse methods implemented for benchmarking.
+
👐 Customizable API to enable the building of entire modeling and interpretation pipelines for new recourse algorithms.
+
+
+
+
Installation
+
pip install jax-relax
# Or install the latest version of `jax-relax`
pip install git+https://github.com/BirkhoffG/jax-relax.git
+
To further unleash the power of accelerators (i.e., GPU/TPU), we suggest first installing this library via pip install jax-relax. Then, follow the steps in the official installation guide to install the right version for GPU or TPU.
+
+
+
Dive into ReLax
+
ReLax is a recourse explanation library for explaining (any) JAX-based ML models. We believe that it is important to give users the flexibility to choose how to use ReLax. You can:
+
+
only use methods implemented in ReLax (as a recourse methods library);
+
build a pipeline using ReLax to define the data module, train ML models, and generate CF explanations (for constructing a recourse benchmarking pipeline).
+
+
+
ReLax as a Recourse Explanation Library
+
We introduce basic use cases of the methods in ReLax to generate recourse explanations. For more advanced usage of the methods in ReLax, see this tutorial.
+
+
from relax.methods import VanillaCF
from relax import DataModule, MLModule, generate_cf_explanations, benchmark_cfs
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
import functools as ft
import jax
Next, we fit an MLP model for this data. Note that this model can be any model implemented in JAX. We will use the MLModule in ReLax as an example.
+
+
model = MLModule()
model.train((train_xs, train_ys), epochs=10, batch_size=64)
+
+
Generating recourse explanations is straightforward. We can simply call generate_cf of an implemented recourse method to generate one recourse explanation:
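For instance, a sketch using the model trained above (here test_xs denotes held-out instances from train_test_split and model.pred_fn is the model's prediction function; both names are assumptions for illustration):

vcf = VanillaCF()
# Generate a recourse explanation for a single input of shape `(K,)`.
cf = vcf.generate_cf(test_xs[0], pred_fn=model.pred_fn)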
The above example illustrates the usage of the decoupled relax.methods to generate recourse explanations. However, users are required to write boilerplate code for tasks such as data preprocessing, model training, and generating recourse explanations with feature constraints.
+
ReLax additionally offers a one-liner framework, streamlining the process and helping users in building a standardized pipeline for generating recourse explanations. You can write three lines of code to benchmark recourse explanations:
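A sketch of this pipeline, reusing the imports and the model trained above (data_module stands for a prepared DataModule and is an assumption for illustration):

data_module = ...  # a prepared `DataModule` wrapping the same dataset
exps = generate_cf_explanations(VanillaCF(), data_module, pred_fn=model.pred_fn)
benchmark_cfs([exps])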
/home/birk/code/jax-relax/relax/legacy/ckpt_manager.py:47: UserWarning: `monitor_metrics` is not specified in `CheckpointManager`. No checkpoints will be stored.
+ warnings.warn(
+Epoch 0: 100%|██████████| 191/191 [00:01<00:00, 106.57batch/s, train/train_loss=0.08575804]
+
+
+
+
from relax.ml_model import MLModule
+
+
+
model = MLModule()
model.train(datamodule, batch_size=128, epochs=1)
configs (dict | BaseParser) – A configuration of the model/dataset.
+
config_cls (BaseParser) – The desired configuration class.
+
+
+
+
Returns:
+
(BaseParser)
+
+
We define a configuration object (which inherits from BaseParser) to manage training/model/data configurations. validate_configs ensures that the designated configuration object is returned.
+
For example, we define a configuration object LearningConfigs:
+
+
class LearningConfigs(BaseParser):
    lr: float
+
+
A configuration can be a LearningConfigs object, or the raw data in a dictionary.
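For instance, a minimal sketch (the learning-rate value is arbitrary); both calls below return a LearningConfigs instance:

configs_from_obj = validate_configs(LearningConfigs(lr=0.01), LearningConfigs)
configs_from_dict = validate_configs({'lr': 0.01}, LearningConfigs)
assert isinstance(configs_from_dict, LearningConfigs)
assert configs_from_dict.lr == 0.01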
cat_arrays (List[List[str]]) – A list of lists containing the names of each categorical feature
+
cat_idx (int) – Index of the first categorical feature
+
hard (bool, default=False) – If True, return one-hot vectors; if False, return probabilities normalized via softmax
+
+
+
+
Returns:
+
(jnp.ndarray)
+
+
A tabular data point is encoded as x = [\underbrace{x_{0}, x_{1}, ..., x_{m}}_{\text{cont features}}, \underbrace{x_{m+1}^{c=1}, ..., x_{m+p}^{c=1}}_{\text{cat feature (1)}}, ..., \underbrace{x_{k-q}^{c=i}, ..., x_{k}^{c=i}}_{\text{cat feature (i)}}]
+
cat_normalize ensures that the generated CF satisfies the categorical constraints, i.e., \sum_j x^{c=i}_j = 1, \; x^{c=i}_j > 0, \; \forall c = 1, ..., i.
+
cat_idx is the index of the first categorical feature. In the above example, cat_idx is m+1.
+
For example, let’s define a valid input data point:
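A sketch (assuming the call cat_normalize(cf, cat_arrays, cat_idx, hard=...) as described by the parameters above; the feature values are made up): two continuous features followed by one one-hot-encoded categorical feature over ["male", "female"], so cat_idx = 2.

import jax.numpy as jnp

cat_arrays = [["male", "female"]]
cat_idx = 2
# A valid data point: the categorical block is one-hot.
x = jnp.array([[0.3, 0.5, 1.0, 0.0]])
# A perturbed counterfactual whose categorical block no longer sums to one.
cf = jnp.array([[0.2, 0.7, 0.6, 0.7]])
cf_normalized = cat_normalize(cf, cat_arrays, cat_idx, hard=True)
assert jnp.isclose(cf_normalized[:, cat_idx:].sum(), 1.0)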
Note that the backbone and activations models are not created with keras.Input objects, but with the tensors that originate from keras.Input objects. Under the hood, the layers and weights will be shared across these models, so that users can train the full_model and use backbone or activations for feature extraction. The inputs and outputs of the model can be nested structures of tensors as well, and the created models are standard Functional API models that support all the existing APIs.
+
+
+
By subclassing the Model class
+
In that case, you should define your layers in __init__() and you should implement the model’s forward pass in call().
If you subclass Model, you can optionally have a training argument (boolean) in call(), which you can use to specify a different behavior in training and inference:
Once the model is created, you can configure the model with losses and metrics with model.compile(), train the model with model.fit(), or use the model to do prediction with model.predict().
+
+
+
With the Sequential class
+
In addition, keras.Sequential is a special case of model where the model is purely a stack of single-input, single-output layers.
/home/birk/code/jax-relax/relax/data_module.py:234: UserWarning: Passing `config` will have no effect.
+ warnings.warn("Passing `config` will have no effect.")
class relax.methods.clue.Decoder(sizes, output_size, dropout=0.1)
+
+
This is the class from which all layers inherit.
+
A layer is a callable object that takes as input one or more tensors and that outputs one or more tensors. It involves computation, defined in the call() method, and a state (weight variables). State can be created:
+
+
in __init__(), for instance via self.add_weight();
+
in the optional build() method, which is invoked by the first __call__() to the layer, and supplies the shape(s) of the input(s), which may not have been known at initialization time.
+
+
Layers are recursively composable: If you assign a Layer instance as an attribute of another Layer, the outer layer will start tracking the weights created by the inner layer. Nested layers should be instantiated in the __init__() method or build() method.
+
Users will just instantiate a layer and then treat it as a callable.
+
Args: trainable: Boolean, whether the layer’s variables should be trainable. name: String name of the layer. dtype: The dtype of the layer’s computations and weights. Can also be a keras.DTypePolicy, which allows the computation and weight dtype to differ. Defaults to None. None means to use keras.config.dtype_policy(), which is a float32 policy unless set to different value (via keras.config.set_dtype_policy()).
+
Attributes: name: The name of the layer (string). dtype: Dtype of the layer’s weights. Alias of layer.variable_dtype. variable_dtype: Dtype of the layer’s weights. compute_dtype: The dtype of the layer’s computations. Layers automatically cast inputs to this dtype, which causes the computations and output to also be in this dtype. When mixed precision is used with a keras.DTypePolicy, this will be different than variable_dtype. trainable_weights: List of variables to be included in backprop. non_trainable_weights: List of variables that should not be included in backprop. weights: The concatenation of the lists trainable_weights and non_trainable_weights (in this order). trainable: Whether the layer should be trained (boolean), i.e. whether its potentially-trainable weights should be returned as part of layer.trainable_weights. input_spec: Optional (list of) InputSpec object(s) specifying the constraints on inputs that can be accepted by the layer.
+
We recommend that descendants of Layer implement the following methods:
+
+
__init__(): Defines custom layer attributes, and creates layer weights that do not depend on input shapes, using add_weight(), or other state.
+
build(self, input_shape): This method can be used to create weights that depend on the shape(s) of the input(s), using add_weight(), or other state. __call__() will automatically build the layer (if it has not been built yet) by calling build().
+
call(self, *args, **kwargs): Called in __call__ after making sure build() has been called. call() performs the logic of applying the layer to the input arguments. Two reserved keyword arguments you can optionally use in call() are: 1. training (boolean, whether the call is in inference mode or training mode). 2. mask (boolean tensor encoding masked timesteps in the input, used e.g. in RNN layers). A typical signature for this method is call(self, inputs), and user could optionally add training and mask if the layer need them.
+
get_config(self): Returns a dictionary containing the configuration used to initialize this layer. If the keys differ from the arguments in __init__(), then override from_config(self) as well. This method is used when saving the layer or a model that contains this layer.
+
+
Examples:
+
Here’s a basic example: a layer with two variables, w and b, that returns y = w . x + b. It shows how to implement build() and call(). Variables set as attributes of a layer are tracked as weights of the layers (in layer.weights).
+
class SimpleDense(Layer):
    def __init__(self, units=32):
        super().__init__()
        self.units = units

    # Create the state of the layer (weights)
    def build(self, input_shape):
        self.kernel = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="glorot_uniform",
            trainable=True,
            name="kernel",
        )
        self.bias = self.add_weight(
            shape=(self.units,),
            initializer="zeros",
            trainable=True,
            name="bias",
        )

    # Defines the computation
    def call(self, inputs):
        return ops.matmul(inputs, self.kernel) + self.bias

# Instantiates the layer.
linear_layer = SimpleDense(4)

# This will also call `build(input_shape)` and create the weights.
y = linear_layer(ops.ones((2, 2)))
assert len(linear_layer.weights) == 2

# These weights are trainable, so they're listed in `trainable_weights`:
assert len(linear_layer.trainable_weights) == 2
+
Besides trainable weights, updated via backpropagation during training, layers can also have non-trainable weights. These weights are meant to be updated manually during call(). Here's an example layer that computes the running sum of its inputs:
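A sketch of such a layer, following the standard Keras pattern (Layer and ops as in the example above):

class ComputeSum(Layer):

    def __init__(self, input_dim):
        super().__init__()
        # Create a non-trainable weight.
        self.total = self.add_weight(
            shape=(),
            initializer="zeros",
            trainable=False,
            name="total",
        )

    def call(self, inputs):
        self.total.assign(self.total + ops.sum(inputs))
        return self.total

my_sum = ComputeSum(2)
x = ops.ones((2, 2))
y = my_sum(x)

assert my_sum.weights == [my_sum.total]
assert my_sum.non_trainable_weights == [my_sum.total]
assert my_sum.trainable_weights == []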
class relax.methods.clue.Encoder(sizes, dropout=0.1)
+
+
This is the class from which all layers inherit.
+
A layer is a callable object that takes as input one or more tensors and that outputs one or more tensors. It involves computation, defined in the call() method, and a state (weight variables). State can be created:
+
+
in __init__(), for instance via self.add_weight();
+
in the optional build() method, which is invoked by the first __call__() to the layer, and supplies the shape(s) of the input(s), which may not have been known at initialization time.
+
+
Layers are recursively composable: If you assign a Layer instance as an attribute of another Layer, the outer layer will start tracking the weights created by the inner layer. Nested layers should be instantiated in the __init__() method or build() method.
+
Users will just instantiate a layer and then treat it as a callable.
+
Args: trainable: Boolean, whether the layer’s variables should be trainable. name: String name of the layer. dtype: The dtype of the layer’s computations and weights. Can also be a keras.DTypePolicy, which allows the computation and weight dtype to differ. Defaults to None. None means to use keras.config.dtype_policy(), which is a float32 policy unless set to different value (via keras.config.set_dtype_policy()).
+
Attributes: name: The name of the layer (string). dtype: Dtype of the layer’s weights. Alias of layer.variable_dtype. variable_dtype: Dtype of the layer’s weights. compute_dtype: The dtype of the layer’s computations. Layers automatically cast inputs to this dtype, which causes the computations and output to also be in this dtype. When mixed precision is used with a keras.DTypePolicy, this will be different than variable_dtype. trainable_weights: List of variables to be included in backprop. non_trainable_weights: List of variables that should not be included in backprop. weights: The concatenation of the lists trainable_weights and non_trainable_weights (in this order). trainable: Whether the layer should be trained (boolean), i.e. whether its potentially-trainable weights should be returned as part of layer.trainable_weights. input_spec: Optional (list of) InputSpec object(s) specifying the constraints on inputs that can be accepted by the layer.
+
We recommend that descendants of Layer implement the following methods:
+
+
__init__(): Defines custom layer attributes, and creates layer weights that do not depend on input shapes, using add_weight(), or other state.
+
build(self, input_shape): This method can be used to create weights that depend on the shape(s) of the input(s), using add_weight(), or other state. __call__() will automatically build the layer (if it has not been built yet) by calling build().
+
call(self, *args, **kwargs): Called in __call__ after making sure build() has been called. call() performs the logic of applying the layer to the input arguments. Two reserved keyword arguments you can optionally use in call() are: 1. training (boolean, whether the call is in inference mode or training mode). 2. mask (boolean tensor encoding masked timesteps in the input, used e.g. in RNN layers). A typical signature for this method is call(self, inputs), and user could optionally add training and mask if the layer need them.
+
get_config(self): Returns a dictionary containing the configuration used to initialize this layer. If the keys differ from the arguments in __init__(), then override from_config(self) as well. This method is used when saving the layer or a model that contains this layer.
+
+
Examples:
+
Here’s a basic example: a layer with two variables, w and b, that returns y = w . x + b. It shows how to implement build() and call(). Variables set as attributes of a layer are tracked as weights of the layers (in layer.weights).
+
class SimpleDense(Layer):
    def __init__(self, units=32):
        super().__init__()
        self.units = units

    # Create the state of the layer (weights)
    def build(self, input_shape):
        self.kernel = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="glorot_uniform",
            trainable=True,
            name="kernel",
        )
        self.bias = self.add_weight(
            shape=(self.units,),
            initializer="zeros",
            trainable=True,
            name="bias",
        )

    # Defines the computation
    def call(self, inputs):
        return ops.matmul(inputs, self.kernel) + self.bias

# Instantiates the layer.
linear_layer = SimpleDense(4)

# This will also call `build(input_shape)` and create the weights.
y = linear_layer(ops.ones((2, 2)))
assert len(linear_layer.weights) == 2

# These weights are trainable, so they're listed in `trainable_weights`:
assert len(linear_layer.trainable_weights) == 2
+
Besides trainable weights, updated via backpropagation during training, layers can also have non-trainable weights. These weights are meant to be updated manually during call(). Here's an example layer that computes the running sum of its inputs:
Note that the backbone and activations models are not created with keras.Input objects, but with the tensors that originate from keras.Input objects. Under the hood, the layers and weights will be shared across these models, so that users can train the full_model and use backbone or activations for feature extraction. The inputs and outputs of the model can be nested structures of tensors as well, and the created models are standard Functional API models that support all the existing APIs.
+
+
+
By subclassing the Model class
+
In that case, you should define your layers in __init__() and you should implement the model’s forward pass in call().
If you subclass Model, you can optionally have a training argument (boolean) in call(), which you can use to specify a different behavior in training and inference:
Once the model is created, you can configure the model with losses and metrics with model.compile(), train the model with model.fit(), or use the model to do prediction with model.predict().
+
+
+
With the Sequential class
+
In addition, keras.Sequential is a special case of model where the model is purely a stack of single-input, single-output layers.
vae_model = VAEGaussCat()
vae_model.compile(optimizer=keras.optimizers.Adam(0.001), loss=None)
dm = load_data('dummy')
xs, _ = dm['train']
history = vae_model.fit(
    xs, xs,
    batch_size=64,
    epochs=2,
    verbose=0  # Set to 1 for training progress
)
assert history.history['loss'][0] > history.history['loss'][-1]
+
+
/home/birk/code/jax-relax/relax/data_module.py:234: UserWarning: Passing `config` will have no effect.
+ warnings.warn("Passing `config` will have no effect.")
/home/birk/code/jax-relax/relax/data_module.py:234: UserWarning: Passing `config` will have no effect.
+ warnings.warn("Passing `config` will have no effect.")
/home/birk/code/jax-relax/relax/data_module.py:234: UserWarning: Passing `config` will have no effect.
+ warnings.warn("Passing `config` will have no effect.")
The first gradient update optimizes for predictive accuracy: \theta^{(1)} = \theta^{(0)} - \nabla_{\theta^{(0)}} (\lambda_1 \cdot \mathcal{L}_1).
+
The second gradient update optimizes for generating CF explanations: \theta^{(2)}_g = \theta^{(1)}_g - \nabla_{\theta^{(1)}_g} (\lambda_2 \cdot \mathcal{L}_2 + \lambda_3 \cdot \mathcal{L}_3).
+
+
This optimization procedure is chosen because it improves the convergence of the model and the adversarial robustness of the predictor network. The CounterNet paper elaborates on these design choices.
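A minimal sketch of this two-stage update in JAX/optax (the loss helpers pred_loss_fn and cf_loss_fn are hypothetical; for brevity both stages update the full parameter set, whereas CounterNet only updates the generator parameters \theta_g in the second stage):

import jax
import optax

def two_stage_step(params, opt_state, batch, optimizer,
                   pred_loss_fn, cf_loss_fn, l1, l2, l3):
    # Stage 1: optimize for predictive accuracy (lambda_1 * L_1).
    grads = jax.grad(lambda p: l1 * pred_loss_fn(p, batch))(params)
    updates, opt_state = optimizer.update(grads, opt_state, params)
    params = optax.apply_updates(params, updates)

    # Stage 2: optimize for CF validity and proximity (lambda_2 * L_2 + lambda_3 * L_3).
    def stage2_loss(p):
        loss_2, loss_3 = cf_loss_fn(p, batch)
        return l2 * loss_2 + l3 * loss_3

    grads = jax.grad(stage2_loss)(params)
    updates, opt_state = optimizer.update(grads, opt_state, params)
    params = optax.apply_updates(params, updates)
    return params, opt_state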
/tmp/ipykernel_11637/3412149913.py:2: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:206.)
+ return torch.from_numpy(x.__array__())
+
+
+
Our jax-based implementation is ~500X faster than DiCE’s pytorch implementation.
+
+
torch_res = dpp_style_torch(cfs_tensor)
+
+
318 ms ± 4.24 ms per loop (mean ± std. dev. of 5 runs, 1 loop each)
+
+
+
+
jax_res = dpp_style_vmap(cfs)
+
+
571 µs ± 44.4 µs per loop (mean ± std. dev. of 7 runs, 50 loops each)
/home/birk/code/jax-relax/relax/data_module.py:234: UserWarning: Passing `config` will have no effect.
+ warnings.warn("Passing `config` will have no effect.")
Note that the backbone and activations models are not created with keras.Input objects, but with the tensors that originate from keras.Input objects. Under the hood, the layers and weights will be shared across these models, so that users can train the full_model and use backbone or activations for feature extraction. The inputs and outputs of the model can be nested structures of tensors as well, and the created models are standard Functional API models that support all the existing APIs.
+
+
+
By subclassing the Model class
+
In that case, you should define your layers in __init__() and you should implement the model’s forward pass in call().
If you subclass Model, you can optionally have a training argument (boolean) in call(), which you can use to specify a different behavior in training and inference:
Once the model is created, you can configure the model with losses and metrics with model.compile(), train the model with model.fit(), or use the model to do prediction with model.predict().
+
+
+
With the Sequential class
+
In addition, keras.Sequential is a special case of model where the model is purely a stack of single-input, single-output layers.
/home/birk/code/jax-relax/relax/data_module.py:234: UserWarning: Passing `config` will have no effect.
+ warnings.warn("Passing `config` will have no effect.")
This library uses nbdev for development. We love the great flexibility offered by Jupyter Notebooks, and nbdev addresses the limitations of using notebooks for developing large-scale projects (e.g., syncing between notebooks and Python modules, and documentation).
+
Here, we only cover the basics of our development procedure. For an in-depth use of nbdev, please refer to the nbdev tutorial. The following links are particularly useful:
Refer to the installation guide for installing ReLax. To run ReLax on CPU for development, you should
+
pip install "jax-relax[dev]"
+
Next, install Quarto for the documentation system. See nbdev docs for more details.
+
nbdev_install_quarto
+
Next, install hooks for cleaning Jupyter Notebooks.
+
nbdev_install_hooks
+
+
+
Write Code in Jupyter Notebook
+
Note that nbdev provides best-practice guidelines for writing code in Jupyter Notebooks. Here, we present some of the most important steps.
+
+
Export Cell to Python Module
+
#| export marks code cells (in the Notebook; .ipynb) to be exported to the Python module (.py). By default, the cell will be exported to the file defined in #| default_exp file_name (usually declared at the top of the notebook).
+
For example, the below function will be exported to the Python module.
+
#| export
def func(args):
    ...
+
We can also specify files to be exported.
+
#| export file_name.py
def func(args):
    ...
+
For private functions/objects, we can use #| exporti. In this way, the code will still be exported to the file, but not included in __all__.
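For instance, a hypothetical private helper:

#| exporti
def _private_helper(args):
    ...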
Two-way Sync between Notebooks (.ipynb) and Python Code (.py)
+
To update code written in Jupyter Notebook to Python Module (i.e., .ipynb -> .py)
+
nbdev_export
+
To sync code updated in Python Module back to Jupyter Notebook (i.e., .py -> .ipynb)
+
nbdev_update
+
+
+
+
+
+
+Warning
+
+
+
+
If you write a new function/object in .py, nbdev_update will not include this function in __all__. The best practice is to write functions/objects in Jupyter Notebook, and debug in Python Module (via IDE).
It is desirable to write some unit tests for each function and object. nbdev recommends writing test cases right after implementing a feature. Any normal cell is treated as a test cell.
+
For example, let’s consider a function which adds up all the inputs:
+
+
def add_numbers(*args):
    return sum(args)
+
+
To test this function, we write unit tests via assert.
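For example, a couple of minimal checks:

assert add_numbers(1, 2, 3) == 6
assert add_numbers() == 0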
Note that all test cases should run quickly. If a cell takes a long time to run (e.g., model training), mark the cell with #| eval: false to skip it.
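For instance, a hypothetical slow training cell can be marked so that it is skipped:

#| eval: false
model.train(datamodule, epochs=100)  # too slow to run during testing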
+
+
+
+
+
Write Documentation in Jupyter Notebooks
+
+
Doc string
+
To write documentation in nbdev, it is recommended to
+
+
use simple type annotations
+
describe each argument with short comments
+
provide code examples and explanations in separate cells
+
+
+
+
+
+
+
+Tip
+
+
+
+
Union typing with the | operator was introduced in Python 3.10. For Python 3.7 - 3.9 users, you should add
+
from __future__ import annotations
+
+
+
+
def validate_configs(
    configs: dict | BaseParser,  # A configuration of the model/data.
    config_cls: BaseParser  # The desired configuration class.
) -> BaseParser:
    """return a valid configuration object."""
    ...
+
nbdev will automatically render the documentation:
+
+
+validate_configs
+
+
+
validate_configs(configs, config_cls)
+
+
return a valid configuration object.
+
+
Parameters:
+
+
configs (dict | BaseParser) – A configuration of the model/data.
+
config_cls (BaseParser) – The desired configuration class.
+
+
+
+
Returns:
+
(BaseParser)
+
+
+
Next, we elaborate on the use of this function with more descriptions and code examples.
+
+
We define a configuration object (which inherits from BaseParser) to manage training/model/data configurations. validate_configs ensures that the designated configuration object is returned.
+
For example, we define a configuration object:
+
+
class LearningConfigs(BaseParser):
    lr: float
+
+
A configuration can be a LearningConfigs object, or the raw data in a dictionary.
Found cached dataset parquet (/home/birk/.cache/huggingface/datasets/birkhoffg___parquet/birkhoffg--folktables-acs-income-bc190711a423bf3e/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec)
/home/birk/mambaforge-pypy3/envs/nbdev2/lib/python3.8/site-packages/sklearn/preprocessing/_encoders.py:868: FutureWarning: `sparse` was renamed to `sparse_output` in version 1.2 and will be removed in 1.4. `sparse_output` is ignored unless you leave `sparse` to its default value.
+ warnings.warn(
+ An end-to-end tutorial which demonstrates key features in ReLax.
+
This tutorial introduces the basics of ReLax and how to use ReLax to generate counterfactual (or recourse) explanations for JAX-based implementations of ML models.
+
In particular, we will cover the following things in this tutorial:
DataModule is a Python class which modularizes tabular dataset loading. DataModule loads a .csv file from a directory by specifying the following attributes:
+
+
data_name is the name of your dataset.
+
data_dir should contain the relative path of the directory where your dataset is located.
+
continous_cols specifies a list of feature names representing all the continuous/numeric features in our dataset.
+
discret_cols specifies a list of feature names representing all discrete features in our dataset. By default, all discrete features are converted via one-hot encoding for training purposes.
+
imutable_cols specifies a list of feature names that represent immutable features that we do not wish to change in the generated recourse.
+
+
+
from relax.data_module import DataModuleConfig, DataModule, load_data
data_config = DataModuleConfig(
    # The name of this dataset is "adult"
    data_name="adult",
    # The data file is located in `../assets/adult/data/data.csv`
    data_dir="../assets/adult/data/data.csv",
    # Contains 2 features with continuous variables
    continous_cols=["age", "hours_per_week"],
    # Contains 6 features with categorical (discrete) variables
    discret_cols=["workclass", "education", "marital_status", "occupation", "race", "gender"],
    # Contains 2 features that we do not wish to change
    imutable_cols=["race", "gender"]
)
To expose the full functionality of the framework, we will train the model using the built-in functions in ReLax, which uses haiku for building neural network blocks. However, the recourse algorithms in ReLax can generate explanations for any JAX-based framework (e.g., flax, haiku, vanilla jax).
+
+
+
+
+
+
+Warning
+
+
+
+
The recourse algorithms in ReLax currently only support binary classification. The output of the classifier must be a probability score (bounded by [0, 1]). Future support for multi-class classification is planned.
+
+
+
Training a classifier using the built-in functions in ReLax is very simple. We first specify the classifier, PredictiveTrainingModule, which defines the model structure and the optimization procedure (e.g., the loss function for optimizing the model). Next, we use train_model to train the model on a TabularDataModule.
+
+
Define the Model
+
+
from relax.ml_model import MLModuleConfig, MLModule
We can directly use module.pred_fn to make predictions.
+
+
pred_fn = module.pred_fn
+
+
+
+
+
Generate Counterfactual Explanations
+
Now, it is time to use ReLax to generate counterfactual explanations (or recourse).
+
+
from relax.methods import VanillaCF, VanillaCFConfig
+
+
We use VanillaCF (a very popular recourse generation algorithm) as an example for this tutorial. Defining VanillaCF is similar to defining TabularDataModule and PredictiveTrainingModule.
+
+
cf_config = VanillaCFConfig(
    n_steps=1000,  # Number of steps
    lr=0.001  # Learning rate
)
cf_exp = VanillaCF(cf_config)
+
+
Generate counterfactual examples.
+
+
from relax.explain import generate_cf_explanations
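A sketch of the call, reusing cf_exp and pred_fn defined above (datamodule stands for the DataModule built from data_config earlier; the keyword usage mirrors the benchmarking examples elsewhere in the docs):

cf_results = generate_cf_explanations(
    cf_exp,       # the `VanillaCF` instance defined above
    datamodule,   # the `DataModule` defined above
    pred_fn=pred_fn,
)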
ReLax supports various platforms, thanks to efforts from developers of JAX. See the compatible platforms of JAX.
+
+
+
+
Installing ReLax
+
This section assumes that you are an end-user of ReLax, e.g., you only want to use this library for your own development without modifying the ReLax codebase.
+
ReLax is built on top of JAX. You should also check the official installation guide from the Jax team.
+
+
Prerequisite: Set up your python environment
+
We suggest creating a new environment when using ReLax.
+
If you are using conda, you can create a new environment by:
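For example (the environment name and Python version below are only suggestions):

conda create -n relax python=3.10
conda activate relax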
If you wish to run ReLax on GPU or TPU, please first install this library via pypi. Next, you should install the right GPU or TPU version of JAX by following steps in the install guidelines.
+
For example, if you want to install a GPU version, you should run
+
pip install jax-relax
+
Next, install the GPU version of jax:
+
pip install -U "jax[cuda12]"
+
+
+
+
+
+
+Warning
+
+
+
+
We do not run continuous integration (CI) for GPU and TPU environments. If you encounter issues when running on GPU/TPU, please report to us.
+
+
+
+
+
+
If you are a Contributor of ReLax…
+
You will need to install additional packages if you want to fork and make changes to the library.
ReLax contains implementations of various recourse methods, which are decoupled from the rest of the ReLax library. We give users flexibility on how to use ReLax:
+
+
You can use the recourse pipeline in ReLax (a "one-liner" for easily benchmarking recourse methods; see this tutorial).
+
You can use all of the recourse methods in ReLax without relying on the entire pipeline of ReLax.
+
+
In this tutorial, we uncover the possibility of the second option by using recourse methods under relax.methods for debugging, diagnosing, and interpreting your JAX models.
+
+
Types of Recourse Methods
+
+
Non-parametric methods: These methods do not rely on any learned parameters. They generate counterfactuals solely based on the model's predictions and gradients. Examples in ReLax include VanillaCF, DiverseCF, and GrowingSphere. These methods inherit from CFModule.
+
Semi-parametric methods: These methods learn some parameters to aid in counterfactual generation, but do not learn a full counterfactual generation model. Examples in ReLax include ProtoCF, CCHVAE and CLUE. These methods inherit from ParametricCFModule.
+
Parametric methods: These methods learn a full parametric model for counterfactual generation. The model is trained to generate counterfactuals that fool the model. Examples in ReLax include CounterNet and VAECF. These methods inherit from ParametricCFModule.
At a high level, you can use the implemented methods in ReLax to generate one recourse explanation via three lines of code:
+
from relax.methods import VanillaCF

vcf = VanillaCF()
# x is one data point. Shape: `(K)` or `(1, K)`
cf = vcf.generate_cf(x, pred_fn=pred_fn)
+
Or generate a batch of recourse explanations via the jax.vmap primitive:
+
...
import functools as ft

vcf_gen_fn = ft.partial(vcf.generate_cf, pred_fn=pred_fn)
# xs is batched data. Shape: `(N, K)`
cfs = jax.vmap(vcf_gen_fn)(xs)
+
To use parametric and semi-parametric methods, you can first train the model by calling ParametricCF.train, and then generate recourse explanations. Here is an example of using ReLax for CCHVAE.
+
from relax.methods import CCHVAE

cchvae = CCHVAE()
cchvae.train(train_data)  # Train CVAE before generation
cf = cchvae.generate_cf(x, pred_fn=pred_fn)
+
Or generate a batch of recourse explanations via the jax.vmap primitive:
+
...
import functools as ft

cchvae_gen_fn = ft.partial(cchvae.generate_cf, pred_fn=pred_fn)
cfs = jax.vmap(cchvae_gen_fn)(xs)  # Generate counterfactuals
+
+
+
Config Recourse Methods
+
Each recourse method in ReLax has an associated Config class that defines the set of supported configuration parameters. To configure a method, import and instantiate its Config class and pass it as the config parameter.
Each Config class inherits from a BaseConfig that defines common options like n_steps. Method-specific parameters are defined on the individual Config classes.
+
See the documentation for each recourse method for details on its supported configuration parameters. The Config class for a method can be imported from relax.methods.[method_name].
+
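For example, mirroring the tutorial above (the values are illustrative):

from relax.methods import VanillaCF, VanillaCFConfig

config = VanillaCFConfig(n_steps=100, lambda_=0.1, lr=0.1)
vcf = VanillaCF(config)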
Alternatively, we can also specify this config via a dictionary.
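For instance (the values match the settings described below):

cf_config = {
    'n_steps': 100,
    'lambda_': 0.1,
    'lr': 0.1,
}
vcf = VanillaCF(cf_config)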
This config dictionary is passed to VanillaCF’s init method, which will set the specified parameters. Now our VanillaCF instance is configured to:
+
+
Run 100 optimization steps (n_steps=100)
+
Use 0.1 validity regularization for counterfactuals (lambda_=0.1)
+
Use a learning rate of 0.1 for optimization (lr=0.1)
+
+
+
+
Implement your Own Recourse Methods
+
You can easily implement your own recourse methods and leverage jax-relax to scale up recourse generation. In this section, we implement a mock "recourse method", which adds random perturbations to the input x.
First, we define a configuration class for the random counterfactual module. This class inherits from the BaseConfig class.
+
+
class RandomCFConfig(BaseConfig):
    max_perturb: float = 0.2  # Max perturbation allowed for RandomCF
+
+
Next, we define the random counterfactual module. This class inherits from the CFModule class. Importantly, you should override CFModule.generate_cf and implement your CF generation procedure for each input (i.e., shape=(k,), where k is the number of features).
+
+
class RandomCF(CFModule):

    def __init__(
        self,
        config: dict | RandomCFConfig = None,
        name: str = None,
    ):
        if config is None:
            config = RandomCFConfig()
        config = validate_configs(config, RandomCFConfig)
        name = "RandomCF" if name is None else name
        super().__init__(config, name=name)

    @auto_reshaping('x')
    def generate_cf(
        self,
        x: Array,  # Input data point
        pred_fn: Callable = None,  # Prediction function
        y_target: Array = None,  # Target label
        rng_key: jrand.PRNGKey = None,  # Random key
        **kwargs,
    ) -> Array:
        # Generate random perturbations in the range of [-max_perturb, max_perturb].
        x_cf = x + jrand.uniform(rng_key, x.shape,
                                 minval=-self.config.max_perturb,
                                 maxval=self.config.max_perturb)
        return x_cf
+
+
Finally, you can easily use jax-relax to generate recourse explanations at scale.
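For example, a sketch that batches RandomCF.generate_cf with jax.vmap (pred_fn and the batch xs are assumed to come from the data module and model used earlier; the PRNG key is split so each input gets its own randomness):

rand_cf = RandomCF()
keys = jrand.split(jrand.PRNGKey(0), xs.shape[0])
cfs = jax.vmap(
    lambda x, key: rand_cf.generate_cf(x, pred_fn=pred_fn, rng_key=key)
)(xs, keys)
assert cfs.shape == xs.shape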
configs (dict | BaseParser) – A configuration of the model/dataset.
+
config_cls (BaseParser) – The desired configuration class.
+
+
+
+
Returns:
+
(BaseParser)
+
+
We define a configuration object (which inherits from BaseParser) to manage training/model/data configurations. validate_configs ensures that the designated configuration object is returned.
+
For example, we define a configuration object LearningConfigs:
+
+
class LearningConfigs(BaseParser):
    lr: float
+
+
A configuration can be a LearningConfigs object, or the raw data in a dictionary.
Decorator that automatically reshapes the function's input into shape (1, k) and reshapes the output back to the input's original shape.
+
+
Parameters:
+
+
reshape_argname (str) – The name of the argument to be reshaped.
+
reshape_output (bool, default=True) – Whether to reshape the output. Useful to set False when returning multiple cfs.
+
+
+
This decorator ensures that the specified input argument and the output of a function have the same shape. This is particularly useful when using jax.vmap.
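A minimal sketch of its behavior (assuming auto_reshaping is importable, e.g., from relax.utils):

import jax.numpy as jnp

@auto_reshaping('x')
def identity_cf(x):
    # Inside the decorated function, `x` always has shape (1, k).
    assert x.ndim == 2 and x.shape[0] == 1
    return x

out = identity_cf(jnp.ones(5))  # pass a 1-D input of shape (5,)
assert out.shape == (5,)        # the output is reshaped back to the input's shape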