Commit 1f5e503 (parent e41485d)

Github action: new release.

github-actions[bot] committed Nov 12, 2024

Showing 366 changed files with 12,904 additions and 6,192 deletions.
@@ -0,0 +1,233 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Speeding up PARAFAC2 with SVD compression\n\nPARAFAC2 can be very time-consuming to fit. However, if the number of rows greatly\nexceeds the number of columns or the data matrices are approximately low-rank, we can\ncompress the data before fitting the PARAFAC2 model to considerably speed up the fitting\nprocedure.\n\nThe compression works by first computing the SVD of the tensor slices and fitting the\nPARAFAC2 model to the right singular vectors multiplied by the singular values. Then,\nafter we fit the model, we left-multiply the $B_i$-matrices with the left singular\nvectors to recover the decompressed model. Fitting to compressed data and then\ndecompressing is mathematically equivalent to fitting to the original uncompressed data.\n\nFor more information about why this works, see the documentation of\n:py:meth:`tensorly.decomposition.preprocessing.svd_compress_tensor_slices`.\n"
]
},
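{
"cell_type": "markdown",
"metadata": {},
"source": [
"To build intuition, here is a minimal sketch of why this is lossless, using plain\nNumPy rather than the TensorLy helpers (the names below are illustrative only):\nthe thin SVD factors a tall slice exactly, so left-multiplying the compressed\nmatrix $S_i V_i^T$ by $U_i$ recovers the slice.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\n\ndemo_rng = np.random.default_rng(0)\nX_i = demo_rng.standard_normal((10_000, 15))  # one tall tensor slice\n\n# Thin SVD: U_i is 10_000 x 15, s_i has 15 entries, Vt_i is 15 x 15\nU_i, s_i, Vt_i = np.linalg.svd(X_i, full_matrices=False)\ncompressed = s_i[:, None] * Vt_i  # S_i V_i^T: a 15 x 15 matrix\n\n# Left-multiplying by U_i reconstructs the slice exactly (up to round-off),\n# which is why fitting PARAFAC2 to the compressed matrix loses nothing\nassert np.allclose(U_i @ compressed, X_i)"
]
},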
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from time import monotonic\nimport tensorly as tl\nfrom tensorly.decomposition import parafac2\nimport tensorly.preprocessing as preprocessing"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Function to create synthetic data\n\nHere, we create a function that constructs a random tensor from a PARAFAC2\ndecomposition with noise\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"rng = tl.check_random_state(0)\n\n\ndef create_random_data(shape, rank, noise_level):\n I, J, K = shape # noqa: E741\n pf2 = tl.random.random_parafac2(\n [(J, K) for i in range(I)], rank=rank, random_state=rng\n )\n\n X = pf2.to_tensor()\n X_norm = [tl.norm(Xi) for Xi in X]\n\n noise = [rng.standard_normal((J, K)) for i in range(I)]\n noise = [noise_level * X_norm[i] / tl.norm(E_i) for i, E_i in enumerate(noise)]\n return [X_i + E_i for X_i, E_i in zip(X, noise)]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Compressing data with many rows and few columns\n\nHere, we set up for a case where we have many rows compared to columns\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"n_inits = 5\nrank = 3\nshape = (10, 10_000, 15) # 10 matrices/tensor slices, each of size 10_000 x 15.\nnoise_level = 0.33\n\nuncompressed_data = create_random_data(shape, rank=rank, noise_level=noise_level)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Fitting without compression\n\nAs a baseline, we see how long time it takes to fit models without compression.\nSince PARAFAC2 is very prone to local minima, we fit five models and select the model\nwith the lowest reconstruction error.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(\"Fitting PARAFAC2 model without compression...\")\nt1 = monotonic()\nlowest_error = float(\"inf\")\nfor i in range(n_inits):\n pf2, errs = parafac2(\n uncompressed_data,\n rank,\n n_iter_max=1000,\n nn_modes=[0],\n random_state=rng,\n return_errors=True,\n )\n if errs[-1] < lowest_error:\n pf2_full, errs_full = pf2, errs\nt2 = monotonic()\nprint(\n f\"It took {t2 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} \"\n + \"without compression\"\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Fitting with lossless compression\n\nSince the tensor slices have many rows compared to columns, we should be able to save\na lot of time by compressing the data. By compressing the matrices, we only need to\nfit the PARAFAC2 model to a set of 10 matrices, each of size 15 x 15, not 10_000 x 15.\n\nThe main bottleneck here is the SVD computation at the beginning of the fitting\nprocedure, but luckily, this is independent of the initialisations, so we only need\nto compute this once. Also, if we are performing a grid search for the rank, then\nwe just need to perform the compression once for the whole grid search as well.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(\"Fitting PARAFAC2 model with SVD compression...\")\nt1 = monotonic()\nlowest_error = float(\"inf\")\nscores, loadings = preprocessing.svd_compress_tensor_slices(uncompressed_data)\nt2 = monotonic()\nfor i in range(n_inits):\n pf2, errs = parafac2(\n scores,\n rank,\n n_iter_max=1000,\n nn_modes=[0],\n random_state=rng,\n return_errors=True,\n )\n if errs[-1] < lowest_error:\n pf2_compressed, errs_compressed = pf2, errs\npf2_decompressed = preprocessing.svd_decompress_parafac2_tensor(\n pf2_compressed, loadings\n)\nt3 = monotonic()\nprint(\n f\"It took {t3 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} \"\n + \"with lossless SVD compression\"\n)\nprint(f\"The compression took {t2 - t1:.1f}s and the fitting took {t3 - t2:.1f}s\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We see that we saved a lot of time by compressing the data before fitting the model.\n\n"
]
},
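{
"cell_type": "markdown",
"metadata": {},
"source": [
"As noted above, the compression can also be shared across a grid search over the\nrank. Below is a minimal, hypothetical sketch of such a reuse; `candidate_ranks`\nand `results` are illustrative names, and `scores` and `loadings` come from the\nsingle compression computed above.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"candidate_ranks = [2, 3, 4]\nresults = {}\nfor candidate_rank in candidate_ranks:\n    # Fit to the already-compressed scores; no new SVD is needed per rank\n    model, errors = parafac2(\n        scores,\n        candidate_rank,\n        n_iter_max=1000,\n        nn_modes=[0],\n        random_state=rng,\n        return_errors=True,\n    )\n    # Decompress so the factors live in the original row space\n    results[candidate_rank] = (\n        preprocessing.svd_decompress_parafac2_tensor(model, loadings),\n        errors[-1],\n    )"
]
},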
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Fitting with lossy compression\n\nWe can try to speed the process up even further by accepting a slight discrepancy\nbetween the model obtained from compressed data and a model obtained from uncompressed\ndata. Specifically, we can truncate the singular values at some threshold, essentially\nremoving the parts of the data matrices that have a very low \"signal strength\".\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(\"Fitting PARAFAC2 model with lossy SVD compression...\")\nt1 = monotonic()\nlowest_error = float(\"inf\")\nscores, loadings = preprocessing.svd_compress_tensor_slices(uncompressed_data, 1e-5)\nt2 = monotonic()\nfor i in range(n_inits):\n pf2, errs = parafac2(\n scores,\n rank,\n n_iter_max=1000,\n nn_modes=[0],\n random_state=rng,\n return_errors=True,\n )\n if errs[-1] < lowest_error:\n pf2_compressed_lossy, errs_compressed_lossy = pf2, errs\npf2_decompressed_lossy = preprocessing.svd_decompress_parafac2_tensor(\n pf2_compressed_lossy, loadings\n)\nt3 = monotonic()\nprint(\n f\"It took {t3 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} \"\n + \"with lossy SVD compression\"\n)\nprint(\n f\"Of which the compression took {t2 - t1:.1f}s and the fitting took {t3 - t2:.1f}s\"\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We see that we didn't save much, if any, time in this case (compared to using\nlossless compression). This is because the main bottleneck now is the CP-part of\nthe PARAFAC2 procedure, so reducing the tensor size from 10 x 15 x 15 to 10 x 4 x 15\n(which is typically what we would get here) will have a negligible effect.\n\n"
]
},
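{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the size reduction concrete, we can inspect the shapes of the lossily\ncompressed slices (a small check; the exact truncated rank depends on the noise\nrealisation):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Each score matrix now has roughly rank-many rows instead of 15\nprint([tl.shape(score) for score in scores])"
]
},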
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Compressing data that is approximately low-rank\n\nHere, we simulate data with many rows and columns but an approximately low rank.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"rank = 3\nshape = (10, 2_000, 2_000)\nnoise_level = 0.33\n\nuncompressed_data = create_random_data(shape, rank=rank, noise_level=noise_level)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Fitting without compression\n\nAgain, we start by fitting without compression as a baseline.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(\"Fitting PARAFAC2 model without compression...\")\nt1 = monotonic()\nlowest_error = float(\"inf\")\nfor i in range(n_inits):\n pf2, errs = parafac2(\n uncompressed_data,\n rank,\n n_iter_max=1000,\n nn_modes=[0],\n random_state=rng,\n return_errors=True,\n )\n if errs[-1] < lowest_error:\n pf2_full, errs_full = pf2, errs\nt2 = monotonic()\nprint(\n f\"It took {t2 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} \"\n + \"without compression\"\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Fitting with lossless compression\n\nNext, we fit with lossless compression.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(\"Fitting PARAFAC2 model with SVD compression...\")\nt1 = monotonic()\nlowest_error = float(\"inf\")\nscores, loadings = preprocessing.svd_compress_tensor_slices(uncompressed_data)\nt2 = monotonic()\nfor i in range(n_inits):\n pf2, errs = parafac2(\n scores,\n rank,\n n_iter_max=1000,\n nn_modes=[0],\n random_state=rng,\n return_errors=True,\n )\n if errs[-1] < lowest_error:\n pf2_compressed, errs_compressed = pf2, errs\npf2_decompressed = preprocessing.svd_decompress_parafac2_tensor(\n pf2_compressed, loadings\n)\nt3 = monotonic()\nprint(\n f\"It took {t3 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} \"\n + \"with lossless SVD compression\"\n)\nprint(\n f\"Of which the compression took {t2 - t1:.1f}s and the fitting took {t3 - t2:.1f}s\"\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We see that the lossless compression no effect for this data. This is because the\nnumber ofrows is equal to the number of columns, so we cannot compress the data\nlosslessly with the SVD.\n\n"
]
},
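{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (a sketch), we can compare the shape of an original slice\nwith the shape of its compressed scores; for square slices they coincide:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# For square slices, the compressed \"scores\" are as big as the original slices,\n# so lossless compression cannot save any work\nprint(tl.shape(uncompressed_data[0]), tl.shape(scores[0]))"
]
},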
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Fitting with lossy compression\n\nFinally, we fit with lossy SVD compression.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(\"Fitting PARAFAC2 model with lossy SVD compression...\")\nt1 = monotonic()\nlowest_error = float(\"inf\")\nscores, loadings = preprocessing.svd_compress_tensor_slices(uncompressed_data, 1e-5)\nt2 = monotonic()\nfor i in range(n_inits):\n pf2, errs = parafac2(\n scores,\n rank,\n n_iter_max=1000,\n nn_modes=[0],\n random_state=rng,\n return_errors=True,\n )\n if errs[-1] < lowest_error:\n pf2_compressed_lossy, errs_compressed_lossy = pf2, errs\npf2_decompressed_lossy = preprocessing.svd_decompress_parafac2_tensor(\n pf2_compressed_lossy, loadings\n)\nt3 = monotonic()\nprint(\n f\"It took {t3 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} \"\n + \"with lossy SVD compression\"\n)\nprint(\n f\"Of which the compression took {t2 - t1:.1f}s and the fitting took {t3 - t2:.1f}s\"\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we see a large speedup. This is because the data is approximately low rank so\nthe compressed tensor slices will have shape R x 2_000, where R is typically below 10\nin this example. If your tensor slices are large in both modes, you might want to plot\nthe singular values of your dataset to see if lossy compression could speed up\nPARAFAC2.\n\n"
]
}
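,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of such a diagnostic plot, assuming the NumPy backend and that\nmatplotlib is available (it is not used elsewhere in this example):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\nimport numpy as np\n\nfor X_i in uncompressed_data:\n    # Singular values only; normalise by the largest to compare slices\n    singular_values = np.linalg.svd(X_i, compute_uv=False)\n    plt.semilogy(singular_values / singular_values[0])\nplt.xlabel(\"Component\")\nplt.ylabel(\"Relative singular value\")\nplt.show()"
]
}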
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
@@ -8,10 +8,10 @@
# Introduction
# -----------------------
# Since version 0.7, Tensorly includes constrained CP decomposition which penalizes or
# constrains factors as chosen by the user. The proposed implementation of constrained CP uses the
# Alternating Optimization Alternating Direction Method of Multipliers (AO-ADMM) algorithm from [1], which
# alternately solves convex optimization problems using primal-dual optimization. In constrained CP
# decomposition, an auxiliary factor is introduced which is constrained or regularized using an operator called the
# proximal operator. The proximal operator may therefore change according to the selected constraint or penalization.
#
# Tensorly provides several constraints and their corresponding proximal operators, each of which can apply to one or all factors in the CP decomposition:
@@ -80,7 +80,7 @@
# Using one constraint for all modes
# --------------------------------------------
# Constraints are inputs of the constrained_parafac function, which itself uses the
# ``tensorly.solver.proximal.validate_constraints`` function in order to process the
# user's input. If a user wants to use the same constraint for all modes, an
# input (bool, a scalar value, or a list of scalar values) should be given to this constraint.
# Assume one wants to use the unimodality constraint for all modes. Since it does not require
@@ -94,7 +94,7 @@
fig = plt.figure()
for i in range(rank):
plt.plot(factors[0][:, i])
plt.legend(["1. column", "2. column", "3. column"], loc="upper left")

##############################################################################
# Constraints requiring a scalar input can be used similarly as follows:
@@ -103,11 +103,11 @@
##############################################################################
# The same regularization coefficient l1_reg is used for all the modes. Here the l1 penalization induces sparsity given that the regularization coefficient is large enough.
fig = plt.figure()
plt.title("Histogram of 1. factor")
_, _, _ = plt.hist(factors[0].flatten())

fig = plt.figure()
plt.title("Histogram of 2. factor")
_, _, _ = plt.hist(factors[1].flatten())

##############################################################################
@@ -133,15 +133,15 @@
_, factors = constrained_parafac(tensor, rank=rank, l1_reg=[0.01, 0.02, 0.03])

fig = plt.figure()
plt.title("Histogram of 1. factor")
_, _, _ = plt.hist(factors[0].flatten())

fig = plt.figure()
plt.title("Histogram of 2. factor")
_, _, _ = plt.hist(factors[1].flatten())

fig = plt.figure()
plt.title("Histogram of 3. factor")
_, _, _ = plt.hist(factors[2].flatten())

##############################################################################
@@ -150,8 +150,9 @@
# To use different constraints for different modes, the dictionary structure
# should be preferred:

_, factors = constrained_parafac(
tensor, rank=rank, non_negative={1: True}, l1_reg={0: 0.01}, l2_square_reg={2: 0.01}
)

##############################################################################
# In the dictionary, `key` is the selected mode and `value` is a scalar value or