diff --git a/stable/_downloads/02911ae80224ade0c761c6cc5f2d7d69/plot_parafac2_compression.ipynb b/stable/_downloads/02911ae80224ade0c761c6cc5f2d7d69/plot_parafac2_compression.ipynb new file mode 100644 index 000000000..bae953783 --- /dev/null +++ b/stable/_downloads/02911ae80224ade0c761c6cc5f2d7d69/plot_parafac2_compression.ipynb @@ -0,0 +1,233 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n# Speeding up PARAFAC2 with SVD compression\n\nPARAFAC2 can be very time-consuming to fit. However, if the number of rows greatly\nexceeds the number of columns or the data matrices are approximately low-rank, we can\ncompress the data before fitting the PARAFAC2 model to considerably speed up the fitting\nprocedure.\n\nThe compression works by first computing the SVD of the tensor slices and fitting the\nPARAFAC2 model to the right singular vectors multiplied by the singular values. Then,\nafter we fit the model, we left-multiply the $B_i$-matrices with the left singular\nvectors to recover the decompressed model. Fitting to compressed data and then\ndecompressing is mathematically equivalent to fitting to the original uncompressed data.\n\nFor more information about why this works, see the documentation of\n:py:func:`tensorly.preprocessing.svd_compress_tensor_slices`.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "from time import monotonic\nimport tensorly as tl\nfrom tensorly.decomposition import parafac2\nimport tensorly.preprocessing as preprocessing" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Function to create synthetic data\n\nHere, we create a function that constructs a random tensor from a PARAFAC2\ndecomposition with noise\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "rng = tl.check_random_state(0)\n\n\ndef create_random_data(shape, rank, noise_level):\n I, J, K = shape # noqa: E741\n pf2 = tl.random.random_parafac2(\n [(J, K) for i in range(I)], rank=rank, random_state=rng\n )\n\n X = pf2.to_tensor()\n X_norm = [tl.norm(Xi) for Xi in X]\n\n noise = [rng.standard_normal((J, K)) for i in range(I)]\n noise = [noise_level * X_norm[i] * E_i / tl.norm(E_i) for i, E_i in enumerate(noise)]\n return [X_i + E_i for X_i, E_i in zip(X, noise)]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Compressing data with many rows and few columns\n\nHere, we set up a case where we have many rows compared to columns\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "n_inits = 5\nrank = 3\nshape = (10, 10_000, 15) # 10 matrices/tensor slices, each of size 10_000 x 15.\nnoise_level = 0.33\n\nuncompressed_data = create_random_data(shape, rank=rank, noise_level=noise_level)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Fitting without compression\n\nAs a baseline, we see how long it takes to fit models without compression.\nSince PARAFAC2 is very prone to local minima, we fit five models and select the model\nwith the lowest reconstruction error.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "print(\"Fitting PARAFAC2 model without compression...\")\nt1 = monotonic()\nlowest_error = 
float(\"inf\")\nfor i in range(n_inits):\n pf2, errs = parafac2(\n uncompressed_data,\n rank,\n n_iter_max=1000,\n nn_modes=[0],\n random_state=rng,\n return_errors=True,\n )\n if errs[-1] < lowest_error:\n pf2_full, errs_full = pf2, errs\n lowest_error = errs[-1]\nt2 = monotonic()\nprint(\n f\"It took {t2 - t1:.1f}s to fit a PARAFAC2 model to a tensor of shape {shape} \"\n + \"without compression\"\n)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Fitting with lossless compression\n\nSince the tensor slices have many rows compared to columns, we should be able to save\na lot of time by compressing the data. By compressing the matrices, we only need to\nfit the PARAFAC2 model to a set of 10 matrices, each of size 15 x 15, not 10_000 x 15.\n\nThe main bottleneck here is the SVD computation at the beginning of the fitting\nprocedure, but luckily, this is independent of the initialisations, so we only need\nto compute this once. Also, if we are performing a grid search for the rank, then\nwe just need to perform the compression once for the whole grid search as well.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "print(\"Fitting PARAFAC2 model with SVD compression...\")\nt1 = monotonic()\nlowest_error = float(\"inf\")\nscores, loadings = preprocessing.svd_compress_tensor_slices(uncompressed_data)\nt2 = monotonic()\nfor i in range(n_inits):\n pf2, errs = parafac2(\n scores,\n rank,\n n_iter_max=1000,\n nn_modes=[0],\n random_state=rng,\n return_errors=True,\n )\n if errs[-1] < lowest_error:\n pf2_compressed, errs_compressed = pf2, errs\n lowest_error = errs[-1]\npf2_decompressed = preprocessing.svd_decompress_parafac2_tensor(\n pf2_compressed, loadings\n)\nt3 = monotonic()\nprint(\n f\"It took {t3 - t1:.1f}s to fit a PARAFAC2 model to a tensor of shape {shape} \"\n + \"with lossless SVD compression\"\n)\nprint(f\"The compression took {t2 - t1:.1f}s and the fitting took {t3 - t2:.1f}s\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We see that we saved a lot of time by compressing the data before fitting the model.\n\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Fitting with lossy compression\n\nWe can try to speed the process up even further by accepting a slight discrepancy\nbetween the model obtained from compressed data and a model obtained from uncompressed\ndata. 
Specifically, we can truncate the singular values at some threshold, essentially\nremoving the parts of the data matrices that have a very low \"signal strength\".\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "print(\"Fitting PARAFAC2 model with lossy SVD compression...\")\nt1 = monotonic()\nlowest_error = float(\"inf\")\nscores, loadings = preprocessing.svd_compress_tensor_slices(uncompressed_data, 1e-5)\nt2 = monotonic()\nfor i in range(n_inits):\n pf2, errs = parafac2(\n scores,\n rank,\n n_iter_max=1000,\n nn_modes=[0],\n random_state=rng,\n return_errors=True,\n )\n if errs[-1] < lowest_error:\n pf2_compressed_lossy, errs_compressed_lossy = pf2, errs\n lowest_error = errs[-1]\npf2_decompressed_lossy = preprocessing.svd_decompress_parafac2_tensor(\n pf2_compressed_lossy, loadings\n)\nt3 = monotonic()\nprint(\n f\"It took {t3 - t1:.1f}s to fit a PARAFAC2 model to a tensor of shape {shape} \"\n + \"with lossy SVD compression\"\n)\nprint(\n f\"Of which the compression took {t2 - t1:.1f}s and the fitting took {t3 - t2:.1f}s\"\n)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We see that we didn't save much, if any, time in this case (compared to using\nlossless compression). This is because the main bottleneck now is the CP-part of\nthe PARAFAC2 procedure, so reducing the tensor size from 10 x 15 x 15 to 10 x 4 x 15\n(which is typically what we would get here) will have a negligible effect.\n\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Compressing data that is approximately low-rank\n\nHere, we simulate data with many rows and columns but an approximately low rank.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "rank = 3\nshape = (10, 2_000, 2_000)\nnoise_level = 0.33\n\nuncompressed_data = create_random_data(shape, rank=rank, noise_level=noise_level)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Fitting without compression\n\nAgain, we start by fitting without compression as a baseline.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "print(\"Fitting PARAFAC2 model without compression...\")\nt1 = monotonic()\nlowest_error = float(\"inf\")\nfor i in range(n_inits):\n pf2, errs = parafac2(\n uncompressed_data,\n rank,\n n_iter_max=1000,\n nn_modes=[0],\n random_state=rng,\n return_errors=True,\n )\n if errs[-1] < lowest_error:\n pf2_full, errs_full = pf2, errs\n lowest_error = errs[-1]\nt2 = monotonic()\nprint(\n f\"It took {t2 - t1:.1f}s to fit a PARAFAC2 model to a tensor of shape {shape} \"\n + \"without compression\"\n)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Fitting with lossless compression\n\nNext, we fit with lossless compression.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "print(\"Fitting PARAFAC2 model with SVD compression...\")\nt1 = monotonic()\nlowest_error = float(\"inf\")\nscores, loadings = preprocessing.svd_compress_tensor_slices(uncompressed_data)\nt2 = monotonic()\nfor i in range(n_inits):\n pf2, errs = parafac2(\n scores,\n rank,\n n_iter_max=1000,\n nn_modes=[0],\n random_state=rng,\n return_errors=True,\n )\n if errs[-1] < lowest_error:\n pf2_compressed, errs_compressed = pf2, errs\n lowest_error = errs[-1]\npf2_decompressed = 
preprocessing.svd_decompress_parafac2_tensor(\n pf2_compressed, loadings\n)\nt3 = monotonic()\nprint(\n f\"It took {t3 - t1:.1f}s to fit a PARAFAC2 model to a tensor of shape {shape} \"\n + \"with lossless SVD compression\"\n)\nprint(\n f\"Of which the compression took {t2 - t1:.1f}s and the fitting took {t3 - t2:.1f}s\"\n)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We see that the lossless compression has no effect for this data. This is because the\nnumber of rows is equal to the number of columns, so we cannot compress the data\nlosslessly with the SVD.\n\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Fitting with lossy compression\n\nFinally, we fit with lossy SVD compression.\n\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "print(\"Fitting PARAFAC2 model with lossy SVD compression...\")\nt1 = monotonic()\nlowest_error = float(\"inf\")\nscores, loadings = preprocessing.svd_compress_tensor_slices(uncompressed_data, 1e-5)\nt2 = monotonic()\nfor i in range(n_inits):\n pf2, errs = parafac2(\n scores,\n rank,\n n_iter_max=1000,\n nn_modes=[0],\n random_state=rng,\n return_errors=True,\n )\n if errs[-1] < lowest_error:\n pf2_compressed_lossy, errs_compressed_lossy = pf2, errs\n lowest_error = errs[-1]\npf2_decompressed_lossy = preprocessing.svd_decompress_parafac2_tensor(\n pf2_compressed_lossy, loadings\n)\nt3 = monotonic()\nprint(\n f\"It took {t3 - t1:.1f}s to fit a PARAFAC2 model to a tensor of shape {shape} \"\n + \"with lossy SVD compression\"\n)\nprint(\n f\"Of which the compression took {t2 - t1:.1f}s and the fitting took {t3 - t2:.1f}s\"\n)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Here we see a large speedup. This is because the data is approximately low rank so\nthe compressed tensor slices will have shape R x 2_000, where R is typically below 10\nin this example. If your tensor slices are large in both modes, you might want to plot\nthe singular values of your dataset to see if lossy compression could speed up\nPARAFAC2.\n\n" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.7" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} \ No newline at end of file diff --git a/stable/_downloads/02fe230fcb90df96787f11e615bb0af8/plot_guide_for_constrained_cp.py b/stable/_downloads/02fe230fcb90df96787f11e615bb0af8/plot_guide_for_constrained_cp.py index 460db8a09..6294eb486 100644 --- a/stable/_downloads/02fe230fcb90df96787f11e615bb0af8/plot_guide_for_constrained_cp.py +++ b/stable/_downloads/02fe230fcb90df96787f11e615bb0af8/plot_guide_for_constrained_cp.py @@ -8,10 +8,10 @@ # Introduction # ----------------------- # Since version 0.7, Tensorly includes constrained CP decomposition which penalizes or -# constrains factors as chosen by the user. The proposed implementation of constrained CP uses the +# constrains factors as chosen by the user. The proposed implementation of constrained CP uses the # Alternating Optimization Alternating Direction Method of Multipliers (AO-ADMM) algorithm from [1] which # solves alternatively convex optimization problem using primal-dual optimization. 
In constrained CP -# decomposition, an auxilliary factor is introduced which is constrained or regularized using an operator called the +# decomposition, an auxilliary factor is introduced which is constrained or regularized using an operator called the # proximal operator. The proximal operator may therefore change according to the selected constraint or penalization. # # Tensorly provides several constraints and their corresponding proximal operators, each can apply to one or all factors in the CP decomposition: @@ -80,7 +80,7 @@ # Using one constraint for all modes # -------------------------------------------- # Constraints are inputs of the constrained_parafac function, which itself uses the -# ``tensorly.tenalg.proximal.validate_constraints`` function in order to process the input +# ``tensorly.solver.proximal.validate_constraints`` function in order to process the input # of the user. If a user wants to use the same constraint for all modes, an # input (bool or a scalar value or list of scalar values) should be given to this constraint. # Assume, one wants to use unimodality constraint for all modes. Since it does not require @@ -94,7 +94,7 @@ fig = plt.figure() for i in range(rank): plt.plot(factors[0][:, i]) - plt.legend(['1. column', '2. column', '3. column'], loc='upper left') + plt.legend(["1. column", "2. column", "3. column"], loc="upper left") ############################################################################## # Constraints requiring a scalar input can be used similarly as follows: @@ -103,11 +103,11 @@ ############################################################################## # The same regularization coefficient l1_reg is used for all the modes. Here the l1 penalization induces sparsity given that the regularization coefficient is large enough. fig = plt.figure() -plt.title('Histogram of 1. factor') +plt.title("Histogram of 1. factor") _, _, _ = plt.hist(factors[0].flatten()) fig = plt.figure() -plt.title('Histogram of 2. factor') +plt.title("Histogram of 2. factor") _, _, _ = plt.hist(factors[1].flatten()) ############################################################################## @@ -133,15 +133,15 @@ _, factors = constrained_parafac(tensor, rank=rank, l1_reg=[0.01, 0.02, 0.03]) fig = plt.figure() -plt.title('Histogram of 1. factor') +plt.title("Histogram of 1. factor") _, _, _ = plt.hist(factors[0].flatten()) fig = plt.figure() -plt.title('Histogram of 2. factor') +plt.title("Histogram of 2. factor") _, _, _ = plt.hist(factors[1].flatten()) fig = plt.figure() -plt.title('Histogram of 3. factor') +plt.title("Histogram of 3. 
factor") _, _, _ = plt.hist(factors[2].flatten()) ############################################################################## @@ -150,8 +150,9 @@ # To use different constraint for different modes, the dictionary structure # should be preferred: -_, factors = constrained_parafac(tensor, rank=rank, non_negative={1:True}, l1_reg={0: 0.01}, - l2_square_reg={2: 0.01}) +_, factors = constrained_parafac( + tensor, rank=rank, non_negative={1: True}, l1_reg={0: 0.01}, l2_square_reg={2: 0.01} +) ############################################################################## # In the dictionary, `key` is the selected mode and `value` is a scalar value or diff --git a/stable/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip b/stable/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip index 25259045d..8382c40e2 100644 Binary files a/stable/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip and b/stable/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip differ diff --git a/stable/_downloads/0d7c4ccdff2f531825995c8fa152400c/plot_guide_for_constrained_cp.ipynb b/stable/_downloads/0d7c4ccdff2f531825995c8fa152400c/plot_guide_for_constrained_cp.ipynb index 92275bdb2..63dc72105 100644 --- a/stable/_downloads/0d7c4ccdff2f531825995c8fa152400c/plot_guide_for_constrained_cp.ipynb +++ b/stable/_downloads/0d7c4ccdff2f531825995c8fa152400c/plot_guide_for_constrained_cp.ipynb @@ -1,16 +1,5 @@ { "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -22,7 +11,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Introduction\nSince version 0.7, Tensorly includes constrained CP decomposition which penalizes or\nconstrains factors as chosen by the user. The proposed implementation of constrained CP uses the \nAlternating Optimization Alternating Direction Method of Multipliers (AO-ADMM) algorithm from [1] which\nsolves alternatively convex optimization problem using primal-dual optimization. In constrained CP\ndecomposition, an auxilliary factor is introduced which is constrained or regularized using an operator called the \nproximal operator. The proximal operator may therefore change according to the selected constraint or penalization.\n\nTensorly provides several constraints and their corresponding proximal operators, each can apply to one or all factors in the CP decomposition:\n\n1. Non-negativity\n * `non_negative` in signature\n * Prevents negative values in CP factors.\n2. L1 regularization\n * `l1_reg` in signature\n * Adds a L1 regularization term on the CP factors to the CP cost function, this promotes sparsity in the CP factors. The user chooses the regularization amount.\n3. L2 regularization\n * `l2_reg` in signature\n * Adds a L2 regularization term on the CP factors to the CP cost function. The user chooses the regularization amount.\n4. L2 square regularization\n * `l2_square_reg` in signature\n * Adds a L2 regularization term on the CP factors to the CP cost function. The user chooses the regularization amount.\n5. Unimodality\n * `unimodality` in signature\n * This constraint acts columnwise on the factors\n * Impose that each column of the factors is unimodal (there is only one local maximum, like a Gaussian).\n6. 
Simplex\n * `simplex` in signature\n * This constraint acts columnwise on the factors\n * Impose that each column of the factors lives on the simplex or user-defined radius (entries are nonnegative and sum to a user-defined positive parameter columnwise).\n7. Normalization\n * `normalize` in signature\n * Impose that the largest absolute value in the factors elementwise is 1.\n8. Normalized sparsity\n * `normalized_sparsity` in signature\n * This constraint acts columnwise on the factors\n * Impose that the columns of factors are both normalized with the L2 norm, and k-sparse (at most k-nonzeros per column) with k user-defined.\n9. Soft sparsity\n * `soft_sparsity` in signature\n * This constraint acts columnwise on the factors\n * Impose that the columns of factors have L1 norm bounded by a user-defined threshold.\n10. Smoothness\n * `smoothness` in signature\n * This constraint acts columnwise on the factors\n * Favor smoothness in factors columns by penalizing the L2 norm of finite differences. The user chooses the regularization amount. The proximal operator in fact solves a banded system.\n11. Monotonicity\n * `monotonicity` in signature\n * This constraint acts columnwise on the factors\n * Impose that the factors are either always increasing or decreasing (user-specified) columnwise. This is based on isotonic regression.\n12. Hard sparsity\n * `hard_sparsity` in signature\n * This constraint acts columnwise on the factors\n * Impose that each column of the factors has at most k nonzero entries (k is user-defined).\n\nWhile some of these constraints (2, 3, 4, 6, 8, 9, 12) require a scalar\ninput as its parameter or regularizer, boolean input could be enough\nfor other constraints (1, 5, 7, 10, 11). Selection of one of these\nconstraints for all mode (or factors) or using different constraints for different modes are both supported.\n\n" + "## Introduction\nSince version 0.7, Tensorly includes constrained CP decomposition which penalizes or\nconstrains factors as chosen by the user. The proposed implementation of constrained CP uses the\nAlternating Optimization Alternating Direction Method of Multipliers (AO-ADMM) algorithm from [1] which\nsolves alternatively convex optimization problem using primal-dual optimization. In constrained CP\ndecomposition, an auxilliary factor is introduced which is constrained or regularized using an operator called the\nproximal operator. The proximal operator may therefore change according to the selected constraint or penalization.\n\nTensorly provides several constraints and their corresponding proximal operators, each can apply to one or all factors in the CP decomposition:\n\n1. Non-negativity\n * `non_negative` in signature\n * Prevents negative values in CP factors.\n2. L1 regularization\n * `l1_reg` in signature\n * Adds a L1 regularization term on the CP factors to the CP cost function, this promotes sparsity in the CP factors. The user chooses the regularization amount.\n3. L2 regularization\n * `l2_reg` in signature\n * Adds a L2 regularization term on the CP factors to the CP cost function. The user chooses the regularization amount.\n4. L2 square regularization\n * `l2_square_reg` in signature\n * Adds a L2 regularization term on the CP factors to the CP cost function. The user chooses the regularization amount.\n5. Unimodality\n * `unimodality` in signature\n * This constraint acts columnwise on the factors\n * Impose that each column of the factors is unimodal (there is only one local maximum, like a Gaussian).\n6. 
Simplex\n * `simplex` in signature\n * This constraint acts columnwise on the factors\n * Impose that each column of the factors lives on the simplex or user-defined radius (entries are nonnegative and sum to a user-defined positive parameter columnwise).\n7. Normalization\n * `normalize` in signature\n * Impose that the largest absolute value in the factors elementwise is 1.\n8. Normalized sparsity\n * `normalized_sparsity` in signature\n * This constraint acts columnwise on the factors\n * Impose that the columns of factors are both normalized with the L2 norm, and k-sparse (at most k-nonzeros per column) with k user-defined.\n9. Soft sparsity\n * `soft_sparsity` in signature\n * This constraint acts columnwise on the factors\n * Impose that the columns of factors have L1 norm bounded by a user-defined threshold.\n10. Smoothness\n * `smoothness` in signature\n * This constraint acts columnwise on the factors\n * Favor smoothness in factors columns by penalizing the L2 norm of finite differences. The user chooses the regularization amount. The proximal operator in fact solves a banded system.\n11. Monotonicity\n * `monotonicity` in signature\n * This constraint acts columnwise on the factors\n * Impose that the factors are either always increasing or decreasing (user-specified) columnwise. This is based on isotonic regression.\n12. Hard sparsity\n * `hard_sparsity` in signature\n * This constraint acts columnwise on the factors\n * Impose that each column of the factors has at most k nonzero entries (k is user-defined).\n\nWhile some of these constraints (2, 3, 4, 6, 8, 9, 12) require a scalar\ninput as its parameter or regularizer, boolean input could be enough\nfor other constraints (1, 5, 7, 10, 11). Selection of one of these\nconstraints for all mode (or factors) or using different constraints for different modes are both supported.\n\n" ] }, { @@ -40,7 +29,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Using one constraint for all modes\nConstraints are inputs of the constrained_parafac function, which itself uses the\n``tensorly.tenalg.proximal.validate_constraints`` function in order to process the input\nof the user. If a user wants to use the same constraint for all modes, an\ninput (bool or a scalar value or list of scalar values) should be given to this constraint.\nAssume, one wants to use unimodality constraint for all modes. Since it does not require\nany scalar input, unimodality can be imposed by writing `True` for `unimodality`:\n\n" + "## Using one constraint for all modes\nConstraints are inputs of the constrained_parafac function, which itself uses the\n``tensorly.solver.proximal.validate_constraints`` function in order to process the input\nof the user. If a user wants to use the same constraint for all modes, an\ninput (bool or a scalar value or list of scalar values) should be given to this constraint.\nAssume, one wants to use unimodality constraint for all modes. Since it does not require\nany scalar input, unimodality can be imposed by writing `True` for `unimodality`:\n\n" ] }, { @@ -69,7 +58,7 @@ }, "outputs": [], "source": [ - "fig = plt.figure()\nfor i in range(rank):\n plt.plot(factors[0][:, i])\n plt.legend(['1. column', '2. column', '3. column'], loc='upper left')" + "fig = plt.figure()\nfor i in range(rank):\n plt.plot(factors[0][:, i])\n plt.legend([\"1. column\", \"2. column\", \"3. column\"], loc=\"upper left\")" ] }, { @@ -105,7 +94,7 @@ }, "outputs": [], "source": [ - "fig = plt.figure()\nplt.title('Histogram of 1. 
factor')\n_, _, _ = plt.hist(factors[0].flatten())\n\nfig = plt.figure()\nplt.title('Histogram of 2. factor')\n_, _, _ = plt.hist(factors[1].flatten())" + "fig = plt.figure()\nplt.title(\"Histogram of 1. factor\")\n_, _, _ = plt.hist(factors[0].flatten())\n\nfig = plt.figure()\nplt.title(\"Histogram of 2. factor\")\n_, _, _ = plt.hist(factors[1].flatten())" ] }, { @@ -148,7 +137,7 @@ }, "outputs": [], "source": [ - "_, factors = constrained_parafac(tensor, rank=rank, l1_reg=[0.01, 0.02, 0.03])\n\nfig = plt.figure()\nplt.title('Histogram of 1. factor')\n_, _, _ = plt.hist(factors[0].flatten())\n\nfig = plt.figure()\nplt.title('Histogram of 2. factor')\n_, _, _ = plt.hist(factors[1].flatten())\n\nfig = plt.figure()\nplt.title('Histogram of 3. factor')\n_, _, _ = plt.hist(factors[2].flatten())" + "_, factors = constrained_parafac(tensor, rank=rank, l1_reg=[0.01, 0.02, 0.03])\n\nfig = plt.figure()\nplt.title(\"Histogram of 1. factor\")\n_, _, _ = plt.hist(factors[0].flatten())\n\nfig = plt.figure()\nplt.title(\"Histogram of 2. factor\")\n_, _, _ = plt.hist(factors[1].flatten())\n\nfig = plt.figure()\nplt.title(\"Histogram of 3. factor\")\n_, _, _ = plt.hist(factors[2].flatten())" ] }, { @@ -166,7 +155,7 @@ }, "outputs": [], "source": [ - "_, factors = constrained_parafac(tensor, rank=rank, non_negative={1:True}, l1_reg={0: 0.01},\n l2_square_reg={2: 0.01})" + "_, factors = constrained_parafac(\n tensor, rank=rank, non_negative={1: True}, l1_reg={0: 0.01}, l2_square_reg={2: 0.01}\n)" ] }, { @@ -218,7 +207,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.16" + "version": "3.12.7" } }, "nbformat": 4, diff --git a/stable/_downloads/2af0682e07ba2fb9ad2fd36324f584e8/plot_cp_line_search.py b/stable/_downloads/2af0682e07ba2fb9ad2fd36324f584e8/plot_cp_line_search.py index 695eb88d3..68450c361 100644 --- a/stable/_downloads/2af0682e07ba2fb9ad2fd36324f584e8/plot_cp_line_search.py +++ b/stable/_downloads/2af0682e07ba2fb9ad2fd36324f584e8/plot_cp_line_search.py @@ -4,6 +4,7 @@ Example on how to use :func:`tensorly.decomposition.parafac` with line search to accelerate convergence. """ + import matplotlib.pyplot as plt from time import time @@ -24,7 +25,7 @@ err_min = tl.norm(tl.cp_to_tensor(fac) - tensor) for ii, toll in enumerate(tol): - # Run PARAFAC decomposition without line search and time + # Run PARAFAC decomposition without line search and time start = time() cp = CP(rank=3, n_iter_max=2000000, tol=toll, linesearch=False) fac = cp.fit_transform(tensor) @@ -44,10 +45,10 @@ fig = plt.figure() ax = fig.add_subplot(1, 1, 1) -ax.loglog(tt, err - err_min, '.', label="No line search") -ax.loglog(tt_ls, err_ls - err_min, '.r', label="Line search") +ax.loglog(tt, err - err_min, ".", label="No line search") +ax.loglog(tt_ls, err_ls - err_min, ".r", label="Line search") ax.legend() ax.set_ylabel("Time") ax.set_xlabel("Error") -plt.show() \ No newline at end of file +plt.show() diff --git a/stable/_downloads/2d1781e05b6ef942fb097ff181023668/plot_covid.py b/stable/_downloads/2d1781e05b6ef942fb097ff181023668/plot_covid.py index b0c736e31..43fad4988 100644 --- a/stable/_downloads/2d1781e05b6ef942fb097ff181023668/plot_covid.py +++ b/stable/_downloads/2d1781e05b6ef942fb097ff181023668/plot_covid.py @@ -18,7 +18,7 @@ # to comprehensively profile the interactions between the antibodies and # `Fc receptors `_ alongside other types of immunological # and demographic data. Here, we will apply CP decomposition to a -# `COVID-19 system serology dataset `_. 
+# `COVID-19 system serology dataset `_. # In this dataset, serum antibodies # of 438 samples collected from COVID-19 patients were systematically profiled by their binding behavior # to SARS-CoV-2 (the virus that causes COVID-19) antigens and Fc receptors activities. Samples are @@ -45,20 +45,26 @@ # Now we apply CP decomposition to this dataset. comps = np.arange(1, 7) -CMTFfacs = [parafac(data.tensor, cc, tol=1e-10, n_iter_max=1000, - linesearch=True, orthogonalise=2) for cc in comps] +CMTFfacs = [ + parafac( + data.tensor, cc, tol=1e-10, n_iter_max=1000, linesearch=True, orthogonalise=2 + ) + for cc in comps +] ############################################################################## # To evaluate how well CP decomposition explains the variance in the dataset, we plot the percent # variance reconstructed (R2X) for a range of ranks. + def reconstructed_variance(tFac, tIn=None): - """ This function calculates the amount of variance captured (R2X) by the tensor method. """ + """This function calculates the amount of variance captured (R2X) by the tensor method.""" tMask = np.isfinite(tIn) vTop = np.sum(np.square(tl.cp_to_tensor(tFac) * tMask - np.nan_to_num(tIn))) vBottom = np.sum(np.square(np.nan_to_num(tIn))) return 1.0 - vTop / vBottom + fig1 = plt.figure() CMTFR2X = np.array([reconstructed_variance(f, data.tensor) for f in CMTFfacs]) plt.plot(comps, CMTFR2X, "bo") @@ -81,8 +87,8 @@ def reconstructed_variance(tFac, tIn=None): tfac.factors[1][:, 0] *= -1 tfac.factors[2][:, 0] *= -1 -fig2, ax = plt.subplots(1, 3, figsize=(16,6)) -for ii in [0,1,2]: +fig2, ax = plt.subplots(1, 3, figsize=(16, 6)) +for ii in [0, 1, 2]: fac = tfac.factors[ii] scales = np.linalg.norm(fac, ord=np.inf, axis=0) fac /= scales @@ -92,12 +98,20 @@ def reconstructed_variance(tFac, tIn=None): ax[ii].set_xticklabels(["Comp. 1", "Comp. 2"]) ax[ii].set_yticks(range(len(data.ticks[ii]))) if ii == 0: - ax[0].set_yticklabels([data.ticks[0][i] if i==0 or data.ticks[0][i]!=data.ticks[0][i-1] - else "" for i in range(len(data.ticks[0]))]) + ax[0].set_yticklabels( + [ + ( + data.ticks[0][i] + if i == 0 or data.ticks[0][i] != data.ticks[0][i - 1] + else "" + ) + for i in range(len(data.ticks[0])) + ] + ) else: ax[ii].set_yticklabels(data.ticks[ii]) ax[ii].set_title(data.dims[ii]) - ax[ii].set_aspect('auto') + ax[ii].set_aspect("auto") fig2.colorbar(ScalarMappable(norm=plt.Normalize(-1, 1), cmap="PiYG")) diff --git a/stable/_downloads/2e3d154df5c15282f0c4a6209fe4ae14/plot_covid.ipynb b/stable/_downloads/2e3d154df5c15282f0c4a6209fe4ae14/plot_covid.ipynb index bbca5d1e9..c595a6409 100644 --- a/stable/_downloads/2e3d154df5c15282f0c4a6209fe4ae14/plot_covid.ipynb +++ b/stable/_downloads/2e3d154df5c15282f0c4a6209fe4ae14/plot_covid.ipynb @@ -1,16 +1,5 @@ { "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -33,7 +22,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Introduction\nPARAFAC (CP) decomposition is extremely useful in dimensionality reduction, allowing us\nto develop models that are both representative and compact while retaining crucial patterns\nbetween subjects. 
Here, we provide an example of how it can be applied to biomedical research.\n\nSystems serology is a new technology that examines the antibodies from a patient's serum, aiming\nto comprehensively profile the interactions between the antibodies and\n[Fc receptors](https://en.wikipedia.org/wiki/Fc_receptor) alongside other types of immunological\nand demographic data. Here, we will apply CP decomposition to a\n[COVID-19 system serology dataset](https://www.sciencedirect.com/science/article/pii/S0092867420314598). \nIn this dataset, serum antibodies\nof 438 samples collected from COVID-19 patients were systematically profiled by their binding behavior\nto SARS-CoV-2 (the virus that causes COVID-19) antigens and Fc receptors activities. Samples are\nlabeled by the status of the patients.\n\nDetails of this analysis as well as more in-depth biological implications can be found in\n[this work](https://www.embopress.org/doi/full/10.15252/msb.202110243). It also includes applying\ntensor methods to HIV systems serology measurements and using them to predict patient status.\n\nWe first import this dataset of a panel of COVID-19 patients:\n\n" + "## Introduction\nPARAFAC (CP) decomposition is extremely useful in dimensionality reduction, allowing us\nto develop models that are both representative and compact while retaining crucial patterns\nbetween subjects. Here, we provide an example of how it can be applied to biomedical research.\n\nSystems serology is a new technology that examines the antibodies from a patient's serum, aiming\nto comprehensively profile the interactions between the antibodies and\n[Fc receptors](https://en.wikipedia.org/wiki/Fc_receptor) alongside other types of immunological\nand demographic data. Here, we will apply CP decomposition to a\n[COVID-19 system serology dataset](https://www.sciencedirect.com/science/article/pii/S0092867420314598).\nIn this dataset, serum antibodies\nof 438 samples collected from COVID-19 patients were systematically profiled by their binding behavior\nto SARS-CoV-2 (the virus that causes COVID-19) antigens and Fc receptors activities. Samples are\nlabeled by the status of the patients.\n\nDetails of this analysis as well as more in-depth biological implications can be found in\n[this work](https://www.embopress.org/doi/full/10.15252/msb.202110243). It also includes applying\ntensor methods to HIV systems serology measurements and using them to predict patient status.\n\nWe first import this dataset of a panel of COVID-19 patients:\n\n" ] }, { @@ -62,7 +51,7 @@ }, "outputs": [], "source": [ - "comps = np.arange(1, 7)\nCMTFfacs = [parafac(data.tensor, cc, tol=1e-10, n_iter_max=1000,\n linesearch=True, orthogonalise=2) for cc in comps]" + "comps = np.arange(1, 7)\nCMTFfacs = [\n parafac(\n data.tensor, cc, tol=1e-10, n_iter_max=1000, linesearch=True, orthogonalise=2\n )\n for cc in comps\n]" ] }, { @@ -80,7 +69,7 @@ }, "outputs": [], "source": [ - "def reconstructed_variance(tFac, tIn=None):\n \"\"\" This function calculates the amount of variance captured (R2X) by the tensor method. 
\"\"\"\n tMask = np.isfinite(tIn)\n vTop = np.sum(np.square(tl.cp_to_tensor(tFac) * tMask - np.nan_to_num(tIn)))\n vBottom = np.sum(np.square(np.nan_to_num(tIn)))\n return 1.0 - vTop / vBottom\n\nfig1 = plt.figure()\nCMTFR2X = np.array([reconstructed_variance(f, data.tensor) for f in CMTFfacs])\nplt.plot(comps, CMTFR2X, \"bo\")\nplt.xlabel(\"Number of Components\")\nplt.ylabel(\"Variance Explained (R2X)\")\nplt.gca().set_xlim([0.0, np.amax(comps) + 0.5])\nplt.gca().set_ylim([0, 1])" + "def reconstructed_variance(tFac, tIn=None):\n \"\"\"This function calculates the amount of variance captured (R2X) by the tensor method.\"\"\"\n tMask = np.isfinite(tIn)\n vTop = np.sum(np.square(tl.cp_to_tensor(tFac) * tMask - np.nan_to_num(tIn)))\n vBottom = np.sum(np.square(np.nan_to_num(tIn)))\n return 1.0 - vTop / vBottom\n\n\nfig1 = plt.figure()\nCMTFR2X = np.array([reconstructed_variance(f, data.tensor) for f in CMTFfacs])\nplt.plot(comps, CMTFR2X, \"bo\")\nplt.xlabel(\"Number of Components\")\nplt.ylabel(\"Variance Explained (R2X)\")\nplt.gca().set_xlim([0.0, np.amax(comps) + 0.5])\nplt.gca().set_ylim([0, 1])" ] }, { @@ -98,7 +87,7 @@ }, "outputs": [], "source": [ - "tfac = CMTFfacs[1]\n\n# Ensure that factors are negative on at most one direction.\ntfac.factors[1][:, 0] *= -1\ntfac.factors[2][:, 0] *= -1\n\nfig2, ax = plt.subplots(1, 3, figsize=(16,6))\nfor ii in [0,1,2]:\n fac = tfac.factors[ii]\n scales = np.linalg.norm(fac, ord=np.inf, axis=0)\n fac /= scales\n\n ax[ii].imshow(fac, cmap=\"PiYG\", vmin=-1, vmax=1)\n ax[ii].set_xticks([0, 1])\n ax[ii].set_xticklabels([\"Comp. 1\", \"Comp. 2\"])\n ax[ii].set_yticks(range(len(data.ticks[ii])))\n if ii == 0:\n ax[0].set_yticklabels([data.ticks[0][i] if i==0 or data.ticks[0][i]!=data.ticks[0][i-1]\n else \"\" for i in range(len(data.ticks[0]))])\n else:\n ax[ii].set_yticklabels(data.ticks[ii])\n ax[ii].set_title(data.dims[ii])\n ax[ii].set_aspect('auto')\n\nfig2.colorbar(ScalarMappable(norm=plt.Normalize(-1, 1), cmap=\"PiYG\"))" + "tfac = CMTFfacs[1]\n\n# Ensure that factors are negative on at most one direction.\ntfac.factors[1][:, 0] *= -1\ntfac.factors[2][:, 0] *= -1\n\nfig2, ax = plt.subplots(1, 3, figsize=(16, 6))\nfor ii in [0, 1, 2]:\n fac = tfac.factors[ii]\n scales = np.linalg.norm(fac, ord=np.inf, axis=0)\n fac /= scales\n\n ax[ii].imshow(fac, cmap=\"PiYG\", vmin=-1, vmax=1)\n ax[ii].set_xticks([0, 1])\n ax[ii].set_xticklabels([\"Comp. 1\", \"Comp. 
2\"])\n ax[ii].set_yticks(range(len(data.ticks[ii])))\n if ii == 0:\n ax[0].set_yticklabels(\n [\n (\n data.ticks[0][i]\n if i == 0 or data.ticks[0][i] != data.ticks[0][i - 1]\n else \"\"\n )\n for i in range(len(data.ticks[0]))\n ]\n )\n else:\n ax[ii].set_yticklabels(data.ticks[ii])\n ax[ii].set_title(data.dims[ii])\n ax[ii].set_aspect(\"auto\")\n\nfig2.colorbar(ScalarMappable(norm=plt.Normalize(-1, 1), cmap=\"PiYG\"))" ] }, { @@ -132,7 +121,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.16" + "version": "3.12.7" } }, "nbformat": 4, diff --git a/stable/_downloads/31ebe5f65fa406fbccca74ea8deb8a69/plot_IL2.zip b/stable/_downloads/31ebe5f65fa406fbccca74ea8deb8a69/plot_IL2.zip new file mode 100644 index 000000000..b43d8de3f Binary files /dev/null and b/stable/_downloads/31ebe5f65fa406fbccca74ea8deb8a69/plot_IL2.zip differ diff --git a/stable/_downloads/32f8d624f9d1f7b561ad62798d8861ca/plot_parafac2.py b/stable/_downloads/32f8d624f9d1f7b561ad62798d8861ca/plot_parafac2.py index 03342cb31..b64074dc7 100644 --- a/stable/_downloads/32f8d624f9d1f7b561ad62798d8861ca/plot_parafac2.py +++ b/stable/_downloads/32f8d624f9d1f7b561ad62798d8861ca/plot_parafac2.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- """ Demonstration of PARAFAC2 @@ -19,15 +18,15 @@ # Create synthetic tensor # ----------------------- # Here, we create a random tensor that follows the PARAFAC2 constraints found -# inx `(Kiers et al 1999)`_. -# +# in `(Kiers et al 1999)`_. +# # This particular tensor, # :math:`\mathcal{X} \in \mathbb{R}^{I\times J \times K}`, is a shifted # CP tensor, that is, a tensor on the form: -# +# # .. math:: # \mathcal{X}_{ijk} = \sum_{r=1}^R A_{ir} B_{\sigma_i(j) r} C_{kr}, -# +# # where :math:`\sigma_i` is a cyclic permutation of :math:`J` elements. 
@@ -43,21 +42,23 @@ C_factor_matrix = np.random.uniform(size=(K, true_rank)) # Normalised factor matrices -A_normalised = A_factor_matrix/la.norm(A_factor_matrix, axis=0) -B_normalised = B_factor_matrix/la.norm(B_factor_matrix, axis=0) -C_normalised = C_factor_matrix/la.norm(C_factor_matrix, axis=0) +A_normalised = A_factor_matrix / la.norm(A_factor_matrix, axis=0) +B_normalised = B_factor_matrix / la.norm(B_factor_matrix, axis=0) +C_normalised = C_factor_matrix / la.norm(C_factor_matrix, axis=0) # Generate the shifted factor matrix B_factor_matrices = [np.roll(B_factor_matrix, shift=i, axis=0) for i in range(I)] Bs_normalised = [np.roll(B_normalised, shift=i, axis=0) for i in range(I)] # Construct the tensor -tensor = np.einsum('ir,ijr,kr->ijk', A_factor_matrix, B_factor_matrices, C_factor_matrix) +tensor = np.einsum( + "ir,ijr,kr->ijk", A_factor_matrix, B_factor_matrices, C_factor_matrix +) # Add noise noise = np.random.standard_normal(tensor.shape) noise /= np.linalg.norm(noise) -noise *= noise_rate*np.linalg.norm(tensor) +noise *= noise_rate * np.linalg.norm(tensor) tensor += noise @@ -72,55 +73,63 @@ decomposition = None for run in range(10): - print(f'Training model {run}...') - trial_decomposition, trial_errs = parafac2(tensor, true_rank, return_errors=True, tol=1e-8, n_iter_max=500, random_state=run) - print(f'Number of iterations: {len(trial_errs)}') - print(f'Final error: {trial_errs[-1]}') + print(f"Training model {run}...") + trial_decomposition, trial_errs = parafac2( + tensor, + true_rank, + return_errors=True, + tol=1e-8, + n_iter_max=500, + random_state=run, + ) + print(f"Number of iterations: {len(trial_errs)}") + print(f"Final error: {trial_errs[-1]}") if best_err > trial_errs[-1]: best_err = trial_errs[-1] err = trial_errs decomposition = trial_decomposition - print('-------------------------------') -print(f'Best model error: {best_err}') + print("-------------------------------") +print(f"Best model error: {best_err}") ############################################################################## -# A decomposition is a wrapper object for three variables: the *weights*, +# A decomposition is a wrapper object for three variables: the *weights*, # the *factor matrices* and the *projection matrices*. The weights are similar -# to the output of a CP decomposition. The factor matrices and projection +# to the output of a CP decomposition. The factor matrices and projection # matrices are somewhat different. For a CP decomposition, we only have the # weights and the factor matrices. However, since the PARAFAC2 factor matrices # for the second mode is given by -# +# # .. math:: # B_i = P_i B, -# -# where :math:`B` is an :math:`R \times R` matrix and :math:`P_i` is an +# +# where :math:`B` is an :math:`R \times R` matrix and :math:`P_i` is an # :math:`I \times R` projection matrix, we cannot store the factor matrices # the same as for a CP decomposition. -# -# Instead, we store the factor matrix along the first mode (:math:`A`), the -# "blueprint" matrix for the second mode (:math:`B`) and the factor matrix +# +# Instead, we store the factor matrix along the first mode (:math:`A`), the +# "blueprint" matrix for the second mode (:math:`B`) and the factor matrix # along the third mode (:math:`C`) in one tuple and the projection matrices, # :math:`P_i`, in a separate tuple. 
-# +# # If we wish to extract the informative :math:`B_i` factor matrices, then we -# use the ``tensorly.parafac2_tensor.apply_projection_matrices`` function on +# use the ``tensorly.parafac2_tensor.apply_projection_matrices`` function on # the PARAFAC2 tensor instance to get another wrapper object for two # variables: *weights* and *factor matrices*. However, now, the second element # of the factor matrices tuple is now a list of factor matrices, one for each # frontal slice of the tensor. -# +# # Likewise, if we wish to construct the tensor or the frontal slices, then we # can use the ``tensorly.parafac2_tensor.parafac2_to_tensor`` function. If the # decomposed dataset consisted of uneven-length frontal slices, then we can -# use the ``tensorly.parafac2_tensor.parafac2_to_slices`` function to get a +# use the ``tensorly.parafac2_tensor.parafac2_to_slices`` function to get a # list of frontal slices. - est_tensor = tl.parafac2_tensor.parafac2_to_tensor(decomposition) -est_weights, (est_A, est_B, est_C) = tl.parafac2_tensor.apply_parafac2_projections(decomposition) +est_weights, (est_A, est_B, est_C) = tl.parafac2_tensor.apply_parafac2_projections( + decomposition +) ############################################################################## # Compute performance metrics @@ -128,32 +137,42 @@ reconstruction_error = la.norm(est_tensor - tensor) -recovery_rate = 1 - reconstruction_error/la.norm(tensor) +recovery_rate = 1 - reconstruction_error / la.norm(tensor) -print(f'{recovery_rate:2.0%} of the data is explained by the model, which is expected with noise rate: {noise_rate}') +print( + f"{recovery_rate:2.0%} of the data is explained by the model, which is expected with noise rate: {noise_rate}" +) # To evaluate how well the original structure is recovered, we calculate the tucker congruence coefficient. -est_A, est_projected_Bs, est_C = tl.parafac2_tensor.apply_parafac2_projections(decomposition)[1] +est_A, est_projected_Bs, est_C = tl.parafac2_tensor.apply_parafac2_projections( + decomposition +)[1] sign = np.sign(est_A) est_A = np.abs(est_A) -est_projected_Bs = sign[:, np.newaxis]*est_projected_Bs +est_projected_Bs = sign[:, np.newaxis] * est_projected_Bs -est_A_normalised = est_A/la.norm(est_A, axis=0) -est_Bs_normalised = [est_B/la.norm(est_B, axis=0) for est_B in est_projected_Bs] -est_C_normalised = est_C/la.norm(est_C, axis=0) +est_A_normalised = est_A / la.norm(est_A, axis=0) +est_Bs_normalised = [est_B / la.norm(est_B, axis=0) for est_B in est_projected_Bs] +est_C_normalised = est_C / la.norm(est_C, axis=0) -B_corr = np.array(est_Bs_normalised).reshape(-1, true_rank).T@np.array(Bs_normalised).reshape(-1, true_rank)/len(est_Bs_normalised) -A_corr = est_A_normalised.T@A_normalised -C_corr = est_C_normalised.T@C_normalised +B_corr = ( + np.array(est_Bs_normalised).reshape(-1, true_rank).T + @ np.array(Bs_normalised).reshape(-1, true_rank) + / len(est_Bs_normalised) +) +A_corr = est_A_normalised.T @ A_normalised +C_corr = est_C_normalised.T @ C_normalised -corr = A_corr*B_corr*C_corr -permutation = linear_sum_assignment(-corr) # Old versions of scipy does not support maximising, from scipy v1.4, you can pass `corr` and `maximize=True` instead of `-corr` to maximise the sum. +corr = A_corr * B_corr * C_corr +permutation = linear_sum_assignment( + -corr +) # Old versions of scipy does not support maximising, from scipy v1.4, you can pass `corr` and `maximize=True` instead of `-corr` to maximise the sum. 
congruence_coefficient = np.mean(corr[permutation]) -print(f'Average tucker congruence coefficient: {congruence_coefficient}') +print(f"Average tucker congruence coefficient: {congruence_coefficient}") ############################################################################## # Visualize the components @@ -161,45 +180,45 @@ # Find the best permutation so that we can plot the estimated components on top of the true components -permutation = np.argmax(A_corr*B_corr*C_corr, axis=0) +permutation = np.argmax(A_corr * B_corr * C_corr, axis=0) # Create plots of each component vector for each mode # (We just look at one of the B_i matrices) -fig, axes = plt.subplots(true_rank, 3, figsize=(15, 3*true_rank+1)) -i = 0 # What slice, B_i, we look at for the B mode +fig, axes = plt.subplots(true_rank, 3, figsize=(15, 3 * true_rank + 1)) +i = 0 # What slice, B_i, we look at for the B mode for r in range(true_rank): - + # Plot true and estimated components for mode A - axes[r][0].plot((A_normalised[:, r]), label='True') - axes[r][0].plot((est_A_normalised[:, permutation[r]]),'--', label='Estimated') - + axes[r][0].plot((A_normalised[:, r]), label="True") + axes[r][0].plot((est_A_normalised[:, permutation[r]]), "--", label="Estimated") + # Labels for the different components - axes[r][0].set_ylabel(f'Component {r}') + axes[r][0].set_ylabel(f"Component {r}") # Plot true and estimated components for mode C axes[r][2].plot(C_normalised[:, r]) - axes[r][2].plot(est_C_normalised[:, permutation[r]], '--') + axes[r][2].plot(est_C_normalised[:, permutation[r]], "--") # Plot true components for mode B axes[r][1].plot(Bs_normalised[i][:, r]) - + # Get the signs so that we can flip the B mode factor matrices A_sign = np.sign(est_A_normalised) - + # Plot estimated components for mode B (after sign correction) - axes[r][1].plot(A_sign[i, r]*est_Bs_normalised[i][:, permutation[r]], '--') + axes[r][1].plot(A_sign[i, r] * est_Bs_normalised[i][:, permutation[r]], "--") # Titles for the different modes -axes[0][0].set_title('A mode') -axes[0][2].set_title('C mode') -axes[0][1].set_title(f'B mode (slice {i})') +axes[0][0].set_title("A mode") +axes[0][2].set_title("C mode") +axes[0][1].set_title(f"B mode (slice {i})") -# Create a legend for the entire figure -handles, labels = axes[r][0].get_legend_handles_labels() -fig.legend(handles, labels, loc='upper center', ncol=2) +# Create a legend for the entire figure +handles, labels = axes[r][0].get_legend_handles_labels() +fig.legend(handles, labels, loc="upper center", ncol=2) ############################################################################## # Inspect the convergence rate @@ -209,12 +228,15 @@ # initial loss often dominate the rest of the plot, making it difficult # to check for convergence. 
-loss_fig, loss_ax = plt.subplots(figsize=(9, 9/1.6)) +loss_fig, loss_ax = plt.subplots(figsize=(9, 9 / 1.6)) loss_ax.plot(range(1, len(err)), err[1:]) -loss_ax.set_xlabel('Iteration number') -loss_ax.set_ylabel('Relative reconstruction error') +loss_ax.set_xlabel("Iteration number") +loss_ax.set_ylabel("Relative reconstruction error") mathematical_expression_of_loss = r"$\frac{\left|\left|\hat{\mathcal{X}}\right|\right|_F}{\left|\left|\mathcal{X}\right|\right|_F}$" -loss_ax.set_title(f'Loss plot: {mathematical_expression_of_loss} \n (starting after first iteration)', fontsize=16) +loss_ax.set_title( + f"Loss plot: {mathematical_expression_of_loss} \n (starting after first iteration)", + fontsize=16, +) xticks = loss_ax.get_xticks() loss_ax.set_xticks([1] + list(xticks[1:])) loss_ax.set_xlim(1, len(err)) @@ -222,17 +244,14 @@ plt.show() - ############################################################################## # References # ---------- -# +# # .. _(Kiers et al 1999): -# -# Kiers HA, Ten Berge JM, Bro R. *PARAFAC2—Part I. +# +# Kiers HA, Ten Berge JM, Bro R. *PARAFAC2—Part I. # A direct fitting algorithm for the PARAFAC2 model.* # **Journal of Chemometrics: A Journal of the Chemometrics Society.** # 1999 May;13(3‐4):275-94. `(Online version) # `_ - - diff --git a/stable/_downloads/3ca037908294af324764e4c88851b187/plot_covid.zip b/stable/_downloads/3ca037908294af324764e4c88851b187/plot_covid.zip new file mode 100644 index 000000000..fcfb70b96 Binary files /dev/null and b/stable/_downloads/3ca037908294af324764e4c88851b187/plot_covid.zip differ diff --git a/stable/_downloads/42966e24b5f81902fc31165ec0d52e6c/plot_cp_regression.py b/stable/_downloads/42966e24b5f81902fc31165ec0d52e6c/plot_cp_regression.py index 9e474c407..48f95ab5b 100644 --- a/stable/_downloads/42966e24b5f81902fc31165ec0d52e6c/plot_cp_regression.py +++ b/stable/_downloads/42966e24b5f81902fc31165ec0d52e6c/plot_cp_regression.py @@ -15,7 +15,7 @@ image_height = 25 image_width = 25 # shape of the images -patterns = ['rectangle', 'swiss', 'circle'] +patterns = ["rectangle", "swiss", "circle"] # ranks to test ranks = [1, 2, 3, 4, 5] @@ -33,33 +33,41 @@ for i, pattern in enumerate(patterns): # Generate the original image - weight_img = gen_image(region=pattern, image_height=image_height, image_width=image_width) + weight_img = gen_image( + region=pattern, image_height=image_height, image_width=image_width + ) weight_img = tl.tensor(weight_img) # Generate the labels y = tl.dot(partial_tensor_to_vec(X, skip_begin=1), tensor_to_vec(weight_img)) # Plot the original weights - ax = fig.add_subplot(n_rows, n_columns, i*n_columns + 1) - ax.imshow(tl.to_numpy(weight_img), cmap=plt.cm.OrRd, interpolation='nearest') + ax = fig.add_subplot(n_rows, n_columns, i * n_columns + 1) + ax.imshow(tl.to_numpy(weight_img), cmap=plt.cm.OrRd, interpolation="nearest") ax.set_axis_off() if i == 0: - ax.set_title('Original\nweights') + ax.set_title("Original\nweights") for j, rank in enumerate(ranks): # Create a tensor Regressor estimator - estimator = CPRegressor(weight_rank=rank, tol=10e-7, n_iter_max=100, reg_W=1, verbose=0) + estimator = CPRegressor( + weight_rank=rank, tol=10e-7, n_iter_max=100, reg_W=1, verbose=0 + ) # Fit the estimator to the data estimator.fit(X, y) - ax = fig.add_subplot(n_rows, n_columns, i*n_columns + j + 2) - ax.imshow(tl.to_numpy(estimator.weight_tensor_), cmap=plt.cm.OrRd, interpolation='nearest') + ax = fig.add_subplot(n_rows, n_columns, i * n_columns + j + 2) + ax.imshow( + tl.to_numpy(estimator.weight_tensor_), + 
cmap=plt.cm.OrRd, + interpolation="nearest", + ) ax.set_axis_off() if i == 0: - ax.set_title('Learned\nrank = {}'.format(rank)) + ax.set_title(f"Learned\nrank = {rank}") plt.suptitle("CP tensor regression") plt.show() diff --git a/stable/_downloads/43729c3dc12e8eff64285b0fa0df2e01/plot_nn_tucker.zip b/stable/_downloads/43729c3dc12e8eff64285b0fa0df2e01/plot_nn_tucker.zip new file mode 100644 index 000000000..a25a12e79 Binary files /dev/null and b/stable/_downloads/43729c3dc12e8eff64285b0fa0df2e01/plot_nn_tucker.zip differ diff --git a/stable/_downloads/4646f2ed882fdbd7317db8bc1eb3916e/plot_cp_regression.ipynb b/stable/_downloads/4646f2ed882fdbd7317db8bc1eb3916e/plot_cp_regression.ipynb index fdf1b048d..012a57f68 100644 --- a/stable/_downloads/4646f2ed882fdbd7317db8bc1eb3916e/plot_cp_regression.ipynb +++ b/stable/_downloads/4646f2ed882fdbd7317db8bc1eb3916e/plot_cp_regression.ipynb @@ -1,16 +1,5 @@ { "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -26,7 +15,7 @@ }, "outputs": [], "source": [ - "import matplotlib.pyplot as plt\nfrom tensorly.base import tensor_to_vec, partial_tensor_to_vec\nfrom tensorly.datasets.synthetic import gen_image\nfrom tensorly.regression.cp_regression import CPRegressor\nimport tensorly as tl\n\n# Parameter of the experiment\nimage_height = 25\nimage_width = 25\n# shape of the images\npatterns = ['rectangle', 'swiss', 'circle']\n# ranks to test\nranks = [1, 2, 3, 4, 5]\n\n# Generate random samples\nrng = tl.check_random_state(1)\nX = tl.tensor(rng.normal(size=(1000, image_height, image_width), loc=0, scale=1))\n\n\n# Parameters of the plot, deduced from the data\nn_rows = len(patterns)\nn_columns = len(ranks) + 1\n# Plot the three images\nfig = plt.figure()\n\nfor i, pattern in enumerate(patterns):\n\n # Generate the original image\n weight_img = gen_image(region=pattern, image_height=image_height, image_width=image_width)\n weight_img = tl.tensor(weight_img)\n\n # Generate the labels\n y = tl.dot(partial_tensor_to_vec(X, skip_begin=1), tensor_to_vec(weight_img))\n\n # Plot the original weights\n ax = fig.add_subplot(n_rows, n_columns, i*n_columns + 1)\n ax.imshow(tl.to_numpy(weight_img), cmap=plt.cm.OrRd, interpolation='nearest')\n ax.set_axis_off()\n if i == 0:\n ax.set_title('Original\\nweights')\n\n for j, rank in enumerate(ranks):\n\n # Create a tensor Regressor estimator\n estimator = CPRegressor(weight_rank=rank, tol=10e-7, n_iter_max=100, reg_W=1, verbose=0)\n\n # Fit the estimator to the data\n estimator.fit(X, y)\n\n ax = fig.add_subplot(n_rows, n_columns, i*n_columns + j + 2)\n ax.imshow(tl.to_numpy(estimator.weight_tensor_), cmap=plt.cm.OrRd, interpolation='nearest')\n ax.set_axis_off()\n\n if i == 0:\n ax.set_title('Learned\\nrank = {}'.format(rank))\n\nplt.suptitle(\"CP tensor regression\")\nplt.show()" + "import matplotlib.pyplot as plt\nfrom tensorly.base import tensor_to_vec, partial_tensor_to_vec\nfrom tensorly.datasets.synthetic import gen_image\nfrom tensorly.regression.cp_regression import CPRegressor\nimport tensorly as tl\n\n# Parameter of the experiment\nimage_height = 25\nimage_width = 25\n# shape of the images\npatterns = [\"rectangle\", \"swiss\", \"circle\"]\n# ranks to test\nranks = [1, 2, 3, 4, 5]\n\n# Generate random samples\nrng = tl.check_random_state(1)\nX = tl.tensor(rng.normal(size=(1000, image_height, image_width), loc=0, scale=1))\n\n\n# Parameters of the plot, 
deduced from the data\nn_rows = len(patterns)\nn_columns = len(ranks) + 1\n# Plot the three images\nfig = plt.figure()\n\nfor i, pattern in enumerate(patterns):\n\n # Generate the original image\n weight_img = gen_image(\n region=pattern, image_height=image_height, image_width=image_width\n )\n weight_img = tl.tensor(weight_img)\n\n # Generate the labels\n y = tl.dot(partial_tensor_to_vec(X, skip_begin=1), tensor_to_vec(weight_img))\n\n # Plot the original weights\n ax = fig.add_subplot(n_rows, n_columns, i * n_columns + 1)\n ax.imshow(tl.to_numpy(weight_img), cmap=plt.cm.OrRd, interpolation=\"nearest\")\n ax.set_axis_off()\n if i == 0:\n ax.set_title(\"Original\\nweights\")\n\n for j, rank in enumerate(ranks):\n\n # Create a tensor Regressor estimator\n estimator = CPRegressor(\n weight_rank=rank, tol=10e-7, n_iter_max=100, reg_W=1, verbose=0\n )\n\n # Fit the estimator to the data\n estimator.fit(X, y)\n\n ax = fig.add_subplot(n_rows, n_columns, i * n_columns + j + 2)\n ax.imshow(\n tl.to_numpy(estimator.weight_tensor_),\n cmap=plt.cm.OrRd,\n interpolation=\"nearest\",\n )\n ax.set_axis_off()\n\n if i == 0:\n ax.set_title(f\"Learned\\nrank = {rank}\")\n\nplt.suptitle(\"CP tensor regression\")\nplt.show()" ] } ], @@ -46,7 +35,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.16" + "version": "3.12.7" } }, "nbformat": 4, diff --git a/stable/_downloads/468942bd78c2647648843aeda5b2ab0c/plot_IL2.ipynb b/stable/_downloads/468942bd78c2647648843aeda5b2ab0c/plot_IL2.ipynb index 5bbd1f337..fb9e16c52 100644 --- a/stable/_downloads/468942bd78c2647648843aeda5b2ab0c/plot_IL2.ipynb +++ b/stable/_downloads/468942bd78c2647648843aeda5b2ab0c/plot_IL2.ipynb @@ -1,16 +1,5 @@ { "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -33,7 +22,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Here we will load a tensor of experimentally measured cellular responses to \nIL-2 stimulation. IL-2 is a naturally occurring immune signaling molecule \nwhich has been engineered by pharmaceutical companies and drug designers \nin attempts to act as an effective immunotherapy. In order to make effective IL-2\ntherapies, pharmaceutical engineer have altered IL-2's signaling activity in order to\nincrease or decrease its interactions with particular cell types. \n\nIL-2 signals through the Jak/STAT pathway and transmits a signal into immune cells by \nphosphorylating STAT5 (pSTAT5). When phosphorylated, STAT5 will cause various immune \ncell types to proliferate, and depending on whether regulatory (regulatory T cells, or Tregs) \nor effector cells (helper T cells, natural killer cells, and cytotoxic T cells,\nor Thelpers, NKs, and CD8+ cells) respond, IL-2 signaling can result in \nimmunosuppression or immunostimulation respectively. Thus, when designing a drug\nmeant to repress the immune system, potentially for the treatment of autoimmune\ndiseases, IL-2 which primarily enacts a response in Tregs is desirable. Conversely,\nwhen designing a drug that is meant to stimulate the immune system, potentially for\nthe treatment of cancer, IL-2 which primarily enacts a response in effector cells\nis desirable. In order to achieve either signaling bias, IL-2 variants with altered\naffinity for it's various receptors (IL2R\u03b1 or IL2R\u03b2) have been designed. 
Furthermore\nIL-2 variants with multiple binding domains have been designed as multivalent \nIL-2 may act as a more effective therapeutic. In order to understand how these mutations\nand alterations affect which cells respond to an IL-2 mutant, we will perform \nnon-negative PARAFAC tensor decomposition on our cell response data tensor.\n\nHere, our data contains the responses of 8 different cell types to 13 different \nIL-2 mutants, at 4 different timepoints, at 12 standardized IL-2 concentrations.\nTherefore, our tensor will have shape (13 x 4 x 12 x 8), with dimensions\nrepresenting IL-2 mutant, stimulation time, dose, and cell type respectively. Each\nmeasured quantity represents the amount of phosphorlyated STAT5 (pSTAT5) in a \ngiven cell population following stimulation with the specified IL-2 mutant.\n\n" + "Here we will load a tensor of experimentally measured cellular responses to\nIL-2 stimulation. IL-2 is a naturally occurring immune signaling molecule\nwhich has been engineered by pharmaceutical companies and drug designers\nin attempts to act as an effective immunotherapy. In order to make effective IL-2\ntherapies, pharmaceutical engineer have altered IL-2's signaling activity in order to\nincrease or decrease its interactions with particular cell types.\n\nIL-2 signals through the Jak/STAT pathway and transmits a signal into immune cells by\nphosphorylating STAT5 (pSTAT5). When phosphorylated, STAT5 will cause various immune\ncell types to proliferate, and depending on whether regulatory (regulatory T cells, or Tregs)\nor effector cells (helper T cells, natural killer cells, and cytotoxic T cells,\nor Thelpers, NKs, and CD8+ cells) respond, IL-2 signaling can result in\nimmunosuppression or immunostimulation respectively. Thus, when designing a drug\nmeant to repress the immune system, potentially for the treatment of autoimmune\ndiseases, IL-2 which primarily enacts a response in Tregs is desirable. Conversely,\nwhen designing a drug that is meant to stimulate the immune system, potentially for\nthe treatment of cancer, IL-2 which primarily enacts a response in effector cells\nis desirable. In order to achieve either signaling bias, IL-2 variants with altered\naffinity for it's various receptors (IL2R\u03b1 or IL2R\u03b2) have been designed. Furthermore\nIL-2 variants with multiple binding domains have been designed as multivalent\nIL-2 may act as a more effective therapeutic. In order to understand how these mutations\nand alterations affect which cells respond to an IL-2 mutant, we will perform\nnon-negative PARAFAC tensor decomposition on our cell response data tensor.\n\nHere, our data contains the responses of 8 different cell types to 13 different\nIL-2 mutants, at 4 different timepoints, at 12 standardized IL-2 concentrations.\nTherefore, our tensor will have shape (13 x 4 x 12 x 8), with dimensions\nrepresenting IL-2 mutant, stimulation time, dose, and cell type respectively. Each\nmeasured quantity represents the amount of phosphorlyated STAT5 (pSTAT5) in a\ngiven cell population following stimulation with the specified IL-2 mutant.\n\n" ] }, { @@ -51,7 +40,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Now we will run non-negative PARAFAC tensor decomposition to reduce the dimensionality \nof our tensor. We will use 3 components, and normalize our resulting tensor to aid in \nfuture comparisons of correlations across components.\n\nFirst we must preprocess our tensor to ready it for factorization. 
Our data has a \nfew missing values, and so we must first generate a mask to mark where those values\noccur.\n\n" + "Now we will run non-negative PARAFAC tensor decomposition to reduce the dimensionality\nof our tensor. We will use 3 components, and normalize our resulting tensor to aid in\nfuture comparisons of correlations across components.\n\nFirst we must preprocess our tensor to ready it for factorization. Our data has a\nfew missing values, and so we must first generate a mask to mark where those values\noccur.\n\n" ] }, { @@ -69,7 +58,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Now that we've marked where those non-finite values occur, we can regenerate our \ntensor without including non-finite values, allowing it to be factorized.\n\n" + "Now that we've marked where those non-finite values occur, we can regenerate our\ntensor without including non-finite values, allowing it to be factorized.\n\n" ] }, { @@ -98,14 +87,14 @@ }, "outputs": [], "source": [ - "sig_tensor_fact = non_negative_parafac(response_data_fin, init='random', rank=3, mask=tensor_mask, n_iter_max=5000, tol=1e-9, random_state=1)\nsig_tensor_fact = cp_normalize(sig_tensor_fact)" + "sig_tensor_fact = non_negative_parafac(\n response_data_fin,\n init=\"random\",\n rank=3,\n mask=tensor_mask,\n n_iter_max=5000,\n tol=1e-9,\n random_state=1,\n)\nsig_tensor_fact = cp_normalize(sig_tensor_fact)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "Now we will load the names of our cell types and IL-2 mutants, in the order in which \nthey are present in our original tensor. IL-2 mutant names refer to the specific \nmutations made to their amino acid sequence, as well as their valency \nformat (monovalent or bivalent).\n\nFinally, we label, plot, and analyze our factored tensor of data.\n\n" + "Now we will load the names of our cell types and IL-2 mutants, in the order in which\nthey are present in our original tensor. 
IL-2 mutant names refer to the specific\nmutations made to their amino acid sequence, as well as their valency\nformat (monovalent or bivalent).\n\nFinally, we label, plot, and analyze our factored tensor of data.\n\n" ] }, { @@ -116,14 +105,14 @@ }, "outputs": [], "source": [ - "f, ax = plt.subplots(1, 2, figsize=(9, 4.5))\n\ncomponents = [1, 2, 3]\nwidth = 0.25\n\nlig_facs = sig_tensor_fact[1][0]\nligands = IL2mutants\nx_lig = np.arange(len(ligands))\n\nlig_rects_comp1 = ax[0].bar(x_lig - width, lig_facs[:, 0], width, label='Component 1')\nlig_rects_comp2 = ax[0].bar(x_lig, lig_facs[:, 1], width, label='Component 2')\nlig_rects_comp3 = ax[0].bar(x_lig + width, lig_facs[:, 2], width, label='Component 3')\nax[0].set(xlabel=\"Ligand\", ylabel=\"Component Weight\", ylim=(0, 1))\nax[0].set_xticks(x_lig, ligands)\nax[0].set_xticklabels(ax[0].get_xticklabels(), rotation=60, ha=\"right\", fontsize=9)\nax[0].legend()\n\n\ncell_facs = sig_tensor_fact[1][3]\nx_cell = np.arange(len(cells))\n\ncell_rects_comp1 = ax[1].bar(x_cell - width, cell_facs[:, 0], width, label='Component 1')\ncell_rects_comp2 = ax[1].bar(x_cell, cell_facs[:, 1], width, label='Component 2')\ncell_rects_comp3 = ax[1].bar(x_cell + width, cell_facs[:, 2], width, label='Component 3')\nax[1].set(xlabel=\"Cell\", ylabel=\"Component Weight\", ylim=(0, 1))\nax[1].set_xticks(x_cell, cells)\nax[1].set_xticklabels(ax[1].get_xticklabels(), rotation=45, ha=\"right\")\nax[1].legend()\n\nf.tight_layout()\nplt.show()" + "f, ax = plt.subplots(1, 2, figsize=(9, 4.5))\n\ncomponents = [1, 2, 3]\nwidth = 0.25\n\nlig_facs = sig_tensor_fact[1][0]\nligands = IL2mutants\nx_lig = np.arange(len(ligands))\n\nlig_rects_comp1 = ax[0].bar(x_lig - width, lig_facs[:, 0], width, label=\"Component 1\")\nlig_rects_comp2 = ax[0].bar(x_lig, lig_facs[:, 1], width, label=\"Component 2\")\nlig_rects_comp3 = ax[0].bar(x_lig + width, lig_facs[:, 2], width, label=\"Component 3\")\nax[0].set(xlabel=\"Ligand\", ylabel=\"Component Weight\", ylim=(0, 1))\nax[0].set_xticks(x_lig, ligands)\nax[0].set_xticklabels(ax[0].get_xticklabels(), rotation=60, ha=\"right\", fontsize=9)\nax[0].legend()\n\n\ncell_facs = sig_tensor_fact[1][3]\nx_cell = np.arange(len(cells))\n\ncell_rects_comp1 = ax[1].bar(\n x_cell - width, cell_facs[:, 0], width, label=\"Component 1\"\n)\ncell_rects_comp2 = ax[1].bar(x_cell, cell_facs[:, 1], width, label=\"Component 2\")\ncell_rects_comp3 = ax[1].bar(\n x_cell + width, cell_facs[:, 2], width, label=\"Component 3\"\n)\nax[1].set(xlabel=\"Cell\", ylabel=\"Component Weight\", ylim=(0, 1))\nax[1].set_xticks(x_cell, cells)\nax[1].set_xticklabels(ax[1].get_xticklabels(), rotation=45, ha=\"right\")\nax[1].legend()\n\nf.tight_layout()\nplt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "Here we observe the correlations which both ligands and cell types have with each of \nour three components - we can interepret our tensor factorization for looking for \npatterns among these correlations. \n\nFor example, we can see that bivalent mutants generally have higher correlations with\ncomponent two, as do regulatory T cells. Thus we can infer that bivalent ligands \nactivate regulatory T cells more than monovalent ligands. We also see that this \nrelationship is strengthened by the availability of IL2R\u03b1, one subunit of the IL-2 receptor.\n\nThis is just one example of an insight we can make using tensor factorization. 
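The IL-2 notebook above masks non-finite entries before running non-negative PARAFAC. A minimal sketch of that masking pattern on a small synthetic tensor (the shape, rank, and injected NaN positions are arbitrary placeholders, not the example's data) could be:

```python
import numpy as np
from tensorly.decomposition import non_negative_parafac
from tensorly.cp_tensor import cp_normalize

# Small non-negative tensor with a couple of missing entries
data = np.random.rand(6, 5, 4)
data[0, 0, 0] = np.nan
data[3, 2, 1] = np.nan

# Mark finite entries, then replace NaNs so the solver only sees finite numbers;
# the mask tells the decomposition which entries to ignore during fitting
mask = np.isfinite(data)
data_finite = np.nan_to_num(data)

cp = non_negative_parafac(
    data_finite, rank=2, init="random", mask=mask, n_iter_max=500, tol=1e-7, random_state=0
)
cp = cp_normalize(cp)  # unit-norm factor columns, magnitudes absorbed into the weights
```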
\nBy plotting the correlations which time and dose have with each component, we \ncould additionally make inferences as to the dynamics and dose dependence of how mutations \naffect IL-2 signaling in immune cells.\n\n" + "Here we observe the correlations which both ligands and cell types have with each of\nour three components - we can interepret our tensor factorization for looking for\npatterns among these correlations.\n\nFor example, we can see that bivalent mutants generally have higher correlations with\ncomponent two, as do regulatory T cells. Thus we can infer that bivalent ligands\nactivate regulatory T cells more than monovalent ligands. We also see that this\nrelationship is strengthened by the availability of IL2R\u03b1, one subunit of the IL-2 receptor.\n\nThis is just one example of an insight we can make using tensor factorization.\nBy plotting the correlations which time and dose have with each component, we\ncould additionally make inferences as to the dynamics and dose dependence of how mutations\naffect IL-2 signaling in immune cells.\n\n" ] } ], @@ -143,7 +132,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.16" + "version": "3.12.7" } }, "nbformat": 4, diff --git a/stable/_downloads/5a995fc29a1b64970094cc40854ffae2/plot_nn_tucker.py b/stable/_downloads/5a995fc29a1b64970094cc40854ffae2/plot_nn_tucker.py index e70c14cfc..d91f057f3 100644 --- a/stable/_downloads/5a995fc29a1b64970094cc40854ffae2/plot_nn_tucker.py +++ b/stable/_downloads/5a995fc29a1b64970094cc40854ffae2/plot_nn_tucker.py @@ -60,7 +60,7 @@ # tensor generation array = np.random.randint(1000, size=(10, 30, 40)) -tensor = tl.tensor(array, dtype='float') +tensor = tl.tensor(array, dtype="float") ############################################################################## # Non-negative Tucker @@ -68,9 +68,11 @@ # First, multiplicative update can be implemented as: tic = time.time() -tensor_mu, error_mu = non_negative_tucker(tensor, rank=[5, 5, 5], tol=1e-12, n_iter_max=100, return_errors=True) +tensor_mu, error_mu = non_negative_tucker( + tensor, rank=[5, 5, 5], tol=1e-12, n_iter_max=100, return_errors=True +) tucker_reconstruction_mu = tl.tucker_to_tensor(tensor_mu) -time_mu = time.time()-tic +time_mu = time.time() - tic ############################################################################## # Here, we also compute the output tensor from the decomposed factors by using @@ -83,9 +85,11 @@ # HALS algorithm with FISTA can be calculated as: ticnew = time.time() -tensor_hals_fista, error_fista = non_negative_tucker_hals(tensor, rank=[5, 5, 5], algorithm='fista', return_errors=True) +tensor_hals_fista, error_fista = non_negative_tucker_hals( + tensor, rank=[5, 5, 5], algorithm="fista", return_errors=True +) tucker_reconstruction_fista = tl.tucker_to_tensor(tensor_hals_fista) -time_fista = time.time()-ticnew +time_fista = time.time() - ticnew ############################################################################## # Non-negative Tucker with HALS and Active Set @@ -93,9 +97,11 @@ # As a second option, HALS algorithm with Active Set can be called as follows: ticnew = time.time() -tensor_hals_as, error_as = non_negative_tucker_hals(tensor, rank=[5, 5, 5], algorithm='active_set', return_errors=True) +tensor_hals_as, error_as = non_negative_tucker_hals( + tensor, rank=[5, 5, 5], algorithm="active_set", return_errors=True +) tucker_reconstruction_as = tl.tucker_to_tensor(tensor_hals_as) -time_as = time.time()-ticnew +time_as = time.time() - ticnew 
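The plot_nn_tucker diff above times multiplicative updates against the HALS variants on the same random tensor. A condensed sketch of that comparison, using a smaller tensor and rank than the example purely to keep the run short, might be:

```python
import time
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker, non_negative_tucker_hals

tensor = tl.tensor(np.random.randint(1000, size=(8, 20, 25)), dtype="float")

tic = time.time()
ntd_mu, err_mu = non_negative_tucker(
    tensor, rank=[4, 4, 4], tol=1e-12, n_iter_max=100, return_errors=True
)
time_mu = time.time() - tic

tic = time.time()
ntd_fista, err_fista = non_negative_tucker_hals(
    tensor, rank=[4, 4, 4], algorithm="fista", return_errors=True
)
time_fista = time.time() - tic

# Compare runtime and final relative error of the two approaches
print(f"MU:    {time_mu:.2f}s, final error {err_mu[-1]:.4f}")
print(f"FISTA: {time_fista:.2f}s, final error {err_fista[-1]:.4f}")
```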
############################################################################## # Comparison @@ -103,9 +109,9 @@ # To compare the various methods, first we may look at each algorithm # processing time: -print('time for tensorly nntucker:'+' ' + str("{:.2f}".format(time_mu))) -print('time for HALS with fista:'+' ' + str("{:.2f}".format(time_fista))) -print('time for HALS with as:'+' ' + str("{:.2f}".format(time_as))) +print("time for tensorly nntucker:" + " " + str(f"{time_mu:.2f}")) +print("time for HALS with fista:" + " " + str(f"{time_fista:.2f}")) +print("time for HALS with as:" + " " + str(f"{time_as:.2f}")) ############################################################################## # All algorithms should run with about the same number of iterations on our @@ -114,9 +120,11 @@ # the error between the output and input tensor. In Tensorly, there is a function # to compute Root Mean Square Error (RMSE): -print('RMSE tensorly nntucker:'+' ' + str(RMSE(tensor, tucker_reconstruction_mu))) -print('RMSE for hals with fista:'+' ' + str(RMSE(tensor, tucker_reconstruction_fista))) -print('RMSE for hals with as:'+' ' + str(RMSE(tensor, tucker_reconstruction_as))) +print("RMSE tensorly nntucker:" + " " + str(RMSE(tensor, tucker_reconstruction_mu))) +print( + "RMSE for hals with fista:" + " " + str(RMSE(tensor, tucker_reconstruction_fista)) +) +print("RMSE for hals with as:" + " " + str(RMSE(tensor, tucker_reconstruction_as))) ############################################################################## # According to the RMSE results, HALS is better than the multiplicative update @@ -124,17 +132,17 @@ # the difference in convergence speed on the following error per iteration plot: -def each_iteration(a,b,c,title): - fig=plt.figure() +def each_iteration(a, b, c, title): + fig = plt.figure() fig.set_size_inches(10, fig.get_figheight(), forward=True) plt.plot(a) plt.plot(b) plt.plot(c) plt.title(str(title)) - plt.legend(['MU', 'HALS + Fista', 'HALS + AS'], loc='upper right') + plt.legend(["MU", "HALS + Fista", "HALS + AS"], loc="upper right") -each_iteration(error_mu, error_fista, error_as, 'Error for each iteration') +each_iteration(error_mu, error_fista, error_as, "Error for each iteration") ############################################################################## # In conclusion, on this quick test, it appears that the HALS algorithm gives @@ -151,5 +159,5 @@ def each_iteration(a,b,c,title): # # Gillis, N., & Glineur, F. (2012). Accelerated multiplicative updates and # hierarchical ALS algorithms for nonnegative matrix factorization. -# Neural computation, 24(4), 1085-1105. -# `(Link) https://direct.mit.edu/neco/article/24/4/1085/7755/Accelerated-Multiplicative-Updates-and>`_ \ No newline at end of file +# Neural computation, 24(4), 1085-1105. 
+# `(Link) https://direct.mit.edu/neco/article/24/4/1085/7755/Accelerated-Multiplicative-Updates-and>`_ diff --git a/stable/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip b/stable/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip index be5daba7b..779f576a4 100644 Binary files a/stable/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip and b/stable/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip differ diff --git a/stable/_downloads/6f392aba7031ac3e4c8365850f77f099/plot_cp_line_search.ipynb b/stable/_downloads/6f392aba7031ac3e4c8365850f77f099/plot_cp_line_search.ipynb index 3cd8e3041..cd0581497 100644 --- a/stable/_downloads/6f392aba7031ac3e4c8365850f77f099/plot_cp_line_search.ipynb +++ b/stable/_downloads/6f392aba7031ac3e4c8365850f77f099/plot_cp_line_search.ipynb @@ -1,16 +1,5 @@ { "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -26,7 +15,7 @@ }, "outputs": [], "source": [ - "import matplotlib.pyplot as plt\n\nfrom time import time\nimport numpy as np\nimport tensorly as tl\nfrom tensorly.random import random_cp\nfrom tensorly.decomposition import CP, parafac\n\ntol = np.logspace(-1, -9)\nerr = np.empty_like(tol)\nerr_ls = np.empty_like(tol)\ntt = np.empty_like(tol)\ntt_ls = np.empty_like(tol)\ntensor = random_cp((10, 10, 10), 3, random_state=1234, full=True)\n\n# Get a high-accuracy decomposition for comparison\nfac = parafac(tensor, rank=3, n_iter_max=2000000, tol=1.0e-15, linesearch=True)\nerr_min = tl.norm(tl.cp_to_tensor(fac) - tensor)\n\nfor ii, toll in enumerate(tol):\n\t# Run PARAFAC decomposition without line search and time\n start = time()\n cp = CP(rank=3, n_iter_max=2000000, tol=toll, linesearch=False)\n fac = cp.fit_transform(tensor)\n tt[ii] = time() - start\n err[ii] = tl.norm(tl.cp_to_tensor(fac) - tensor)\n\n# Run PARAFAC decomposition with line search and time\nfor ii, toll in enumerate(tol):\n start = time()\n cp = CP(rank=3, n_iter_max=2000000, tol=toll, linesearch=True)\n fac_ls = cp.fit_transform(tensor)\n tt_ls[ii] = time() - start\n\n # Calculate the error of both decompositions\n err_ls[ii] = tl.norm(tl.cp_to_tensor(fac_ls) - tensor)\n\n\nfig = plt.figure()\nax = fig.add_subplot(1, 1, 1)\nax.loglog(tt, err - err_min, '.', label=\"No line search\")\nax.loglog(tt_ls, err_ls - err_min, '.r', label=\"Line search\")\nax.legend()\nax.set_ylabel(\"Time\")\nax.set_xlabel(\"Error\")\n\nplt.show()" + "import matplotlib.pyplot as plt\n\nfrom time import time\nimport numpy as np\nimport tensorly as tl\nfrom tensorly.random import random_cp\nfrom tensorly.decomposition import CP, parafac\n\ntol = np.logspace(-1, -9)\nerr = np.empty_like(tol)\nerr_ls = np.empty_like(tol)\ntt = np.empty_like(tol)\ntt_ls = np.empty_like(tol)\ntensor = random_cp((10, 10, 10), 3, random_state=1234, full=True)\n\n# Get a high-accuracy decomposition for comparison\nfac = parafac(tensor, rank=3, n_iter_max=2000000, tol=1.0e-15, linesearch=True)\nerr_min = tl.norm(tl.cp_to_tensor(fac) - tensor)\n\nfor ii, toll in enumerate(tol):\n # Run PARAFAC decomposition without line search and time\n start = time()\n cp = CP(rank=3, n_iter_max=2000000, tol=toll, linesearch=False)\n fac = cp.fit_transform(tensor)\n tt[ii] = time() - start\n err[ii] = tl.norm(tl.cp_to_tensor(fac) - tensor)\n\n# Run PARAFAC decomposition with line search and time\nfor ii, toll in 
enumerate(tol):\n start = time()\n cp = CP(rank=3, n_iter_max=2000000, tol=toll, linesearch=True)\n fac_ls = cp.fit_transform(tensor)\n tt_ls[ii] = time() - start\n\n # Calculate the error of both decompositions\n err_ls[ii] = tl.norm(tl.cp_to_tensor(fac_ls) - tensor)\n\n\nfig = plt.figure()\nax = fig.add_subplot(1, 1, 1)\nax.loglog(tt, err - err_min, \".\", label=\"No line search\")\nax.loglog(tt_ls, err_ls - err_min, \".r\", label=\"Line search\")\nax.legend()\nax.set_ylabel(\"Time\")\nax.set_xlabel(\"Error\")\n\nplt.show()" ] } ], @@ -46,7 +35,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.16" + "version": "3.12.7" } }, "nbformat": 4, diff --git a/stable/_downloads/78da1f22de909029be4b3608effa951d/plot_image_compression.py b/stable/_downloads/78da1f22de909029be4b3608effa951d/plot_image_compression.py index 72e03dce5..daeb304eb 100644 --- a/stable/_downloads/78da1f22de909029be4b3608effa951d/plot_image_compression.py +++ b/stable/_downloads/78da1f22de909029be4b3608effa951d/plot_image_compression.py @@ -1,4 +1,3 @@ - """ Image compression via tensor decomposition ========================================== @@ -19,7 +18,8 @@ random_state = 12345 image = face() -image = tl.tensor(zoom(face(), (0.3, 0.3, 1)), dtype='float64') +image = tl.tensor(zoom(face(), (0.3, 0.3, 1)), dtype="float64") + def to_image(tensor): """A convenience function to convert from a float dtype back to uint8""" @@ -29,18 +29,21 @@ def to_image(tensor): im *= 255 return im.astype(np.uint8) + # Rank of the CP decomposition cp_rank = 25 # Rank of the Tucker decomposition tucker_rank = [100, 100, 2] # Perform the CP decomposition -weights, factors = parafac(image, rank=cp_rank, init='random', tol=10e-6) +weights, factors = parafac(image, rank=cp_rank, init="random", tol=10e-6) # Reconstruct the image from the factors cp_reconstruction = tl.cp_to_tensor((weights, factors)) # Tucker decomposition -core, tucker_factors = tucker(image, rank=tucker_rank, init='random', tol=10e-5, random_state=random_state) +core, tucker_factors = tucker( + image, rank=tucker_rank, init="random", tol=10e-5, random_state=random_state +) tucker_reconstruction = tl.tucker_to_tensor((core, tucker_factors)) # Plotting the original and reconstruction from the decompositions @@ -48,17 +51,17 @@ def to_image(tensor): ax = fig.add_subplot(1, 3, 1) ax.set_axis_off() ax.imshow(to_image(image)) -ax.set_title('original') +ax.set_title("original") ax = fig.add_subplot(1, 3, 2) ax.set_axis_off() ax.imshow(to_image(cp_reconstruction)) -ax.set_title('CP') +ax.set_title("CP") ax = fig.add_subplot(1, 3, 3) ax.set_axis_off() ax.imshow(to_image(tucker_reconstruction)) -ax.set_title('Tucker') +ax.set_title("Tucker") plt.tight_layout() plt.show() diff --git a/stable/_downloads/7c4762fd973c92c13874f6405624a983/plot_tensor.ipynb b/stable/_downloads/7c4762fd973c92c13874f6405624a983/plot_tensor.ipynb index f4121e494..8a9d6aac2 100644 --- a/stable/_downloads/7c4762fd973c92c13874f6405624a983/plot_tensor.ipynb +++ b/stable/_downloads/7c4762fd973c92c13874f6405624a983/plot_tensor.ipynb @@ -1,16 +1,5 @@ { "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -44,7 +33,7 @@ }, "outputs": [], "source": [ - "tensor = tl.tensor(np.arange(24).reshape((3, 4, 2)))\nprint('* original tensor:\\n{}'.format(tensor))" + "tensor = tl.tensor(np.arange(24).reshape((3, 4, 
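The plot_image_compression diff above reconstructs an image from a rank-25 CP model and a [100, 100, 2] Tucker model. A minimal sketch of the same comparison on a random array standing in for an RGB image (so it runs without the SciPy sample image; sizes and ranks are arbitrary here) could be:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac, tucker

# Random array standing in for an RGB image (height x width x channels)
image = tl.tensor(np.random.rand(60, 80, 3), dtype="float64")

# CP reconstruction
weights, factors = parafac(image, rank=10, init="random", tol=10e-6)
cp_rec = tl.cp_to_tensor((weights, factors))

# Tucker reconstruction
core, tucker_factors = tucker(
    image, rank=[20, 20, 2], init="random", tol=10e-5, random_state=0
)
tucker_rec = tl.tucker_to_tensor((core, tucker_factors))

# Relative reconstruction errors of the two models
print("CP:    ", float(tl.norm(image - cp_rec) / tl.norm(image)))
print("Tucker:", float(tl.norm(image - tucker_rec) / tl.norm(image)))
```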
2)))\nprint(f\"* original tensor:\\n{tensor}\")" ] }, { @@ -62,7 +51,7 @@ }, "outputs": [], "source": [ - "for mode in range(tensor.ndim):\n print('* mode-{} unfolding:\\n{}'.format(mode, tl.unfold(tensor, mode)))" + "for mode in range(tensor.ndim):\n print(f\"* mode-{mode} unfolding:\\n{tl.unfold(tensor, mode)}\")" ] }, { @@ -100,7 +89,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.16" + "version": "3.12.7" } }, "nbformat": 4, diff --git a/stable/_downloads/83776b8a7dbe4858882c8c190f93bdb0/plot_tucker_regression.py b/stable/_downloads/83776b8a7dbe4858882c8c190f93bdb0/plot_tucker_regression.py index 57650c92e..d9c21c54e 100644 --- a/stable/_downloads/83776b8a7dbe4858882c8c190f93bdb0/plot_tucker_regression.py +++ b/stable/_downloads/83776b8a7dbe4858882c8c190f93bdb0/plot_tucker_regression.py @@ -15,7 +15,7 @@ image_height = 25 image_width = 25 # shape of the images -patterns = ['rectangle', 'swiss', 'circle'] +patterns = ["rectangle", "swiss", "circle"] # ranks to test ranks = [1, 2, 3, 4, 5] @@ -31,37 +31,45 @@ for i, pattern in enumerate(patterns): - print('fitting pattern n.{}'.format(i)) + print(f"fitting pattern n.{i}") # Generate the original image - weight_img = gen_image(region=pattern, image_height=image_height, image_width=image_width) + weight_img = gen_image( + region=pattern, image_height=image_height, image_width=image_width + ) weight_img = tl.tensor(weight_img) # Generate the labels y = tl.dot(partial_tensor_to_vec(X, skip_begin=1), tensor_to_vec(weight_img)) # Plot the original weights - ax = fig.add_subplot(n_rows, n_columns, i*n_columns + 1) - ax.imshow(tl.to_numpy(weight_img), cmap=plt.cm.OrRd, interpolation='nearest') + ax = fig.add_subplot(n_rows, n_columns, i * n_columns + 1) + ax.imshow(tl.to_numpy(weight_img), cmap=plt.cm.OrRd, interpolation="nearest") ax.set_axis_off() if i == 0: - ax.set_title('Original\nweights') + ax.set_title("Original\nweights") for j, rank in enumerate(ranks): - print('fitting for rank = {}'.format(rank)) + print(f"fitting for rank = {rank}") # Create a tensor Regressor estimator - estimator = TuckerRegressor(weight_ranks=[rank, rank], tol=10e-7, n_iter_max=100, reg_W=1, verbose=0) + estimator = TuckerRegressor( + weight_ranks=[rank, rank], tol=10e-7, n_iter_max=100, reg_W=1, verbose=0 + ) # Fit the estimator to the data estimator.fit(X, y) - ax = fig.add_subplot(n_rows, n_columns, i*n_columns + j + 2) - ax.imshow(tl.to_numpy(estimator.weight_tensor_), cmap=plt.cm.OrRd, interpolation='nearest') + ax = fig.add_subplot(n_rows, n_columns, i * n_columns + j + 2) + ax.imshow( + tl.to_numpy(estimator.weight_tensor_), + cmap=plt.cm.OrRd, + interpolation="nearest", + ) ax.set_axis_off() if i == 0: - ax.set_title('Learned\nrank = {}'.format(rank)) + ax.set_title(f"Learned\nrank = {rank}") plt.suptitle("Tucker tensor regression") plt.show() diff --git a/stable/_downloads/8c1716b060db83a5d0c4cf467673c021/plot_parafac2.ipynb b/stable/_downloads/8c1716b060db83a5d0c4cf467673c021/plot_parafac2.ipynb index f292166d7..29f6e092a 100644 --- a/stable/_downloads/8c1716b060db83a5d0c4cf467673c021/plot_parafac2.ipynb +++ b/stable/_downloads/8c1716b060db83a5d0c4cf467673c021/plot_parafac2.ipynb @@ -1,16 +1,5 @@ { "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -33,7 +22,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Create synthetic 
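The plot_tensor notebook above prints every mode-n unfolding of a small tensor. A short sketch showing that unfolding and folding back are inverses (the tensor values are the same toy `arange` data as the example; the assertion is an added illustration) is:

```python
import numpy as np
import tensorly as tl

tensor = tl.tensor(np.arange(24).reshape((3, 4, 2)))

for mode in range(tensor.ndim):
    unfolded = tl.unfold(tensor, mode)                 # matricize along the given mode
    refolded = tl.fold(unfolded, mode, tensor.shape)   # reshape back to the original tensor
    assert tl.norm(refolded - tensor) == 0
    print(f"mode-{mode} unfolding has shape {unfolded.shape}")
```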
tensor\nHere, we create a random tensor that follows the PARAFAC2 constraints found\ninx `(Kiers et al 1999)`_.\n\nThis particular tensor,\n$\\mathcal{X}\u00a0\\in \\mathbb{R}^{I\\times J \\times K}$, is a shifted\nCP tensor, that is, a tensor on the form:\n\n\\begin{align}\\mathcal{X}_{ijk} = \\sum_{r=1}^R A_{ir} B_{\\sigma_i(j) r} C_{kr},\\end{align}\n\nwhere $\\sigma_i$\u00a0is a cyclic permutation of $J$ elements.\n\n" + "## Create synthetic tensor\nHere, we create a random tensor that follows the PARAFAC2 constraints found\nin `(Kiers et al 1999)`_.\n\nThis particular tensor,\n$\\mathcal{X}\u00a0\\in \\mathbb{R}^{I\\times J \\times K}$, is a shifted\nCP tensor, that is, a tensor on the form:\n\n\\begin{align}\\mathcal{X}_{ijk} = \\sum_{r=1}^R A_{ir} B_{\\sigma_i(j) r} C_{kr},\\end{align}\n\nwhere $\\sigma_i$\u00a0is a cyclic permutation of $J$ elements.\n\n" ] }, { @@ -44,7 +33,7 @@ }, "outputs": [], "source": [ - "# Set parameters\ntrue_rank = 3\nI, J, K = 30, 40, 20\nnoise_rate = 0.1\nnp.random.seed(0)\n\n# Generate random matrices\nA_factor_matrix = np.random.uniform(1, 2, size=(I, true_rank))\nB_factor_matrix = np.random.uniform(size=(J, true_rank))\nC_factor_matrix = np.random.uniform(size=(K, true_rank))\n\n# Normalised factor matrices\nA_normalised = A_factor_matrix/la.norm(A_factor_matrix, axis=0)\nB_normalised = B_factor_matrix/la.norm(B_factor_matrix, axis=0)\nC_normalised = C_factor_matrix/la.norm(C_factor_matrix, axis=0)\n\n# Generate the shifted factor matrix\nB_factor_matrices = [np.roll(B_factor_matrix, shift=i, axis=0) for i in range(I)]\nBs_normalised = [np.roll(B_normalised, shift=i, axis=0) for i in range(I)]\n\n# Construct the tensor\ntensor = np.einsum('ir,ijr,kr->ijk', A_factor_matrix, B_factor_matrices, C_factor_matrix)\n\n# Add noise\nnoise = np.random.standard_normal(tensor.shape)\nnoise /= np.linalg.norm(noise)\nnoise *= noise_rate*np.linalg.norm(tensor)\ntensor += noise" + "# Set parameters\ntrue_rank = 3\nI, J, K = 30, 40, 20\nnoise_rate = 0.1\nnp.random.seed(0)\n\n# Generate random matrices\nA_factor_matrix = np.random.uniform(1, 2, size=(I, true_rank))\nB_factor_matrix = np.random.uniform(size=(J, true_rank))\nC_factor_matrix = np.random.uniform(size=(K, true_rank))\n\n# Normalised factor matrices\nA_normalised = A_factor_matrix / la.norm(A_factor_matrix, axis=0)\nB_normalised = B_factor_matrix / la.norm(B_factor_matrix, axis=0)\nC_normalised = C_factor_matrix / la.norm(C_factor_matrix, axis=0)\n\n# Generate the shifted factor matrix\nB_factor_matrices = [np.roll(B_factor_matrix, shift=i, axis=0) for i in range(I)]\nBs_normalised = [np.roll(B_normalised, shift=i, axis=0) for i in range(I)]\n\n# Construct the tensor\ntensor = np.einsum(\n \"ir,ijr,kr->ijk\", A_factor_matrix, B_factor_matrices, C_factor_matrix\n)\n\n# Add noise\nnoise = np.random.standard_normal(tensor.shape)\nnoise /= np.linalg.norm(noise)\nnoise *= noise_rate * np.linalg.norm(tensor)\ntensor += noise" ] }, { @@ -62,14 +51,14 @@ }, "outputs": [], "source": [ - "best_err = np.inf\ndecomposition = None\n\nfor run in range(10):\n print(f'Training model {run}...')\n trial_decomposition, trial_errs = parafac2(tensor, true_rank, return_errors=True, tol=1e-8, n_iter_max=500, random_state=run)\n print(f'Number of iterations: {len(trial_errs)}')\n print(f'Final error: {trial_errs[-1]}')\n if best_err > trial_errs[-1]:\n best_err = trial_errs[-1]\n err = trial_errs\n decomposition = trial_decomposition\n print('-------------------------------')\nprint(f'Best model error: {best_err}')" + "best_err = 
np.inf\ndecomposition = None\n\nfor run in range(10):\n print(f\"Training model {run}...\")\n trial_decomposition, trial_errs = parafac2(\n tensor,\n true_rank,\n return_errors=True,\n tol=1e-8,\n n_iter_max=500,\n random_state=run,\n )\n print(f\"Number of iterations: {len(trial_errs)}\")\n print(f\"Final error: {trial_errs[-1]}\")\n if best_err > trial_errs[-1]:\n best_err = trial_errs[-1]\n err = trial_errs\n decomposition = trial_decomposition\n print(\"-------------------------------\")\nprint(f\"Best model error: {best_err}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "A decomposition is a wrapper object for three variables: the *weights*, \nthe *factor matrices* and the *projection matrices*. The weights are similar\nto the output of a CP decomposition. The factor matrices and projection \nmatrices are somewhat different. For a CP decomposition, we only have the\nweights and the factor matrices. However, since the PARAFAC2 factor matrices\nfor the second mode is given by\n\n\\begin{align}B_i = P_i B,\\end{align}\n\nwhere $B$ is an $R \\times R$ matrix and $P_i$ is an \n$I \\times R$ projection matrix, we cannot store the factor matrices\nthe same as for a CP decomposition.\n\nInstead, we store the factor matrix along the first mode ($A$), the \n\"blueprint\" matrix for the second mode ($B$) and the factor matrix \nalong the third mode ($C$) in one tuple and the projection matrices,\n$P_i$, in a separate tuple.\n\nIf we wish to extract the informative $B_i$ factor matrices, then we\nuse the ``tensorly.parafac2_tensor.apply_projection_matrices`` function on \nthe PARAFAC2 tensor instance to get another wrapper object for two\nvariables: *weights* and *factor matrices*. However, now, the second element\nof the factor matrices tuple is now a list of factor matrices, one for each\nfrontal slice of the tensor.\n\nLikewise, if we wish to construct the tensor or the frontal slices, then we\ncan use the ``tensorly.parafac2_tensor.parafac2_to_tensor`` function. If the\ndecomposed dataset consisted of uneven-length frontal slices, then we can\nuse the ``tensorly.parafac2_tensor.parafac2_to_slices`` function to get a \nlist of frontal slices.\n\n" + "A decomposition is a wrapper object for three variables: the *weights*,\nthe *factor matrices* and the *projection matrices*. The weights are similar\nto the output of a CP decomposition. The factor matrices and projection\nmatrices are somewhat different. For a CP decomposition, we only have the\nweights and the factor matrices. However, since the PARAFAC2 factor matrices\nfor the second mode is given by\n\n\\begin{align}B_i = P_i B,\\end{align}\n\nwhere $B$ is an $R \\times R$ matrix and $P_i$ is an\n$I \\times R$ projection matrix, we cannot store the factor matrices\nthe same as for a CP decomposition.\n\nInstead, we store the factor matrix along the first mode ($A$), the\n\"blueprint\" matrix for the second mode ($B$) and the factor matrix\nalong the third mode ($C$) in one tuple and the projection matrices,\n$P_i$, in a separate tuple.\n\nIf we wish to extract the informative $B_i$ factor matrices, then we\nuse the ``tensorly.parafac2_tensor.apply_projection_matrices`` function on\nthe PARAFAC2 tensor instance to get another wrapper object for two\nvariables: *weights* and *factor matrices*. 
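The hunk above fits PARAFAC2 from ten random initializations and keeps the run with the lowest final error, since PARAFAC2 is prone to local minima. A stripped-down sketch of that best-of-n loop (a tiny random tensor and only three restarts, purely for illustration) might look like:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac2

tensor = tl.tensor(np.random.rand(5, 12, 8))  # small random third-order tensor

best_err, best_decomposition = np.inf, None
for run in range(3):
    decomposition, errs = parafac2(
        tensor, 2, return_errors=True, tol=1e-8, n_iter_max=200, random_state=run
    )
    # Keep the run whose final relative error is lowest
    if errs[-1] < best_err:
        best_err, best_decomposition = errs[-1], decomposition

reconstruction = tl.parafac2_tensor.parafac2_to_tensor(best_decomposition)
print(f"best final error: {best_err:.4f}, reconstruction shape: {reconstruction.shape}")
```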
However, now, the second element\nof the factor matrices tuple is now a list of factor matrices, one for each\nfrontal slice of the tensor.\n\nLikewise, if we wish to construct the tensor or the frontal slices, then we\ncan use the ``tensorly.parafac2_tensor.parafac2_to_tensor`` function. If the\ndecomposed dataset consisted of uneven-length frontal slices, then we can\nuse the ``tensorly.parafac2_tensor.parafac2_to_slices`` function to get a\nlist of frontal slices.\n\n" ] }, { @@ -80,7 +69,7 @@ }, "outputs": [], "source": [ - "est_tensor = tl.parafac2_tensor.parafac2_to_tensor(decomposition)\nest_weights, (est_A, est_B, est_C) = tl.parafac2_tensor.apply_parafac2_projections(decomposition)" + "est_tensor = tl.parafac2_tensor.parafac2_to_tensor(decomposition)\nest_weights, (est_A, est_B, est_C) = tl.parafac2_tensor.apply_parafac2_projections(\n decomposition\n)" ] }, { @@ -98,7 +87,7 @@ }, "outputs": [], "source": [ - "reconstruction_error = la.norm(est_tensor - tensor)\nrecovery_rate = 1 - reconstruction_error/la.norm(tensor)\n\nprint(f'{recovery_rate:2.0%} of the data is explained by the model, which is expected with noise rate: {noise_rate}')\n\n\n# To evaluate how well the original structure is recovered, we calculate the tucker congruence coefficient.\n\nest_A, est_projected_Bs, est_C = tl.parafac2_tensor.apply_parafac2_projections(decomposition)[1]\n\nsign = np.sign(est_A)\nest_A = np.abs(est_A)\nest_projected_Bs = sign[:, np.newaxis]*est_projected_Bs\n\nest_A_normalised = est_A/la.norm(est_A, axis=0)\nest_Bs_normalised = [est_B/la.norm(est_B, axis=0) for est_B in est_projected_Bs]\nest_C_normalised = est_C/la.norm(est_C, axis=0)\n\nB_corr = np.array(est_Bs_normalised).reshape(-1, true_rank).T@np.array(Bs_normalised).reshape(-1, true_rank)/len(est_Bs_normalised)\nA_corr = est_A_normalised.T@A_normalised\nC_corr = est_C_normalised.T@C_normalised\n\ncorr = A_corr*B_corr*C_corr\npermutation = linear_sum_assignment(-corr) # Old versions of scipy does not support maximising, from scipy v1.4, you can pass `corr` and `maximize=True` instead of `-corr` to maximise the sum.\n\ncongruence_coefficient = np.mean(corr[permutation])\nprint(f'Average tucker congruence coefficient: {congruence_coefficient}')" + "reconstruction_error = la.norm(est_tensor - tensor)\nrecovery_rate = 1 - reconstruction_error / la.norm(tensor)\n\nprint(\n f\"{recovery_rate:2.0%} of the data is explained by the model, which is expected with noise rate: {noise_rate}\"\n)\n\n\n# To evaluate how well the original structure is recovered, we calculate the tucker congruence coefficient.\n\nest_A, est_projected_Bs, est_C = tl.parafac2_tensor.apply_parafac2_projections(\n decomposition\n)[1]\n\nsign = np.sign(est_A)\nest_A = np.abs(est_A)\nest_projected_Bs = sign[:, np.newaxis] * est_projected_Bs\n\nest_A_normalised = est_A / la.norm(est_A, axis=0)\nest_Bs_normalised = [est_B / la.norm(est_B, axis=0) for est_B in est_projected_Bs]\nest_C_normalised = est_C / la.norm(est_C, axis=0)\n\nB_corr = (\n np.array(est_Bs_normalised).reshape(-1, true_rank).T\n @ np.array(Bs_normalised).reshape(-1, true_rank)\n / len(est_Bs_normalised)\n)\nA_corr = est_A_normalised.T @ A_normalised\nC_corr = est_C_normalised.T @ C_normalised\n\ncorr = A_corr * B_corr * C_corr\npermutation = linear_sum_assignment(\n -corr\n) # Old versions of scipy does not support maximising, from scipy v1.4, you can pass `corr` and `maximize=True` instead of `-corr` to maximise the sum.\n\ncongruence_coefficient = np.mean(corr[permutation])\nprint(f\"Average tucker 
congruence coefficient: {congruence_coefficient}\")" ] }, { @@ -116,7 +105,7 @@ }, "outputs": [], "source": [ - "# Find the best permutation so that we can plot the estimated components on top of the true components\npermutation = np.argmax(A_corr*B_corr*C_corr, axis=0)\n\n\n# Create plots of each component vector for each mode\n# (We just look at one of the B_i matrices)\n\nfig, axes = plt.subplots(true_rank, 3, figsize=(15, 3*true_rank+1))\ni = 0 # What slice, B_i, we look at for the B mode\n\nfor r in range(true_rank):\n \n # Plot true and estimated components for mode A\n axes[r][0].plot((A_normalised[:, r]), label='True')\n axes[r][0].plot((est_A_normalised[:, permutation[r]]),'--', label='Estimated')\n \n # Labels for the different components\n axes[r][0].set_ylabel(f'Component {r}')\n\n # Plot true and estimated components for mode C\n axes[r][2].plot(C_normalised[:, r])\n axes[r][2].plot(est_C_normalised[:, permutation[r]], '--')\n\n # Plot true components for mode B\n axes[r][1].plot(Bs_normalised[i][:, r])\n \n # Get the signs so that we can flip the B mode factor matrices\n A_sign = np.sign(est_A_normalised)\n \n # Plot estimated components for mode B (after sign correction)\n axes[r][1].plot(A_sign[i, r]*est_Bs_normalised[i][:, permutation[r]], '--')\n\n# Titles for the different modes\naxes[0][0].set_title('A mode')\naxes[0][2].set_title('C mode')\naxes[0][1].set_title(f'B mode (slice {i})')\n\n# Create a legend for the entire figure \nhandles, labels = axes[r][0].get_legend_handles_labels()\nfig.legend(handles, labels, loc='upper center', ncol=2)" + "# Find the best permutation so that we can plot the estimated components on top of the true components\npermutation = np.argmax(A_corr * B_corr * C_corr, axis=0)\n\n\n# Create plots of each component vector for each mode\n# (We just look at one of the B_i matrices)\n\nfig, axes = plt.subplots(true_rank, 3, figsize=(15, 3 * true_rank + 1))\ni = 0 # What slice, B_i, we look at for the B mode\n\nfor r in range(true_rank):\n\n # Plot true and estimated components for mode A\n axes[r][0].plot((A_normalised[:, r]), label=\"True\")\n axes[r][0].plot((est_A_normalised[:, permutation[r]]), \"--\", label=\"Estimated\")\n\n # Labels for the different components\n axes[r][0].set_ylabel(f\"Component {r}\")\n\n # Plot true and estimated components for mode C\n axes[r][2].plot(C_normalised[:, r])\n axes[r][2].plot(est_C_normalised[:, permutation[r]], \"--\")\n\n # Plot true components for mode B\n axes[r][1].plot(Bs_normalised[i][:, r])\n\n # Get the signs so that we can flip the B mode factor matrices\n A_sign = np.sign(est_A_normalised)\n\n # Plot estimated components for mode B (after sign correction)\n axes[r][1].plot(A_sign[i, r] * est_Bs_normalised[i][:, permutation[r]], \"--\")\n\n# Titles for the different modes\naxes[0][0].set_title(\"A mode\")\naxes[0][2].set_title(\"C mode\")\naxes[0][1].set_title(f\"B mode (slice {i})\")\n\n# Create a legend for the entire figure\nhandles, labels = axes[r][0].get_legend_handles_labels()\nfig.legend(handles, labels, loc=\"upper center\", ncol=2)" ] }, { @@ -134,14 +123,14 @@ }, "outputs": [], "source": [ - "loss_fig, loss_ax = plt.subplots(figsize=(9, 9/1.6))\nloss_ax.plot(range(1, len(err)), err[1:])\nloss_ax.set_xlabel('Iteration number')\nloss_ax.set_ylabel('Relative reconstruction error')\nmathematical_expression_of_loss = r\"$\\frac{\\left|\\left|\\hat{\\mathcal{X}}\\right|\\right|_F}{\\left|\\left|\\mathcal{X}\\right|\\right|_F}$\"\nloss_ax.set_title(f'Loss plot: 
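The hunk above matches estimated components to the true ones by maximising a congruence matrix with `linear_sum_assignment`. A minimal sketch of that matching step on a single pair of factor matrices (random placeholders here, not the example's data) is:

```python
import numpy as np
import numpy.linalg as la
from scipy.optimize import linear_sum_assignment

rank = 3
A_true = np.random.rand(20, rank)                                # placeholder "true" factors
A_est = A_true[:, [2, 0, 1]] + 0.01 * np.random.rand(20, rank)   # permuted, noisy estimate

# Column-normalise both matrices, then compute the congruence (cosine) matrix
A_true_n = A_true / la.norm(A_true, axis=0)
A_est_n = A_est / la.norm(A_est, axis=0)
corr = A_est_n.T @ A_true_n

# Hungarian matching; negate corr because linear_sum_assignment minimises by default
permutation = linear_sum_assignment(-corr)
print("matched pairs:", list(zip(*permutation)))
print("mean congruence:", corr[permutation].mean())
```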
{mathematical_expression_of_loss} \\n (starting after first iteration)', fontsize=16)\nxticks = loss_ax.get_xticks()\nloss_ax.set_xticks([1] + list(xticks[1:]))\nloss_ax.set_xlim(1, len(err))\nplt.tight_layout()\nplt.show()" + "loss_fig, loss_ax = plt.subplots(figsize=(9, 9 / 1.6))\nloss_ax.plot(range(1, len(err)), err[1:])\nloss_ax.set_xlabel(\"Iteration number\")\nloss_ax.set_ylabel(\"Relative reconstruction error\")\nmathematical_expression_of_loss = r\"$\\frac{\\left|\\left|\\hat{\\mathcal{X}}\\right|\\right|_F}{\\left|\\left|\\mathcal{X}\\right|\\right|_F}$\"\nloss_ax.set_title(\n f\"Loss plot: {mathematical_expression_of_loss} \\n (starting after first iteration)\",\n fontsize=16,\n)\nxticks = loss_ax.get_xticks()\nloss_ax.set_xticks([1] + list(xticks[1:]))\nloss_ax.set_xlim(1, len(err))\nplt.tight_layout()\nplt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "## References\n\n\nKiers HA, Ten Berge JM, Bro R. *PARAFAC2\u2014Part I. \nA direct fitting algorithm for the PARAFAC2 model.*\n**Journal of Chemometrics: A Journal of the Chemometrics Society.**\n1999 May;13(3\u20104):275-94. [(Online version)](https://onlinelibrary.wiley.com/doi/abs/10.1002/(SICI)1099-128X(199905/08)13:3/4%3C275::AID-CEM543%3E3.0.CO;2-B)\n\n" + "## References\n\n\nKiers HA, Ten Berge JM, Bro R. *PARAFAC2\u2014Part I.\nA direct fitting algorithm for the PARAFAC2 model.*\n**Journal of Chemometrics: A Journal of the Chemometrics Society.**\n1999 May;13(3\u20104):275-94. [(Online version)](https://onlinelibrary.wiley.com/doi/abs/10.1002/(SICI)1099-128X(199905/08)13:3/4%3C275::AID-CEM543%3E3.0.CO;2-B)\n\n" ] } ], @@ -161,7 +150,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.16" + "version": "3.12.7" } }, "nbformat": 4, diff --git a/stable/_downloads/9176c20d1989efc7fe9ef15ff8b9ac98/plot_image_compression.zip b/stable/_downloads/9176c20d1989efc7fe9ef15ff8b9ac98/plot_image_compression.zip new file mode 100644 index 000000000..1e1f65b23 Binary files /dev/null and b/stable/_downloads/9176c20d1989efc7fe9ef15ff8b9ac98/plot_image_compression.zip differ diff --git a/stable/_downloads/963a65841fc063b51ab7dcf8ecab1001/plot_nn_cp_hals.ipynb b/stable/_downloads/963a65841fc063b51ab7dcf8ecab1001/plot_nn_cp_hals.ipynb index 0c53374e6..33a8ac323 100644 --- a/stable/_downloads/963a65841fc063b51ab7dcf8ecab1001/plot_nn_cp_hals.ipynb +++ b/stable/_downloads/963a65841fc063b51ab7dcf8ecab1001/plot_nn_cp_hals.ipynb @@ -1,16 +1,5 @@ { "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -58,7 +47,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Our goal here is to produce an approximation of the tensor generated above\nwhich follows a low-rank CP model, with non-negative coefficients. Before\nusing these algorithms, we can use Tensorly to produce a good initial guess\nfor our NCP. In fact, in order to compare both algorithmic options in a\nfair way, it is a good idea to use same initialized factors in decomposition\nalgorithms. 
We make use of the ``initialize_cp`` function to initialize the\nfactors of the NCP (setting the ``non_negative`` option to `True`) \nand transform these factors (and factors weights) into\nan instance of the CPTensor class:\n\n" + "Our goal here is to produce an approximation of the tensor generated above\nwhich follows a low-rank CP model, with non-negative coefficients. Before\nusing these algorithms, we can use Tensorly to produce a good initial guess\nfor our NCP. In fact, in order to compare both algorithmic options in a\nfair way, it is a good idea to use same initialized factors in decomposition\nalgorithms. We make use of the ``initialize_cp`` function to initialize the\nfactors of the NCP (setting the ``non_negative`` option to `True`)\nand transform these factors (and factors weights) into\nan instance of the CPTensor class:\n\n" ] }, { @@ -69,7 +58,7 @@ }, "outputs": [], "source": [ - "weights_init, factors_init = initialize_cp(tensor, non_negative=True, init='random', rank=10)\n\ncp_init = CPTensor((weights_init, factors_init))" + "weights_init, factors_init = initialize_cp(\n tensor, non_negative=True, init=\"random\", rank=10\n)\n\ncp_init = CPTensor((weights_init, factors_init))" ] }, { @@ -87,7 +76,7 @@ }, "outputs": [], "source": [ - "tic = time.time()\ntensor_mu, errors_mu = non_negative_parafac(tensor, rank=10, init=deepcopy(cp_init), return_errors=True)\ncp_reconstruction_mu = tl.cp_to_tensor(tensor_mu)\ntime_mu = time.time()-tic" + "tic = time.time()\ntensor_mu, errors_mu = non_negative_parafac(\n tensor, rank=10, init=deepcopy(cp_init), return_errors=True\n)\ncp_reconstruction_mu = tl.cp_to_tensor(tensor_mu)\ntime_mu = time.time() - tic" ] }, { @@ -105,7 +94,7 @@ }, "outputs": [], "source": [ - "print('reconstructed tensor\\n', cp_reconstruction_mu[10:12, 10:12, 10:12], '\\n')\nprint('input data tensor\\n', tensor[10:12, 10:12, 10:12])" + "print(\"reconstructed tensor\\n\", cp_reconstruction_mu[10:12, 10:12, 10:12], \"\\n\")\nprint(\"input data tensor\\n\", tensor[10:12, 10:12, 10:12])" ] }, { @@ -123,7 +112,7 @@ }, "outputs": [], "source": [ - "tic = time.time()\ntensor_hals, errors_hals = non_negative_parafac_hals(tensor, rank=10, init=deepcopy(cp_init), return_errors=True)\ncp_reconstruction_hals = tl.cp_to_tensor(tensor_hals)\ntime_hals = time.time()-tic" + "tic = time.time()\ntensor_hals, errors_hals = non_negative_parafac_hals(\n tensor, rank=10, init=deepcopy(cp_init), return_errors=True\n)\ncp_reconstruction_hals = tl.cp_to_tensor(tensor_hals)\ntime_hals = time.time() - tic" ] }, { @@ -141,7 +130,7 @@ }, "outputs": [], "source": [ - "print('reconstructed tensor\\n',cp_reconstruction_hals[10:12, 10:12, 10:12], '\\n')\nprint('input data tensor\\n', tensor[10:12, 10:12, 10:12])" + "print(\"reconstructed tensor\\n\", cp_reconstruction_hals[10:12, 10:12, 10:12], \"\\n\")\nprint(\"input data tensor\\n\", tensor[10:12, 10:12, 10:12])" ] }, { @@ -159,7 +148,7 @@ }, "outputs": [], "source": [ - "tic = time.time()\ntensorhals_exact, errors_exact = non_negative_parafac_hals(tensor, rank=10, init=deepcopy(cp_init), return_errors=True, exact=True)\ncp_reconstruction_exact_hals = tl.cp_to_tensor(tensorhals_exact)\ntime_exact_hals = time.time()-tic" + "tic = time.time()\ntensorhals_exact, errors_exact = non_negative_parafac_hals(\n tensor, rank=10, init=deepcopy(cp_init), return_errors=True, exact=True\n)\ncp_reconstruction_exact_hals = tl.cp_to_tensor(tensorhals_exact)\ntime_exact_hals = time.time() - tic" ] }, { @@ -177,7 +166,7 @@ }, "outputs": [], "source": [ - 
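The plot_nn_cp_hals hunks above run multiplicative updates and HALS from the same initial CP tensor so the timing comparison is fair. A condensed sketch of that pattern, building the shared non-negative initialisation by hand rather than with the `initialize_cp` helper used in the example (tensor size and rank are arbitrary here), could be:

```python
import time
from copy import deepcopy

import numpy as np
import tensorly as tl
from tensorly.cp_tensor import CPTensor
from tensorly.decomposition import non_negative_parafac, non_negative_parafac_hals

rank = 4
tensor = tl.tensor(np.random.rand(10, 12, 14))

# Shared non-negative initial guess: unit weights and random non-negative factors
factors_init = [tl.tensor(np.random.rand(dim, rank)) for dim in tensor.shape]
cp_init = CPTensor((tl.ones(rank), factors_init))

tic = time.time()
cp_mu, errors_mu = non_negative_parafac(
    tensor, rank=rank, init=deepcopy(cp_init), return_errors=True
)
time_mu = time.time() - tic

tic = time.time()
cp_hals, errors_hals = non_negative_parafac_hals(
    tensor, rank=rank, init=deepcopy(cp_init), return_errors=True
)
time_hals = time.time() - tic

print(f"MU:   {time_mu:.2f}s, final error {errors_mu[-1]:.5f}")
print(f"HALS: {time_hals:.2f}s, final error {errors_hals[-1]:.5f}")
```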
"print(str(\"{:.2f}\".format(time_mu)) + ' ' + 'seconds')\nprint(str(\"{:.2f}\".format(time_hals)) + ' ' + 'seconds')\nprint(str(\"{:.2f}\".format(time_exact_hals)) + ' ' + 'seconds')" + "print(str(f\"{time_mu:.2f}\") + \" \" + \"seconds\")\nprint(str(f\"{time_hals:.2f}\") + \" \" + \"seconds\")\nprint(str(f\"{time_exact_hals:.2f}\") + \" \" + \"seconds\")" ] }, { @@ -195,7 +184,7 @@ }, "outputs": [], "source": [ - "from tensorly.metrics.regression import RMSE\nprint(RMSE(tensor, cp_reconstruction_mu))\nprint(RMSE(tensor, cp_reconstruction_hals))\nprint(RMSE(tensor, cp_reconstruction_exact_hals))" + "from tensorly.metrics.regression import RMSE\n\nprint(RMSE(tensor, cp_reconstruction_mu))\nprint(RMSE(tensor, cp_reconstruction_hals))\nprint(RMSE(tensor, cp_reconstruction_exact_hals))" ] }, { @@ -213,7 +202,7 @@ }, "outputs": [], "source": [ - "import matplotlib.pyplot as plt\ndef each_iteration(a,b,c,title):\n fig=plt.figure()\n fig.set_size_inches(10, fig.get_figheight(), forward=True)\n plt.plot(a)\n plt.plot(b)\n plt.plot(c)\n plt.title(str(title))\n plt.legend(['MU', 'HALS', 'Exact HALS'], loc='upper left')\n\n\neach_iteration(errors_mu, errors_hals, errors_exact, 'Error for each iteration')" + "import matplotlib.pyplot as plt\n\n\ndef each_iteration(a, b, c, title):\n fig = plt.figure()\n fig.set_size_inches(10, fig.get_figheight(), forward=True)\n plt.plot(a)\n plt.plot(b)\n plt.plot(c)\n plt.title(str(title))\n plt.legend([\"MU\", \"HALS\", \"Exact HALS\"], loc=\"upper left\")\n\n\neach_iteration(errors_mu, errors_hals, errors_exact, \"Error for each iteration\")" ] }, { @@ -247,7 +236,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.16" + "version": "3.12.7" } }, "nbformat": 4, diff --git a/stable/_downloads/979a176afc5b35664fe7ea5c86b1532a/plot_IL2.py b/stable/_downloads/979a176afc5b35664fe7ea5c86b1532a/plot_IL2.py index 23f2b5e26..4aea1c38b 100644 --- a/stable/_downloads/979a176afc5b35664fe7ea5c86b1532a/plot_IL2.py +++ b/stable/_downloads/979a176afc5b35664fe7ea5c86b1532a/plot_IL2.py @@ -16,19 +16,19 @@ from tensorly.decomposition import non_negative_parafac from tensorly.cp_tensor import cp_normalize -#%% -# Here we will load a tensor of experimentally measured cellular responses to -# IL-2 stimulation. IL-2 is a naturally occurring immune signaling molecule -# which has been engineered by pharmaceutical companies and drug designers +# %% +# Here we will load a tensor of experimentally measured cellular responses to +# IL-2 stimulation. IL-2 is a naturally occurring immune signaling molecule +# which has been engineered by pharmaceutical companies and drug designers # in attempts to act as an effective immunotherapy. In order to make effective IL-2 # therapies, pharmaceutical engineer have altered IL-2's signaling activity in order to -# increase or decrease its interactions with particular cell types. -# -# IL-2 signals through the Jak/STAT pathway and transmits a signal into immune cells by -# phosphorylating STAT5 (pSTAT5). When phosphorylated, STAT5 will cause various immune -# cell types to proliferate, and depending on whether regulatory (regulatory T cells, or Tregs) +# increase or decrease its interactions with particular cell types. +# +# IL-2 signals through the Jak/STAT pathway and transmits a signal into immune cells by +# phosphorylating STAT5 (pSTAT5). 
When phosphorylated, STAT5 will cause various immune +# cell types to proliferate, and depending on whether regulatory (regulatory T cells, or Tregs) # or effector cells (helper T cells, natural killer cells, and cytotoxic T cells, -# or Thelpers, NKs, and CD8+ cells) respond, IL-2 signaling can result in +# or Thelpers, NKs, and CD8+ cells) respond, IL-2 signaling can result in # immunosuppression or immunostimulation respectively. Thus, when designing a drug # meant to repress the immune system, potentially for the treatment of autoimmune # diseases, IL-2 which primarily enacts a response in Tregs is desirable. Conversely, @@ -36,53 +36,61 @@ # the treatment of cancer, IL-2 which primarily enacts a response in effector cells # is desirable. In order to achieve either signaling bias, IL-2 variants with altered # affinity for it's various receptors (IL2Rα or IL2Rβ) have been designed. Furthermore -# IL-2 variants with multiple binding domains have been designed as multivalent +# IL-2 variants with multiple binding domains have been designed as multivalent # IL-2 may act as a more effective therapeutic. In order to understand how these mutations -# and alterations affect which cells respond to an IL-2 mutant, we will perform +# and alterations affect which cells respond to an IL-2 mutant, we will perform # non-negative PARAFAC tensor decomposition on our cell response data tensor. -# -# Here, our data contains the responses of 8 different cell types to 13 different +# +# Here, our data contains the responses of 8 different cell types to 13 different # IL-2 mutants, at 4 different timepoints, at 12 standardized IL-2 concentrations. # Therefore, our tensor will have shape (13 x 4 x 12 x 8), with dimensions # representing IL-2 mutant, stimulation time, dose, and cell type respectively. Each -# measured quantity represents the amount of phosphorlyated STAT5 (pSTAT5) in a +# measured quantity represents the amount of phosphorlyated STAT5 (pSTAT5) in a # given cell population following stimulation with the specified IL-2 mutant. response_data = load_IL2data() IL2mutants, cells = response_data.ticks[0], response_data.ticks[3] print(response_data.tensor.shape, response_data.dims) -#%% -# Now we will run non-negative PARAFAC tensor decomposition to reduce the dimensionality -# of our tensor. We will use 3 components, and normalize our resulting tensor to aid in +# %% +# Now we will run non-negative PARAFAC tensor decomposition to reduce the dimensionality +# of our tensor. We will use 3 components, and normalize our resulting tensor to aid in # future comparisons of correlations across components. # -# First we must preprocess our tensor to ready it for factorization. Our data has a +# First we must preprocess our tensor to ready it for factorization. Our data has a # few missing values, and so we must first generate a mask to mark where those values # occur. tensor_mask = np.isfinite(response_data.tensor) -#%% -# Now that we've marked where those non-finite values occur, we can regenerate our +# %% +# Now that we've marked where those non-finite values occur, we can regenerate our # tensor without including non-finite values, allowing it to be factorized. response_data_fin = np.nan_to_num(response_data.tensor) -#%% +# %% # Using this mask, and finite-value only tensor, we can decompose our signaling data into # three components. We will also normalize this tensor, which will allow for easier # comparisons to be made between the meanings, and magnitudes of our resulting components. 
-sig_tensor_fact = non_negative_parafac(response_data_fin, init='random', rank=3, mask=tensor_mask, n_iter_max=5000, tol=1e-9, random_state=1) +sig_tensor_fact = non_negative_parafac( + response_data_fin, + init="random", + rank=3, + mask=tensor_mask, + n_iter_max=5000, + tol=1e-9, + random_state=1, +) sig_tensor_fact = cp_normalize(sig_tensor_fact) -#%% -# Now we will load the names of our cell types and IL-2 mutants, in the order in which -# they are present in our original tensor. IL-2 mutant names refer to the specific -# mutations made to their amino acid sequence, as well as their valency +# %% +# Now we will load the names of our cell types and IL-2 mutants, in the order in which +# they are present in our original tensor. IL-2 mutant names refer to the specific +# mutations made to their amino acid sequence, as well as their valency # format (monovalent or bivalent). -# +# # Finally, we label, plot, and analyze our factored tensor of data. f, ax = plt.subplots(1, 2, figsize=(9, 4.5)) @@ -94,9 +102,9 @@ ligands = IL2mutants x_lig = np.arange(len(ligands)) -lig_rects_comp1 = ax[0].bar(x_lig - width, lig_facs[:, 0], width, label='Component 1') -lig_rects_comp2 = ax[0].bar(x_lig, lig_facs[:, 1], width, label='Component 2') -lig_rects_comp3 = ax[0].bar(x_lig + width, lig_facs[:, 2], width, label='Component 3') +lig_rects_comp1 = ax[0].bar(x_lig - width, lig_facs[:, 0], width, label="Component 1") +lig_rects_comp2 = ax[0].bar(x_lig, lig_facs[:, 1], width, label="Component 2") +lig_rects_comp3 = ax[0].bar(x_lig + width, lig_facs[:, 2], width, label="Component 3") ax[0].set(xlabel="Ligand", ylabel="Component Weight", ylim=(0, 1)) ax[0].set_xticks(x_lig, ligands) ax[0].set_xticklabels(ax[0].get_xticklabels(), rotation=60, ha="right", fontsize=9) @@ -106,9 +114,13 @@ cell_facs = sig_tensor_fact[1][3] x_cell = np.arange(len(cells)) -cell_rects_comp1 = ax[1].bar(x_cell - width, cell_facs[:, 0], width, label='Component 1') -cell_rects_comp2 = ax[1].bar(x_cell, cell_facs[:, 1], width, label='Component 2') -cell_rects_comp3 = ax[1].bar(x_cell + width, cell_facs[:, 2], width, label='Component 3') +cell_rects_comp1 = ax[1].bar( + x_cell - width, cell_facs[:, 0], width, label="Component 1" +) +cell_rects_comp2 = ax[1].bar(x_cell, cell_facs[:, 1], width, label="Component 2") +cell_rects_comp3 = ax[1].bar( + x_cell + width, cell_facs[:, 2], width, label="Component 3" +) ax[1].set(xlabel="Cell", ylabel="Component Weight", ylim=(0, 1)) ax[1].set_xticks(x_cell, cells) ax[1].set_xticklabels(ax[1].get_xticklabels(), rotation=45, ha="right") @@ -117,17 +129,17 @@ f.tight_layout() plt.show() -#%% -# Here we observe the correlations which both ligands and cell types have with each of -# our three components - we can interepret our tensor factorization for looking for -# patterns among these correlations. -# +# %% +# Here we observe the correlations which both ligands and cell types have with each of +# our three components - we can interepret our tensor factorization for looking for +# patterns among these correlations. +# # For example, we can see that bivalent mutants generally have higher correlations with -# component two, as do regulatory T cells. Thus we can infer that bivalent ligands -# activate regulatory T cells more than monovalent ligands. We also see that this +# component two, as do regulatory T cells. Thus we can infer that bivalent ligands +# activate regulatory T cells more than monovalent ligands. 
We also see that this # relationship is strengthened by the availability of IL2Rα, one subunit of the IL-2 receptor. # -# This is just one example of an insight we can make using tensor factorization. -# By plotting the correlations which time and dose have with each component, we -# could additionally make inferences as to the dynamics and dose dependence of how mutations +# This is just one example of an insight we can make using tensor factorization. +# By plotting the correlations which time and dose have with each component, we +# could additionally make inferences as to the dynamics and dose dependence of how mutations # affect IL-2 signaling in immune cells. diff --git a/stable/_downloads/b099986e302d05c170b2103cc56235d8/plot_parafac2.zip b/stable/_downloads/b099986e302d05c170b2103cc56235d8/plot_parafac2.zip new file mode 100644 index 000000000..02d0539b6 Binary files /dev/null and b/stable/_downloads/b099986e302d05c170b2103cc56235d8/plot_parafac2.zip differ diff --git a/stable/_downloads/b8fdcea423023234b51dd9091c942145/plot_guide_for_constrained_cp.zip b/stable/_downloads/b8fdcea423023234b51dd9091c942145/plot_guide_for_constrained_cp.zip new file mode 100644 index 000000000..763c7454b Binary files /dev/null and b/stable/_downloads/b8fdcea423023234b51dd9091c942145/plot_guide_for_constrained_cp.zip differ diff --git a/stable/_downloads/b9cbdb6e86868da417502cdb389f58d1/plot_cp_line_search.zip b/stable/_downloads/b9cbdb6e86868da417502cdb389f58d1/plot_cp_line_search.zip new file mode 100644 index 000000000..2717fa0a7 Binary files /dev/null and b/stable/_downloads/b9cbdb6e86868da417502cdb389f58d1/plot_cp_line_search.zip differ diff --git a/stable/_downloads/ba9ab31b668e2e402c959f74d1c232fd/plot_tucker_regression.ipynb b/stable/_downloads/ba9ab31b668e2e402c959f74d1c232fd/plot_tucker_regression.ipynb index d00f84aac..af85e5cfe 100644 --- a/stable/_downloads/ba9ab31b668e2e402c959f74d1c232fd/plot_tucker_regression.ipynb +++ b/stable/_downloads/ba9ab31b668e2e402c959f74d1c232fd/plot_tucker_regression.ipynb @@ -1,16 +1,5 @@ { "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -26,7 +15,7 @@ }, "outputs": [], "source": [ - "import matplotlib.pyplot as plt\nfrom tensorly.base import tensor_to_vec, partial_tensor_to_vec\nfrom tensorly.datasets.synthetic import gen_image\nfrom tensorly.regression.tucker_regression import TuckerRegressor\nimport tensorly as tl\n\n# Parameter of the experiment\nimage_height = 25\nimage_width = 25\n# shape of the images\npatterns = ['rectangle', 'swiss', 'circle']\n# ranks to test\nranks = [1, 2, 3, 4, 5]\n\n# Generate random samples\nrng = tl.check_random_state(1)\nX = tl.tensor(rng.normal(size=(1000, image_height, image_width), loc=0, scale=1))\n\n# Parameters of the plot, deduced from the data\nn_rows = len(patterns)\nn_columns = len(ranks) + 1\n# Plot the three images\nfig = plt.figure()\n\nfor i, pattern in enumerate(patterns):\n\n print('fitting pattern n.{}'.format(i))\n\n # Generate the original image\n weight_img = gen_image(region=pattern, image_height=image_height, image_width=image_width)\n weight_img = tl.tensor(weight_img)\n\n # Generate the labels\n y = tl.dot(partial_tensor_to_vec(X, skip_begin=1), tensor_to_vec(weight_img))\n\n # Plot the original weights\n ax = fig.add_subplot(n_rows, n_columns, i*n_columns + 1)\n ax.imshow(tl.to_numpy(weight_img), cmap=plt.cm.OrRd, 
interpolation='nearest')\n ax.set_axis_off()\n if i == 0:\n ax.set_title('Original\\nweights')\n\n for j, rank in enumerate(ranks):\n print('fitting for rank = {}'.format(rank))\n\n # Create a tensor Regressor estimator\n estimator = TuckerRegressor(weight_ranks=[rank, rank], tol=10e-7, n_iter_max=100, reg_W=1, verbose=0)\n\n # Fit the estimator to the data\n estimator.fit(X, y)\n\n ax = fig.add_subplot(n_rows, n_columns, i*n_columns + j + 2)\n ax.imshow(tl.to_numpy(estimator.weight_tensor_), cmap=plt.cm.OrRd, interpolation='nearest')\n ax.set_axis_off()\n\n if i == 0:\n ax.set_title('Learned\\nrank = {}'.format(rank))\n\nplt.suptitle(\"Tucker tensor regression\")\nplt.show()" + "import matplotlib.pyplot as plt\nfrom tensorly.base import tensor_to_vec, partial_tensor_to_vec\nfrom tensorly.datasets.synthetic import gen_image\nfrom tensorly.regression.tucker_regression import TuckerRegressor\nimport tensorly as tl\n\n# Parameter of the experiment\nimage_height = 25\nimage_width = 25\n# shape of the images\npatterns = [\"rectangle\", \"swiss\", \"circle\"]\n# ranks to test\nranks = [1, 2, 3, 4, 5]\n\n# Generate random samples\nrng = tl.check_random_state(1)\nX = tl.tensor(rng.normal(size=(1000, image_height, image_width), loc=0, scale=1))\n\n# Parameters of the plot, deduced from the data\nn_rows = len(patterns)\nn_columns = len(ranks) + 1\n# Plot the three images\nfig = plt.figure()\n\nfor i, pattern in enumerate(patterns):\n\n print(f\"fitting pattern n.{i}\")\n\n # Generate the original image\n weight_img = gen_image(\n region=pattern, image_height=image_height, image_width=image_width\n )\n weight_img = tl.tensor(weight_img)\n\n # Generate the labels\n y = tl.dot(partial_tensor_to_vec(X, skip_begin=1), tensor_to_vec(weight_img))\n\n # Plot the original weights\n ax = fig.add_subplot(n_rows, n_columns, i * n_columns + 1)\n ax.imshow(tl.to_numpy(weight_img), cmap=plt.cm.OrRd, interpolation=\"nearest\")\n ax.set_axis_off()\n if i == 0:\n ax.set_title(\"Original\\nweights\")\n\n for j, rank in enumerate(ranks):\n print(f\"fitting for rank = {rank}\")\n\n # Create a tensor Regressor estimator\n estimator = TuckerRegressor(\n weight_ranks=[rank, rank], tol=10e-7, n_iter_max=100, reg_W=1, verbose=0\n )\n\n # Fit the estimator to the data\n estimator.fit(X, y)\n\n ax = fig.add_subplot(n_rows, n_columns, i * n_columns + j + 2)\n ax.imshow(\n tl.to_numpy(estimator.weight_tensor_),\n cmap=plt.cm.OrRd,\n interpolation=\"nearest\",\n )\n ax.set_axis_off()\n\n if i == 0:\n ax.set_title(f\"Learned\\nrank = {rank}\")\n\nplt.suptitle(\"Tucker tensor regression\")\nplt.show()" ] } ], @@ -46,7 +35,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.16" + "version": "3.12.7" } }, "nbformat": 4, diff --git a/stable/_downloads/bcb72d6051bfbba3dbbe23efd1e3601f/plot_permute_factors.zip b/stable/_downloads/bcb72d6051bfbba3dbbe23efd1e3601f/plot_permute_factors.zip new file mode 100644 index 000000000..920ba548f Binary files /dev/null and b/stable/_downloads/bcb72d6051bfbba3dbbe23efd1e3601f/plot_permute_factors.zip differ diff --git a/stable/_downloads/c180f3d1630c24fd07b3388b592a8845/plot_parafac2_compression.zip b/stable/_downloads/c180f3d1630c24fd07b3388b592a8845/plot_parafac2_compression.zip new file mode 100644 index 000000000..bb0541935 Binary files /dev/null and b/stable/_downloads/c180f3d1630c24fd07b3388b592a8845/plot_parafac2_compression.zip differ diff --git a/stable/_downloads/c50dd76f48de56cb97a914c7c62591e6/plot_nn_cp_hals.py 
b/stable/_downloads/c50dd76f48de56cb97a914c7c62591e6/plot_nn_cp_hals.py index 29949dc3b..98e3b7d27 100644 --- a/stable/_downloads/c50dd76f48de56cb97a914c7c62591e6/plot_nn_cp_hals.py +++ b/stable/_downloads/c50dd76f48de56cb97a914c7c62591e6/plot_nn_cp_hals.py @@ -42,11 +42,13 @@ # for our NCP. In fact, in order to compare both algorithmic options in a # fair way, it is a good idea to use same initialized factors in decomposition # algorithms. We make use of the ``initialize_cp`` function to initialize the -# factors of the NCP (setting the ``non_negative`` option to `True`) +# factors of the NCP (setting the ``non_negative`` option to `True`) # and transform these factors (and factors weights) into # an instance of the CPTensor class: -weights_init, factors_init = initialize_cp(tensor, non_negative=True, init='random', rank=10) +weights_init, factors_init = initialize_cp( + tensor, non_negative=True, init="random", rank=10 +) cp_init = CPTensor((weights_init, factors_init)) @@ -58,9 +60,11 @@ # Multiplicative Update, which can be called as follows: tic = time.time() -tensor_mu, errors_mu = non_negative_parafac(tensor, rank=10, init=deepcopy(cp_init), return_errors=True) +tensor_mu, errors_mu = non_negative_parafac( + tensor, rank=10, init=deepcopy(cp_init), return_errors=True +) cp_reconstruction_mu = tl.cp_to_tensor(tensor_mu) -time_mu = time.time()-tic +time_mu = time.time() - tic ############################################################################## # Here, we also compute the output tensor from the decomposed factors by using @@ -69,8 +73,8 @@ # first few values of both tensors shows that this is indeed # the case but the approximation is quite coarse. -print('reconstructed tensor\n', cp_reconstruction_mu[10:12, 10:12, 10:12], '\n') -print('input data tensor\n', tensor[10:12, 10:12, 10:12]) +print("reconstructed tensor\n", cp_reconstruction_mu[10:12, 10:12, 10:12], "\n") +print("input data tensor\n", tensor[10:12, 10:12, 10:12]) ############################################################################## # Non-negative Parafac with HALS @@ -79,15 +83,17 @@ # used as follows: tic = time.time() -tensor_hals, errors_hals = non_negative_parafac_hals(tensor, rank=10, init=deepcopy(cp_init), return_errors=True) +tensor_hals, errors_hals = non_negative_parafac_hals( + tensor, rank=10, init=deepcopy(cp_init), return_errors=True +) cp_reconstruction_hals = tl.cp_to_tensor(tensor_hals) -time_hals = time.time()-tic +time_hals = time.time() - tic ############################################################################## # Again, we can look at the reconstructed tensor entries. 
-print('reconstructed tensor\n',cp_reconstruction_hals[10:12, 10:12, 10:12], '\n') -print('input data tensor\n', tensor[10:12, 10:12, 10:12]) +print("reconstructed tensor\n", cp_reconstruction_hals[10:12, 10:12, 10:12], "\n") +print("input data tensor\n", tensor[10:12, 10:12, 10:12]) ############################################################################## # Non-negative Parafac with Exact HALS @@ -102,18 +108,20 @@ # in the function: tic = time.time() -tensorhals_exact, errors_exact = non_negative_parafac_hals(tensor, rank=10, init=deepcopy(cp_init), return_errors=True, exact=True) +tensorhals_exact, errors_exact = non_negative_parafac_hals( + tensor, rank=10, init=deepcopy(cp_init), return_errors=True, exact=True +) cp_reconstruction_exact_hals = tl.cp_to_tensor(tensorhals_exact) -time_exact_hals = time.time()-tic +time_exact_hals = time.time() - tic ############################################################################## # Comparison # ----------------------- # First comparison option is processing time for each algorithm: -print(str("{:.2f}".format(time_mu)) + ' ' + 'seconds') -print(str("{:.2f}".format(time_hals)) + ' ' + 'seconds') -print(str("{:.2f}".format(time_exact_hals)) + ' ' + 'seconds') +print(str(f"{time_mu:.2f}") + " " + "seconds") +print(str(f"{time_hals:.2f}") + " " + "seconds") +print(str(f"{time_exact_hals:.2f}") + " " + "seconds") ############################################################################## # As it is expected, the exact solution takes much longer than the approximate @@ -125,6 +133,7 @@ # In Tensorly, we provide a function to calculate Root Mean Square Error (RMSE): from tensorly.metrics.regression import RMSE + print(RMSE(tensor, cp_reconstruction_mu)) print(RMSE(tensor, cp_reconstruction_hals)) print(RMSE(tensor, cp_reconstruction_exact_hals)) @@ -136,17 +145,19 @@ # in convergence speed on the following error per iteration plot: import matplotlib.pyplot as plt -def each_iteration(a,b,c,title): - fig=plt.figure() + + +def each_iteration(a, b, c, title): + fig = plt.figure() fig.set_size_inches(10, fig.get_figheight(), forward=True) plt.plot(a) plt.plot(b) plt.plot(c) plt.title(str(title)) - plt.legend(['MU', 'HALS', 'Exact HALS'], loc='upper left') + plt.legend(["MU", "HALS", "Exact HALS"], loc="upper left") -each_iteration(errors_mu, errors_hals, errors_exact, 'Error for each iteration') +each_iteration(errors_mu, errors_hals, errors_exact, "Error for each iteration") ############################################################################## # In conclusion, on this quick test, it appears that the HALS algorithm gives diff --git a/stable/_downloads/c6b629fdc27c337322799c80254dc184/plot_image_compression.ipynb b/stable/_downloads/c6b629fdc27c337322799c80254dc184/plot_image_compression.ipynb index 7824665a5..03f130b6f 100644 --- a/stable/_downloads/c6b629fdc27c337322799c80254dc184/plot_image_compression.ipynb +++ b/stable/_downloads/c6b629fdc27c337322799c80254dc184/plot_image_compression.ipynb @@ -1,16 +1,5 @@ { "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -26,7 +15,7 @@ }, "outputs": [], "source": [ - "import matplotlib.pyplot as plt\nimport tensorly as tl\nimport numpy as np\nfrom scipy.misc import face\nfrom scipy.ndimage import zoom\nfrom tensorly.decomposition import parafac\nfrom tensorly.decomposition import tucker\nfrom math import ceil\n\n\nrandom_state = 
12345\n\nimage = face()\nimage = tl.tensor(zoom(face(), (0.3, 0.3, 1)), dtype='float64')\n\ndef to_image(tensor):\n \"\"\"A convenience function to convert from a float dtype back to uint8\"\"\"\n im = tl.to_numpy(tensor)\n im -= im.min()\n im /= im.max()\n im *= 255\n return im.astype(np.uint8)\n\n# Rank of the CP decomposition\ncp_rank = 25\n# Rank of the Tucker decomposition\ntucker_rank = [100, 100, 2]\n\n# Perform the CP decomposition\nweights, factors = parafac(image, rank=cp_rank, init='random', tol=10e-6)\n# Reconstruct the image from the factors\ncp_reconstruction = tl.cp_to_tensor((weights, factors))\n\n# Tucker decomposition\ncore, tucker_factors = tucker(image, rank=tucker_rank, init='random', tol=10e-5, random_state=random_state)\ntucker_reconstruction = tl.tucker_to_tensor((core, tucker_factors))\n\n# Plotting the original and reconstruction from the decompositions\nfig = plt.figure()\nax = fig.add_subplot(1, 3, 1)\nax.set_axis_off()\nax.imshow(to_image(image))\nax.set_title('original')\n\nax = fig.add_subplot(1, 3, 2)\nax.set_axis_off()\nax.imshow(to_image(cp_reconstruction))\nax.set_title('CP')\n\nax = fig.add_subplot(1, 3, 3)\nax.set_axis_off()\nax.imshow(to_image(tucker_reconstruction))\nax.set_title('Tucker')\n\nplt.tight_layout()\nplt.show()" + "import matplotlib.pyplot as plt\nimport tensorly as tl\nimport numpy as np\nfrom scipy.misc import face\nfrom scipy.ndimage import zoom\nfrom tensorly.decomposition import parafac\nfrom tensorly.decomposition import tucker\nfrom math import ceil\n\n\nrandom_state = 12345\n\nimage = face()\nimage = tl.tensor(zoom(face(), (0.3, 0.3, 1)), dtype=\"float64\")\n\n\ndef to_image(tensor):\n \"\"\"A convenience function to convert from a float dtype back to uint8\"\"\"\n im = tl.to_numpy(tensor)\n im -= im.min()\n im /= im.max()\n im *= 255\n return im.astype(np.uint8)\n\n\n# Rank of the CP decomposition\ncp_rank = 25\n# Rank of the Tucker decomposition\ntucker_rank = [100, 100, 2]\n\n# Perform the CP decomposition\nweights, factors = parafac(image, rank=cp_rank, init=\"random\", tol=10e-6)\n# Reconstruct the image from the factors\ncp_reconstruction = tl.cp_to_tensor((weights, factors))\n\n# Tucker decomposition\ncore, tucker_factors = tucker(\n image, rank=tucker_rank, init=\"random\", tol=10e-5, random_state=random_state\n)\ntucker_reconstruction = tl.tucker_to_tensor((core, tucker_factors))\n\n# Plotting the original and reconstruction from the decompositions\nfig = plt.figure()\nax = fig.add_subplot(1, 3, 1)\nax.set_axis_off()\nax.imshow(to_image(image))\nax.set_title(\"original\")\n\nax = fig.add_subplot(1, 3, 2)\nax.set_axis_off()\nax.imshow(to_image(cp_reconstruction))\nax.set_title(\"CP\")\n\nax = fig.add_subplot(1, 3, 3)\nax.set_axis_off()\nax.imshow(to_image(tucker_reconstruction))\nax.set_title(\"Tucker\")\n\nplt.tight_layout()\nplt.show()" ] } ], @@ -46,7 +35,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.16" + "version": "3.12.7" } }, "nbformat": 4, diff --git a/stable/_downloads/c83bd9c72b1132d0ca7a4c03a7979d57/plot_tucker_regression.zip b/stable/_downloads/c83bd9c72b1132d0ca7a4c03a7979d57/plot_tucker_regression.zip new file mode 100644 index 000000000..1ecf3f284 Binary files /dev/null and b/stable/_downloads/c83bd9c72b1132d0ca7a4c03a7979d57/plot_tucker_regression.zip differ diff --git a/stable/_downloads/c8fb7612e0bd35c006429a54d1f43cb4/plot_permute_factors.ipynb b/stable/_downloads/c8fb7612e0bd35c006429a54d1f43cb4/plot_permute_factors.ipynb index 
8d459ed93..0a328bdd7 100644 --- a/stable/_downloads/c8fb7612e0bd35c006429a54d1f43cb4/plot_permute_factors.ipynb +++ b/stable/_downloads/c8fb7612e0bd35c006429a54d1f43cb4/plot_permute_factors.ipynb @@ -1,16 +1,5 @@ { "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -125,7 +114,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.16" + "version": "3.12.7" } }, "nbformat": 4, diff --git a/stable/_downloads/cad167cb142025226f5b9454cd0abb41/plot_nn_cp_hals.zip b/stable/_downloads/cad167cb142025226f5b9454cd0abb41/plot_nn_cp_hals.zip new file mode 100644 index 000000000..b0e4ae15a Binary files /dev/null and b/stable/_downloads/cad167cb142025226f5b9454cd0abb41/plot_nn_cp_hals.zip differ diff --git a/stable/_downloads/cdde43113b9e6de785a08675bf643a4d/plot_nn_tucker.ipynb b/stable/_downloads/cdde43113b9e6de785a08675bf643a4d/plot_nn_tucker.ipynb index b625a0a25..ff42bc865 100644 --- a/stable/_downloads/cdde43113b9e6de785a08675bf643a4d/plot_nn_tucker.ipynb +++ b/stable/_downloads/cdde43113b9e6de785a08675bf643a4d/plot_nn_tucker.ipynb @@ -1,16 +1,5 @@ { "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -58,7 +47,7 @@ }, "outputs": [], "source": [ - "# tensor generation\narray = np.random.randint(1000, size=(10, 30, 40))\ntensor = tl.tensor(array, dtype='float')" + "# tensor generation\narray = np.random.randint(1000, size=(10, 30, 40))\ntensor = tl.tensor(array, dtype=\"float\")" ] }, { @@ -76,7 +65,7 @@ }, "outputs": [], "source": [ - "tic = time.time()\ntensor_mu, error_mu = non_negative_tucker(tensor, rank=[5, 5, 5], tol=1e-12, n_iter_max=100, return_errors=True)\ntucker_reconstruction_mu = tl.tucker_to_tensor(tensor_mu)\ntime_mu = time.time()-tic" + "tic = time.time()\ntensor_mu, error_mu = non_negative_tucker(\n tensor, rank=[5, 5, 5], tol=1e-12, n_iter_max=100, return_errors=True\n)\ntucker_reconstruction_mu = tl.tucker_to_tensor(tensor_mu)\ntime_mu = time.time() - tic" ] }, { @@ -101,7 +90,7 @@ }, "outputs": [], "source": [ - "ticnew = time.time()\ntensor_hals_fista, error_fista = non_negative_tucker_hals(tensor, rank=[5, 5, 5], algorithm='fista', return_errors=True)\ntucker_reconstruction_fista = tl.tucker_to_tensor(tensor_hals_fista)\ntime_fista = time.time()-ticnew" + "ticnew = time.time()\ntensor_hals_fista, error_fista = non_negative_tucker_hals(\n tensor, rank=[5, 5, 5], algorithm=\"fista\", return_errors=True\n)\ntucker_reconstruction_fista = tl.tucker_to_tensor(tensor_hals_fista)\ntime_fista = time.time() - ticnew" ] }, { @@ -119,7 +108,7 @@ }, "outputs": [], "source": [ - "ticnew = time.time()\ntensor_hals_as, error_as = non_negative_tucker_hals(tensor, rank=[5, 5, 5], algorithm='active_set', return_errors=True)\ntucker_reconstruction_as = tl.tucker_to_tensor(tensor_hals_as)\ntime_as = time.time()-ticnew" + "ticnew = time.time()\ntensor_hals_as, error_as = non_negative_tucker_hals(\n tensor, rank=[5, 5, 5], algorithm=\"active_set\", return_errors=True\n)\ntucker_reconstruction_as = tl.tucker_to_tensor(tensor_hals_as)\ntime_as = time.time() - ticnew" ] }, { @@ -137,7 +126,7 @@ }, "outputs": [], "source": [ - "print('time for tensorly nntucker:'+' ' + str(\"{:.2f}\".format(time_mu)))\nprint('time for HALS with 
fista:'+' ' + str(\"{:.2f}\".format(time_fista)))\nprint('time for HALS with as:'+' ' + str(\"{:.2f}\".format(time_as)))" + "print(\"time for tensorly nntucker:\" + \" \" + str(f\"{time_mu:.2f}\"))\nprint(\"time for HALS with fista:\" + \" \" + str(f\"{time_fista:.2f}\"))\nprint(\"time for HALS with as:\" + \" \" + str(f\"{time_as:.2f}\"))" ] }, { @@ -155,7 +144,7 @@ }, "outputs": [], "source": [ - "print('RMSE tensorly nntucker:'+' ' + str(RMSE(tensor, tucker_reconstruction_mu)))\nprint('RMSE for hals with fista:'+' ' + str(RMSE(tensor, tucker_reconstruction_fista)))\nprint('RMSE for hals with as:'+' ' + str(RMSE(tensor, tucker_reconstruction_as)))" + "print(\"RMSE tensorly nntucker:\" + \" \" + str(RMSE(tensor, tucker_reconstruction_mu)))\nprint(\n \"RMSE for hals with fista:\" + \" \" + str(RMSE(tensor, tucker_reconstruction_fista))\n)\nprint(\"RMSE for hals with as:\" + \" \" + str(RMSE(tensor, tucker_reconstruction_as)))" ] }, { @@ -173,7 +162,7 @@ }, "outputs": [], "source": [ - "def each_iteration(a,b,c,title):\n fig=plt.figure()\n fig.set_size_inches(10, fig.get_figheight(), forward=True)\n plt.plot(a)\n plt.plot(b)\n plt.plot(c)\n plt.title(str(title))\n plt.legend(['MU', 'HALS + Fista', 'HALS + AS'], loc='upper right')\n\n\neach_iteration(error_mu, error_fista, error_as, 'Error for each iteration')" + "def each_iteration(a, b, c, title):\n fig = plt.figure()\n fig.set_size_inches(10, fig.get_figheight(), forward=True)\n plt.plot(a)\n plt.plot(b)\n plt.plot(c)\n plt.title(str(title))\n plt.legend([\"MU\", \"HALS + Fista\", \"HALS + AS\"], loc=\"upper right\")\n\n\neach_iteration(error_mu, error_fista, error_as, \"Error for each iteration\")" ] }, { @@ -187,7 +176,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## References\n\nGillis, N., & Glineur, F. (2012). Accelerated multiplicative updates and\nhierarchical ALS algorithms for nonnegative matrix factorization.\nNeural computation, 24(4), 1085-1105. \n`(Link) https://direct.mit.edu/neco/article/24/4/1085/7755/Accelerated-Multiplicative-Updates-and>`_\n" + "## References\n\nGillis, N., & Glineur, F. (2012). 
Accelerated multiplicative updates and\nhierarchical ALS algorithms for nonnegative matrix factorization.\nNeural computation, 24(4), 1085-1105.\n`(Link) https://direct.mit.edu/neco/article/24/4/1085/7755/Accelerated-Multiplicative-Updates-and>`_\n\n" ] } ], @@ -207,7 +196,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.16" + "version": "3.12.7" } }, "nbformat": 4, diff --git a/stable/_downloads/e2b69b698c4b4f8baba0fd4e55bfc00c/plot_tensor.zip b/stable/_downloads/e2b69b698c4b4f8baba0fd4e55bfc00c/plot_tensor.zip new file mode 100644 index 000000000..f2c4ba206 Binary files /dev/null and b/stable/_downloads/e2b69b698c4b4f8baba0fd4e55bfc00c/plot_tensor.zip differ diff --git a/stable/_downloads/f0754cb6f49432e87378932ce6ee13bb/plot_cp_regression.zip b/stable/_downloads/f0754cb6f49432e87378932ce6ee13bb/plot_cp_regression.zip new file mode 100644 index 000000000..1319f992e Binary files /dev/null and b/stable/_downloads/f0754cb6f49432e87378932ce6ee13bb/plot_cp_regression.zip differ diff --git a/stable/_downloads/f283605c95b45f5b3bbceeebb1a78c48/plot_parafac2_compression.py b/stable/_downloads/f283605c95b45f5b3bbceeebb1a78c48/plot_parafac2_compression.py new file mode 100644 index 000000000..f3b717abf --- /dev/null +++ b/stable/_downloads/f283605c95b45f5b3bbceeebb1a78c48/plot_parafac2_compression.py @@ -0,0 +1,294 @@ +""" +Speeding up PARAFAC2 with SVD compression +========================================= + +PARAFAC2 can be very time-consuming to fit. However, if the number of rows greatly +exceeds the number of columns or the data matrices are approximately low-rank, we can +compress the data before fitting the PARAFAC2 model to considerably speed up the fitting +procedure. + +The compression works by first computing the SVD of the tensor slices and fitting the +PARAFAC2 model to the right singular vectors multiplied by the singular values. Then, +after we fit the model, we left-multiply the :math:`B_i`-matrices with the left singular +vectors to recover the decompressed model. Fitting to compressed data and then +decompressing is mathematically equivalent to fitting to the original uncompressed data. + +For more information about why this works, see the documentation of +:py:meth:`tensorly.decomposition.preprocessing.svd_compress_tensor_slices`. 
+""" + +from time import monotonic +import tensorly as tl +from tensorly.decomposition import parafac2 +import tensorly.preprocessing as preprocessing + + +############################################################################## +# Function to create synthetic data +# --------------------------------- +# +# Here, we create a function that constructs a random tensor from a PARAFAC2 +# decomposition with noise + +rng = tl.check_random_state(0) + + +def create_random_data(shape, rank, noise_level): + I, J, K = shape # noqa: E741 + pf2 = tl.random.random_parafac2( + [(J, K) for i in range(I)], rank=rank, random_state=rng + ) + + X = pf2.to_tensor() + X_norm = [tl.norm(Xi) for Xi in X] + + noise = [rng.standard_normal((J, K)) for i in range(I)] + noise = [noise_level * X_norm[i] / tl.norm(E_i) for i, E_i in enumerate(noise)] + return [X_i + E_i for X_i, E_i in zip(X, noise)] + + +############################################################################## +# Compressing data with many rows and few columns +# ----------------------------------------------- +# +# Here, we set up for a case where we have many rows compared to columns + +n_inits = 5 +rank = 3 +shape = (10, 10_000, 15) # 10 matrices/tensor slices, each of size 10_000 x 15. +noise_level = 0.33 + +uncompressed_data = create_random_data(shape, rank=rank, noise_level=noise_level) + +############################################################################## +# Fitting without compression +# ^^^^^^^^^^^^^^^^^^^^^^^^^^^ +# +# As a baseline, we see how long time it takes to fit models without compression. +# Since PARAFAC2 is very prone to local minima, we fit five models and select the model +# with the lowest reconstruction error. + +print("Fitting PARAFAC2 model without compression...") +t1 = monotonic() +lowest_error = float("inf") +for i in range(n_inits): + pf2, errs = parafac2( + uncompressed_data, + rank, + n_iter_max=1000, + nn_modes=[0], + random_state=rng, + return_errors=True, + ) + if errs[-1] < lowest_error: + pf2_full, errs_full = pf2, errs +t2 = monotonic() +print( + f"It took {t2 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} " + + "without compression" +) + +############################################################################## +# Fitting with lossless compression +# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +# +# Since the tensor slices have many rows compared to columns, we should be able to save +# a lot of time by compressing the data. By compressing the matrices, we only need to +# fit the PARAFAC2 model to a set of 10 matrices, each of size 15 x 15, not 10_000 x 15. +# +# The main bottleneck here is the SVD computation at the beginning of the fitting +# procedure, but luckily, this is independent of the initialisations, so we only need +# to compute this once. Also, if we are performing a grid search for the rank, then +# we just need to perform the compression once for the whole grid search as well. 
+ +print("Fitting PARAFAC2 model with SVD compression...") +t1 = monotonic() +lowest_error = float("inf") +scores, loadings = preprocessing.svd_compress_tensor_slices(uncompressed_data) +t2 = monotonic() +for i in range(n_inits): + pf2, errs = parafac2( + scores, + rank, + n_iter_max=1000, + nn_modes=[0], + random_state=rng, + return_errors=True, + ) + if errs[-1] < lowest_error: + pf2_compressed, errs_compressed = pf2, errs +pf2_decompressed = preprocessing.svd_decompress_parafac2_tensor( + pf2_compressed, loadings +) +t3 = monotonic() +print( + f"It took {t3 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} " + + "with lossless SVD compression" +) +print(f"The compression took {t2 - t1:.1f}s and the fitting took {t3 - t2:.1f}s") + +############################################################################## +# We see that we saved a lot of time by compressing the data before fitting the model. + +############################################################################## +# Fitting with lossy compression +# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +# +# We can try to speed the process up even further by accepting a slight discrepancy +# between the model obtained from compressed data and a model obtained from uncompressed +# data. Specifically, we can truncate the singular values at some threshold, essentially +# removing the parts of the data matrices that have a very low "signal strength". + +print("Fitting PARAFAC2 model with lossy SVD compression...") +t1 = monotonic() +lowest_error = float("inf") +scores, loadings = preprocessing.svd_compress_tensor_slices(uncompressed_data, 1e-5) +t2 = monotonic() +for i in range(n_inits): + pf2, errs = parafac2( + scores, + rank, + n_iter_max=1000, + nn_modes=[0], + random_state=rng, + return_errors=True, + ) + if errs[-1] < lowest_error: + pf2_compressed_lossy, errs_compressed_lossy = pf2, errs +pf2_decompressed_lossy = preprocessing.svd_decompress_parafac2_tensor( + pf2_compressed_lossy, loadings +) +t3 = monotonic() +print( + f"It took {t3 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} " + + "with lossy SVD compression" +) +print( + f"Of which the compression took {t2 - t1:.1f}s and the fitting took {t3 - t2:.1f}s" +) + +############################################################################## +# We see that we didn't save much, if any, time in this case (compared to using +# lossless compression). This is because the main bottleneck now is the CP-part of +# the PARAFAC2 procedure, so reducing the tensor size from 10 x 15 x 15 to 10 x 4 x 15 +# (which is typically what we would get here) will have a negligible effect. + + +############################################################################## +# Compressing data that is approximately low-rank +# ----------------------------------------------- +# +# Here, we simulate data with many rows and columns but an approximately low rank. + +rank = 3 +shape = (10, 2_000, 2_000) +noise_level = 0.33 + +uncompressed_data = create_random_data(shape, rank=rank, noise_level=noise_level) + +############################################################################## +# Fitting without compression +# ^^^^^^^^^^^^^^^^^^^^^^^^^^^ +# +# Again, we start by fitting without compression as a baseline. 
+ +print("Fitting PARAFAC2 model without compression...") +t1 = monotonic() +lowest_error = float("inf") +for i in range(n_inits): + pf2, errs = parafac2( + uncompressed_data, + rank, + n_iter_max=1000, + nn_modes=[0], + random_state=rng, + return_errors=True, + ) + if errs[-1] < lowest_error: + pf2_full, errs_full = pf2, errs +t2 = monotonic() +print( + f"It took {t2 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} " + + "without compression" +) + +############################################################################## +# Fitting with lossless compression +# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +# +# Next, we fit with lossless compression. + +print("Fitting PARAFAC2 model with SVD compression...") +t1 = monotonic() +lowest_error = float("inf") +scores, loadings = preprocessing.svd_compress_tensor_slices(uncompressed_data) +t2 = monotonic() +for i in range(n_inits): + pf2, errs = parafac2( + scores, + rank, + n_iter_max=1000, + nn_modes=[0], + random_state=rng, + return_errors=True, + ) + if errs[-1] < lowest_error: + pf2_compressed, errs_compressed = pf2, errs +pf2_decompressed = preprocessing.svd_decompress_parafac2_tensor( + pf2_compressed, loadings +) +t3 = monotonic() +print( + f"It took {t3 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} " + + "with lossless SVD compression" +) +print( + f"Of which the compression took {t2 - t1:.1f}s and the fitting took {t3 - t2:.1f}s" +) + +############################################################################## +# We see that the lossless compression no effect for this data. This is because the +# number ofrows is equal to the number of columns, so we cannot compress the data +# losslessly with the SVD. + +############################################################################## +# Fitting with lossy compression +# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +# +# Finally, we fit with lossy SVD compression. + +print("Fitting PARAFAC2 model with lossy SVD compression...") +t1 = monotonic() +lowest_error = float("inf") +scores, loadings = preprocessing.svd_compress_tensor_slices(uncompressed_data, 1e-5) +t2 = monotonic() +for i in range(n_inits): + pf2, errs = parafac2( + scores, + rank, + n_iter_max=1000, + nn_modes=[0], + random_state=rng, + return_errors=True, + ) + if errs[-1] < lowest_error: + pf2_compressed_lossy, errs_compressed_lossy = pf2, errs +pf2_decompressed_lossy = preprocessing.svd_decompress_parafac2_tensor( + pf2_compressed_lossy, loadings +) +t3 = monotonic() +print( + f"It took {t3 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} " + + "with lossy SVD compression" +) +print( + f"Of which the compression took {t2 - t1:.1f}s and the fitting took {t3 - t2:.1f}s" +) + + +############################################################################## +# Here we see a large speedup. This is because the data is approximately low rank so +# the compressed tensor slices will have shape R x 2_000, where R is typically below 10 +# in this example. If your tensor slices are large in both modes, you might want to plot +# the singular values of your dataset to see if lossy compression could speed up +# PARAFAC2. 
diff --git a/stable/_downloads/f907c46434c7c7991c5a12ab81407582/plot_tensor.py b/stable/_downloads/f907c46434c7c7991c5a12ab81407582/plot_tensor.py index e808fb42b..a92330e80 100644 --- a/stable/_downloads/f907c46434c7c7991c5a12ab81407582/plot_tensor.py +++ b/stable/_downloads/f907c46434c7c7991c5a12ab81407582/plot_tensor.py @@ -1,4 +1,3 @@ -# -*- coding: utf-8 -*- """ Basic tensor operations ======================= @@ -13,12 +12,12 @@ ########################################################################### # A tensor is simply a numpy array tensor = tl.tensor(np.arange(24).reshape((3, 4, 2))) -print('* original tensor:\n{}'.format(tensor)) +print(f"* original tensor:\n{tensor}") ########################################################################### # Unfolding a tensor is easy for mode in range(tensor.ndim): - print('* mode-{} unfolding:\n{}'.format(mode, tl.unfold(tensor, mode))) + print(f"* mode-{mode} unfolding:\n{tl.unfold(tensor, mode)}") ########################################################################### # Re-folding the tensor is as easy: diff --git a/stable/_images/sphx_glr_plot_IL2_001.png b/stable/_images/sphx_glr_plot_IL2_001.png index 2a7131af4..c1c7410c6 100644 Binary files a/stable/_images/sphx_glr_plot_IL2_001.png and b/stable/_images/sphx_glr_plot_IL2_001.png differ diff --git a/stable/_images/sphx_glr_plot_covid_001.png b/stable/_images/sphx_glr_plot_covid_001.png index c170b5f43..2a78eb7db 100644 Binary files a/stable/_images/sphx_glr_plot_covid_001.png and b/stable/_images/sphx_glr_plot_covid_001.png differ diff --git a/stable/_images/sphx_glr_plot_covid_002.png b/stable/_images/sphx_glr_plot_covid_002.png index df5d7aaa1..b197e0111 100644 Binary files a/stable/_images/sphx_glr_plot_covid_002.png and b/stable/_images/sphx_glr_plot_covid_002.png differ diff --git a/stable/_images/sphx_glr_plot_cp_line_search_001.png b/stable/_images/sphx_glr_plot_cp_line_search_001.png index b6bd4cfb2..a8b4e1268 100644 Binary files a/stable/_images/sphx_glr_plot_cp_line_search_001.png and b/stable/_images/sphx_glr_plot_cp_line_search_001.png differ diff --git a/stable/_images/sphx_glr_plot_cp_line_search_thumb.png b/stable/_images/sphx_glr_plot_cp_line_search_thumb.png index b4e40bef4..65fb1733a 100644 Binary files a/stable/_images/sphx_glr_plot_cp_line_search_thumb.png and b/stable/_images/sphx_glr_plot_cp_line_search_thumb.png differ diff --git a/stable/_images/sphx_glr_plot_cp_regression_001.png b/stable/_images/sphx_glr_plot_cp_regression_001.png index e09d6baae..7c948ff12 100644 Binary files a/stable/_images/sphx_glr_plot_cp_regression_001.png and b/stable/_images/sphx_glr_plot_cp_regression_001.png differ diff --git a/stable/_images/sphx_glr_plot_guide_for_constrained_cp_001.png b/stable/_images/sphx_glr_plot_guide_for_constrained_cp_001.png index 2f79a74fe..5a45d083b 100644 Binary files a/stable/_images/sphx_glr_plot_guide_for_constrained_cp_001.png and b/stable/_images/sphx_glr_plot_guide_for_constrained_cp_001.png differ diff --git a/stable/_images/sphx_glr_plot_guide_for_constrained_cp_002.png b/stable/_images/sphx_glr_plot_guide_for_constrained_cp_002.png index 4073cd15d..c6e3c92bb 100644 Binary files a/stable/_images/sphx_glr_plot_guide_for_constrained_cp_002.png and b/stable/_images/sphx_glr_plot_guide_for_constrained_cp_002.png differ diff --git a/stable/_images/sphx_glr_plot_guide_for_constrained_cp_003.png b/stable/_images/sphx_glr_plot_guide_for_constrained_cp_003.png index 2b61ef238..f44a54a62 100644 Binary files 
a/stable/_images/sphx_glr_plot_guide_for_constrained_cp_003.png and b/stable/_images/sphx_glr_plot_guide_for_constrained_cp_003.png differ diff --git a/stable/_images/sphx_glr_plot_guide_for_constrained_cp_004.png b/stable/_images/sphx_glr_plot_guide_for_constrained_cp_004.png index eece38951..4142db35d 100644 Binary files a/stable/_images/sphx_glr_plot_guide_for_constrained_cp_004.png and b/stable/_images/sphx_glr_plot_guide_for_constrained_cp_004.png differ diff --git a/stable/_images/sphx_glr_plot_guide_for_constrained_cp_005.png b/stable/_images/sphx_glr_plot_guide_for_constrained_cp_005.png index 928864f54..8b2074603 100644 Binary files a/stable/_images/sphx_glr_plot_guide_for_constrained_cp_005.png and b/stable/_images/sphx_glr_plot_guide_for_constrained_cp_005.png differ diff --git a/stable/_images/sphx_glr_plot_guide_for_constrained_cp_006.png b/stable/_images/sphx_glr_plot_guide_for_constrained_cp_006.png index ed8eb5bb1..ad1b751a2 100644 Binary files a/stable/_images/sphx_glr_plot_guide_for_constrained_cp_006.png and b/stable/_images/sphx_glr_plot_guide_for_constrained_cp_006.png differ diff --git a/stable/_images/sphx_glr_plot_guide_for_constrained_cp_thumb.png b/stable/_images/sphx_glr_plot_guide_for_constrained_cp_thumb.png index de0061c76..6e9224fb7 100644 Binary files a/stable/_images/sphx_glr_plot_guide_for_constrained_cp_thumb.png and b/stable/_images/sphx_glr_plot_guide_for_constrained_cp_thumb.png differ diff --git a/stable/_images/sphx_glr_plot_image_compression_001.png b/stable/_images/sphx_glr_plot_image_compression_001.png index 1f14d094e..2574e2839 100644 Binary files a/stable/_images/sphx_glr_plot_image_compression_001.png and b/stable/_images/sphx_glr_plot_image_compression_001.png differ diff --git a/stable/_images/sphx_glr_plot_image_compression_thumb.png b/stable/_images/sphx_glr_plot_image_compression_thumb.png index 38564dcde..38d4da088 100644 Binary files a/stable/_images/sphx_glr_plot_image_compression_thumb.png and b/stable/_images/sphx_glr_plot_image_compression_thumb.png differ diff --git a/stable/_images/sphx_glr_plot_nn_cp_hals_001.png b/stable/_images/sphx_glr_plot_nn_cp_hals_001.png index fd90aa014..45ec06c7a 100644 Binary files a/stable/_images/sphx_glr_plot_nn_cp_hals_001.png and b/stable/_images/sphx_glr_plot_nn_cp_hals_001.png differ diff --git a/stable/_images/sphx_glr_plot_nn_cp_hals_thumb.png b/stable/_images/sphx_glr_plot_nn_cp_hals_thumb.png index 92bb2c5d5..4754ec32c 100644 Binary files a/stable/_images/sphx_glr_plot_nn_cp_hals_thumb.png and b/stable/_images/sphx_glr_plot_nn_cp_hals_thumb.png differ diff --git a/stable/_images/sphx_glr_plot_nn_tucker_001.png b/stable/_images/sphx_glr_plot_nn_tucker_001.png index 2c007409f..55426b9b0 100644 Binary files a/stable/_images/sphx_glr_plot_nn_tucker_001.png and b/stable/_images/sphx_glr_plot_nn_tucker_001.png differ diff --git a/stable/_images/sphx_glr_plot_nn_tucker_thumb.png b/stable/_images/sphx_glr_plot_nn_tucker_thumb.png index 6808cbc9e..ce18bb911 100644 Binary files a/stable/_images/sphx_glr_plot_nn_tucker_thumb.png and b/stable/_images/sphx_glr_plot_nn_tucker_thumb.png differ diff --git a/stable/_images/sphx_glr_plot_parafac2_001.png b/stable/_images/sphx_glr_plot_parafac2_001.png index cf92a4124..fa860bdd3 100644 Binary files a/stable/_images/sphx_glr_plot_parafac2_001.png and b/stable/_images/sphx_glr_plot_parafac2_001.png differ diff --git a/stable/_images/sphx_glr_plot_parafac2_002.png b/stable/_images/sphx_glr_plot_parafac2_002.png index f2519d6e3..6c1009d34 100644 Binary files 
a/stable/_images/sphx_glr_plot_parafac2_002.png and b/stable/_images/sphx_glr_plot_parafac2_002.png differ diff --git a/stable/_images/sphx_glr_plot_parafac2_compression_thumb.png b/stable/_images/sphx_glr_plot_parafac2_compression_thumb.png new file mode 100644 index 000000000..8a5fed589 Binary files /dev/null and b/stable/_images/sphx_glr_plot_parafac2_compression_thumb.png differ diff --git a/stable/_images/sphx_glr_plot_parafac2_thumb.png b/stable/_images/sphx_glr_plot_parafac2_thumb.png index b110eb55b..8469bbd41 100644 Binary files a/stable/_images/sphx_glr_plot_parafac2_thumb.png and b/stable/_images/sphx_glr_plot_parafac2_thumb.png differ diff --git a/stable/_images/sphx_glr_plot_permute_factors_001.png b/stable/_images/sphx_glr_plot_permute_factors_001.png index 739c9fe79..998829fda 100644 Binary files a/stable/_images/sphx_glr_plot_permute_factors_001.png and b/stable/_images/sphx_glr_plot_permute_factors_001.png differ diff --git a/stable/_images/sphx_glr_plot_permute_factors_thumb.png b/stable/_images/sphx_glr_plot_permute_factors_thumb.png index 95d63fcb3..3bab9fa42 100644 Binary files a/stable/_images/sphx_glr_plot_permute_factors_thumb.png and b/stable/_images/sphx_glr_plot_permute_factors_thumb.png differ diff --git a/stable/_images/sphx_glr_plot_tucker_regression_001.png b/stable/_images/sphx_glr_plot_tucker_regression_001.png index 3c1bf2b78..7b1aa6190 100644 Binary files a/stable/_images/sphx_glr_plot_tucker_regression_001.png and b/stable/_images/sphx_glr_plot_tucker_regression_001.png differ diff --git a/stable/_modules/index.html b/stable/_modules/index.html index 65097c8e2..6ccaefecd 100644 --- a/stable/_modules/index.html +++ b/stable/_modules/index.html @@ -1,7 +1,6 @@ - - + @@ -15,17 +14,17 @@ - - - - - - + + + + + + - - - + + + @@ -157,7 +156,7 @@

All modules for which code is available

  • tensorly.decomposition._nn_cp
  • tensorly.decomposition._parafac2
  • tensorly.decomposition._symmetric_cp
  • - tensorly.decomposition._tr
  • + tensorly.decomposition._tr_svd
  • tensorly.decomposition._tt
  • tensorly.decomposition._tucker
  • tensorly.decomposition.robust_decomposition
  • @@ -166,10 +165,13 @@


  • tensorly.metrics.similarity
  • tensorly.parafac2_tensor
  • tensorly.plugins
  • + tensorly.preprocessing
  • tensorly.random.base
  • tensorly.regression.cp_plsr
  • tensorly.regression.cp_regression
  • tensorly.regression.tucker_regression
  • +
  • + tensorly.solvers.admm
  • + tensorly.solvers.nnls
  • tensorly.tenalg.core_tenalg._khatri_rao
  • tensorly.tenalg.core_tenalg._kronecker
  • @@ -193,7 +195,7 @@


    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
    diff --git a/stable/_modules/tensorly/base.html b/stable/_modules/tensorly/base.html index 432e33c01..772685196 100644 --- a/stable/_modules/tensorly/base.html +++ b/stable/_modules/tensorly/base.html @@ -1,7 +1,6 @@ - - + @@ -15,17 +14,17 @@ - - - - - - + + + + + + - - - + + + @@ -145,11 +144,13 @@

    Source code for tensorly.base

    -from . import backend as tl
    -from .utils import prod
    +from math import prod
    +from . import backend as tl
     
     
    -
    [docs]def tensor_to_vec(tensor): +
    +[docs] +def tensor_to_vec(tensor): """Vectorises a tensor Parameters @@ -165,7 +166,10 @@


         return tl.reshape(tensor, (-1,))
    -
    [docs]def vec_to_tensor(vec, shape): + +
    +[docs] +def vec_to_tensor(vec, shape): """Folds a vectorised tensor back into a tensor of shape `shape` Parameters @@ -183,7 +187,10 @@


         return tl.reshape(vec, shape)
    -
    [docs]def unfold(tensor, mode): + +
    +[docs] +def unfold(tensor, mode): """Returns the mode-`mode` unfolding of `tensor` with modes starting at `0`. Parameters @@ -200,7 +207,10 @@


         return tl.reshape(tl.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1))
    -
    [docs]def fold(unfolded_tensor, mode, shape): + +
    +[docs] +def fold(unfolded_tensor, mode, shape): """Refolds the mode-`mode` unfolding into a tensor of shape `shape` In other words, refolds the n-mode unfolded tensor @@ -226,7 +236,10 @@


         return tl.moveaxis(tl.reshape(unfolded_tensor, full_shape), 0, mode)
    -
    [docs]def partial_unfold(tensor, mode=0, skip_begin=1, skip_end=0, ravel_tensors=False): + +
    +[docs] +def partial_unfold(tensor, mode=0, skip_begin=1, skip_end=0, ravel_tensors=False): """Partially unfolds a tensor while ignoring the specified number of dimensions at the beginning and the end. For instance, if the first dimension of the tensor is the number of samples, to unfold each sample, @@ -265,7 +278,10 @@


         return tl.reshape(tl.moveaxis(tensor, mode + skip_begin, skip_begin), new_shape)
    -
    [docs]def partial_fold(unfolded, mode, shape, skip_begin=1, skip_end=0): + +
    +[docs] +def partial_fold(unfolded, mode, shape, skip_begin=1, skip_end=0): """Re-folds a partially unfolded tensor Parameters @@ -294,7 +310,10 @@


         )
    -
    [docs]def partial_tensor_to_vec(tensor, skip_begin=1, skip_end=0): + +
    +[docs] +def partial_tensor_to_vec(tensor, skip_begin=1, skip_end=0): """Partially vectorises a tensor Partially vectorises a tensor while ignoring the specified dimension at the beginning and the end @@ -318,7 +337,10 @@


         )
    -
    [docs]def partial_vec_to_tensor(matrix, shape, skip_begin=1, skip_end=0): + +
    +[docs] +def partial_vec_to_tensor(matrix, shape, skip_begin=1, skip_end=0): """Refolds a partially vectorised tensor into a full one Parameters @@ -342,6 +364,7 @@


         )
    + def matricize(tensor, row_modes, column_modes=None): """Matricizes the given tensor @@ -393,7 +416,7 @@


             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
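The `unfold`/`fold` pair shown in this listing can be sanity-checked with a minimal round trip; the sketch below assumes only the NumPy backend and the functions documented above.

import numpy as np
import tensorly as tl

tensor = tl.tensor(np.arange(24).reshape((3, 4, 2)))
unfolded = tl.unfold(tensor, mode=1)  # shape (4, 6): mode-1 fibres become rows
refolded = tl.fold(unfolded, mode=1, shape=(3, 4, 2))
assert np.allclose(tl.to_numpy(refolded), tl.to_numpy(tensor))  # lossless round trip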
    diff --git a/stable/_modules/tensorly/contrib/decomposition/_tt_cross.html b/stable/_modules/tensorly/contrib/decomposition/_tt_cross.html index 20a5acbdc..34541b9e4 100644 --- a/stable/_modules/tensorly/contrib/decomposition/_tt_cross.html +++ b/stable/_modules/tensorly/contrib/decomposition/_tt_cross.html @@ -1,7 +1,6 @@ - - + @@ -15,17 +14,17 @@ - - - - - - + + + + + + - - - + + + @@ -150,7 +149,9 @@

    Source code for tensorly.contrib.decomposition._tt_cross

    import numpy as np -
    [docs]def tensor_train_cross(input_tensor, rank, tol=1e-4, n_iter_max=100, random_state=None): +
    +[docs] +def tensor_train_cross(input_tensor, rank, tol=1e-4, n_iter_max=100, random_state=None): """TT (tensor-train) decomposition via cross-approximation (TTcross) [1] Decomposes `input_tensor` into a sequence of order-3 tensors of given rank. (factors/cores) @@ -253,14 +254,10 @@


    # Initialize rank if rank[0] != 1: - message = "Provided rank[0] == {} but boundary conditions dictate rank[0] == rank[-1] == 1.".format( - rank[0] - ) + message = f"Provided rank[0] == {rank[0]} but boundary conditions dictate rank[0] == rank[-1] == 1." raise ValueError(message) if rank[-1] != 1: - message = "Provided rank[-1] == {} but boundary conditions dictate rank[0] == rank[-1] == 1.".format( - rank[-1] - ) + message = f"Provided rank[-1] == {rank[-1]} but boundary conditions dictate rank[0] == rank[-1] == 1." raise ValueError(message) # list col_idx: column indices (right indices) for skeleton-decomposition: indicate which columns used in each core. @@ -384,6 +381,7 @@


    return factor_new
    + def left_right_ttcross_step(input_tensor, k, rank, row_idx, col_idx): """Compute the next (right) core's row indices by QR decomposition. @@ -575,7 +573,7 @@


    (n, r) = tl.shape(A) # The index of row of the submatrix - row_idx = tl.zeros(r) + row_idx = tl.zeros(r, dtype=tl.int64) # Rest of rows / unselected rows rest_of_rows = tl.tensor(list(range(n)), dtype=tl.int64) @@ -588,7 +586,7 @@


    # Compute the square of norm of each row rows_norms = tl.sum(A_new**2, axis=1) - # If there is only one row of A left, let's just return it. MxNet is not robust about this case. + # If there is only one row of A left, let's just return it. if tl.shape(rows_norms) == (): row_idx[i] = rest_of_rows break @@ -609,7 +607,7 @@


    # projection a to b is computed as: <a,b> / sqrt(|a|*|b|) projection = tl.dot(A_new, tl.transpose(max_row)) normalization = tl.sqrt(rows_norms[max_row_idx] * rows_norms) - # make sure normalization vector is of the same shape of projection (causing bugs for MxNet) + # make sure normalization vector is of the same shape of projection normalization = tl.reshape(normalization, tl.shape(projection)) projection = projection / normalization @@ -621,8 +619,8 @@


    A_new = A_new[mask, :] # update the row_idx and rest_of_rows - row_idx[i] = rest_of_rows[max_row_idx] - rest_of_rows = rest_of_rows[mask] + row_idx = tl.index_update(row_idx, i, rest_of_rows[max_row_idx]) + rest_of_rows = rest_of_rows[tl.tensor(mask, dtype=tl.int64)] i = i + 1 row_idx = tl.tensor(row_idx, dtype=tl.int64) @@ -641,7 +639,7 @@


    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
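A hedged usage sketch of the cross-approximation routine listed above: it assumes `tensor_train_cross` is importable from `tensorly.contrib.decomposition` and that it returns the order-3 TT cores described in its docstring. The rank list must start and end with 1, matching the boundary checks in the source.

import numpy as np
import tensorly as tl
from tensorly.contrib.decomposition import tensor_train_cross  # assumed import path

data = tl.tensor(np.random.random_sample((8, 8, 8)))
# rank[0] == rank[-1] == 1, as enforced by the checks shown above
factors = tensor_train_cross(data, rank=[1, 4, 4, 1], tol=1e-4, n_iter_max=100)
print([tl.shape(core) for core in factors])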
    diff --git a/stable/_modules/tensorly/contrib/decomposition/tt_TTOI.html b/stable/_modules/tensorly/contrib/decomposition/tt_TTOI.html index 21745ad49..6ded4f9f9 100644 --- a/stable/_modules/tensorly/contrib/decomposition/tt_TTOI.html +++ b/stable/_modules/tensorly/contrib/decomposition/tt_TTOI.html @@ -1,7 +1,6 @@ - - + @@ -15,17 +14,17 @@ - - - - - - + + + + + + - - - + + + @@ -189,7 +188,9 @@

    Source code for tensorly.contrib.decomposition.tt_TTOI

    return tensor_prod -
    [docs]def tensor_train_OI(data_tensor, rank, n_iter=2, trajectory=False, return_errors=True): +
    +[docs] +def tensor_train_OI(data_tensor, rank, n_iter=2, trajectory=False, return_errors=True): """Perform tensor-train orthogonal iteration (TTOI) [1]_ for tensor train decomposition Parameters @@ -387,6 +388,7 @@


    return factors, full_tensor
    + class TensorTrain_OI(DecompositionMixin): def __init__(self, rank, n_iter, trajectory, return_errors): self.rank = rank @@ -412,7 +414,7 @@


    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
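Similarly, a sketch of calling the TTOI routine listed above. The import path and the rank format (boundary ranks of 1) are assumptions, and the structure of the returned value depends on the `trajectory` and `return_errors` flags, so the result is left unpacked here.

import numpy as np
import tensorly as tl
from tensorly.contrib.decomposition import tensor_train_OI  # assumed import path

noisy = tl.tensor(np.random.standard_normal((6, 6, 6)))
result = tensor_train_OI(noisy, rank=[1, 3, 3, 1], n_iter=2)
# Inspect `result` to see which of factors / full tensor / errors were returned
# for the flag values used; see the source listing above.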
    diff --git a/stable/_modules/tensorly/cp_tensor.html b/stable/_modules/tensorly/cp_tensor.html index 5ae8df4c4..16c406fc6 100644 --- a/stable/_modules/tensorly/cp_tensor.html +++ b/stable/_modules/tensorly/cp_tensor.html @@ -1,7 +1,6 @@ - - + @@ -15,17 +14,17 @@ - - - - - - + + + + + + - - - + + + @@ -153,7 +152,6 @@

    Source code for tensorly.cp_tensor

     from .base import fold, tensor_to_vec
     from ._factorized_tensor import FactorizedTensor
     from .tenalg import khatri_rao, unfolding_dot_khatri_rao
    -from .utils import DefineDeprecated
     from .metrics.factors import congruence_coefficient
     import numpy as np
     
    @@ -253,7 +251,7 @@ 


     
             See also
             --------
    -        kruskal_multi_mode_dot : chaining several mode_dot in one call
    +        cp_mode_dot : chaining several mode_dot in one call
             """
             return cp_mode_dot(self, matrix_or_vector, mode, keep_dim=keep_dim, copy=copy)
     
    @@ -423,7 +421,9 @@ 


         return rank
     
     
    -
    [docs]def cp_normalize(cp_tensor): +
    +[docs] +def cp_normalize(cp_tensor): """Returns cp_tensor with factors normalised to unit length Turns ``factors = [|U_1, ... U_n|]`` into ``[weights; |V_1, ... V_n|]``, @@ -469,6 +469,7 @@

    Source code for tensorly.cp_tensor

         return CPTensor((weights, normalized_factors))
    + def cp_flip_sign(cp_tensor, mode=0, func=None): """Returns cp_tensor with factors flipped to have positive signs. The sign of a given column is determined by `func`, which is the mean @@ -578,7 +579,9 @@

    Source code for tensorly.cp_tensor

         return CPTensor((None, grad_fac))
     
     
    -
    [docs]def cp_to_tensor(cp_tensor, mask=None): +
    +[docs] +def cp_to_tensor(cp_tensor, mask=None): """Turns the Khatri-product of matrices into a full tensor ``factor_matrices = [|U_1, ... U_n|]`` becomes @@ -633,7 +636,10 @@

    Source code for tensorly.cp_tensor

         return fold(full_tensor, 0, shape)
    -
    [docs]def cp_to_unfolded(cp_tensor, mode): + +
    +[docs] +def cp_to_unfolded(cp_tensor, mode): """Turns the khatri-product of matrices into an unfolded tensor turns ``factors = [|U_1, ... U_n|]`` into a mode-`mode` @@ -669,7 +675,10 @@

    Source code for tensorly.cp_tensor

             return T.dot(factors[mode], T.transpose(khatri_rao(factors, skip_matrix=mode)))
    -
    [docs]def cp_to_vec(cp_tensor): + +
    +[docs] +def cp_to_vec(cp_tensor): """Turns the khatri-product of matrices into a vector (the tensor ``factors = [|U_1, ... U_n|]`` @@ -694,7 +703,10 @@

    Source code for tensorly.cp_tensor

         return tensor_to_vec(cp_to_tensor(cp_tensor))
    -
    [docs]def cp_mode_dot(cp_tensor, matrix_or_vector, mode, keep_dim=False, copy=False): + +
    +[docs] +def cp_mode_dot(cp_tensor, matrix_or_vector, mode, keep_dim=False, copy=False): """n-mode product of a CP tensor and a matrix or vector at the specified mode Parameters @@ -715,7 +727,7 @@

    Source code for tensorly.cp_tensor

     
         See also
         --------
    -    kruskal_multi_mode_dot : chaining several mode_dot in one call
    +    cp_multi_mode_dot : chaining several mode_dot in one call
         """
         shape, _ = _validate_cp_tensor(cp_tensor)
         weights, factors = cp_tensor
    @@ -759,7 +771,10 @@ 

    Source code for tensorly.cp_tensor

             return cp_tensor
    -
    [docs]def cp_norm(cp_tensor): + +
    +[docs] +def cp_norm(cp_tensor): """Returns the l2 norm of a CP tensor Parameters @@ -788,12 +803,14 @@

    Source code for tensorly.cp_tensor

             # norm = T.dot(T.dot(weights, norm), weights)
             norm = norm * (T.reshape(weights, (-1, 1)) * T.reshape(weights, (1, -1)))
     
    -    # We sum even if weigths is not None
    -    # as e.g. MXNet would return a 1D tensor, not a 0D tensor
    +    # We sum even if weights is not None
         return T.sqrt(T.sum(norm))
    -
    [docs]def cp_permute_factors(ref_cp_tensor, tensors_to_permute): + +
+[docs] +def cp_permute_factors(ref_cp_tensor, tensors_to_permute): """ Compares factors of a reference cp tensor with factors of another tensor (or list of tensors) in order to match component order. Permutation occurs on the columns of factors, minimizing the cosine distance to the reference cp tensor with scipy @@ -837,22 +854,6 @@

    Source code for tensorly.cp_tensor

             permuted_tensors = permuted_tensors[0]
         return permuted_tensors, permutation
    - -# Deprecated classes and functions -KruskalTensor = DefineDeprecated(deprecated_name="KruskalTensor", use_instead=CPTensor) -kruskal_norm = DefineDeprecated(deprecated_name="kruskal_norm", use_instead=cp_norm) -kruskal_mode_dot = DefineDeprecated( - deprecated_name="kruskal_mode_dot", use_instead=cp_mode_dot -) -kruskal_to_tensor = DefineDeprecated( - deprecated_name="kruskal_to_tensor", use_instead=cp_to_tensor -) -kruskal_to_unfolded = DefineDeprecated( - deprecated_name="kruskal_to_unfolded", use_instead=cp_to_unfolded -) -kruskal_to_vec = DefineDeprecated( - deprecated_name="kruskal_to_vec", use_instead=cp_to_vec -)
    @@ -862,7 +863,7 @@

    Source code for tensorly.cp_tensor

             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/datasets/data_imports.html index 05d968663..8d7cb7544 100644 --- a/stable/_modules/tensorly/datasets/data_imports.html +++ b/stable/_modules/tensorly/datasets/data_imports.html @@ -181,7 +180,9 @@

    Source code for tensorly.datasets.data_imports

    pass -

    [docs]def load_IL2data(): +
    +[docs] +def load_IL2data(): """ Loads tensor of IL-2 mutein treatment responses. Tensor contains the signaling responses of eight different cell types to 13 IL-2 mutants. @@ -263,7 +264,10 @@

    Source code for tensorly.datasets.data_imports

    )

    -
    [docs]def load_covid19_serology(): + +
    +[docs] +def load_covid19_serology(): """ Load an example dataset of COVID-19 systems serology. Formatted in a three-mode tensor of samples, antigens, and receptors. @@ -779,7 +783,10 @@

    Source code for tensorly.datasets.data_imports

    )

    -
    [docs]def load_indian_pines(): + +
    +[docs] +def load_indian_pines(): """ Loads Indian pines hyperspectral data from tensorly datasets and returns it as a bunch. This dataset could be useful for non-negative constrained decomposition methods and classification/segmentation applications with the available ground truth in @@ -1023,7 +1030,10 @@

    Source code for tensorly.datasets.data_imports

    )

    -
    [docs]def load_kinetic(): + +
    +[docs] +def load_kinetic(): """ Loads the kinetic fluorescence dataset (X60t) as a tensorly tensor. The data is well suited for Parafac and multi-way partial least squares regression (N-PLS). Missing data are replaced by 0s, and a missing value mask is provided. Data is a courtesy of Rasmus Bro and collaborators, it can be originally downloaded at https://ucphchemometrics.com/. Please cite the original reference [1] if you use this data in any way. @@ -1081,6 +1091,7 @@

    Source code for tensorly.datasets.data_imports

    DESC=desc, LICENCE=licence, )

    +
    @@ -1090,7 +1101,7 @@

    Source code for tensorly.datasets.data_imports

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/datasets/synthetic.html index 2ff86d7e8..e522a8012 100644 --- a/stable/_modules/tensorly/datasets/synthetic.html +++ b/stable/_modules/tensorly/datasets/synthetic.html @@ -149,7 +148,9 @@

    Source code for tensorly.datasets.synthetic

     from .. import backend as T
     
     
    -
    [docs]def gen_image( +
    +[docs] +def gen_image( region="swiss", image_height=20, image_width=20, n_channels=None, weight_value=1 ): """Generates an image for regression testing @@ -197,6 +198,7 @@

    Source code for tensorly.datasets.synthetic

             weight = np.concatenate([weight[..., None]] * n_channels, axis=-1)
     
         return T.tensor(weight)
    +
    @@ -206,7 +208,7 @@

    Source code for tensorly.datasets.synthetic

             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/decomposition/_constrained_cp.html index f4ac6d95f..03fa23e9e 100644 --- a/stable/_modules/tensorly/decomposition/_constrained_cp.html +++ b/stable/_modules/tensorly/decomposition/_constrained_cp.html @@ -153,7 +152,8 @@

    Source code for tensorly.decomposition._constrained_cp

    from ._base_decomposition import DecompositionMixin from ..base import unfold from ..cp_tensor import CPTensor, cp_norm, validate_cp_rank -from ..tenalg.proximal import admm, proximal_operator, validate_constraints +from ..solvers.admm import admm +from ..tenalg.proximal import proximal_operator, validate_constraints from ..tenalg.svd import svd_interface from ..tenalg import unfolding_dot_khatri_rao @@ -309,7 +309,9 @@

    Source code for tensorly.decomposition._constrained_cp

    return kt -
    [docs]def constrained_parafac( +
    +[docs] +def constrained_parafac( tensor, rank, n_iter_max=100, @@ -526,8 +528,7 @@

    Source code for tensorly.decomposition._constrained_cp

    factors_norm = cp_norm((weights, factors)) iprod = tl.sum(tl.sum(mttkrp * factors[-1], axis=0) * weights) rec_error = ( - tl.sqrt(tl.abs(norm_tensor**2 + factors_norm**2 - 2 * iprod)) - / norm_tensor + tl.sqrt(tl.abs(norm_tensor**2 + factors_norm**2 - 2 * iprod)) / norm_tensor ) rec_errors.append(rec_error) constraint_error = 0 @@ -547,7 +548,7 @@

    Source code for tensorly.decomposition._constrained_cp

    if constraint_error < tol_outer: break if cvg_criterion == "abs_rec_error": - stop_flag = abs(rec_error_decrease) < tol_outer + stop_flag = tl.abs(rec_error_decrease) < tol_outer elif cvg_criterion == "rec_error": stop_flag = rec_error_decrease < tol_outer else: @@ -570,7 +571,10 @@

    Source code for tensorly.decomposition._constrained_cp

    return cp_tensor
    -
    [docs]class ConstrainedCP(DecompositionMixin): + +
    +[docs] +class ConstrainedCP(DecompositionMixin): """CANDECOMP/PARAFAC decomposition via alternating optimization of alternating direction method of multipliers (AO-ADMM): @@ -714,7 +718,9 @@

    Source code for tensorly.decomposition._constrained_cp

    self.monotonicity = monotonicity self.hard_sparsity = hard_sparsity -
    [docs] def fit_transform(self, tensor): +
    +[docs] + def fit_transform(self, tensor): """Decompose an input tensor Parameters @@ -756,7 +762,9 @@

    Source code for tensorly.decomposition._constrained_cp

    ) self.decomposition_ = cp_tensor self.errors_ = errors - return self.decomposition_
    + return self.decomposition_
    +
    +
    @@ -766,7 +774,7 @@

    Source code for tensorly.decomposition._constrained_cp

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/decomposition/_cp.html index 0a41a1081..fac1a83e4 100644 --- a/stable/_modules/tensorly/decomposition/_cp.html +++ b/stable/_modules/tensorly/decomposition/_cp.html @@ -374,7 +373,9 @@

    Source code for tensorly.decomposition._cp

         return unnorml_rec_error, tensor, norm_tensor
     
     
    -
    [docs]def parafac( +
    +[docs] +def parafac( tensor, rank, n_iter_max=100, @@ -555,7 +556,7 @@

    Source code for tensorly.decomposition._cp

                 if verbose > 1:
                     print("Mode", mode, "of", tl.ndim(tensor))
     
    -            pseudo_inverse = tl.tensor(np.ones((rank, rank)), **tl.context(tensor))
    +            pseudo_inverse = tl.ones((rank, rank), **tl.context(tensor))
                 for i, factor in enumerate(factors):
                     if i != mode:
                         pseudo_inverse = pseudo_inverse * tl.dot(
    @@ -573,8 +574,6 @@ 

    Source code for tensorly.decomposition._cp

                     tl.solve(tl.conj(tl.transpose(pseudo_inverse)), tl.transpose(mttkrp))
                 )
                 factors[mode] = factor
    -            if normalize_factors and mode != modes_list[-1]:
    -                weights, factors = cp_normalize((weights, factors))
     
             # Will we be performing a line search iteration
             if linesearch and iteration % 2 == 0 and iteration > 5:
    @@ -661,7 +660,7 @@ 

    Source code for tensorly.decomposition._cp

                         )
     
                     if cvg_criterion == "abs_rec_error":
    -                    stop_flag = abs(rec_error_decrease) < tol
    +                    stop_flag = tl.abs(rec_error_decrease) < tol
                     elif cvg_criterion == "rec_error":
                         stop_flag = rec_error_decrease < tol
                     else:
    @@ -692,7 +691,10 @@ 

    Source code for tensorly.decomposition._cp

             return cp_tensor
    -
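For context on the pseudo_inverse change above (tl.ones instead of wrapping a NumPy array), here is a self-contained sketch of a single ALS factor update using the same tensorly calls as the loop in this hunk; the random tensor, rank and mode are placeholders:

    import tensorly as tl
    from tensorly.cp_tensor import unfolding_dot_khatri_rao

    rng = tl.check_random_state(0)
    tensor = tl.tensor(rng.random_sample((4, 5, 6)))
    rank, mode = 3, 0
    factors = [tl.tensor(rng.random_sample((s, rank))) for s in tl.shape(tensor)]

    # Hadamard product of the Gram matrices of all factors except `mode`
    pseudo_inverse = tl.ones((rank, rank), **tl.context(tensor))
    for i, factor in enumerate(factors):
        if i != mode:
            pseudo_inverse = pseudo_inverse * tl.dot(tl.transpose(factor), factor)

    # Matricised-tensor-times-Khatri-Rao product, then solve the normal equations
    mttkrp = unfolding_dot_khatri_rao(tensor, (None, factors), mode)
    factors[mode] = tl.transpose(
        tl.solve(tl.conj(tl.transpose(pseudo_inverse)), tl.transpose(mttkrp))
    )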
    [docs]def sample_khatri_rao( + +
    +[docs] +def sample_khatri_rao( matrices, n_samples, skip_matrix=None, @@ -779,7 +781,10 @@

    Source code for tensorly.decomposition._cp

             return sampled_kr, indices_list
    -
    [docs]def randomised_parafac( + +
    +[docs] +def randomised_parafac( tensor, rank, n_samples, @@ -861,7 +866,6 @@

    Source code for tensorly.decomposition._cp

                 indices_list = [i.tolist() for i in indices_list]
                 # Keep all the elements of the currently considered mode
                 indices_list.insert(mode, slice(None, None, None))
    -            # MXNet will not be happy if this is a list instead of a tuple
                 indices_list = tuple(indices_list)
                 if mode:
                     sampled_unfolding = tensor[indices_list]
    @@ -899,7 +903,7 @@ 

    Source code for tensorly.decomposition._cp

                             f"reconstruction error={rec_errors[-1]}, variation={rec_errors[-2]-rec_errors[-1]}."
                         )
     
    -                if (tol and abs(rec_errors[-2] - rec_errors[-1]) < tol) or (
    +                if (tol and tl.abs(rec_errors[-2] - rec_errors[-1]) < tol) or (
                         stagnation and (stagnation > max_stagnation)
                     ):
                         if verbose:
    @@ -912,7 +916,10 @@ 

    Source code for tensorly.decomposition._cp

             return CPTensor((weights, factors))
    -
    [docs]class CP(DecompositionMixin): + +
    +[docs] +class CP(DecompositionMixin): """Candecomp-Parafac decomposition via Alternating-Least Square Computes a rank-`rank` decomposition of `tensor` [1]_ such that:: @@ -1024,7 +1031,9 @@

    Source code for tensorly.decomposition._cp

             self.linesearch = linesearch
             self.callback = callback
     
    -
    [docs] def fit_transform(self, tensor): +
    +[docs] + def fit_transform(self, tensor): """Decompose an input tensor Parameters @@ -1062,11 +1071,15 @@

    Source code for tensorly.decomposition._cp

             self.errors_ = errors
             return self.decomposition_
    + def __repr__(self): return f"Rank-{self.rank} CP decomposition."
    -
    [docs]class RandomizedCP(DecompositionMixin): + +
    +[docs] +class RandomizedCP(DecompositionMixin): """Randomised CP decomposition via sampled ALS Parameters @@ -1143,6 +1156,7 @@

    Source code for tensorly.decomposition._cp

                 callback=self.callback,
             )
             return self.decomposition_
    +
    @@ -1152,7 +1166,7 @@

    Source code for tensorly.decomposition._cp

             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/decomposition/_cp_power.html index d9b051437..788d206d1 100644 --- a/stable/_modules/tensorly/decomposition/_cp_power.html +++ b/stable/_modules/tensorly/decomposition/_cp_power.html @@ -156,7 +155,9 @@

    Source code for tensorly.decomposition._cp_power

    # License: BSD 3 clause -
    [docs]def power_iteration(tensor, n_repeat=10, n_iteration=10, verbose=False): +
    +[docs] +def power_iteration(tensor, n_repeat=10, n_iteration=10, verbose=False): """A single Robust Tensor Power Iteration Parameters @@ -225,7 +226,10 @@

    Source code for tensorly.decomposition._cp_power

    return eigenval, best_factors, deflated
    -
    [docs]def parafac_power_iteration(tensor, rank, n_repeat=10, n_iteration=10, verbose=0): + +
    +[docs] +def parafac_power_iteration(tensor, rank, n_repeat=10, n_iteration=10, verbose=0): """CP Decomposition via Robust Tensor Power Iteration Parameters @@ -270,7 +274,10 @@

    Source code for tensorly.decomposition._cp_power

    return weights, factors
    -
    [docs]class CPPower(DecompositionMixin): + +
    +[docs] +class CPPower(DecompositionMixin): """CP Decomposition via Robust Tensor Power Iteration Parameters @@ -302,7 +309,9 @@

    Source code for tensorly.decomposition._cp_power

    self.n_iteration = n_iteration self.verbose = verbose -
    [docs] def fit_transform(self, tensor): +
    +[docs] + def fit_transform(self, tensor): """Decompose an input tensor Parameters @@ -325,8 +334,10 @@

    Source code for tensorly.decomposition._cp_power

    self.decomposition_ = cp_tensor return cp_tensor
    + def __repr__(self): return f"Rank-{self.rank} CP decomposition via Robust Tensor Power Iteration."
    +
    @@ -336,7 +347,7 @@

    Source code for tensorly.decomposition._cp_power

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/decomposition/_nn_cp.html index fc0787e2b..28a319d99 100644 --- a/stable/_modules/tensorly/decomposition/_nn_cp.html +++ b/stable/_modules/tensorly/decomposition/_nn_cp.html @@ -149,7 +148,7 @@

    Source code for tensorly.decomposition._nn_cp

    import tensorly as tl from ._base_decomposition import DecompositionMixin from ._cp import initialize_cp -from ..tenalg.proximal import hals_nnls +from ..solvers.nnls import hals_nnls from ..cp_tensor import ( CPTensor, unfolding_dot_khatri_rao, @@ -171,7 +170,9 @@

    Source code for tensorly.decomposition._nn_cp

    # License: BSD 3 clause -
    [docs]def non_negative_parafac( +
    +[docs] +def non_negative_parafac( tensor, rank, n_iter_max=100, @@ -306,7 +307,7 @@

    Source code for tensorly.decomposition._nn_cp

    ) if cvg_criterion == "abs_rec_error": - stop_flag = abs(rec_error_decrease) < tol + stop_flag = tl.abs(rec_error_decrease) < tol elif cvg_criterion == "rec_error": stop_flag = rec_error_decrease < tol else: @@ -329,7 +330,10 @@

    Source code for tensorly.decomposition._nn_cp

    return cp_tensor
    -
    [docs]def non_negative_parafac_hals( + +
    +[docs] +def non_negative_parafac_hals( tensor, rank, n_iter_max=100, @@ -455,7 +459,7 @@

    Source code for tensorly.decomposition._nn_cp

    # One pass of least squares on each updated mode for mode in modes: # Computing Hadamard of cross-products - pseudo_inverse = tl.tensor(tl.ones((rank, rank)), **tl.context(tensor)) + pseudo_inverse = tl.ones((rank, rank), **tl.context(tensor)) for i, factor in enumerate(factors): if i != mode: pseudo_inverse = pseudo_inverse * tl.dot( @@ -471,7 +475,7 @@

    Source code for tensorly.decomposition._nn_cp

    if mode in nn_modes: # Call the hals resolution with nnls, optimizing the current mode - nn_factor, _, _, _ = hals_nnls( + nn_factor = hals_nnls( tl.transpose(mttkrp), pseudo_inverse, tl.transpose(factors[mode]), @@ -502,7 +506,7 @@

    Source code for tensorly.decomposition._nn_cp

    ) if cvg_criterion == "abs_rec_error": - stop_flag = abs(rec_error_decrease) < tol + stop_flag = tl.abs(rec_error_decrease) < tol elif cvg_criterion == "rec_error": stop_flag = rec_error_decrease < tol else: @@ -524,6 +528,7 @@

    Source code for tensorly.decomposition._nn_cp

    return cp_tensor
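The hunks above switch hals_nnls to the tensorly.solvers.nnls import and to a single return value. A standalone sketch of a non-negative least-squares solve with the new-style call (shapes are illustrative; older versions returned a tuple and would need the previous unpacking):

    import tensorly as tl
    from tensorly.solvers.nnls import hals_nnls

    rng = tl.check_random_state(0)
    U = tl.tensor(rng.random_sample((10, 3)))       # known non-negative factor
    V_true = tl.tensor(rng.random_sample((3, 8)))   # non-negative target
    M = tl.dot(U, V_true)                           # observed matrix

    UtM = tl.dot(tl.transpose(U), M)
    UtU = tl.dot(tl.transpose(U), U)
    V_init = tl.tensor(rng.random_sample((3, 8)))

    V = hals_nnls(UtM, UtU, V_init)  # new-style call: returns only the updated factor
    print(tl.norm(tl.dot(U, V) - M) / tl.norm(M))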
    + class CP_NN(DecompositionMixin): """ Non-Negative Candecomp-Parafac decomposition via Alternating-Least Square @@ -657,7 +662,9 @@

    Source code for tensorly.decomposition._nn_cp

    return f"Rank-{self.rank} Non-Negative CP decomposition." -
    [docs]class CP_NN_HALS(DecompositionMixin): +
    +[docs] +class CP_NN_HALS(DecompositionMixin): """ Non-Negative Candecomp-Parafac decomposition via Alternating-Least Square @@ -759,7 +766,9 @@

    Source code for tensorly.decomposition._nn_cp

    self.cvg_criterion = cvg_criterion self.random_state = random_state -
    [docs] def fit_transform(self, tensor): +
    +[docs] + def fit_transform(self, tensor): """Decompose an input tensor Parameters @@ -795,8 +804,10 @@

    Source code for tensorly.decomposition._nn_cp

    self.errors_ = errors return self.decomposition_
    + def __repr__(self): return f"Rank-{self.rank} Non-Negative CP decomposition."
    +
    @@ -806,7 +817,7 @@

    Source code for tensorly.decomposition._nn_cp

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/decomposition/_parafac2.html index e845d4b44..164827b92 100644 --- a/stable/_modules/tensorly/decomposition/_parafac2.html +++ b/stable/_modules/tensorly/decomposition/_parafac2.html @@ -146,6 +145,7 @@

    Source code for tensorly.decomposition._parafac2

     from warnings import warn
    +from typing import Iterable, Optional, Sequence, Literal, Union
     
     import tensorly as tl
     from ._base_decomposition import DecompositionMixin
    @@ -153,13 +153,11 @@ 

    Source code for tensorly.decomposition._parafac2

    from tensorly import backend as T from . import parafac, non_negative_parafac_hals from ..parafac2_tensor import ( - parafac2_to_slice, Parafac2Tensor, _validate_parafac2_tensor, ) -from ..cp_tensor import CPTensor -from ..base import unfold -from ..tenalg.svd import svd_interface +from ..cp_tensor import CPTensor, cp_normalize +from ..tenalg.svd import svd_interface, SVD_TYPES # Authors: Marie Roald # Yngve Mardal Moe @@ -168,7 +166,23 @@

    Source code for tensorly.decomposition._parafac2

    def initialize_decomposition( tensor_slices, rank, init="random", svd="truncated_svd", random_state=None ): - r"""Initiate a random PARAFAC2 decomposition given rank and tensor slices + r"""Initiate a random PARAFAC2 decomposition given rank and tensor slices. + + The SVD-based initialization is based on concatenation of all the tensor slices. + This concatenated matrix is used to derive the factor matrix corresponding to the + :math:`k` mode for an :math:`X_{ijk}` tensor. However, concatenating these slices + requires a new copy of the tensor. For tensors where the sum of the :math:`j` mode + along each slice is on average larger than the :math:`k` mode, an alternative + strategy is adding together the cross-product matrix of each slice: + + .. math:: + + K = X_{1}^T X_{1} + X_{2}^T X_{2} + ... + + The eigenvectors of this symmetric matrix are then equal to the right eigenvectors + of the concatenation matrix. This function automatically chooses between + concatenating or forming the cross-product, depending on which resulting matrix + is smaller. Parameters ---------- @@ -182,26 +196,35 @@

    Source code for tensorly.decomposition._parafac2

    parafac2_tensor : Parafac2Tensor List of initialized factors of the CP decomposition where element `i` is of shape (tensor.shape[i], rank) - """ context = tl.context(tensor_slices[0]) shapes = [m.shape for m in tensor_slices] + concat_shape = sum(shape[0] for shape in shapes) if init == "random": return random_parafac2( shapes, rank, full=False, random_state=random_state, **context ) elif init == "svd": - padded_tensor = _pad_by_zeros(tensor_slices) - A = T.ones((padded_tensor.shape[0], rank), **context) - - unfolded_mode_2 = unfold(padded_tensor, 2) - if T.shape(unfolded_mode_2)[0] < rank: + if shapes[0][1] < rank: raise ValueError( - f"Cannot perform SVD init if rank ({rank}) is greater than the number of columns in each tensor slice ({T.shape(unfolded_mode_2)[0]})" + f"Cannot perform SVD init if rank ({rank}) is greater than the number of columns in each tensor slice ({shapes[0][1]})" ) + + A = tl.ones((len(tensor_slices), rank), **context) + + if concat_shape > shapes[0][1]: + # If the concatenated matrix would be larger than the cross-product, use the latter + unfolded_mode_2 = tl.transpose(tensor_slices[0]) @ tensor_slices[0] + + for slice in tensor_slices[1:]: + unfolded_mode_2 += tl.matmul(tl.transpose(slice), slice) + else: + unfolded_mode_2 = tl.transpose(tl.concatenate(list(tensor_slices), axis=0)) + C = svd_interface(unfolded_mode_2, n_eigenvecs=rank, method=svd)[0] - B = T.eye(rank, **context) + + B = tl.eye(rank, **context) projections = _compute_projections(tensor_slices, (A, B, C), svd) return Parafac2Tensor((None, [A, B, C], projections)) @@ -219,20 +242,6 @@
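The new initialisation docstring above states that the eigenvectors of K = X_1^T X_1 + X_2^T X_2 + ... coincide with the right singular vectors of the row-wise concatenation of the slices. A small NumPy check of that claim on random slices (independent of the library code):

    import numpy as np

    rng = np.random.default_rng(0)
    slices = [rng.standard_normal((50, 6)) for _ in range(4)]  # X_i with a shared column dimension

    # Right singular vectors of the concatenation [X_1; X_2; ...; X_I]
    concat = np.concatenate(slices, axis=0)
    _, _, Vh_concat = np.linalg.svd(concat, full_matrices=False)

    # Eigenvectors of the symmetric cross-product sum K
    K = sum(Xi.T @ Xi for Xi in slices)
    _, eigvecs = np.linalg.eigh(K)
    eigvecs = eigvecs[:, ::-1]  # eigh sorts ascending; reverse to match the SVD ordering

    # Columns agree up to sign (barring degenerate eigenvalues)
    agreement = np.abs(np.sum(Vh_concat.T * eigvecs, axis=0))
    print(np.allclose(agreement, 1.0, atol=1e-8))  # expected: True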

    Source code for tensorly.decomposition._parafac2

    raise ValueError(f'Initialization method "{init}" not recognized') -def _pad_by_zeros(tensor_slices): - """Return zero-padded full tensor.""" - I = len(tensor_slices) - J = max(tensor_slice.shape[0] for tensor_slice in tensor_slices) - K = tensor_slices[0].shape[1] - padded = T.zeros((I, J, K), **T.context(tensor_slices[0])) - for i, tensor_slice in enumerate(tensor_slices): - J_i = len(tensor_slice) - - padded = tl.index_update(padded, tl.index[i, :J_i], tensor_slice) - - return padded - - def _compute_projections(tensor_slices, factors, svd): n_eig = factors[0].shape[1] out = [] @@ -240,7 +249,9 @@

    Source code for tensorly.decomposition._parafac2

    for A, tensor_slice in zip(factors[0], tensor_slices): lhs = T.dot(factors[1], T.transpose(A * factors[2])) rhs = T.transpose(tensor_slice) - U, _, Vh = svd_interface(T.dot(lhs, rhs), n_eigenvecs=n_eig, method=svd) + U, _, Vh = svd_interface( + T.dot(lhs, rhs), n_eigenvecs=n_eig, method=svd, flip_sign=False + ) out.append(T.transpose(T.dot(U, Vh))) @@ -256,29 +267,214 @@

    Source code for tensorly.decomposition._parafac2

    return tl.stack(slices) -def _parafac2_reconstruction_error(tensor_slices, decomposition): +class _BroThesisLineSearch: + def __init__( + self, + norm_tensor, + svd: str, + verbose: bool = False, + nn_modes=None, + acc_pow: float = 2.0, + max_fail: int = 4, + ): + """The line search strategy defined within Rasmus Bro's thesis [1, 2]. + + Parameters + ---------- + norm_tensor : int + Sum of the matrix norms for each slice. + svd : str + The function to use to compute the SVD, acceptable values in tensorly.SVD_FUNS + verbose : bool + Optionally provide output about each step. + nn_modes: None, 'all' or array of integers + (Default: None) Used to specify which modes to impose non-negativity constraints on. + We cannot impose non-negativity constraints on the the B-mode (mode 1) with the ALS + algorithm, so if this mode is among the constrained modes, then a warning will be shown + (see notes for more info). + acc_pow : int + Line search steps are defined as `iteration ** (1.0 / acc_pow)`. + max_fail : int + The number of line search failures before increasing `acc_pow`. + + References + ---------- + .. [1] R. Bro, "Multi-Way Analysis in the Food Industry: Models, Algorithms, and + Applications", PhD., University of Amsterdam, 1998 + .. [2] H. Yu, D. Augustijn, R. Bro, "Accelerating PARAFAC2 algorithms for non-negative + complex tensor decomposition." Chemometrics and Intelligent Laboratory Systems 214 + (2021): 104312. + """ + self.norm_tensor = norm_tensor + self.svd = svd + self.verbose = verbose + self.acc_pow = acc_pow # Extrapolate to the iteration^(1/acc_pow) ahead + self.max_fail = max_fail # Increase acc_pow with one after max_fail failure + self.acc_fail = 0 # How many times acceleration have failed + self.nn_modes = nn_modes + + def line_step( + self, + iteration: int, + tensor_slices: Iterable, + factors_last: list, + weights, + factors: list, + projections: list, + rec_error, + ): + r"""Perform one line search step. + + Parameters + ---------- + iteration : int + The current iteration number. + tensor_slices : ndarray or list of ndarrays + The data itself. Either a third order tensor or a list of second order tensors that + may have different number of rows. + factors_last : list of ndarrays + The CP factors from the previous iteration. + weights : ndarrays + The normalization weights for the current factors. + factors : list of ndarrays + The CP factors from the current iteration. + projections : list of ndarrays + The projection matrices from the current iteration. + rec_error : float + The reconstruction error from the current iteration. + + Returns + ------- + factors : list + List of factors for the accepted step. + projections : list + List of projection matrices from the accepted step. + rec_error : float + Reconstruction error of the accepted step. 
+ """ + jump = iteration ** (1.0 / self.acc_pow) + + factors_ls = [ + factors_last[ii] + (factors[ii] - factors_last[ii]) * jump + for ii, _ in enumerate(factors) + ] + + # Clip if the mode should be non-negative + if self.nn_modes: + if 0 in self.nn_modes: + factors_ls[0] = tl.clip(factors_ls[0], 0) + if 2 in self.nn_modes: + factors_ls[2] = tl.clip(factors_ls[2], 0) + + projections_ls = _compute_projections(tensor_slices, factors_ls, self.svd) + + ls_rec_error = _parafac2_reconstruction_error( + tensor_slices, (weights, factors_ls, projections_ls), self.norm_tensor + ) + ls_rec_error /= self.norm_tensor + + if ls_rec_error < rec_error: + self.acc_fail = 0 + + if self.verbose: + print(f"Accepted line search jump of {jump}.") + + return factors_ls, projections_ls, ls_rec_error + else: + self.acc_fail += 1 + + if self.verbose: + print(f"Line search failed for jump of {jump}.") + + if self.acc_fail == self.max_fail: + self.acc_pow += 1.0 + self.acc_fail = 0 + + if self.verbose: + print("Reducing acceleration.") + + return factors, projections, rec_error + + +def _parafac2_reconstruction_error( + tensor_slices, decomposition, norm_matrices=None, projected_tensor=None +): + """Calculates the reconstruction error of the PARAFAC2 decomposition. This implementation + uses the inner product with each matrix for efficiency, as this avoids needing to + reconstruct the tensor. This is based on the property that: + + .. math:: + + ||tensor - rec||^2 = ||tensor||^2 + ||rec||^2 - 2*<tensor, rec> + + Parameters + ---------- + tensor_slices : ndarray or list of ndarrays + The data itself. Either a third order tensor or a list of second order tensors that + may have different number of rows. + decomposition : (weight, factors, projection_matrices) + * weights : 1D array of shape (rank, ) + weights of the (normalized) factors + * factors : List of factors of the CP decomposition element `i` is of shape + (tensor.shape[i], rank) + * projections : List of projection matrices used to create evolving factors. + norm_matrices : float, optional + The norm of the data. This can be optionally provided to avoid recalculating it. + projected_tensor : ndarray, optional + The projections of X into an aligned tensor for CP decomposition. This can be optionally + provided to avoid recalculating it. + + Returns + ------- + error : float + The norm of the reconstruction error of the PARAFAC2 decomposition. + """ _validate_parafac2_tensor(decomposition) - squared_error = 0 - for idx, tensor_slice in enumerate(tensor_slices): - reconstruction = parafac2_to_slice(decomposition, idx, validate=False) - squared_error += tl.sum((tensor_slice - reconstruction) ** 2) - return tl.sqrt(squared_error) + if norm_matrices is None: + norm_X_sq = sum(tl.norm(t_slice, 2) ** 2 for t_slice in tensor_slices) + else: + norm_X_sq = norm_matrices**2 + + weights, (A, B, C), projections = decomposition + if weights is not None: + A = A * weights + + norm_cmf_sq = 0 + inner_product = 0 + CtC = tl.dot(tl.transpose(C), C) + + for i, t_slice in enumerate(tensor_slices): + B_i = (projections[i] @ B) * A[i] + + if projected_tensor is None: + tmp = tl.dot(tl.transpose(B_i), t_slice) + else: + tmp = tl.reshape(A[i], (-1, 1)) * tl.transpose(B) @ projected_tensor[i] + + inner_product += tl.trace(tl.dot(tmp, C)) + + norm_cmf_sq += tl.sum((tl.transpose(B_i) @ B_i) * CtC) -
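The rewritten _parafac2_reconstruction_error above relies on the identity ||X - X_hat||^2 = ||X||^2 + ||X_hat||^2 - 2 <X, X_hat> so the reconstruction never has to be materialised. A quick NumPy sanity check of that identity on arbitrary matrices (purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((30, 8))
    X_hat = rng.standard_normal((30, 8))

    direct = np.linalg.norm(X - X_hat) ** 2
    expanded = np.linalg.norm(X) ** 2 + np.linalg.norm(X_hat) ** 2 - 2 * np.sum(X * X_hat)
    print(np.isclose(direct, expanded))  # expected: True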
    [docs]def parafac2( + return tl.sqrt(norm_X_sq - 2 * inner_product + norm_cmf_sq) + + +
    +[docs] +def parafac2( tensor_slices, - rank, - n_iter_max=2000, + rank: int, + n_iter_max: int = 2000, init="random", - svd="truncated_svd", - normalize_factors=False, - tol=1e-8, - absolute_tol=1e-13, - nn_modes=None, + svd: SVD_TYPES = "truncated_svd", + normalize_factors: bool = False, + tol: float = 1.0e-8, + nn_modes: Optional[Union[Sequence[int], Literal["all"]]] = None, random_state=None, - verbose=False, - return_errors=False, - n_iter_parafac=5, + verbose: Union[bool, int] = False, + return_errors: bool = False, + n_iter_parafac: int = 5, + linesearch: bool = True, ): r"""PARAFAC2 decomposition [1]_ of a third order tensor via alternating least squares (ALS) @@ -309,7 +505,7 @@

    Source code for tensorly.decomposition._parafac2

    .. math:: - X_{ijk} = \sum_{r=1}^R A_{ir} B_{ijr} C_{kr}, + X_{ijk} = \sum_{r=1}^R A_{ir} B_{ijr} C_{kr}, with the same constraints hold for :math:`B_i` as above. @@ -347,15 +543,6 @@

    Source code for tensorly.decomposition._parafac2

    Previously, the stopping condition was :math:`\left|\| X - \hat{X}_{n-1} \| - \| X - \hat{X}_{n} \|\right| < \epsilon`. - absolute_tol : float, optional - (Default: 1e-13) Absolute reconstruction error tolearnce. The algorithm - is considered to have converged when - :math:`\left|\| X - \hat{X}_{n-1} \|^2 - \| X - \hat{X}_{n} \|^2\right| < \epsilon_\text{abs}`. - That is, when the relative sum of squared error is less than the specified tolerance. - The absolute tolerance is necessary for stopping the algorithm when used on noise-free - data that follows the PARAFAC2 constraint. - - If None, then the machine precision + 1000 will be used. nn_modes: None, 'all' or array of integers (Default: None) Used to specify which modes to impose non-negativity constraints on. We cannot impose non-negativity constraints on the the B-mode (mode 1) with the ALS @@ -368,6 +555,9 @@

    Source code for tensorly.decomposition._parafac2

    Activate return of iteration errors n_iter_parafac : int, optional Number of PARAFAC iterations to perform for each PARAFAC2 iteration + linesearch : bool, default is False + Whether to perform line search as proposed by Bro in his PhD dissertation [2]_ + (similar to the PLSToolbox line search described in [3]_). Returns ------- @@ -388,31 +578,47 @@

    Source code for tensorly.decomposition._parafac2

    .. [1] Kiers, H.A.L., ten Berge, J.M.F. and Bro, R. (1999), PARAFAC2—Part I. A direct fitting algorithm for the PARAFAC2 model. J. Chemometrics, 13: 275-294. + .. [2] R. Bro, "Multi-Way Analysis in the Food Industry: Models, Algorithms, and + Applications", PhD., University of Amsterdam, 1998 + .. [3] H. Yu, D. Augustijn, R. Bro, "Accelerating PARAFAC2 algorithms for non-negative + complex tensor decomposition." Chemometrics and Intelligent Laboratory Systems 214 + (2021): 104312. Notes ----- This formulation of the PARAFAC2 decomposition is slightly different from the one in [1]_. - The difference lies in that here, the second mode changes over the first mode, whereas in - [1]_, the second mode changes over the third mode. We made this change since that means - that the function accept both lists of matrices and a single nd-array as input without - any reordering of the modes. + The difference lies in that, here, the second mode changes over the first mode, whereas in + [1]_, the second mode changes over the third mode. This change allows the function to accept + both lists of matrices and a single nd-array as input without any mode reordering. Because of the reformulation above, :math:`B_i = P_i B`, the :math:`B_i` matrices cannot be constrained to be non-negative with ALS. If this mode is constrained to be non-negative, then :math:`B` will be non-negative, but not the orthogonal `P_i` matrices. Consequently, the `B_i` matrices are unlikely to be non-negative. """ + assert ( + rank <= tensor_slices[0].shape[1] + ), f"PARAFAC2 rank ({rank}) cannot be greater than the number of columns in each tensor slice ({tensor_slices[0].shape[1]})." + + for ii in range(1, len(tensor_slices)): + assert ( + tensor_slices[0].shape[1] == tensor_slices[ii].shape[1] + ), "All tensor slices must have the same number of columns." + weights, factors, projections = initialize_decomposition( tensor_slices, rank, init=init, svd=svd, random_state=random_state ) + factors = list(factors) rec_errors = [] norm_tensor = tl.sqrt( sum(tl.norm(tensor_slice, 2) ** 2 for tensor_slice in tensor_slices) ) - if absolute_tol is None: - absolute_tol = tl.eps(factors[0].dtype) * 1000 + if linesearch and not isinstance(linesearch, _BroThesisLineSearch): + linesearch = _BroThesisLineSearch( + norm_tensor, svd, verbose=verbose, nn_modes=nn_modes + ) # If nn_modes is set, we use HALS, otherwise, we use the standard parafac implementation. if nn_modes is None: @@ -455,46 +661,54 @@

    Source code for tensorly.decomposition._parafac2

    for iteration in range(n_iter_max): if verbose: print("Starting iteration", iteration) + factors[1] = factors[1] * T.reshape(weights, (1, -1)) weights = T.ones(weights.shape, **tl.context(tensor_slices[0])) + # Will we be performing a line search iteration? + if linesearch and iteration % 2 == 0 and iteration > 5: + line_iter = True + factors_last = [tl.copy(f) for f in factors] + else: + line_iter = False + projections = _compute_projections(tensor_slices, factors, svd) projected_tensor = _project_tensor_slices(tensor_slices, projections) factors = parafac_updates(projected_tensor, weights, factors) - if normalize_factors: - new_factors = [] - for factor in factors: - norms = T.norm(factor, axis=0) - norms = tl.where( - tl.abs(norms) <= tl.eps(factor.dtype), - tl.ones(tl.shape(norms), **tl.context(factors[0])), - norms, - ) - - weights = weights * norms - new_factors.append(factor / (tl.reshape(norms, (1, -1)))) + # Start line search if requested. + if line_iter: + factors, projections, rec_errors[-1] = linesearch.line_step( + iteration, + tensor_slices, + factors_last, + weights, + factors, + projections, + rec_errors[-1], + ) - factors = new_factors + if normalize_factors: + weights, factors = cp_normalize((weights, factors)) - if tol: + if tol and not line_iter: rec_error = _parafac2_reconstruction_error( - tensor_slices, (weights, factors, projections) + tensor_slices, + (weights, factors, projections), + norm_tensor, + projected_tensor, ) rec_error /= norm_tensor rec_errors.append(rec_error) + if tol: if iteration >= 1: if verbose: print( f"PARAFAC2 reconstruction error={rec_errors[-1]}, variation={rec_errors[-2] - rec_errors[-1]}." ) - if ( - abs(rec_errors[-2] ** 2 - rec_errors[-1] ** 2) - < (tol * rec_errors[-2] ** 2) - or rec_errors[-1] ** 2 < absolute_tol - ): + if tl.abs(rec_errors[-2] - rec_errors[-1]) < tol: if verbose: print(f"converged in {iteration} iterations.") break @@ -510,7 +724,10 @@

    Source code for tensorly.decomposition._parafac2

    return parafac2_tensor
    -
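With absolute_tol removed and the linesearch flag added in the signature above, a call to the updated function would look roughly like the following (random slices and a placeholder rank; linesearch is passed explicitly since the docstring and the signature in this hunk disagree on its default):

    import tensorly as tl
    from tensorly.decomposition import parafac2

    rng = tl.check_random_state(0)
    # 5 slices sharing the same second dimension of 8 columns
    tensor_slices = [tl.tensor(rng.random_sample((20, 8))) for _ in range(5)]

    decomposition, errors = parafac2(
        tensor_slices,
        rank=3,
        n_iter_max=200,
        linesearch=True,     # option introduced in this diff
        return_errors=True,
    )
    weights, (A, B, C), projections = decomposition
    print(f"final relative error: {errors[-1]:.2e}")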
    [docs]class Parafac2(DecompositionMixin): + +
    +[docs] +class Parafac2(DecompositionMixin): r"""PARAFAC2 decomposition [1]_ of a third order tensor via alternating least squares (ALS) Computes a rank-`rank` PARAFAC2 decomposition of the third-order tensor defined by @@ -539,7 +756,7 @@

    Source code for tensorly.decomposition._parafac2

    .. math:: - X_{ijk} = \sum_{r=1}^R A_{ir} B_{ijr} C_{kr}, + X_{ijk} = \sum_{r=1}^R A_{ir} B_{ijr} C_{kr}, with the same constraints hold for :math:`B_i` as above. @@ -573,16 +790,6 @@

    Source code for tensorly.decomposition._parafac2

    Previously, the stopping condition was :math:`\left|\| X - \hat{X}_{n-1} \| - \| X - \hat{X}_{n} \|\right| < \epsilon`. - - absolute_tol : float, optional - (Default: 1e-13) Absolute reconstruction error tolearnce. The algorithm - is considered to have converged when - :math:`\left|\| X - \hat{X}_{n-1} \|^2 - \| X - \hat{X}_{n} \|^2\right| < \epsilon_\text{abs}`. - That is, when the relative sum of squared error is less than the specified tolerance. - The absolute tolerance is necessary for stopping the algorithm when used on noise-free - data that follows the PARAFAC2 constraint. - - If None, then the machine precision + 1000 will be used. nn_modes: None, 'all' or array of integers (Default: None) Used to specify which modes to impose non-negativity constraints on. We cannot impose non-negativity constraints on the the B-mode (mode 1) with the ALS @@ -616,10 +823,9 @@

    Source code for tensorly.decomposition._parafac2

    Notes ----- This formulation of the PARAFAC2 decomposition is slightly different from the one in [1]_. - The difference lies in that here, the second mode changes over the first mode, whereas in - [1]_, the second mode changes over the third mode. We made this change since that means - that the function accept both lists of matrices and a single nd-array as input without - any reordering of the modes. + The difference lies in that, here, the second mode changes over the first mode, whereas in + [1]_, the second mode changes over the third mode. This change allows the function to accept + both lists of matrices and a single nd-array as input without any mode reordering. """ def __init__( @@ -630,12 +836,12 @@

    Source code for tensorly.decomposition._parafac2

    svd="truncated_svd", normalize_factors=False, tol=1e-8, - absolute_tol=1e-13, nn_modes=None, random_state=None, verbose=False, return_errors=False, n_iter_parafac=5, + linesearch=False, ): self.rank = rank self.n_iter_max = n_iter_max @@ -643,14 +849,16 @@

    Source code for tensorly.decomposition._parafac2

    self.svd = svd self.normalize_factors = normalize_factors self.tol = tol - self.absolute_tol = absolute_tol self.nn_modes = nn_modes self.random_state = random_state self.verbose = verbose self.return_errors = return_errors self.n_iter_parafac = n_iter_parafac + self.linesearch = linesearch -
    [docs] def fit_transform(self, tensor): +
    +[docs] + def fit_transform(self, tensor): """Decompose an input tensor Parameters @@ -669,14 +877,16 @@

    Source code for tensorly.decomposition._parafac2

    svd=self.svd, normalize_factors=self.normalize_factors, tol=self.tol, - absolute_tol=self.absolute_tol, nn_modes=self.nn_modes, random_state=self.random_state, verbose=self.verbose, return_errors=self.return_errors, n_iter_parafac=self.n_iter_parafac, + linesearch=self.linesearch, ) - return self.decomposition_
    + return self.decomposition_
    +
    +
    @@ -686,7 +896,7 @@

    Source code for tensorly.decomposition._parafac2

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/decomposition/_symmetric_cp.html index 9d8ec796c..6299389bf 100644 --- a/stable/_modules/tensorly/decomposition/_symmetric_cp.html +++ b/stable/_modules/tensorly/decomposition/_symmetric_cp.html @@ -152,7 +151,9 @@

    Source code for tensorly.decomposition._symmetric_cp

    from ..cp_tensor import validate_cp_rank -
    [docs]def symmetric_power_iteration(tensor, n_repeat=10, n_iteration=10, verbose=False): +
    +[docs] +def symmetric_power_iteration(tensor, n_repeat=10, n_iteration=10, verbose=False): """A single Robust Symmetric Tensor Power Iteration Parameters @@ -227,7 +228,10 @@

    Source code for tensorly.decomposition._symmetric_cp

    return eigenval, best_factor, deflated
    -
    [docs]def symmetric_parafac_power_iteration( + +
    +[docs] +def symmetric_parafac_power_iteration( tensor, rank, n_repeat=10, n_iteration=10, verbose=False ): """Symmetric CP Decomposition via Robust Symmetric Tensor Power Iteration @@ -281,7 +285,10 @@

    Source code for tensorly.decomposition._symmetric_cp

    return weights, factor
    -
    [docs]class SymmetricCP(DecompositionMixin): + +
    +[docs] +class SymmetricCP(DecompositionMixin): """Symmetric CP Decomposition via Robust Symmetric Tensor Power Iteration Parameters @@ -320,6 +327,7 @@

    Source code for tensorly.decomposition._symmetric_cp

    verbose=self.verbose, ) return self.decomposition_
    +
    @@ -329,7 +337,7 @@

    Source code for tensorly.decomposition._symmetric_cp

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/decomposition/_tr_svd.html new file mode 100644 index 000000000..0505566aa --- /dev/null +++ b/stable/_modules/tensorly/decomposition/_tr_svd.html @@ -0,0 +1,352 @@
+ tensorly.decomposition._tr_svd — TensorLy: Tensor Learning in Python

    Source code for tensorly.decomposition._tr_svd

    +import tensorly as tl
    +from ._base_decomposition import DecompositionMixin
    +from ..tr_tensor import validate_tr_rank, TRTensor
    +from ..tenalg.svd import svd_interface
    +
    +
    +
    +[docs] +def tensor_ring(input_tensor, rank, mode=0, svd="truncated_svd", verbose=False): + """Tensor Ring decomposition via recursive SVD + + Decomposes `input_tensor` into a sequence of order-3 tensors (factors) [1]_. + + Parameters + ---------- + input_tensor : tensorly.tensor + rank : Union[int, List[int]] + maximum allowable TR rank of the factors + if int, then this is the same for all the factors + if int list, then rank[k] is the rank of the kth factor + mode : int, default is 0 + index of the first factor to compute + svd : str, default is 'truncated_svd' + function to use to compute the SVD, acceptable values in tensorly.SVD_FUNS + verbose : boolean, optional + level of verbosity + + Returns + ------- + factors : TR factors + order-3 tensors of the TR decomposition + + References + ---------- + .. [1] Qibin Zhao et al. "Tensor Ring Decomposition" arXiv preprint arXiv:1606.05535, (2016). + """ + rank = validate_tr_rank(tl.shape(input_tensor), rank=rank) + n_dim = len(input_tensor.shape) + + # Change order + if mode: + order = tuple(range(mode, n_dim)) + tuple(range(mode)) + input_tensor = tl.transpose(input_tensor, order) + rank = rank[mode:] + rank[:mode] + + tensor_size = input_tensor.shape + + factors = [None] * n_dim + + # Getting the first factor + unfolding = tl.reshape(input_tensor, (tensor_size[0], -1)) + + n_row, n_column = unfolding.shape + if rank[0] * rank[1] > min(n_row, n_column): + raise ValueError( + f"rank[{mode}] * rank[{mode + 1}] = {rank[0] * rank[1]} is larger than " + f"first matricization dimension {n_row}×{n_column}.\n" + "Failed to compute first factor with specified rank. " + "Reduce specified ranks or change first matricization `mode`." + ) + + # SVD of unfolding matrix + U, S, V = svd_interface(unfolding, n_eigenvecs=rank[0] * rank[1], method=svd) + + # Get first TR factor + factor = tl.reshape(U, (tensor_size[0], rank[0], rank[1])) + factors[0] = tl.transpose(factor, (1, 0, 2)) + if verbose is True: + print("TR factor " + str(mode) + " computed with shape " + str(factor.shape)) + + # Get new unfolding matrix for the remaining factors + unfolding = tl.reshape(S, (-1, 1)) * V + unfolding = tl.reshape(unfolding, (rank[0], rank[1], -1)) + unfolding = tl.transpose(unfolding, (1, 2, 0)) + + # Getting the TR factors up to n_dim - 1 + for k in range(1, n_dim - 1): + # Reshape the unfolding matrix of the remaining factors + n_row = int(rank[k] * tensor_size[k]) + unfolding = tl.reshape(unfolding, (n_row, -1)) + + # SVD of unfolding matrix + n_row, n_column = unfolding.shape + current_rank = min(n_row, n_column, rank[k + 1]) + U, S, V = svd_interface(unfolding, n_eigenvecs=current_rank, method=svd) + rank[k + 1] = current_rank + + # Get kth TR factor + factors[k] = tl.reshape(U, (rank[k], tensor_size[k], rank[k + 1])) + + if verbose is True: + print( + "TR factor " + + str((mode + k) % n_dim) + + " computed with shape " + + str(factors[k].shape) + ) + + # Get new unfolding matrix for the remaining factors + unfolding = tl.reshape(S, (-1, 1)) * V + + # Getting the last factor + prev_rank = unfolding.shape[0] + factors[-1] = tl.reshape(unfolding, (prev_rank, -1, rank[0])) + + if verbose is True: + print( + "TR factor " + + str((mode - 1) % n_dim) + + " computed with shape " + + str(factors[-1].shape) + ) + + # Reorder factors to match input + if mode: + factors = factors[-mode:] + factors[:-mode] + + return TRTensor(factors)
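The new page above documents a recursive-SVD tensor-ring routine. A hedged usage sketch, assuming tensor_ring is exposed under tensorly.decomposition as the module path suggests, and that tr_to_tensor is available in tensorly.tr_tensor alongside the TRTensor class imported at the top of the page (shape and ranks are arbitrary placeholders with rank[0] == rank[-1]):

    import tensorly as tl
    from tensorly.decomposition import tensor_ring
    from tensorly.tr_tensor import tr_to_tensor

    rng = tl.check_random_state(0)
    tensor = tl.tensor(rng.random_sample((6, 6, 6)))

    # TR ranks: one boundary rank per edge of the ring; first and last entries must match
    factors = tensor_ring(tensor, rank=[2, 3, 3, 2])
    print([tl.shape(f) for f in factors])             # order-3 cores of shape (r_k, n_k, r_{k+1})
    print(tl.norm(tr_to_tensor(factors) - tensor))    # error of the truncated TR approximation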
    + + + +
    +[docs] +class TensorRing(DecompositionMixin): + """Tensor Ring decomposition via recursive SVD + + Decomposes `input_tensor` into a sequence of order-3 tensors (factors) [1]_. + + Parameters + ---------- + input_tensor : tensorly.tensor + rank : Union[int, List[int]] + maximum allowable TR rank of the factors + if int, then this is the same for all the factors + if int list, then rank[k] is the rank of the kth factor + mode : int, default is 0 + index of the first factor to compute + svd : str, default is 'truncated_svd' + function to use to compute the SVD, acceptable values in tensorly.SVD_FUNS + verbose : boolean, optional + level of verbosity + + Returns + ------- + factors : TR factors + order-3 tensors of the TR decomposition + + References + ---------- + .. [1] Qibin Zhao et al. "Tensor Ring Decomposition" arXiv preprint arXiv:1606.05535, (2016). + """ + + def __init__(self, rank, mode=0, svd="truncated_svd", verbose=False): + self.rank = rank + self.mode = mode + self.svd = svd + self.verbose = verbose + + def fit_transform(self, tensor): + self.decomposition_ = tensor_ring( + tensor, rank=self.rank, mode=self.mode, svd=self.svd, verbose=self.verbose + ) + return self.decomposition_
+ © Copyright 2016 - 2024, TensorLy Developers.
\ No newline at end of file
diff --git a/stable/_modules/tensorly/decomposition/_tt.html index 1d1995098..a85b08e5d 100644 --- a/stable/_modules/tensorly/decomposition/_tt.html +++ b/stable/_modules/tensorly/decomposition/_tt.html @@ -149,11 +148,12 @@

    Source code for tensorly.decomposition._tt

     from ._base_decomposition import DecompositionMixin
     from ..tt_tensor import validate_tt_rank, TTTensor
     from ..tt_matrix import validate_tt_matrix_rank, TTMatrix
    -from ..utils import DefineDeprecated
     from ..tenalg.svd import svd_interface
     
     
    -
    [docs]def tensor_train(input_tensor, rank, svd="truncated_svd", verbose=False): +
    +[docs] +def tensor_train(input_tensor, rank, svd="truncated_svd", verbose=False): """TT decomposition via recursive SVD Decomposes `input_tensor` into a sequence of order-3 tensors (factors) @@ -226,7 +226,10 @@

    Source code for tensorly.decomposition._tt

         return TTTensor(factors)
    -
    [docs]def tensor_train_matrix(tensor, rank, svd="truncated_svd", verbose=False): + +
    +[docs] +def tensor_train_matrix(tensor, rank, svd="truncated_svd", verbose=False): """Decompose a tensor into a matrix in tt-format Parameters @@ -281,7 +284,10 @@

    Source code for tensorly.decomposition._tt

         return TTMatrix(factors)
    -
    [docs]class TensorTrain(DecompositionMixin): + +
    +[docs] +class TensorTrain(DecompositionMixin): """Decompose a tensor into a matrix in tt-format Parameters @@ -315,7 +321,10 @@

    Source code for tensorly.decomposition._tt

             return self.decomposition_
    -
    [docs]class TensorTrainMatrix(DecompositionMixin): + +
    +[docs] +class TensorTrainMatrix(DecompositionMixin): """TT decomposition via recursive SVD Decomposes `input_tensor` into a sequence of order-3 tensors (factors) @@ -354,8 +363,6 @@

    Source code for tensorly.decomposition._tt

             )
             return self.decomposition_
    - -matrix_product_state = DefineDeprecated("matrix_product_state", tensor_train)
    @@ -365,7 +372,7 @@

    Source code for tensorly.decomposition._tt

             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/decomposition/_tucker.html index e6a70e3d2..a6e8718e3 100644 --- a/stable/_modules/tensorly/decomposition/_tucker.html +++ b/stable/_modules/tensorly/decomposition/_tucker.html @@ -155,7 +154,10 @@

    Source code for tensorly.decomposition._tucker

    validate_tucker_rank, tucker_normalize, ) -from ..tenalg.proximal import hals_nnls, active_set_nnls, fista +from ..solvers.penalizations import ( + process_regularization_weights, +) +from ..solvers.nnls import hals_nnls, fista, active_set_nnls from math import sqrt import warnings from collections.abc import Iterable @@ -225,11 +227,15 @@

    Source code for tensorly.decomposition._tucker

    elif init == "random": rng = tl.check_random_state(random_state) core = tl.tensor( - rng.random_sample(rank) + 0.01, **tl.context(tensor) + rng.random_sample([rank[index] for index in range(len(modes))]) + 0.01, + **tl.context(tensor), ) # Check this factors = [ - tl.tensor(rng.random_sample(s), **tl.context(tensor)) - for s in zip(tl.shape(tensor), rank) + tl.tensor( + rng.random_sample((tensor.shape[mode], rank[index])), + **tl.context(tensor), + ) + for index, mode in enumerate(modes) ] else: @@ -242,7 +248,9 @@

    Source code for tensorly.decomposition._tucker

    return core, factors -

    [docs]def partial_tucker( +
    +[docs] +def partial_tucker( tensor, rank, modes=None, @@ -344,7 +352,7 @@

    Source code for tensorly.decomposition._tucker

    core = multi_mode_dot(tensor, factors, modes=modes, transpose=True) # The factors are orthonormal and therefore do not affect the reconstructed tensor's norm - rec_error = sqrt(abs(norm_tensor**2 - tl.norm(core, 2) ** 2)) / norm_tensor + rec_error = sqrt(tl.abs(norm_tensor**2 - tl.norm(core, 2) ** 2)) / norm_tensor rec_errors.append(rec_error) if iteration > 1: @@ -353,7 +361,7 @@

    Source code for tensorly.decomposition._tucker

    f"reconstruction error={rec_errors[-1]}, variation={rec_errors[-2] - rec_errors[-1]}." ) - if tol and abs(rec_errors[-2] - rec_errors[-1]) < tol: + if tol and tl.abs(rec_errors[-2] - rec_errors[-1]) < tol: if verbose: print(f"converged in {iteration} iterations.") break @@ -361,7 +369,10 @@

    Source code for tensorly.decomposition._tucker

    return (core, factors), rec_errors

    -
    [docs]def tucker( + +
    +[docs] +def tucker( tensor, rank, fixed_factors=None, @@ -485,7 +496,10 @@

    Source code for tensorly.decomposition._tucker

    return tensor
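Only the documentation anchors change for tucker here; for context, a minimal call to the higher-order orthogonal iteration it wraps (illustrative shape and ranks):

    import tensorly as tl
    from tensorly.decomposition import tucker

    rng = tl.check_random_state(0)
    tensor = tl.tensor(rng.random_sample((8, 9, 10)))

    core, factors = tucker(tensor, rank=[3, 4, 5])
    print(tl.shape(core))                  # (3, 4, 5)
    print([tl.shape(f) for f in factors])  # [(8, 3), (9, 4), (10, 5)]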

    -
    [docs]def non_negative_tucker( + +
    +[docs] +def non_negative_tucker( tensor, rank, n_iter_max=10, @@ -581,7 +595,7 @@

    Source code for tensorly.decomposition._tucker

    f"reconstruction error={rec_errors[-1]}, variation={rec_errors[-2] - rec_errors[-1]}." ) - if iteration > 1 and abs(rec_errors[-2] - rec_errors[-1]) < tol: + if iteration > 1 and tl.abs(rec_errors[-2] - rec_errors[-1]) < tol: if verbose: print(f"converged in {iteration} iterations.") break @@ -594,7 +608,10 @@

    Source code for tensorly.decomposition._tucker

    return tensor

    -
    [docs]def non_negative_tucker_hals( + +
    +[docs] +def non_negative_tucker_hals( tensor, rank, n_iter_max=100, @@ -766,7 +783,7 @@

    Source code for tensorly.decomposition._tucker

    UtM = tl.transpose(MtU) # Call the hals resolution with nnls, optimizing the current mode - nn_factor, _, _, _ = hals_nnls( + nn_factor = hals_nnls( UtM, UtU, tl.transpose(nn_factors[mode]), @@ -821,7 +838,7 @@

    Source code for tensorly.decomposition._tucker

    f"reconstruction error={rec_errors[-1]}, variation={rec_errors[-2] - rec_errors[-1]}." ) - if tol and abs(rec_errors[-2] - rec_errors[-1]) < tol: + if tol and tl.abs(rec_errors[-2] - rec_errors[-1]) < tol: if verbose: print(f"converged in {iteration} iterations.") break @@ -834,7 +851,10 @@

    Source code for tensorly.decomposition._tucker

    return tensor

    -
    [docs]class Tucker(DecompositionMixin): + +
    +[docs] +class Tucker(DecompositionMixin): """Tucker decomposition via Higher Order Orthogonal Iteration (HOI). Decomposes `tensor` into a Tucker decomposition: @@ -937,6 +957,7 @@

    Source code for tensorly.decomposition._tucker

    return f"Rank-{self.rank} Tucker decomposition via HOOI."

    + class Tucker_NN(DecompositionMixin): """Non-Negative Tucker decomposition via iterative multiplicative update. @@ -1163,7 +1184,7 @@

    Source code for tensorly.decomposition._tucker

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
    diff --git a/stable/_modules/tensorly/decomposition/robust_decomposition.html b/stable/_modules/tensorly/decomposition/robust_decomposition.html index bc06f38ae..3546680d5 100644 --- a/stable/_modules/tensorly/decomposition/robust_decomposition.html +++ b/stable/_modules/tensorly/decomposition/robust_decomposition.html @@ -1,7 +1,6 @@ - - + @@ -15,17 +14,17 @@ - - - - - - + + + + + + - - - + + + @@ -145,8 +144,7 @@

    Source code for tensorly.decomposition.robust_decomposition

    -import numpy as np
    -from .. import backend as T
    +from .. import backend as T
     from ..base import fold, unfold
     from ..tenalg.proximal import soft_thresholding, svd_thresholding
     
    @@ -156,7 +154,9 @@ 

    Source code for tensorly.decomposition.robust_decomposition

    # License: BSD 3 clause -
    [docs]def robust_pca( +
    +[docs] +def robust_pca( X, mask=None, tol=10e-7, @@ -233,7 +233,6 @@

    Source code for tensorly.decomposition.robust_decomposition

    if mask is None: mask = 1 else: - # Fix to address surprising MXNet.numpy behavior (Issue #19891) mask = T.tensor(mask, **T.context(X)) # Initialise the decompositions @@ -291,6 +290,7 @@

    Source code for tensorly.decomposition.robust_decomposition

    return D, E, rec_X else: return D, E
    +
    @@ -300,7 +300,7 @@

    Source code for tensorly.decomposition.robust_decomposition

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
    diff --git a/stable/_modules/tensorly/metrics/factors.html b/stable/_modules/tensorly/metrics/factors.html index 5e28a6c14..48fd59fc9 100644 --- a/stable/_modules/tensorly/metrics/factors.html +++ b/stable/_modules/tensorly/metrics/factors.html @@ -1,7 +1,6 @@ - - + @@ -15,17 +14,17 @@ - - - - - - + + + + + + - - - + + + @@ -150,7 +149,9 @@

    Source code for tensorly.metrics.factors

     import numpy as np
     
     
    -
    [docs]def congruence_coefficient(matrix1, matrix2, absolute_value=True): +
    +[docs] +def congruence_coefficient(matrix1, matrix2, absolute_value=True): """Compute the optimal mean (Tucker) congruence coefficient between the columns of two matrices. Another name for the congruence coefficient is the cosine similarity. @@ -214,6 +215,7 @@

    Source code for tensorly.metrics.factors

         indices = dict(zip(row_ind, col_ind))
         permutation = [indices[i] for i in range(T.shape(matrix1[0])[1])]
         return all_congruences[row_ind, col_ind].mean(), permutation
    +
    @@ -223,7 +225,7 @@

    Source code for tensorly.metrics.factors

             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
    diff --git a/stable/_modules/tensorly/metrics/regression.html b/stable/_modules/tensorly/metrics/regression.html index af717f836..23d9e0b49 100644 --- a/stable/_modules/tensorly/metrics/regression.html +++ b/stable/_modules/tensorly/metrics/regression.html @@ -1,7 +1,6 @@ - - + @@ -15,17 +14,17 @@ - - - - - - + + + + + + - - - + + + @@ -150,7 +149,9 @@

    Source code for tensorly.metrics.regression

     # Author: Jean Kossaifi <jean.kossaifi+tensors@gmail.com>
     
     
    -
    [docs]def MSE(y_true, y_pred, axis=None): +
    +[docs] +def MSE(y_true, y_pred, axis=None): """Returns the mean squared error between the two predictions Parameters @@ -167,7 +168,10 @@

    Source code for tensorly.metrics.regression

         return T.mean((y_true - y_pred) ** 2, axis=axis)
    -
    [docs]def RMSE(y_true, y_pred, axis=None): + +
    +[docs] +def RMSE(y_true, y_pred, axis=None): """Returns the regularised mean squared error between the two predictions (the square-root is applied to the mean_squared_error) @@ -185,6 +189,26 @@

    Source code for tensorly.metrics.regression

         return T.sqrt(MSE(y_true, y_pred, axis=axis))
+ +def R2_score(X_original, X_predicted): + """Returns the R^2 (coefficient of determination) regression score function. + Best possible score is 1.0 and it can be negative (because prediction can be + arbitrarily worse). + + Parameters + ---------- + X_original: array + The original array + X_predicted: array + The predicted array. + + Returns + ------- + float + """ + return 1 - T.norm(X_predicted - X_original) ** 2.0 / T.norm(X_original) ** 2.0 + + def reflective_correlation_coefficient(y_true, y_pred, axis=None): """Reflective variant of Pearson's product moment correlation coefficient where the predictions are not centered around their mean values. @@ -243,7 +267,7 @@
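For orientation, a minimal illustrative sketch of the new `R2_score` metric shown above (numpy backend assumed; the values are purely illustrative):

```python
# Illustrative sketch only: exercising the R2_score metric added above.
import tensorly as tl
from tensorly.metrics.regression import R2_score

X = tl.tensor([[1.0, 2.0], [3.0, 4.0]])

print(R2_score(X, X))      # 1.0 -- perfect reconstruction
print(R2_score(X, 0 * X))  # 0.0 -- predicting all zeros
```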

    Source code for tensorly.metrics.regression

             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
    diff --git a/stable/_modules/tensorly/metrics/similarity.html b/stable/_modules/tensorly/metrics/similarity.html index d92a1ba71..2108e624f 100644 --- a/stable/_modules/tensorly/metrics/similarity.html +++ b/stable/_modules/tensorly/metrics/similarity.html @@ -1,7 +1,6 @@ - - + @@ -15,17 +14,17 @@ - - - - - - + + + + + + - - - + + + @@ -151,7 +150,9 @@

    Source code for tensorly.metrics.similarity

     # similarity metrics for tensor decompositions
     
     
    -
    [docs]def correlation_index( +
    +[docs] +def correlation_index( factors_1: list, factors_2: list, tol: float = 5e-16, method: str = "stacked" ) -> float: """CorrIndex implementation to assess tensor decomposition outputs. @@ -236,6 +237,7 @@

    Source code for tensorly.metrics.similarity

         return score
    + def _compute_correlation_index(x1: list, x2: list, tol: float = 5e-16) -> float: """Computes the CorrIndex from the L2-normalized A matrices. @@ -276,7 +278,7 @@

    Source code for tensorly.metrics.similarity

             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
    diff --git a/stable/_modules/tensorly/parafac2_tensor.html b/stable/_modules/tensorly/parafac2_tensor.html index eb206b0b4..0d4bdca56 100644 --- a/stable/_modules/tensorly/parafac2_tensor.html +++ b/stable/_modules/tensorly/parafac2_tensor.html @@ -1,7 +1,6 @@ - - + @@ -15,17 +14,17 @@ - - - - - - + + + + + + - - - + + + @@ -154,7 +153,6 @@

    Source code for tensorly.parafac2_tensor

     from . import backend as T
     from .base import unfold, tensor_to_vec
     from ._factorized_tensor import FactorizedTensor
    -import warnings
     
     
     class Parafac2Tensor(FactorizedTensor):
    @@ -252,7 +250,7 @@ 

    Source code for tensorly.parafac2_tensor

         Returns
         -------
         (shape, rank) : (int tuple, int)
    -        size of the full tensor and rank of the Kruskal tensor
    +        size of the full tensor and rank of the CP tensor
         """
         if isinstance(parafac2_tensor, Parafac2Tensor):
             # it's already been validated at creation
    @@ -393,7 +391,9 @@ 

    Source code for tensorly.parafac2_tensor

         return weights, (factors[0], evolving_factor, factors[2])
     
     
    -
    [docs]def parafac2_to_slice(parafac2_tensor, slice_idx, validate=True): +
    +[docs] +def parafac2_to_slice(parafac2_tensor, slice_idx, validate=True): r"""Generate a single slice along the first mode from the PARAFAC2 tensor. The decomposition is on the form :math:`(A [B_i] C)` such that the i-th frontal slice, @@ -455,7 +455,10 @@

    Source code for tensorly.parafac2_tensor

         return T.dot(B_i * a, Ct)
    -
    [docs]def parafac2_to_slices(parafac2_tensor, validate=True): + +
    +[docs] +def parafac2_to_slices(parafac2_tensor, validate=True): r"""Generate all slices along the first mode from a PARAFAC2 tensor. Generates a list of all slices from a PARAFAC2 tensor. A list is returned @@ -520,7 +523,10 @@

    Source code for tensorly.parafac2_tensor

         return [parafac2_to_slice(decomposition, i, validate=False) for i in range(I)]
    -
    [docs]def parafac2_to_tensor(parafac2_tensor): + +
    +[docs] +def parafac2_to_tensor(parafac2_tensor): r"""Construct a full tensor from a PARAFAC2 decomposition. The decomposition is on the form :math:`(A [B_i] C)` such that the i-th frontal slice, @@ -577,7 +583,10 @@

    Source code for tensorly.parafac2_tensor

         return tensor
    -
    [docs]def parafac2_to_unfolded(parafac2_tensor, mode): + +
    +[docs] +def parafac2_to_unfolded(parafac2_tensor, mode): r"""Construct an unfolded tensor from a PARAFAC2 decomposition. Uneven slices are padded by zeros. The decomposition is on the form :math:`(A [B_i] C)` such that the i-th frontal slice, @@ -627,7 +636,10 @@

    Source code for tensorly.parafac2_tensor

         return unfold(parafac2_to_tensor(parafac2_tensor), mode)
    -
    [docs]def parafac2_to_vec(parafac2_tensor): + +
    +[docs] +def parafac2_to_vec(parafac2_tensor): r"""Construct a vectorized tensor from a PARAFAC2 decomposition. Uneven slices are padded by zeros. The decomposition is on the form :math:`(A [B_i] C)` such that the i-th frontal slice, @@ -675,6 +687,7 @@

    Source code for tensorly.parafac2_tensor

             Full constructed tensor. Uneven slices are padded with zeros.
         """
         return tensor_to_vec(parafac2_to_tensor(parafac2_tensor))
    +
    @@ -684,7 +697,7 @@

    Source code for tensorly.parafac2_tensor

             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
    diff --git a/stable/_modules/tensorly/plugins.html b/stable/_modules/tensorly/plugins.html index 873e2f38a..91c1b006b 100644 --- a/stable/_modules/tensorly/plugins.html +++ b/stable/_modules/tensorly/plugins.html @@ -1,7 +1,6 @@ - - + @@ -15,17 +14,17 @@ - - - - - - + + + + + + - - - + + + @@ -156,7 +155,9 @@

    Source code for tensorly.plugins

     CUQUANTUM_HANDLE = None
     
     
    -
    [docs]def use_default_einsum(): +
    +[docs] +def use_default_einsum(): """Revert to the original einsum for the current backend""" global PREVIOUS_EINSUM @@ -165,7 +166,10 @@

    Source code for tensorly.plugins

             PREVIOUS_EINSUM = None
    -
    [docs]def use_opt_einsum(optimize="auto-hq"): + +
    +[docs] +def use_opt_einsum(optimize="auto-hq"): """Plugin to use opt-einsum [1]_ to precompute (and cache) a better contraction path Examples @@ -229,7 +233,10 @@

    Source code for tensorly.plugins

         tl.backend.BackendManager.register_backend_method("einsum", cached_einsum)
    -
    [docs]def use_cuquantum(optimize="auto-hq"): + +
    +[docs] +def use_cuquantum(optimize="auto-hq"): """Plugin to use `cuQuantum <https://developer.nvidia.com/cuquantum-sdk>`_ to precompute (and cache) a better contraction path Examples @@ -312,6 +319,7 @@

    Source code for tensorly.plugins

             PREVIOUS_EINSUM = tl.backend.current_backend().einsum
     
         tl.backend.BackendManager.register_backend_method("einsum", cached_einsum)
    +
    @@ -321,7 +329,7 @@

    Source code for tensorly.plugins

             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
    diff --git a/stable/_modules/tensorly/preprocessing.html b/stable/_modules/tensorly/preprocessing.html new file mode 100644 index 000000000..40a4ff0b5 --- /dev/null +++ b/stable/_modules/tensorly/preprocessing.html @@ -0,0 +1,338 @@ + + + + + + + tensorly.preprocessing — TensorLy: Tensor Learning in Python + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + + +
    + + +
    +
    + + + +
    + + +
    + + + + +
    + +

    Source code for tensorly.preprocessing

    +from tensorly import backend as T
    +
    +from .parafac2_tensor import Parafac2Tensor
    +from .tenalg.svd import svd_interface
    +
    +
    +
    +[docs] +def svd_compress_tensor_slices( + tensor_slices, compression_threshold=0.0, max_rank=None, svd="truncated_svd" +): + r"""Compress data with the SVD for running PARAFAC2. + + PARAFAC2 can be sped up massively for data where the number of rows in the tensor slices + is much greater than their rank. In that case, we can compress the data by computing the + SVD and fitting the PARAFAC2 model to the right singular vectors multiplied by the singular + values. Then, we can "decompress" the decomposition by left-multiplying the :math:`B_i`-matrices + by the left singular values to get a decomposition as if it was fitted to the uncompressed + data. We can essentially think of this as running a PCA without centering the data for each + tensor slice and fitting the PARAFAC2 model to the scores. Then, to get back the components, + we left-multiply the :math:`B_i`-matrices with the loading matrices. + + [1]_ states that we can constrain our :math:`B_i`-matrices to lie in a given vector space, + :math:`\mathscr{V}_i` by multiplying the data matrices with an orthogonal basis matrix that + spans :math:`\mathscr{V}_i`. However, since we know that :math:`B_i` lie in the column space + of :math:`X_i`, we can multiply the :math:`X_i`-matrices by an orthogonal matrix that spans + :math:`\text{col}(X_i)` without affecting the fit of the model. Thus we can compress our data + prior to fitting the PARAFAC2 model whenever the number of rows in our data matrices exceeds + the number of columns (as the rank of :math:`\text{col}(X_i)` cannot exceed the number of rows). + + To implement this, we use the SVD to get an orthogonal basis for the column space of :math:`X_i`. + Moreover, since :math:`S_i V_i^T = U_i^T X_i`, we can skip an additional matrix multiplication + by fitting the model to :math:`S_i V_i^T`. + + Finally, we note that this approach can also be implemented by truncating the SVD. If an appropriate + threshold is set, this will not affect the fitted model in any major form. + + .. note:: + This can be thought of as a simplified version of the DPAR approach for compressing PARAFAC2 models [2]_, + which compresses all modes of :math:`\mathcal{X}` to fit an approximate PARAFAC2 model. + + Parameters + ---------- + tensor_slices : list of matrices + The data matrices to compress. + compression_threshold : float (0 <= compression_threshold <= 1) + Threshold at which the singular values should be truncated. Any singular value less than + compression_threshold * s[0] is set to zero. Note that if this is nonzero, then the found + components will likely be affected. + max_rank : int + The maximum rank to allow in the datasets after compression. This also serves to speed up + the SVD calculation with matrices containing many rows and columns when paired with randomized + SVD solving. + svd : str, default is 'truncated_svd' + Function to use to compute the SVD, acceptable values in tensorly.SVD_FUNS + + Returns + ------- + list of matrices + The score matrices, used to fit the PARAFAC2 model to. + list of matrices + The loading matrices, used to decompress the PARAFAC2 components after fitting + to the scores. + + References + ---------- + .. [1] Helwig, N. E. (2017). Estimating latent trends in multivariate longitudinal + data via Parafac2 with functional and structural constraints. Biometrical + Journal, 59(4), 783-803. doi: 10.1002/bimj.201600045 + + .. [2] Jang JG, Kang U. Dpar2: Fast and scalable parafac2 decomposition for + irregular dense tensors. 
38th International Conference on Data Engineering + (ICDE) 2022 May 9 (pp. 2454-2467). IEEE. + + """ + loading_matrices = [None for _ in tensor_slices] + score_matrices = [None for _ in tensor_slices] + + _, n_cols = T.shape(tensor_slices[0]) + + if max_rank is not None: + rank_limit = min(n_cols, max_rank) + else: + rank_limit = n_cols + + for i, tensor_slice in enumerate(tensor_slices): + n_rows, _ = T.shape(tensor_slice) + + if n_rows <= rank_limit and not compression_threshold: + score_matrices[i] = tensor_slice + continue + + U, s, Vh = svd_interface(tensor_slice, n_eigenvecs=rank_limit, method=svd) + + # Threshold SVD, keeping only singular values that satisfy s_i >= s_0 * epsilon + # where epsilon is the compression threshold + num_svds = len([s_i for s_i in s if s_i >= (s[0] * compression_threshold)]) + U, s, Vh = U[:, :num_svds], s[:num_svds], Vh[:num_svds, :] + + # Array broadcasting happens at the last dimension, since Vh is num_svds x n_cols + # we need to transpose it, multiply in the singular values and then transpose + # it again. This is equivalent to writing diag(s) @ Vh. If we skip the + # transposes, we would get Vh @ diag(s), which is wrong. + score_matrices[i] = T.transpose(s * T.transpose(Vh)) + loading_matrices[i] = U + + return score_matrices, loading_matrices
    + + + +
    +[docs] +def svd_decompress_parafac2_tensor(parafac2_tensor, loading_matrices): + """Decompress the factors obtained by fitting PARAFAC2 on SVD-compressed data + + Decompress a PARAFAC2 decomposition that describes the compressed data so that it + models the original uncompressed data. Fitting to compressed data, and then + decompressing is mathematically equivalent to fitting to the uncompressed data. + + See :py:meth:`svd_compress_tensor_slices` for information about SVD-compression and + decompression. + + .. note:: + To decompress the data, we left-multiply the loading-matrices into the + :math:`B_i`-matrices. However, :math:`B_i = P_i B`, so the decompression is + implemented by left-multiplying the loading matrices by the :math:`P_i`-matrices. + + Parameters + ---------- + parafac2_tensor: tl.Parafac2Tensor + A decomposition obtained from fitting a PARAFAC2 model to compressed data + loading_matrices: list of matrices + Loading matrices obtained when compressing the data. See + :py:meth:`svd_compress_tensor_slices` for more information. + + Returns + ------- + tl.Parafac2Tensor: + Decompressed PARAFAC2 decomposition - equivalent to the decomposition we would + get from fitting parafac2 to uncompressed data. + """ + weights, factors, projections = parafac2_tensor + projections = projections.copy() + + for i, projection in enumerate(projections): + if loading_matrices[i] is not None: + projections[i] = T.matmul(loading_matrices[i], projection) + + return Parafac2Tensor((weights, factors, projections))
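A minimal sketch of how the two helpers above fit together, assuming the numpy backend; the slice shapes are illustrative only and the PARAFAC2 fit itself is omitted:

```python
# Sketch: compress tall slices with svd_compress_tensor_slices, then (after fitting
# PARAFAC2 to the returned scores) map the result back with
# svd_decompress_parafac2_tensor.
import numpy as np
import tensorly as tl
import tensorly.preprocessing as preprocessing

rng = np.random.default_rng(0)
slices = [tl.tensor(rng.random((500, 10))) for _ in range(3)]  # many rows, few columns

scores, loadings = preprocessing.svd_compress_tensor_slices(slices)
print([tl.shape(s) for s in scores])  # each score matrix is at most 10 x 10

# After fitting, e.g. pf2 = parafac2(scores, rank), decompress with:
# pf2_full = preprocessing.svd_decompress_parafac2_tensor(pf2, loadings)
```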
    + +
    + +
    + + + +
    +
    +
    + © Copyright 2016 - 2024, TensorLy Developers.
    +
    +
    +
    + +
    + +
    + + + +
    +
    + + + + + + + + \ No newline at end of file diff --git a/stable/_modules/tensorly/random/base.html b/stable/_modules/tensorly/random/base.html index 6dcb906ec..a38cd6c54 100644 --- a/stable/_modules/tensorly/random/base.html +++ b/stable/_modules/tensorly/random/base.html @@ -1,7 +1,6 @@ - - + @@ -15,17 +14,17 @@ - - - - - - + + + + + + - - - + + + @@ -153,7 +152,6 @@

    Source code for tensorly.random.base

     from ..tr_tensor import TRTensor, tr_to_tensor, validate_tr_rank
     from ..parafac2_tensor import parafac2_to_tensor, Parafac2Tensor, parafac2_normalise
     from .. import backend as T
    -from ..utils import DefineDeprecated
     import warnings
     
     
    @@ -163,7 +161,9 @@ 

    Source code for tensorly.random.base

         return T.tensor(rns.random_sample(shape), **context)
     
     
    -
    [docs]def random_parafac2( +
    +[docs] +def random_parafac2( shapes, rank, full=False, random_state=None, normalise_factors=False, **context ): """Generate a random PARAFAC2 tensor @@ -209,7 +209,10 @@

    Source code for tensorly.random.base

             return parafac2_tensor
    -
    [docs]def random_cp( + +
    +[docs] +def random_cp( shape, rank, full=False, @@ -262,7 +265,10 @@

    Source code for tensorly.random.base

             return CPTensor((weights, factors))
    -
    [docs]def random_tucker( + +
    +[docs] +def random_tucker( shape, rank, full=False, @@ -327,7 +333,10 @@

    Source code for tensorly.random.base

             return TuckerTensor((core, factors))
    -
    [docs]def random_tt(shape, rank, full=False, random_state=None, **context): + +
    +[docs] +def random_tt(shape, rank, full=False, random_state=None, **context): """Generates a random TT/MPS tensor Parameters @@ -360,14 +369,10 @@

    Source code for tensorly.random.base

     
         # Initialization
         if rank[0] != 1:
    -        message = "Provided rank[0] == {} but boundaring conditions dictatate rank[0] == rank[-1] == 1.".format(
    -            rank[0]
    -        )
    +        message = f"Provided rank[0] == {rank[0]} but boundaring conditions dictatate rank[0] == rank[-1] == 1."
             raise ValueError(message)
         if rank[-1] != 1:
    -        message = "Provided rank[-1] == {} but boundaring conditions dictatate rank[0] == rank[-1] == 1.".format(
    -            rank[-1]
    -        )
    +        message = f"Provided rank[-1] == {rank[-1]} but boundaring conditions dictatate rank[0] == rank[-1] == 1."
             raise ValueError(message)
     
         rns = T.check_random_state(random_state)
    @@ -382,7 +387,10 @@ 

    Source code for tensorly.random.base

             return TTTensor(factors)
    -
    [docs]def random_tt_matrix(shape, rank, full=False, random_state=None, **context): + +
    +[docs] +def random_tt_matrix(shape, rank, full=False, random_state=None, **context): """Generates a random tensor in TT-Matrix format Parameters @@ -428,6 +436,7 @@

    Source code for tensorly.random.base

             return TTMatrix(factors)
    + def random_tr(shape, rank, full=False, random_state=None, **context): """Generates a random TR tensor @@ -472,12 +481,6 @@

    Source code for tensorly.random.base

             return tr_to_tensor(factors)
         else:
             return TRTensor(factors)
    -
    -
    -random_kruskal = DefineDeprecated(
    -    deprecated_name="random_kruskal", use_instead=random_cp
    -)
    -random_mps = DefineDeprecated(deprecated_name="random_mps", use_instead=random_tt)
     
    @@ -487,7 +490,7 @@

    Source code for tensorly.random.base

             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
    diff --git a/stable/_modules/tensorly/regression/cp_plsr.html b/stable/_modules/tensorly/regression/cp_plsr.html index ffe6ac760..54d14ae0b 100644 --- a/stable/_modules/tensorly/regression/cp_plsr.html +++ b/stable/_modules/tensorly/regression/cp_plsr.html @@ -1,7 +1,6 @@ - - + @@ -15,17 +14,17 @@ - - - - - - + + + + + + - - - + + + @@ -145,18 +144,19 @@

    Source code for tensorly.regression.cp_plsr

    -from ..tenalg import khatri_rao, multi_mode_dot
    -from ..cp_tensor import CPTensor
    +from ..tenalg import multi_mode_dot, outer
     from .. import backend as T
    -from .. import unfold, tensor_to_vec
    -from ..decomposition._cp import parafac
    +from .. import tensor_to_vec
    +from ..decomposition._cp import initialize_cp
     
     # Author: Cyrillus Tan, Jackson Chin, Aaron Meyer
     
     # License: BSD 3 clause
     
     
    -
    [docs]class CP_PLSR: +
    +[docs] +class CP_PLSR: """CP tensor regression Learns a low rank CP tensor weight, This performs a partial least square regression to a tensor X (>= 2 modes) @@ -189,18 +189,26 @@

    Source code for tensorly.regression.cp_plsr

             self.random_state = random_state
             self.verbose = verbose
     
    -
    [docs] def get_params(self, **kwargs): +
    +[docs] + def get_params(self, **kwargs): """Returns a dictionary of parameters""" params = ["n_components", "tol", "n_iter_max", "random_state", "verbose"] return {param_name: getattr(self, param_name) for param_name in params}
    -
    [docs] def set_params(self, **parameters): + +
    +[docs] + def set_params(self, **parameters): """Sets the value of the provided parameters""" for parameter, value in parameters.items(): setattr(self, parameter, value) return self
    -
    [docs] def fit(self, X, Y): + +
    +[docs] + def fit(self, X, Y): """Fits the model to the data (X, Y) Parameters @@ -210,6 +218,18 @@

    Source code for tensorly.regression.cp_plsr

             Y : 2D-array of shape (n_samples, n_predictions)
                 labels associated with each sample
     
    +        Attributes
    +        ----------
    +        X_factors : list of ndarray of shape (X.shape[i], n_components)
+            The factors of the X tensor used to approximate X. The first component, X_factors[0],
+            is directed to have maximal covariance with Y_factors[0]
    +        Y_factors : list of ndarray of shape (Y.shape[i], n_components)
+            The factors of the Y matrix used to approximate Y. The first component, Y_factors[0],
+            is directed to have maximal covariance with X_factors[0]
+        coef_ : ndarray of shape (n_components, n_components)
    +            The coefficients of the linear model such that `Y_factors[0]` is approximated as
    +            `Y_factors[0] = X_factors[0] @ coef_`.
    +
             Returns
             -------
             self
    @@ -233,6 +253,7 @@ 

    Source code for tensorly.regression.cp_plsr

             # Mean center the data, record info the object
             self.X_shape_ = T.shape(X)
             self.Y_shape_ = T.shape(Y)
    +
             self.X_mean_ = T.mean(X, axis=0)
             self.Y_mean_ = T.mean(Y, axis=0)
             X -= self.X_mean_
    @@ -244,6 +265,11 @@ 

    Source code for tensorly.regression.cp_plsr

             self.Y_factors = [
                 T.zeros((l, self.n_components), **T.context(X)) for l in T.shape(Y)
             ]
    +        self.X_r2 = T.zeros((self.n_components,), **T.context(X))
    +        self.Y_r2 = T.zeros((self.n_components,), **T.context(Y))
    +
    +        # Coefficients of the linear model
    +        self.coef_ = T.zeros((self.n_components, self.n_components), **T.context(X))
     
             ## FITTING EACH COMPONENT
             for component in range(self.n_components):
    @@ -254,15 +280,15 @@ 

    Source code for tensorly.regression.cp_plsr

                 for iter in range(self.n_iter_max):
                     Z = T.tensordot(X, comp_Y_factors_0, axes=((0,), (0,)))
     
    +                if iter == 0:
    +                    Z_comp = initialize_cp(Z, 1, normalize_factors=True).factors
    +                    Z_comp = [T.reshape(zz, (-1,)) for zz in Z_comp]
    +
                     if T.ndim(Z) >= 2:
    -                    Z_comp = parafac(
    -                        Z,
    -                        1,
    -                        tol=self.tol,
    -                        init="svd",
    -                        svd="randomized_svd",
    -                        normalize_factors=True,
    -                    )[1]
    +                    for mode in range(len(Z_comp)):
    +                        factor = multi_mode_dot(Z, Z_comp, skip=mode)
    +                        factor = factor / T.norm(factor, 2)
    +                        Z_comp[mode] = factor
                     else:
                         Z_comp = [Z / T.norm(Z)]
     
    @@ -298,21 +324,29 @@ 

    Source code for tensorly.regression.cp_plsr

                     self.Y_factors[1], T.index[:, component], comp_Y_factors_1
                 )
     
    +            B = T.lstsq(self.X_factors[0], T.reshape(comp_Y_factors_0, (-1, 1)))[0]
    +            self.coef_ = T.index_update(
    +                self.coef_,
    +                T.index[:, component],
    +                T.reshape(B, (-1,)),
    +            )
    +
                 # Deflation
    -            X -= CPTensor(
    -                (None, [T.reshape(ff, (-1, 1)) for ff in comp_X_factors])
    -            ).to_tensor()
    +            X -= outer(comp_X_factors)
                 Y -= T.dot(
                     T.dot(
                         self.X_factors[0],
    -                    T.lstsq(self.X_factors[0], T.reshape(comp_Y_factors_0, (-1, 1)))[0],
    +                    T.reshape(B, (-1, 1)),
                     ),
                     T.reshape(comp_Y_factors_1, (1, -1)),
    -            )  # Y -= T pinv(T) u q'
    +            )  # Y -= T b q' = T pinv(T) u q'
     
             return self
    -
    [docs] def predict(self, X): + +
    +[docs] + def predict(self, X): """Returns the predicted labels for a new data tensor Parameters @@ -324,19 +358,33 @@

    Source code for tensorly.regression.cp_plsr

                 raise ValueError(
                     f"Training X has shape {self.X_shape_}, while the new X has shape {T.shape(X)}"
                 )
    +        X = T.copy(X)
             X -= self.X_mean_
    -        factors_kr = khatri_rao(self.X_factors, skip_matrix=0)
    -        unfolded = unfold(X, 0)
    -        scores = T.lstsq(factors_kr, T.transpose(unfolded))[0]  # = Tnew
    -        estimators = T.lstsq(self.X_factors[0], self.Y_factors[0])[0]
    -        return (
    -            T.dot(
    -                T.dot(T.transpose(scores), estimators), T.transpose(self.Y_factors[1])
    +        X_projection = T.zeros((T.shape(X)[0], self.n_components), **T.context(X))
    +        for component in range(self.n_components):
    +            X_projection = T.index_update(
    +                X_projection,
    +                T.index[:, component],
    +                multi_mode_dot(
    +                    X,
    +                    [factor[:, component] for factor in self.X_factors[1:]],
    +                    range(1, T.ndim(X)),
    +                ),
    +            )
    +            X -= outer(
    +                [X_projection[:, component]]
    +                + [factor[:, component] for factor in self.X_factors[1:]],
                 )
    +
    +        return (
    +            T.dot(T.dot(X_projection, self.coef_), T.transpose(self.Y_factors[1]))
                 + self.Y_mean_
             )
    -
    [docs] def transform(self, X, Y=None): + +
    +[docs] + def transform(self, X, Y=None): """Apply the dimension reduction from fitting to a new tensor. Parameters @@ -369,16 +417,10 @@

    Source code for tensorly.regression.cp_plsr

                         range(1, T.ndim(X)),
                     ),
                 )
    -            X -= CPTensor(
    -                (
    -                    None,
    -                    [T.reshape(X_scores[:, component], (-1, 1))]
    -                    + [
    -                        T.reshape(ff[:, component], (-1, 1))
    -                        for ff in self.X_factors[1:]
    -                    ],
    -                )
    -            ).to_tensor()
    +            X -= outer(
    +                [X_scores[:, component]]
    +                + [ff[:, component] for ff in self.X_factors[1:]],
    +            )
     
             if Y is not None:
                 Y = T.copy(Y)
    @@ -403,16 +445,19 @@ 

    Source code for tensorly.regression.cp_plsr

     
                     Y -= T.dot(
                         T.dot(
    -                        T.lstsq(T.transpose(X_scores), T.transpose(X_scores))[0],
    -                        Y_scores[:, [component]],
    +                        X_scores,
    +                        T.reshape(self.coef_[:, component], (-1, 1)),
                         ),
    -                    T.transpose(self.Y_factors[1][:, [component]]),
    -                )  # Y -= T pinv(T) u q'
    +                    T.reshape(self.Y_factors[1][:, component], (1, -1)),
    +                )
                 return X_scores, Y_scores
     
             return X_scores
    -
    [docs] def fit_transform(self, X, Y): + +
    +[docs] + def fit_transform(self, X, Y): """Learn and apply the dimension reduction on the train data. Parameters @@ -429,7 +474,27 @@

    Source code for tensorly.regression.cp_plsr

             self : ndarray of shape (n_samples, n_components)
                 Return `x_scores` if `Y` is not given, `(x_scores, y_scores)` otherwise.
             """
    -        return self.fit(X, Y).transform(X, Y)
    + return self.fit(X, Y).transform(X, Y)
    + + +
    +[docs] + def score(self, X, Y): + """Calculate the R^2 of prediction on X compared to the ground truth Y provided. + + Parameters + ---------- + X : ndarray + tensor data of shape (n_samples, N1, ..., NS), same dimension as the X + in self.fit() all except the first dimension + Y : 2D-array of shape (n_samples, n_predictions) + the ground truth labels associated with each sample + """ + from ..metrics.regression import R2_score + + return R2_score(Y - self.Y_mean_, self.predict(X) - self.Y_mean_)
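A hedged end-to-end sketch of the updated `CP_PLSR` API (fit, predict and the new `score` method); the data below are random and purely illustrative, and the numpy backend is assumed:

```python
import numpy as np
import tensorly as tl
from tensorly.regression.cp_plsr import CP_PLSR

rng = np.random.default_rng(0)
X = tl.tensor(rng.random((40, 8, 6)))  # (n_samples, I_1, I_2)
Y = tl.tensor(rng.random((40, 3)))     # (n_samples, n_predictions)

model = CP_PLSR(n_components=2)
model.fit(X, Y)
Y_hat = model.predict(X)   # prediction now goes through the stored coef_ matrix
print(model.score(X, Y))   # R^2 of the prediction, computed via R2_score
```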
    +
    +
    @@ -439,7 +504,7 @@

    Source code for tensorly.regression.cp_plsr

             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
    diff --git a/stable/_modules/tensorly/regression/cp_regression.html b/stable/_modules/tensorly/regression/cp_regression.html index 044415465..312128043 100644 --- a/stable/_modules/tensorly/regression/cp_regression.html +++ b/stable/_modules/tensorly/regression/cp_regression.html @@ -1,7 +1,6 @@ - - + @@ -15,17 +14,17 @@ - - - - - - + + + + + + - - - + + + @@ -145,20 +144,22 @@

    Source code for tensorly.regression.cp_regression

    -import numpy as np
    +import tensorly as tl
    +import math
     from ..base import partial_tensor_to_vec, partial_unfold
     from ..tenalg import khatri_rao
     from ..cp_tensor import cp_to_tensor, cp_to_vec
     from .. import backend as T
    -from ..utils import DefineDeprecated
     
     # Author: Jean Kossaifi
     
     # License: BSD 3 clause
     
     
    -
    [docs]class CPRegressor: - """CP tensor regression +
    +[docs] +class CPRegressor: + r"""CP tensor regression Learns a low rank CP tensor weight @@ -168,8 +169,8 @@

    Source code for tensorly.regression.cp_regression

    rank of the CP decomposition of the regression weights tol : float convergence value - reg_W : int, optional, default is 1 - regularisation on the weights + reg_W : float, optional, default is 1 + l2 regularisation constant for the regression weights (:math:`reg_W * \sum_i ||factors[i]||_F^2`) n_iter_max : int, optional, default is 100 maximum number of iteration random_state : None, int or RandomState, optional, default is None @@ -193,7 +194,9 @@

    Source code for tensorly.regression.cp_regression

    self.random_state = random_state self.verbose = verbose -
    [docs] def get_params(self, **kwargs): +
    +[docs] + def get_params(self, **kwargs): """Returns a dictionary of parameters""" params = [ "weight_rank", @@ -205,20 +208,25 @@

    Source code for tensorly.regression.cp_regression

    ] return {param_name: getattr(self, param_name) for param_name in params}
    -
    [docs] def set_params(self, **parameters): + +
    +[docs] + def set_params(self, **parameters): """Sets the value of the provided parameters""" for parameter, value in parameters.items(): setattr(self, parameter, value) return self
    -
    [docs] def fit(self, X, y): + +
    +[docs] + def fit(self, X, y): """Fits the model to the data (X, y) Parameters ---------- - X : ndarray - tensor data of shape (n_samples, N1, ..., NS) - y : 1D-array of shape (n_samples, ) + X : tensor data of shape (n_samples, I_1, ..., I_p) + y : tensor of shape (n_samples, O_1, ..., O_q) labels associated with each sample Returns @@ -227,12 +235,12 @@

    Source code for tensorly.regression.cp_regression

    """ rng = T.check_random_state(self.random_state) - # Initialise randomly the weights + # Initialise the weights randomly W = [] - for i in range( - 1, T.ndim(X) - ): # The first dimension of X is the number of samples + for i in range(1, T.ndim(X)): # The first dimension is the number of samples W.append(T.tensor(rng.randn(X.shape[i], self.weight_rank), **T.context(X))) + for i in range(1, T.ndim(y)): + W.append(T.tensor(rng.randn(y.shape[i], self.weight_rank), **T.context(X))) # Norm of the weight tensor at each iteration norm_W = [] @@ -241,26 +249,55 @@

    Source code for tensorly.regression.cp_regression

    for iteration in range(self.n_iter_max): # Optimise each factor of W for i in range(len(W)): - phi = T.reshape( - T.dot( - partial_unfold(X, i, skip_begin=1), khatri_rao(W, skip_matrix=i) - ), - (X.shape[0], -1), - ) - inv_term = T.dot(T.transpose(phi), phi) + self.reg_W * T.tensor( - np.eye(phi.shape[1]), **T.context(X) - ) - W[i] = T.reshape( - T.solve(inv_term, T.dot(T.transpose(phi), y)), - (X.shape[i + 1], self.weight_rank), - ) + if i < T.ndim(X) - 1: + X_unfolded = partial_unfold(X, i, skip_begin=1) + phi = T.dot( + X_unfolded, + T.reshape( + khatri_rao(W, skip_matrix=i), (X_unfolded.shape[-1], -1) + ), + ) + phi = T.transpose( + T.reshape( + phi, (X.shape[0], X.shape[i + 1], -1, self.weight_rank) + ), + (0, 2, 1, 3), + ) + phi = T.reshape(phi, (-1, X.shape[i + 1] * self.weight_rank)) + y_reshaped = T.reshape(y, (-1,)) + inv_term = T.dot(T.transpose(phi), phi) + self.reg_W * T.eye( + phi.shape[1], **T.context(X) + ) + W[i] = T.reshape( + T.solve(inv_term, T.dot(T.transpose(phi), y_reshaped)), + (-1, self.weight_rank), + ) + else: + X_unfolded = partial_tensor_to_vec(X, skip_begin=1) + phi = T.dot( + X_unfolded, + T.reshape( + khatri_rao(W, skip_matrix=i), (X_unfolded.shape[-1], -1) + ), + ) + phi = T.reshape(phi, (-1, self.weight_rank)) + y_reshaped = T.reshape( + T.moveaxis(y, i - T.ndim(X) + 2, -1), + (-1, y.shape[i - T.ndim(X) + 2]), + ) + inv_term = T.dot(T.transpose(phi), phi) + self.reg_W * T.eye( + phi.shape[1], **T.context(X) + ) + W[i] = T.transpose( + T.solve(inv_term, T.dot(T.transpose(phi), y_reshaped)) + ) weight_tensor_ = cp_to_tensor((weights, W)) norm_W.append(T.norm(weight_tensor_, 2)) # Convergence check if iteration > 1: - weight_evolution = abs(norm_W[-1] - norm_W[-2]) / norm_W[-1] + weight_evolution = tl.abs(norm_W[-1] - norm_W[-2]) / norm_W[-1] if weight_evolution <= self.tol: if self.verbose: @@ -276,18 +313,33 @@

    Source code for tensorly.regression.cp_regression

    return self
    -
    [docs] def predict(self, X): + +
    +[docs] + def predict(self, X): """Returns the predicted labels for a new data tensor Parameters ---------- X : ndarray - tensor data of shape (n_samples, N1, ..., NS) + tensor data of shape (n_samples, I_1, ..., I_p) """ - return T.dot(partial_tensor_to_vec(X), self.vec_W_)
    - + out_shape = (-1, *self.weight_tensor_.shape[T.ndim(X) - 1 :]) + if T.ndim(self.weight_tensor_) > T.ndim(X) - 1: + weight_shape = ( + -1, + int(math.prod(self.weight_tensor_.shape[T.ndim(X) - 1 :])), + ) + else: + weight_shape = (-1,) + return T.reshape( + T.dot( + partial_tensor_to_vec(X), T.reshape(self.weight_tensor_, weight_shape) + ), + out_shape, + )
    +
    -KruskalRegressor = DefineDeprecated("KruskalRegressor", CPRegressor)
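A short, hedged sketch of the extended `CPRegressor`, which per the updated docstring now accepts tensor-valued responses `y` (numpy backend assumed; shapes are illustrative only):

```python
import numpy as np
import tensorly as tl
from tensorly.regression.cp_regression import CPRegressor

rng = np.random.default_rng(0)
X = tl.tensor(rng.random((50, 4, 3)))  # (n_samples, I_1, I_2)
y = tl.tensor(rng.random((50, 2)))     # (n_samples, O_1): tensor-valued response

est = CPRegressor(weight_rank=2, n_iter_max=50, verbose=0)
est.fit(X, y)
y_pred = est.predict(X)                # shape (n_samples, O_1)
```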
    @@ -297,7 +349,7 @@

    Source code for tensorly.regression.cp_regression

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
    diff --git a/stable/_modules/tensorly/regression/tucker_regression.html b/stable/_modules/tensorly/regression/tucker_regression.html index 4d32fa209..0078ebeef 100644 --- a/stable/_modules/tensorly/regression/tucker_regression.html +++ b/stable/_modules/tensorly/regression/tucker_regression.html @@ -1,7 +1,6 @@ - - + @@ -15,17 +14,17 @@ - - - - - - + + + + + + - - - + + + @@ -145,7 +144,8 @@

    Source code for tensorly.regression.tucker_regression

    -import numpy as np
    +import tensorly as tl
    +import numpy as np
     from ..base import unfold, vec_to_tensor
     from ..base import partial_tensor_to_vec, partial_unfold
     from ..tenalg import kronecker
    @@ -157,7 +157,9 @@ 

    Source code for tensorly.regression.tucker_regression

    # License: BSD 3 clause -
    [docs]class TuckerRegressor: +
    +[docs] +class TuckerRegressor: """Tucker tensor regression Learns a low rank Tucker weight for the regression @@ -193,7 +195,9 @@

    Source code for tensorly.regression.tucker_regression

    self.random_state = random_state self.verbose = verbose -
    [docs] def get_params(self, **kwargs): +
    +[docs] + def get_params(self, **kwargs): """Returns a dictionary of parameters""" params = [ "weight_ranks", @@ -205,13 +209,19 @@

    Source code for tensorly.regression.tucker_regression

    ] return {param_name: getattr(self, param_name) for param_name in params}
    -
    [docs] def set_params(self, **parameters): + +
    +[docs] + def set_params(self, **parameters): """Sets the value of the provided parameters""" for parameter, value in parameters.items(): setattr(self, parameter, value) return self
    -
    [docs] def fit(self, X, y): + +
    +[docs] + def fit(self, X, y): """Fits the model to the data (X, y) Parameters @@ -270,7 +280,7 @@

    Source code for tensorly.regression.tucker_regression

    # Convergence check if iteration > 1: - weight_evolution = abs(norm_W[-1] - norm_W[-2]) / norm_W[-1] + weight_evolution = tl.abs(norm_W[-1] - norm_W[-2]) / norm_W[-1] if weight_evolution <= self.tol: if self.verbose: @@ -285,7 +295,10 @@

    Source code for tensorly.regression.tucker_regression

    return self
    -
    [docs] def predict(self, X): + +
    +[docs] + def predict(self, X): """Returns the predicted labels for a new data tensor Parameters @@ -293,7 +306,9 @@

    Source code for tensorly.regression.tucker_regression

    X : ndarray tensor data of shape (n_samples, N1, ..., NS) """ - return T.dot(partial_tensor_to_vec(X), self.vec_W_)
    + return T.dot(partial_tensor_to_vec(X), self.vec_W_)
    +
    +
    @@ -303,7 +318,7 @@

    Source code for tensorly.regression.tucker_regression

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
    diff --git a/stable/_modules/tensorly/solvers/admm.html b/stable/_modules/tensorly/solvers/admm.html new file mode 100644 index 000000000..cee61152a --- /dev/null +++ b/stable/_modules/tensorly/solvers/admm.html @@ -0,0 +1,347 @@ + + + + + + + tensorly.solvers.admm — TensorLy: Tensor Learning in Python + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + + +
    + + +
    +
    + + + +
    + + +
    + + + + +
    + +

    Source code for tensorly.solvers.admm

    +import tensorly as tl
    +from tensorly.tenalg.proximal import *
    +
    +
    +
    +[docs] +def admm( + UtM, + UtU, + x, + dual_var, + n_iter_max=100, + n_const=None, + order=None, + non_negative=None, + l1_reg=None, + l2_reg=None, + l2_square_reg=None, + unimodality=None, + normalize=None, + simplex=None, + normalized_sparsity=None, + soft_sparsity=None, + smoothness=None, + monotonicity=None, + hard_sparsity=None, + tol=1e-4, +): + """ + Alternating direction method of multipliers (ADMM) algorithm to minimize a quadratic function under convex constraints. + + Parameters + ---------- + UtM: ndarray + Pre-computed product of the transposed of U and M. + UtU: ndarray + Pre-computed product of the transposed of U and U. + x: init + Default: None + dual_var : ndarray + Dual variable to update x + n_iter_max : int + Maximum number of iteration + Default: 100 + n_const : int + Number of constraints. If it is None, function solves least square problem without proximity operator + If ADMM function is used with a constraint apart from constrained parafac decomposition, + n_const value should be changed to '1'. + Default : None + order : int + Specifies which constraint to implement if several constraints are selected as input + Default : None + non_negative : bool or dictionary + This constraint is clipping negative values to '0'. + If it is True, non-negative constraint is applied to all modes. + l1_reg : float or list or dictionary, optional + Penalizes the factor with the l1 norm using the input value as regularization parameter. + l2_reg : float or list or dictionary, optional + Penalizes the factor with the l2 norm using the input value as regularization parameter. + l2_square_reg : float or list or dictionary, optional + Penalizes the factor with the l2 square norm using the input value as regularization parameter. + unimodality : bool or dictionary, optional + If it is True, unimodality constraint is applied to all modes. + Applied to each column seperately. + normalize : bool or dictionary, optional + This constraint divides all the values by maximum value of the input array. + If it is True, normalize constraint is applied to all modes. + simplex : float or list or dictionary, optional + Projects on the simplex with the given parameter + Applied to each column seperately. + normalized_sparsity : float or list or dictionary, optional + Normalizes with the norm after hard thresholding + soft_sparsity : float or list or dictionary, optional + Impose that the columns of factors have L1 norm bounded by a user-defined threshold. + smoothness : float or list or dictionary, optional + Optimizes the factors by solving a banded system + monotonicity : bool or dictionary, optional + Projects columns to monotonically decreasing distrbution + Applied to each column seperately. + If it is True, monotonicity constraint is applied to all modes. + hard_sparsity : float or list or dictionary, optional + Hard thresholding with the given threshold + tol : float + + Returns + ------- + x : Updated ndarray + x_split : Updated ndarray + dual_var : Updated ndarray + + Notes + ----- + ADMM solves the convex optimization problem + + .. math:: \\min_ f(x) + g(z),\\; A(x_{split}) + Bx = c. + + Following updates are iterated to solve the problem + + .. math:: x_{split} = argmin_{x_{split}}~ f(x_{split}) + (\\rho/2)\\|Ax_{split} + Bx - c\\|_2^2 + .. math:: x = argmin_x~ g(x) + (\\rho/2)\\|Ax_{split} + Bx - c\\|_2^2 + .. math:: dual\_var = dual\_var + (Ax + Bx_{split} - c) + + where rho is a constant defined by the user. 
+ + Let us define a least square problem such as :math:`\\|Ux - M\\|^2 + r(x)`. + + ADMM can be adapted to this least square problem as following + + .. math:: x_{split} = (UtU + \\rho\\times I)\\times(UtM + \\rho\\times(x + dual\_var)^T) + .. math:: x = argmin_{x}~ r(x) + (\\rho/2)\\|x - x_{split}^T + dual\_var\\|_2^2 + .. math:: dual\_var = dual\_var + x - x_{split}^T + + where r is the regularization operator. Here, x can be updated by using proximity operator + of :math:`x_{split}^T - dual\_var`. + + References + ---------- + .. [1] Huang, Kejun, Nicholas D. Sidiropoulos, and Athanasios P. Liavas. + "A flexible and efficient algorithmic framework for constrained matrix and tensor factorization." + IEEE Transactions on Signal Processing 64.19 (2016): 5052-5065. + """ + rho = tl.trace(UtU) / tl.shape(x)[1] + for iteration in range(n_iter_max): + x_old = tl.copy(x) + x_split = tl.solve( + tl.transpose(UtU + rho * tl.eye(tl.shape(UtU)[1])), + tl.transpose(UtM + rho * (x + dual_var)), + ) + x = proximal_operator( + tl.transpose(x_split) - dual_var, + non_negative=non_negative, + l1_reg=l1_reg, + l2_reg=l2_reg, + l2_square_reg=l2_square_reg, + unimodality=unimodality, + normalize=normalize, + simplex=simplex, + normalized_sparsity=normalized_sparsity, + soft_sparsity=soft_sparsity, + smoothness=smoothness, + monotonicity=monotonicity, + hard_sparsity=hard_sparsity, + n_const=n_const, + order=order, + ) + if n_const is None: + x = tl.transpose(tl.solve(tl.transpose(UtU), tl.transpose(UtM))) + return x, x_split, dual_var + dual_var = dual_var + x - tl.transpose(x_split) + + dual_residual = x - tl.transpose(x_split) + primal_residual = x - x_old + + if tl.norm(dual_residual) < tol * tl.norm(x) and tl.norm( + primal_residual + ) < tol * tl.norm(dual_var): + break + return x, x_split, dual_var
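A heavily hedged toy call of the `admm` solver above; a square problem is used so that `UtM`, `UtU`, `x` and `dual_var` all share compatible shapes, and the keyword names simply follow the signature shown on this page:

```python
import numpy as np
import tensorly as tl
from tensorly.solvers.admm import admm

rng = np.random.default_rng(0)
U = tl.tensor(rng.random((20, 4)))
M = tl.tensor(rng.random((20, 4)))

UtM = tl.dot(tl.transpose(U), M)  # 4 x 4
UtU = tl.dot(tl.transpose(U), U)  # 4 x 4
x = tl.zeros((4, 4))
dual_var = tl.zeros((4, 4))

# One non-negativity constraint on this factor (n_const=1, order=0)
x, x_split, dual_var = admm(
    UtM, UtU, x, dual_var, n_iter_max=200, n_const=1, order=0, non_negative=True
)
```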
    + +
    + +
    + + + +
    +
    +
    + © Copyright 2016 - 2024, TensorLy Developers.
    +
    +
    +
    + +
    + +
    + + + +
    +
    + + + + + + + + \ No newline at end of file diff --git a/stable/_modules/tensorly/solvers/nnls.html b/stable/_modules/tensorly/solvers/nnls.html new file mode 100644 index 000000000..60a7e3c56 --- /dev/null +++ b/stable/_modules/tensorly/solvers/nnls.html @@ -0,0 +1,625 @@ + + + + + + + tensorly.solvers.nnls — TensorLy: Tensor Learning in Python + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + + + + +
    + + +
    +
    + + + +
    + + +
    + + + + +
    + +

    Source code for tensorly.solvers.nnls

    +import tensorly as tl
    +from math import sqrt
    +
    +
    +
    +[docs] +def hals_nnls( + UtM, + UtU, + V=None, + n_iter_max=500, + tol=1e-8, + sparsity_coefficient=None, + ridge_coefficient=None, + nonzero_rows=False, + exact=False, + epsilon=0.0, + callback=None, +): + """ + Non Negative Least Squares (NNLS) + + Computes an approximate solution of a nonnegative least + squares problem (NNLS) with an exact block-coordinate descent scheme. + M is m by n, U is m by r, V is r by n. + All matrices are nonnegative componentwise. + + This algorithm is a simplified implementation of the accelerated HALS defined in [1]. It features an early stop stopping criterion. It is simplified to ensure reproducibility and expose a simple API to control the number of inner iterations. + + This function is made for being used repetively inside an + outer-loop alternating algorithm, for instance for computing nonnegative + matrix Factorization or tensor factorization. To use as a stand-alone solver, set the exact flag to True. + + Parameters + ---------- + UtM: r-by-n array + Pre-computed product of the transposed of U and M, used in the update rule + UtU: r-by-r array + Pre-computed product of the transposed of U and U, used in the update rule + V: r-by-n initialization matrix (mutable) + Initialized V array + By default, is initialized with one non-zero entry per column + corresponding to the closest column of U of the corresponding column of M. + n_iter_max: Positive integer + Upper bound on the number of iterations + Default: 500 + tol : float in [0,1] + early stop criterion, while err_k > delta*err_0. Set small for + almost exact nnls solution, or larger (e.g. 1e-2) for inner loops + of a PARAFAC computation. + Default: 10e-8 + sparsity_coefficient: float or None + The coefficient controling the sparisty level in the objective function. + If set to None, the problem is solved unconstrained. + Default: None + ridge_coefficient: float or None + The coefficient controling the ridge (l2) penalty in the objective function. + If set to None, no ridge penalty is imposed. + Default: None + nonzero_rows: boolean + True if the lines of the V matrix can't be zero, + False if they can be zero + Default: False + exact: If it is True, the algorithm gives a results with high precision but it needs high computational cost. + If it is False, the algorithm gives an approximate solution + Default: False + epsilon: float + Small constant such that V>=epsilon instead of V>=0. + Required to ensure convergence, avoids division by zero and reset. + Default: 0 + callback: callable, optional + A callable called after each iteration. The supported signature is + + callback(V: tensor, error: float) + + where V is the last estimated nonnegative least squares solution, and error is the squared Euclidean norm of the difference between V at the current iteration k, and V at iteration k-1 (therefore error is not the loss function which is costly to compute). + Moreover, the algorithm will also terminate if the callback callable returns True. + Default: None + + Returns + ------- + V: array + a r-by-n nonnegative matrix, see notes. + + Notes + ----- + We solve the following problem + + .. math:: + + \\min_{V >= \\epsilon} ||M-UV||_F^2 + + The matrix V is updated linewise. The update rule for this resolution is + + .. math:: + + \\begin{equation} + V[k,:]_{(j+1)} = V[k,:]_{(j)} + (UtM[k,:] - UtU[k,:]\\times V_{(j)})/UtU[k,k] + \\end{equation} + + with j the update iteration index. V is then thresholded to be larger than epsilon. 
+ + This problem can also be defined by adding respectively a sparsity coefficient and a ridge coefficients + + .. math:: \lambda_s, \lambda_r + + enhancing sparsity or smoothness in the solution [2]. In this sparse/ridge version, the update rule becomes + + .. math:: + + \\begin{equation} + V[k,:]_{(j+1)} = V[k,:]_{(j)} + (UtM[k,:] - UtU[k,:]\\times V_{(j)} - \lambda_s)/(UtU[k,k]+2\lambda_r) + \\end{equation} + + Note that the data fitting is halved but not the ridge penalization. + + References + ---------- + .. [1] N. Gillis and F. Glineur, Accelerated Multiplicative Updates and + Hierarchical ALS Algorithms for Nonnegative Matrix Factorization, + Neural Computation 24 (4): 1085-1105, 2012. + + .. [2] J. Eggert, and E. Korner. "Sparse coding and NMF." + 2004 IEEE International Joint Conference on Neural Networks + (IEEE Cat. No. 04CH37541). Vol. 4. IEEE, 2004. + + """ + + rank, _ = tl.shape(UtM) + if V is None: + V = tl.solve(UtU, UtM) + V = tl.clip(V, a_min=0, a_max=None) + # Scaling + scale = tl.sum(UtM * V) / tl.sum(UtU * tl.dot(V, tl.transpose(V))) + V = V * scale + + if exact: + n_iter_max = 50000 + tol = 1e-16 + + for iteration in range(n_iter_max): + rec_error = 0 + for k in range(rank): + if UtU[k, k]: + num = UtM[k, :] - tl.dot(UtU[k, :], V) + UtU[k, k] * V[k, :] + den = UtU[k, k] + + if sparsity_coefficient is not None: + num -= sparsity_coefficient + if ridge_coefficient is not None: + den += 2 * ridge_coefficient + + newV = tl.clip(num / den, a_min=epsilon) + rec_error += tl.norm(V - newV) ** 2 + V = tl.index_update(V, tl.index[k, :], newV) + + # Safety procedure, if columns aren't allow to be zero + if nonzero_rows and tl.all(V[k, :] == 0): + V[k, :] = tl.eps(V.dtype) * tl.max(V) + elif nonzero_rows: + raise ValueError( + "Column " + str(k) + " of U is zero with nonzero condition" + ) + + if callback is not None: + retVal = callback(V, rec_error) + if retVal is True: + print("Received True from callback function. Exiting.") + break + + if iteration == 0: + rec_error0 = rec_error + if rec_error < tol * rec_error0: + break + + return V
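A minimal stand-alone sketch of `hals_nnls` (note the simplified return value, now just the solution matrix); the data are random and the numpy backend is assumed:

```python
import numpy as np
import tensorly as tl
from tensorly.solvers.nnls import hals_nnls

rng = np.random.default_rng(0)
U = tl.tensor(rng.random((50, 5)))       # nonnegative 50 x 5
V_true = tl.tensor(rng.random((5, 30)))  # nonnegative 5 x 30
M = tl.dot(U, V_true)                    # noiseless observations

UtM = tl.dot(tl.transpose(U), M)         # precomputed U^T M (r x n)
UtU = tl.dot(tl.transpose(U), U)         # precomputed U^T U (r x r)

V_est = hals_nnls(UtM, UtU, exact=True)  # exact=True for stand-alone use
print(tl.norm(V_est - V_true) / tl.norm(V_true))  # should be close to 0
```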
    + + + +
    +[docs] +def fista( + UtM, + UtU, + x=None, + n_iter_max=100, + non_negative=True, + sparsity_coef=0, + ridge_coef=0, + lr=None, + tol=1e-8, + epsilon=1e-8, +): + """ + Fast Iterative Shrinkage Thresholding Algorithm (FISTA), see [1]_ + + Computes an approximate (nonnegative) solution for Ux=M linear system. + + Parameters + ---------- + UtM : ndarray + Pre-computed product of the transposed of U and M + UtU : ndarray + Pre-computed product of the transposed of U and U + x : init + Default: None + n_iter_max : int + Maximum number of iteration + Default: 100 + non_negative : bool, default is False + if True, result will be non-negative + lr : float + learning rate + Default : None + sparsity_coef : float or None + ridge_coef : float or None + tol : float + stopping criterion for the l1 error decrease relative to the first iteration error + epsilon : float + Small constant such that the solution is greater than epsilon instead of zero. + Required to ensure convergence, avoids division by zero and reset. + Default: 1e-8 + + Returns + ------- + x : approximate solution such that Ux = M + + Notes + ----- + We solve the following problem + + .. math:: + + \\frac{1}{2} \\|m - Ux \\|_2^2 + \\lambda_1 \\|x\\|_1 + \\lambda_2 \\|x\\|_2^2 + + References + ---------- + .. [1] Beck, A., & Teboulle, M. (2009). A fast iterative + shrinkage-thresholding algorithm for linear inverse problems. + SIAM journal on imaging sciences, 2(1), 183-202. + """ + if sparsity_coef is None: + sparsity_coef = 0 + + if x is None: + x = tl.zeros(tl.shape(UtM), **tl.context(UtM)) + if lr is None: + lr = 1 / (tl.truncated_svd(UtU)[1][0] + 2 * ridge_coef) + # Parameters + momentum_old = 1.0 # tl.tensor(1.0) + norm_0 = 0.0 + x_update = tl.copy(x) + + for iteration in range(n_iter_max): + if isinstance(UtU, list): + x_gradient = ( + -UtM + + tl.tenalg.multi_mode_dot(x_update, UtU, transpose=False) + + sparsity_coef + + 2 * ridge_coef * x_update + ) + else: + x_gradient = ( + -UtM + tl.dot(UtU, x_update) + sparsity_coef + 2 * ridge_coef * x_update + ) + + x_new = x_update - lr * x_gradient + if non_negative: + x_new = tl.where(x_new < epsilon, epsilon, x_new) + momentum = (1 + sqrt(1 + 4 * momentum_old**2)) / 2 + x_update = x_new + ((momentum_old - 1) / momentum) * (x_new - x) + momentum_old = momentum + norm = tl.abs( + tl.sum(x - x_new) + ) # for tracking loss decrease, l2 has square overflow issues + x = tl.copy(x_new) + if iteration == 0: + norm_0 = norm + if norm < tol * norm_0: + break + return x
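The same precomputed quantities can be passed to the `fista` solver above; a hedged sketch, with keyword names following the signature on this page:

```python
import numpy as np
import tensorly as tl
from tensorly.solvers.nnls import fista

rng = np.random.default_rng(0)
U = tl.tensor(rng.random((50, 5)))
M = tl.dot(U, tl.tensor(rng.random((5, 30))))

UtM = tl.dot(tl.transpose(U), M)
UtU = tl.dot(tl.transpose(U), U)

# Non-negative solution with a small l1 penalty on the entries
x = fista(UtM, UtU, non_negative=True, sparsity_coef=1e-3, n_iter_max=500)
```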
    + + + +
    +[docs] +def active_set_nnls(Utm, UtU, x=None, n_iter_max=100, tol=10e-8): + """ + Active set algorithm for non-negative least square solution, see [1]_ + + Computes an approximate non-negative solution for Ux=m linear system. + + Parameters + ---------- + Utm : vectorized ndarray + Pre-computed product of the transposed of U and m + UtU : ndarray + Pre-computed Kronecker product of the transposed of U and U + x : init + Default: None + n_iter_max : int + Maximum number of iteration + Default: 100 + tol : float + Early stopping criterion + + Returns + ------- + x : ndarray + + Notes + ----- + This function solves following problem: + + .. math:: + + \\begin{equation} + \\min_{x} \\|Ux - m\\|^2 + \\end{equation} + + According to [1], non-negativity-constrained least square estimation problem becomes: + + .. math:: + + \\begin{equation} + x' = Utm - UtU x + \\end{equation} + + References + ---------- + .. [1] Bro, R., & De Jong, S. (1997). A fast non‐negativity‐constrained + least squares algorithm. Journal of Chemometrics: A Journal of + the Chemometrics Society, 11(5), 393-401. + """ + if tl.get_backend() == "tensorflow": + raise ValueError( + "Active set is not supported with the tensorflow backend. Consider using fista method with tensorflow." + ) + + if x is None: + x_vec = tl.zeros(tl.shape(UtU)[1], **tl.context(UtU)) + else: + x_vec = tl.base.tensor_to_vec(x) + + x_gradient = Utm - tl.dot(UtU, x_vec) + passive_set = x_vec > 0 + active_set = x_vec <= 0 + support_vec = tl.zeros(tl.shape(x_vec), **tl.context(x_vec)) + + for iteration in range(n_iter_max): + if iteration > 0 or tl.all(x_vec == 0): + indice = tl.argmax(x_gradient) + passive_set = tl.index_update(passive_set, tl.index[indice], True) + active_set = tl.index_update(active_set, tl.index[indice], False) + # To avoid singularity error when initial x exists + try: + passive_solution = tl.solve( + UtU[passive_set, :][:, passive_set], Utm[passive_set] + ) + indice_list = [] + for i in range(tl.shape(support_vec)[0]): + if passive_set[i]: + indice_list.append(i) + support_vec = tl.index_update( + support_vec, + tl.index[int(i)], + passive_solution[len(indice_list) - 1], + ) + else: + support_vec = tl.index_update(support_vec, tl.index[int(i)], 0) + # Start from zeros if solve is not achieved + except: + x_vec = tl.zeros(tl.shape(UtU)[1]) + support_vec = tl.zeros(tl.shape(x_vec), **tl.context(x_vec)) + passive_set = x_vec > 0 + active_set = x_vec <= 0 + if tl.any(active_set): + indice = tl.argmax(x_gradient) + passive_set = tl.index_update(passive_set, tl.index[indice], True) + active_set = tl.index_update(active_set, tl.index[indice], False) + passive_solution = tl.solve( + UtU[passive_set, :][:, passive_set], Utm[passive_set] + ) + indice_list = [] + for i in range(tl.shape(support_vec)[0]): + if passive_set[i]: + indice_list.append(i) + support_vec = tl.index_update( + support_vec, + tl.index[int(i)], + passive_solution[len(indice_list) - 1], + ) + else: + support_vec = tl.index_update(support_vec, tl.index[int(i)], 0) + + # update support vector if it is necessary + if tl.min(support_vec[passive_set]) <= 0: + for i in range(len(passive_set)): + alpha = tl.min( + x_vec[passive_set][support_vec[passive_set] <= 0] + / ( + x_vec[passive_set][support_vec[passive_set] <= 0] + - support_vec[passive_set][support_vec[passive_set] <= 0] + ) + ) + update = alpha * (support_vec - x_vec) + x_vec = x_vec + update + passive_set = x_vec > 0 + active_set = x_vec <= 0 + passive_solution = tl.solve( + UtU[passive_set, :][:, passive_set], 
Utm[passive_set] + ) + indice_list = [] + for i in range(tl.shape(support_vec)[0]): + if passive_set[i]: + indice_list.append(i) + support_vec = tl.index_update( + support_vec, + tl.index[int(i)], + passive_solution[len(indice_list) - 1], + ) + else: + support_vec = tl.index_update(support_vec, tl.index[int(i)], 0) + + if tl.any(passive_set) != True or tl.min(support_vec[passive_set]) > 0: + break + # set x to s + x_vec = tl.clip(support_vec, 0, tl.max(support_vec)) + + # gradient update + x_gradient = Utm - tl.dot(UtU, x_vec) + + if tl.any(active_set) != True or tl.max(x_gradient[active_set]) <= tol: + break + + return x_vec
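For context, a minimal usage sketch of the solver documented above: the precomputed products are formed explicitly with NumPy and the solver is called with its defaults. The import path tensorly.solvers.nnls is an assumption based on where this page appears to live (older releases exposed the function from tensorly.tenalg.proximal).

>>> import numpy as np
>>> import tensorly as tl
>>> from tensorly.solvers.nnls import active_set_nnls  # import path assumed
>>> rng = np.random.default_rng(0)
>>> U = tl.tensor(rng.random((20, 5)))
>>> m = tl.dot(U, tl.tensor(rng.random(5)))   # a consistent right-hand side
>>> Utm = tl.dot(tl.transpose(U), m)          # vectorized U^T m
>>> UtU = tl.dot(tl.transpose(U), U)          # U^T U
>>> x = active_set_nnls(Utm, UtU)             # non-negative approximation of argmin ||Ux - m||^2
>>> x.shape
(5,)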
    + © Copyright 2016 - 2024, TensorLy Developers.
\ No newline at end of file
diff --git a/stable/_modules/tensorly/tenalg/core_tenalg/_batched_tensordot.html b/stable/_modules/tensorly/tenalg/core_tenalg/_batched_tensordot.html
index 18020b416..782ffb50f 100644
--- a/stable/_modules/tensorly/tenalg/core_tenalg/_batched_tensordot.html
+++ b/stable/_modules/tensorly/tenalg/core_tenalg/_batched_tensordot.html
@@ -145,12 +144,14 @@

    Source code for tensorly.tenalg.core_tenalg._batched_tensordot

    -from ..tenalg_utils import _validate_contraction_modes
    -from ...utils import prod
    +from math import prod
    +from ..tenalg_utils import _validate_contraction_modes
     import tensorly as tl
     
     
    -
    [docs]def tensordot(tensor1, tensor2, modes, batched_modes=()): +
    +[docs] +def tensordot(tensor1, tensor2, modes, batched_modes=()): """Batched tensor contraction between two tensors on specified modes Parameters @@ -211,6 +212,7 @@

    Source code for tensorly.tenalg.core_tenalg._batched_tensordot

    res = tl.transpose(res, final_modes) return res
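As a quick illustration of the contraction semantics documented above, a small sketch (the public alias tensorly.tenalg.tensordot is assumed to point at this function):

>>> import numpy as np
>>> import tensorly as tl
>>> A = tl.tensor(np.random.rand(3, 4, 5))
>>> B = tl.tensor(np.random.rand(5, 4, 6))
>>> tl.tenalg.tensordot(A, B, modes=([2], [0])).shape  # contract mode 2 of A with mode 0 of B
(3, 4, 4, 6)

The free modes of the first tensor come first in the result, followed by the free modes of the second.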
    +
    @@ -220,7 +222,7 @@

    Source code for tensorly.tenalg.core_tenalg._batched_tensordot

    @@ -209,7 +211,7 @@

    Source code for tensorly.tenalg.core_tenalg._kronecker

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/tenalg/core_tenalg/_tt_matrix.html b/stable/_modules/tensorly/tenalg/core_tenalg/_tt_matrix.html
index 8816d58e3..7837b6679 100644
--- a/stable/_modules/tensorly/tenalg/core_tenalg/_tt_matrix.html
+++ b/stable/_modules/tensorly/tenalg/core_tenalg/_tt_matrix.html
@@ -151,7 +150,9 @@

    Source code for tensorly.tenalg.core_tenalg._tt_matrix

    from ._batched_tensordot import tensordot -
    [docs]def tt_matrix_to_tensor(tt_matrix): +
    +[docs] +def tt_matrix_to_tensor(tt_matrix): """Returns the full tensor whose TT-Matrix decomposition is given by 'factors' Re-assembles 'factors', which represent a tensor in TT-Matrix format @@ -183,6 +184,7 @@

    Source code for tensorly.tenalg.core_tenalg._tt_matrix

    res = tensordot(res, factor, ([-1], [0])) return tl.transpose(tl.reshape(res, full_shape), order)
    +
    @@ -192,7 +194,7 @@

    Source code for tensorly.tenalg.core_tenalg._tt_matrix

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/tenalg/core_tenalg/generalised_inner_product.html b/stable/_modules/tensorly/tenalg/core_tenalg/generalised_inner_product.html
index a4d0a9c75..4541c2748 100644
--- a/stable/_modules/tensorly/tenalg/core_tenalg/generalised_inner_product.html
+++ b/stable/_modules/tensorly/tenalg/core_tenalg/generalised_inner_product.html
@@ -152,7 +151,9 @@

    Source code for tensorly.tenalg.core_tenalg.generalised_inner_product

    # License: BSD 3 clause -
    [docs]def inner(tensor1, tensor2, n_modes=None): +
    +[docs] +def inner(tensor1, tensor2, n_modes=None): """Generalised inner products between tensors Takes the inner product between the last (respectively first) @@ -198,6 +199,7 @@

    Source code for tensorly.tenalg.core_tenalg.generalised_inner_product

    T.reshape(tensor1, (-1, common_size)), T.reshape(tensor2, (common_size, -1)) ) return T.reshape(inner_product, output_shape)
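For example, with n_modes=1 the generalised inner product reduces to an ordinary matrix product over the shared mode; a small sketch (the tensorly.tenalg.inner alias is assumed):

>>> import numpy as np
>>> import tensorly as tl
>>> t1 = tl.tensor(np.random.rand(3, 4))
>>> t2 = tl.tensor(np.random.rand(4, 5))
>>> tl.tenalg.inner(t1, t2, n_modes=1).shape  # last mode of t1 against first mode of t2
(3, 5)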
    +
    @@ -207,7 +209,7 @@

    Source code for tensorly.tenalg.core_tenalg.generalised_inner_product

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/tenalg/core_tenalg/moments.html b/stable/_modules/tensorly/tenalg/core_tenalg/moments.html
index 5097647e8..a65cb919e 100644
--- a/stable/_modules/tensorly/tenalg/core_tenalg/moments.html
+++ b/stable/_modules/tensorly/tenalg/core_tenalg/moments.html
@@ -149,7 +148,9 @@

    Source code for tensorly.tenalg.core_tenalg.moments

    from . import batched_outer -
    [docs]def higher_order_moment(tensor, order): +
    +[docs] +def higher_order_moment(tensor, order): """Computes the Higher-Order Momemt Parameters @@ -172,6 +173,7 @@

    Source code for tensorly.tenalg.core_tenalg.moments

    moment = batched_outer(moment, tensor) return tl.mean(moment, axis=0)
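For instance, for a matrix of observations with samples along the first mode, order 3 yields the empirical third-order moment tensor of the features; a sketch (the tensorly.tenalg.higher_order_moment alias is assumed):

>>> import numpy as np
>>> import tensorly as tl
>>> X = tl.tensor(np.random.randn(100, 4))            # 100 samples, 4 features
>>> tl.tenalg.higher_order_moment(X, order=3).shape   # mean over samples of the order-3 outer product of each row with itself
(4, 4, 4)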
    +
    @@ -181,7 +183,7 @@

    Source code for tensorly.tenalg.core_tenalg.moments

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/tenalg/core_tenalg/mttkrp.html b/stable/_modules/tensorly/tenalg/core_tenalg/mttkrp.html
index 52aa9fcfa..c6d28b96c 100644
--- a/stable/_modules/tensorly/tenalg/core_tenalg/mttkrp.html
+++ b/stable/_modules/tensorly/tenalg/core_tenalg/mttkrp.html
@@ -146,12 +145,16 @@

    Source code for tensorly.tenalg.core_tenalg.mttkrp

     from .n_mode_product import multi_mode_dot
    +from ._khatri_rao import khatri_rao
     from ... import backend as T
    +from ...base import unfold
     
     # Author: Jean Kossaifi
     
     
    -
    [docs]def unfolding_dot_khatri_rao(tensor, cp_tensor, mode): +
    +[docs] +def unfolding_dot_khatri_rao(tensor, cp_tensor, mode): """mode-n unfolding times khatri-rao product of factors Parameters @@ -170,24 +173,61 @@

    Source code for tensorly.tenalg.core_tenalg.mttkrp

    Notes ----- - This is a variant of:: + Default unfolding_dot_khatri_rao implementation. + + Implemented as the product between an unfolded tensor + and a Khatri-Rao product explicitly formed. Due to matrix-matrix + products being extremely efficient operations, this is a + simple yet hard-to-beat implementation of MTTKRP. + + If working with sparse tensors, or when the CP-rank of the CP-tensor is comparable to, or larger than, + the dimensions of the input tensor, however, this method requires a lot + of memory, which can be harmful when dealing with large tensors. In this + case, please use the memory-efficient version of MTTKRP. + + To use the slower memory efficient version, run + + >>> from tensorly.tenalg.core_tenalg.mttkrp import unfolding_dot_khatri_rao_memory + >>> tl.tenalg.register_backend_method("unfolding_dot_khatri_rao", unfolding_dot_khatri_rao_memory) + >>> tl.tenalg.use_dynamic_dispatch() + + """ + weights, factors = cp_tensor + kr_factors = khatri_rao(factors, weights=weights, skip_matrix=mode) + mttkrp = T.dot(unfold(tensor, mode), T.conj(kr_factors)) + return mttkrp
    - unfolded = unfold(tensor, mode) - kr_factors = khatri_rao(factors, skip_matrix=mode) - mttkrp2 = tl.dot(unfolded, kr_factors) - Multiplying with the Khatri-Rao product is equivalent to multiplying, - for each rank, with the kronecker product of each factor. - In code:: - mttkrp_parts = [] - for r in range(rank): - component = tl.tenalg.multi_mode_dot(tensor, [f[:, r] for f in factors], skip=mode) - mttkrp_parts.append(component) - mttkrp = tl.stack(mttkrp_parts, axis=1) - return mttkrp +def unfolding_dot_khatri_rao_memory(tensor, cp_tensor, mode): + """mode-n unfolding times khatri-rao product of factors - This can be done by taking n-mode-product with the full factors + Parameters + ---------- + tensor : tl.tensor + tensor to unfold + factors : tl.tensor list + list of matrices of which to the khatri-rao product + mode : int + mode on which to unfold `tensor` + + Returns + ------- + mttkrp + dot(unfold(tensor, mode), khatri-rao(factors)) + + Notes + ----- + Implemented as a sequence of Tensor-times-vectors products between a tensor + and a Khatri-Rao product. The Khatri-Rao product is never computed explicitly, + rather each column in the Khatri-Rao product is contracted with the tensor. This + operation is implemented in Python and without making of use of parallelism, and it + is therefore in general slower than the naive MTTKRP product. + When the CP-rank of the CP-tensor is comparable to, or larger than, + the dimensions of the input tensor, this method however requires much less + memory. + + This method can also be implemented by taking n-mode-product with the full factors (faster but more memory consuming):: projected = multi_mode_dot(tensor, factors, skip=mode, transpose=True) @@ -197,22 +237,6 @@

    Source code for tensorly.tenalg.core_tenalg.mttkrp

    index = tuple([slice(None) if k == mode else i for k in range(ndims)]) res.append(projected[index]) return T.stack(res, axis=-1) - - - The same idea could be expressed using einsum:: - - ndims = tl.ndim(tensor) - tensor_idx = ''.join(chr(ord('a') + i) for i in range(ndims)) - rank = chr(ord('a') + ndims + 1) - op = tensor_idx - for i in range(ndims): - if i != mode: - op += ',' + ''.join([tensor_idx[i], rank]) - else: - result = ''.join([tensor_idx[i], rank]) - op += '->' + result - factors = [f for (i, f) in enumerate(factors) if i != mode] - return tl_einsum(op, tensor, *factors) """ mttkrp_parts = [] weights, factors = cp_tensor @@ -226,7 +250,7 @@

    Source code for tensorly.tenalg.core_tenalg.mttkrp

    if weights is None: return T.stack(mttkrp_parts, axis=1) else: - return T.stack(mttkrp_parts, axis=1) * T.reshape(weights, (1, -1))
    + return T.stack(mttkrp_parts, axis=1) * T.reshape(weights, (1, -1))
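A brief usage sketch of the default MTTKRP above, built from a random CP tensor (tensorly.random.random_cp and the tensorly.tenalg.unfolding_dot_khatri_rao alias are assumed to be available, as in recent releases):

>>> import tensorly as tl
>>> from tensorly.random import random_cp
>>> cp = random_cp((4, 5, 6), rank=3)
>>> tensor = cp.to_tensor()
>>> tl.tenalg.unfolding_dot_khatri_rao(tensor, cp, mode=0).shape  # unfold(tensor, 0) times khatri_rao(factors, skip mode 0)
(4, 3)

The memory-efficient variant described above has the same signature, so the call is unchanged once it is registered as the backend method with the snippet from the notes.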
    @@ -236,7 +260,7 @@

    Source code for tensorly.tenalg.core_tenalg.mttkrp

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/tenalg/core_tenalg/n_mode_product.html b/stable/_modules/tensorly/tenalg/core_tenalg/n_mode_product.html
index cdedf981a..82fce69d3 100644
--- a/stable/_modules/tensorly/tenalg/core_tenalg/n_mode_product.html
+++ b/stable/_modules/tensorly/tenalg/core_tenalg/n_mode_product.html
@@ -149,7 +148,9 @@

    Source code for tensorly.tenalg.core_tenalg.n_mode_product

    from ... import unfold, fold, vec_to_tensor -
    [docs]def mode_dot(tensor, matrix_or_vector, mode, transpose=False): +
    +[docs] +def mode_dot(tensor, matrix_or_vector, mode, transpose=False): """n-mode product of a tensor and a matrix or vector at the specified mode Mathematically: :math:`\\text{tensor} \\times_{\\text{mode}} \\text{matrix or vector}` @@ -206,9 +207,7 @@

    Source code for tensorly.tenalg.core_tenalg.n_mode_product

    if len(new_shape) > 1: new_shape.pop(mode) else: - # Ideally this should be (), i.e. order-0 tensors - # MXNet currently doesn't support this though.. - new_shape = [] + new_shape = () vec = True else: @@ -225,7 +224,10 @@

    Source code for tensorly.tenalg.core_tenalg.n_mode_product

    return fold(res, fold_mode, new_shape)
    -
    [docs]def multi_mode_dot(tensor, matrix_or_vec_list, modes=None, skip=None, transpose=False): + +
    +[docs] +def multi_mode_dot(tensor, matrix_or_vec_list, modes=None, skip=None, transpose=False): """n-mode product of a tensor and several matrices or vectors over several modes Parameters @@ -282,6 +284,7 @@

    Source code for tensorly.tenalg.core_tenalg.n_mode_product

    decrement += 1 return res
    +
    @@ -291,7 +294,7 @@

    Source code for tensorly.tenalg.core_tenalg.n_mode_product

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/tenalg/core_tenalg/outer_product.html b/stable/_modules/tensorly/tenalg/core_tenalg/outer_product.html
index 7a8bfdb01..0c460c8df 100644
--- a/stable/_modules/tensorly/tenalg/core_tenalg/outer_product.html
+++ b/stable/_modules/tensorly/tenalg/core_tenalg/outer_product.html
@@ -149,7 +148,9 @@

    Source code for tensorly.tenalg.core_tenalg.outer_product

    # TODO : add batched_modes as in batched_tensor_dot? -
    [docs]def batched_outer(tensors): +
    +[docs] +def batched_outer(tensors): """Returns a generalized outer product of the two tensors Parameters @@ -190,7 +191,10 @@

    Source code for tensorly.tenalg.core_tenalg.outer_product

    return res
    -
    [docs]def outer(tensors): + +
    +[docs] +def outer(tensors): """Returns a generalized outer product of the two tensors Parameters @@ -221,6 +225,7 @@

    Source code for tensorly.tenalg.core_tenalg.outer_product

    sres = len(shape_res) return res
    +
    @@ -230,7 +235,7 @@

    Source code for tensorly.tenalg.core_tenalg.outer_product

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/tenalg/proximal.html b/stable/_modules/tensorly/tenalg/proximal.html
index 195cf8e2d..2f8afad73 100644
--- a/stable/_modules/tensorly/tenalg/proximal.html
+++ b/stable/_modules/tensorly/tenalg/proximal.html
@@ -146,14 +145,6 @@

    Source code for tensorly.tenalg.proximal

     import tensorly as tl
    -import numpy as np
    -
    -# Author: Jean Kossaifi
    -#         Jeremy Cohen <jeremy.cohen@irisa.fr>
    -#         Axel Marmoret <axel.marmoret@inria.fr>
    -#         Caglayan Tuna <caglayantun@gmail.com>
    -
    -# License: BSD 3 clause
     
     
     def validate_constraints(
    @@ -221,230 +212,82 @@ 

    Source code for tensorly.tenalg.proximal

         """
         constraints = [None] * n_const
         parameters = [None] * n_const
    -    if non_negative:
    -        if isinstance(non_negative, dict):
    -            modes = list(non_negative)
    -            for i in range(len(modes)):
    -                constraints[modes[i]] = "non_negative"
    -        else:
    -            for i in range(len(constraints)):
    -                constraints[i] = "non_negative"
    -    if l1_reg:
    -        if isinstance(l1_reg, dict):
    -            modes = list(l1_reg)
    -            for i in range(len(modes)):
    -                if constraints[modes[i]] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[modes[i]] = "l1_reg"
    -                parameters[modes[i]] = l1_reg[modes[i]]
    -        else:
    -            for i in range(len(constraints)):
    -                if constraints[i] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[i] = "l1_reg"
    -                if isinstance(l1_reg, list):
    -                    parameters[i] = l1_reg[i]
    -                else:
    -                    parameters[i] = l1_reg
    -    if l2_reg:
    -        if isinstance(l2_reg, dict):
    -            modes = list(l2_reg)
    -            for i in range(len(modes)):
    -                if constraints[modes[i]] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[modes[i]] = "l2_reg"
    -                parameters[modes[i]] = l2_reg[modes[i]]
    -        else:
    -            for i in range(len(constraints)):
    -                if constraints[i] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[i] = "l2_reg"
    -                if isinstance(l2_reg, list):
    -                    parameters[i] = l2_reg[i]
    -                else:
    -                    parameters[i] = l2_reg
    -    if l2_square_reg:
    -        if isinstance(l2_square_reg, dict):
    -            modes = list(l2_square_reg)
    -            for i in range(len(modes)):
    -                if constraints[modes[i]] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[modes[i]] = "l2_square_reg"
    -                parameters[modes[i]] = l2_square_reg[modes[i]]
    -        else:
    -            for i in range(len(constraints)):
    -                if constraints[i] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[i] = "l2_square_reg"
    -                if isinstance(l2_square_reg, list):
    -                    parameters[i] = l2_square_reg[i]
    -                else:
    -                    parameters[i] = l2_square_reg
    -    if normalized_sparsity:
    -        if isinstance(normalized_sparsity, dict):
    -            modes = list(normalized_sparsity)
    -            for i in range(len(modes)):
    -                if constraints[modes[i]] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[modes[i]] = "normalized_sparsity"
    -                parameters[modes[i]] = normalized_sparsity[modes[i]]
    -        else:
    -            for i in range(len(constraints)):
    -                if constraints[i] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[i] = "normalized_sparsity"
    -                if isinstance(normalized_sparsity, list):
    -                    parameters[i] = normalized_sparsity[i]
    -                else:
    -                    parameters[i] = normalized_sparsity
    -    if soft_sparsity:
    -        if isinstance(soft_sparsity, dict):
    -            modes = list(soft_sparsity)
    -            for i in range(len(modes)):
    -                if constraints[modes[i]] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[modes[i]] = "soft_sparsity"
    -                parameters[modes[i]] = soft_sparsity[modes[i]]
    -        else:
    -            for i in range(len(constraints)):
    -                if constraints[i] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[i] = "soft_sparsity"
    -                if isinstance(soft_sparsity, list):
    -                    parameters[i] = soft_sparsity[i]
    -                else:
    -                    parameters[i] = soft_sparsity
    -    if hard_sparsity:
    -        if isinstance(hard_sparsity, dict):
    -            modes = list(hard_sparsity)
    -            for i in range(len(modes)):
    -                if constraints[modes[i]] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[modes[i]] = "hard_sparsity"
    -                parameters[modes[i]] = hard_sparsity[modes[i]]
    -        else:
    -            for i in range(len(constraints)):
    -                if constraints[i] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[i] = "hard_sparsity"
    -                if isinstance(hard_sparsity, list):
    -                    parameters[i] = hard_sparsity[i]
    -                else:
    -                    parameters[i] = hard_sparsity
    -    if simplex:
    -        if isinstance(simplex, dict):
    -            modes = list(simplex)
    -            for i in range(len(modes)):
    -                if constraints[modes[i]] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[modes[i]] = "simplex"
    -                parameters[modes[i]] = simplex[modes[i]]
    -        else:
    -            for i in range(len(constraints)):
    -                if constraints[i] is not None:
    +
    +    constraints_list = [
    +        non_negative,
    +        l1_reg,
    +        l2_reg,
    +        l2_square_reg,
    +        unimodality,
    +        normalize,
    +        simplex,
    +        normalized_sparsity,
    +        soft_sparsity,
    +        smoothness,
    +        monotonicity,
    +        hard_sparsity,
    +    ]
    +
    +    constraints_names = [
    +        "non_negative",
    +        "l1_reg",
    +        "l2_reg",
    +        "l2_square_reg",
    +        "unimodality",
    +        "normalize",
    +        "simplex",
    +        "normalized_sparsity",
    +        "soft_sparsity",
    +        "smoothness",
    +        "monotonicity",
    +        "hard_sparsity",
    +    ]
    +
    +    # Checking that no mode is constrained twice
    +    modes_constrained = set()
    +    for each_constraint in constraints_list:
    +        if each_constraint:
    +            if isinstance(each_constraint, dict):
    +                for mode in each_constraint:
    +                    if mode in modes_constrained:
    +                        raise ValueError(
    +                            "You selected two constraints for the same mode. Consider to check your input"
    +                        )
    +                    modes_constrained.add(mode)
    +            elif isinstance(each_constraint, list):
    +                for mode in range(len(each_constraint)):
    +                    if each_constraint[mode]:
    +                        if mode in modes_constrained:
    +                            raise ValueError(
    +                                "You selected two constraints for the same mode. Consider to check your input"
    +                            )
    +                        modes_constrained.add(mode)
    +            else:  # each_constraint is a float or int applied to all modes
    +                if len(modes_constrained) > 0:
                         raise ValueError(
                             "You selected two constraints for the same mode. Consider to check your input"
                         )
    -                constraints[i] = "simplex"
    -                if isinstance(simplex, list):
    -                    parameters[i] = simplex[i]
    -                else:
    -                    parameters[i] = simplex
    -    if smoothness:
    -        if isinstance(smoothness, dict):
    -            modes = list(smoothness)
    +                for i in range(n_const):
    +                    modes_constrained.add(i)
    +
    +    def registrer_constraint(list_or_dict_or_float, name_constraint):
    +        if isinstance(list_or_dict_or_float, dict):
    +            modes = list(list_or_dict_or_float)
                 for i in range(len(modes)):
    -                if constraints[modes[i]] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[modes[i]] = "smoothness"
    -                parameters[modes[i]] = smoothness[modes[i]]
    +                constraints[modes[i]] = name_constraint
    +                parameters[modes[i]] = list_or_dict_or_float[modes[i]]
             else:
                 for i in range(len(constraints)):
    -                if constraints[i] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[i] = "smoothness"
    -                if isinstance(smoothness, list):
    -                    parameters[i] = smoothness[i]
    +                constraints[i] = name_constraint
    +                if isinstance(list_or_dict_or_float, list):
    +                    parameters[i] = list_or_dict_or_float[i]
                     else:
    -                    parameters[i] = smoothness
    -    if unimodality:
    -        if isinstance(unimodality, dict):
    -            modes = list(unimodality)
    -            for i in range(len(modes)):
    -                if constraints[modes[i]] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[modes[i]] = "unimodality"
    -        else:
    -            for i in range(len(constraints)):
    -                if constraints[i] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[i] = "unimodality"
    -    if monotonicity:
    -        if isinstance(monotonicity, dict):
    -            modes = list(monotonicity)
    -            for i in range(len(modes)):
    -                if constraints[modes[i]] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[modes[i]] = "monotonicity"
    -        else:
    -            for i in range(len(constraints)):
    -                if constraints[i] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[i] = "monotonicity"
    -    if normalize:
    -        if isinstance(normalize, dict):
    -            modes = list(normalize)
    -            for i in range(len(modes)):
    -                if constraints[modes[i]] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[modes[i]] = "normalize"
    -        else:
    -            for i in range(len(constraints)):
    -                if constraints[i] is not None:
    -                    raise ValueError(
    -                        "You selected two constraints for the same mode. Consider to check your input"
    -                    )
    -                constraints[i] = "normalize"
    +                    parameters[i] = list_or_dict_or_float
    +
    +    for each_constraint, each_name in zip(constraints_list, constraints_names):
    +        if each_constraint:
    +            registrer_constraint(each_constraint, each_name)
    +
         return constraints[order], parameters[order]
     
     
    @@ -566,6 +409,8 @@ 

    Source code for tensorly.tenalg.proximal

             return monotonicity_prox(tensor)
         elif constraint == "hard_sparsity":
             return hard_thresholding(tensor, parameter)
    +    else:
    +        raise RuntimeError("Invalid constraint name")
     
     
     def smoothness_prox(tensor, regularizer):
    @@ -581,10 +426,11 @@ 

    Source code for tensorly.tenalg.proximal

         ndarray
     
         """
    -    diag_matrix = (
    +    diag_matrix = tl.tensor(
             tl.diag(2 * regularizer * tl.ones(tl.shape(tensor)[0]) + 1)
             + tl.diag(-regularizer * tl.ones(tl.shape(tensor)[0] - 1), k=-1)
    -        + tl.diag(-regularizer * tl.ones(tl.shape(tensor)[0] - 1), k=1)
    +        + tl.diag(-regularizer * tl.ones(tl.shape(tensor)[0] - 1), k=1),
    +        **tl.context(tensor)
         )
         return tl.solve(diag_matrix, tensor)
     
    @@ -727,7 +573,6 @@ 

    Source code for tensorly.tenalg.proximal

             tl.max(sum_inc + tl.flip(sum_dec, axis=0)),
         )
         min_indice = tl.argmin(tl.tensor(difference), axis=0)
    -
         for i in range(len(min_indice)):
             tensor_unimodal = tl.index_update(
                 tensor_unimodal,
    @@ -929,7 +774,9 @@ 

    Source code for tensorly.tenalg.proximal

         )
     
     
    -
    [docs]def soft_thresholding(tensor, threshold): +
    +[docs] +def soft_thresholding(tensor, threshold): """Soft-thresholding operator sign(tensor) * max[abs(tensor) - threshold, 0] @@ -951,7 +798,7 @@

    Source code for tensorly.tenalg.proximal

         Basic shrinkage
     
         >>> import tensorly.backend as T
    -    >>> from tensorly.tenalg.proximal import soft_thresholding
    +    >>> from tensorly.solvers.proximal import soft_thresholding
         >>> tensor = tl.tensor([[1, -2, 1.5], [-4, 3, -0.5]])
         >>> soft_thresholding(tensor, 1.1)
         array([[ 0. , -0.9,  0.4],
    @@ -972,7 +819,10 @@ 

    Source code for tensorly.tenalg.proximal

         return tl.sign(tensor) * tl.clip(tl.abs(tensor) - threshold, a_min=0)
    -
    [docs]def svd_thresholding(matrix, threshold): + +
    +[docs] +def svd_thresholding(matrix, threshold): """Singular value thresholding operator Parameters @@ -993,7 +843,10 @@

    Source code for tensorly.tenalg.proximal

         return tl.dot(U, tl.reshape(soft_thresholding(s, threshold), (-1, 1)) * V)
    -
    [docs]def procrustes(matrix): + +
    +[docs] +def procrustes(matrix): """Procrustes operator Parameters @@ -1014,570 +867,6 @@

    Source code for tensorly.tenalg.proximal

         U, _, V = tl.truncated_svd(matrix, n_eigenvecs=min(matrix.shape))
         return tl.dot(U, V)
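To complement the soft-thresholding doctest above, a minimal sketch of the two matrix operators defined on this page, imported from this module (tensorly.tenalg.proximal):

>>> import numpy as np
>>> import tensorly as tl
>>> from tensorly.tenalg.proximal import svd_thresholding, procrustes
>>> M = tl.tensor(np.random.rand(5, 4))
>>> svd_thresholding(M, 0.5).shape   # soft-thresholds the singular values of M
(5, 4)
>>> procrustes(M).shape              # U V from the SVD of M, i.e. the closest matrix with orthonormal columns
(5, 4)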
    - -def hals_nnls( - UtM, - UtU, - V=None, - n_iter_max=500, - tol=10e-8, - sparsity_coefficient=None, - normalize=False, - nonzero_rows=False, - exact=False, -): - """ - Non Negative Least Squares (NNLS) - - Computes an approximate solution of a nonnegative least - squares problem (NNLS) with an exact block-coordinate descent scheme. - M is m by n, U is m by r, V is r by n. - All matrices are nonnegative componentwise. - - This algorithm is defined in [1], as an accelerated version of the HALS algorithm. - - It features two accelerations: an early stop stopping criterion, and a - complexity averaging between precomputations and loops, so as to use large - precomputations several times. - - This function is made for being used repetively inside an - outer-loop alternating algorithm, for instance for computing nonnegative - matrix Factorization or tensor factorization. - - Parameters - ---------- - UtM: r-by-n array - Pre-computed product of the transposed of U and M, used in the update rule - UtU: r-by-r array - Pre-computed product of the transposed of U and U, used in the update rule - V: r-by-n initialization matrix (mutable) - Initialized V array - By default, is initialized with one non-zero entry per column - corresponding to the closest column of U of the corresponding column of M. - n_iter_max: Postivie integer - Upper bound on the number of iterations - Default: 500 - tol : float in [0,1] - early stop criterion, while err_k > delta*err_0. Set small for - almost exact nnls solution, or larger (e.g. 1e-2) for inner loops - of a PARAFAC computation. - Default: 10e-8 - sparsity_coefficient: float or None - The coefficient controling the sparisty level in the objective function. - If set to None, the problem is solved unconstrained. - Default: None - nonzero_rows: boolean - True if the lines of the V matrix can't be zero, - False if they can be zero - Default: False - exact: If it is True, the algorithm gives a results with high precision but it needs high computational cost. - If it is False, the algorithm gives an approximate solution - Default: False - - Returns - ------- - V: array - a r-by-n nonnegative matrix \approx argmin_{V >= 0} ||M-UV||_F^2 - rec_error: float - number of loops authorized by the error stop criterion - iteration: integer - final number of update iteration performed - complexity_ratio: float - number of loops authorized by the stop criterion - - Notes - ----- - We solve the following problem :math:`\\min_{V >= 0} ||M-UV||_F^2` - - The matrix V is updated linewise. The update rule for this resolution is:: - - .. math:: - \\begin{equation} - V[k,:]_(j+1) = V[k,:]_(j) + (UtM[k,:] - UtU[k,:]\\times V_(j))/UtU[k,k] - \\end{equation} - - with j the update iteration. - - This problem can also be defined by adding a sparsity coefficient, - enhancing sparsity in the solution [2]. In this sparse version, the update rule becomes:: - - .. math:: - \\begin{equation} - V[k,:]_(j+1) = V[k,:]_(j) + (UtM[k,:] - UtU[k,:]\\times V_(j) - sparsity_coefficient)/UtU[k,k] - \\end{equation} - - References - ---------- - .. [1]: N. Gillis and F. Glineur, Accelerated Multiplicative Updates and - Hierarchical ALS Algorithms for Nonnegative Matrix Factorization, - Neural Computation 24 (4): 1085-1105, 2012. - - .. [2] J. Eggert, and E. Korner. "Sparse coding and NMF." - 2004 IEEE International Joint Conference on Neural Networks - (IEEE Cat. No. 04CH37541). Vol. 4. IEEE, 2004. 
- - """ - - rank, n_col_M = tl.shape(UtM) - if V is None: # checks if V is empty - V = tl.solve(UtU, UtM) - - V = tl.clip(V, a_min=0, a_max=None) - # Scaling - scale = tl.sum(UtM * V) / tl.sum(UtU * tl.dot(V, tl.transpose(V))) - V = V * scale - - if exact: - n_iter_max = 50000 - tol = 10e-16 - for iteration in range(n_iter_max): - rec_error = 0 - for k in range(rank): - if UtU[k, k]: - if ( - sparsity_coefficient is not None - ): # Modifying the function for sparsification - deltaV = tl.where( - (UtM[k, :] - tl.dot(UtU[k, :], V) - sparsity_coefficient) - / UtU[k, k] - > -V[k, :], - (UtM[k, :] - tl.dot(UtU[k, :], V) - sparsity_coefficient) - / UtU[k, k], - -V[k, :], - ) - V = tl.index_update(V, tl.index[k, :], V[k, :] + deltaV) - - else: # without sparsity - deltaV = tl.where( - (UtM[k, :] - tl.dot(UtU[k, :], V)) / UtU[k, k] > -V[k, :], - (UtM[k, :] - tl.dot(UtU[k, :], V)) / UtU[k, k], - -V[k, :], - ) - V = tl.index_update(V, tl.index[k, :], V[k, :] + deltaV) - - rec_error = rec_error + tl.dot(deltaV, tl.transpose(deltaV)) - - # Safety procedure, if columns aren't allow to be zero - if nonzero_rows and tl.all(V[k, :] == 0): - V[k, :] = tl.eps(V.dtype) * tl.max(V) - - elif nonzero_rows: - raise ValueError( - "Column " + str(k) + " of U is zero with nonzero condition" - ) - - if normalize: - norm = tl.norm(V[k, :]) - if norm != 0: - V[k, :] /= norm - else: - sqrt_n = 1 / n_col_M ** (1 / 2) - V[k, :] = [sqrt_n for i in range(n_col_M)] - if iteration == 0: - rec_error0 = rec_error - - numerator = tl.shape(V)[0] * tl.shape(V)[1] + tl.shape(V)[1] * rank - denominator = tl.shape(V)[0] * rank + tl.shape(V)[0] - complexity_ratio = 1 + (numerator / denominator) - if exact: - if rec_error < tol * rec_error0: - break - else: - if rec_error < tol * rec_error0 or iteration > 1 + 0.5 * complexity_ratio: - break - return V, rec_error, iteration, complexity_ratio - - -def fista( - UtM, - UtU, - x=None, - n_iter_max=100, - non_negative=True, - sparsity_coef=0, - lr=None, - tol=10e-8, -): - """ - Fast Iterative Shrinkage Thresholding Algorithm (FISTA) - - Computes an approximate (nonnegative) solution for Ux=M linear system. - - Parameters - ---------- - UtM : ndarray - Pre-computed product of the transposed of U and M - UtU : ndarray - Pre-computed product of the transposed of U and U - x : init - Default: None - n_iter_max : int - Maximum number of iteration - Default: 100 - non_negative : bool, default is False - if True, result will be non-negative - lr : float - learning rate - Default : None - sparsity_coef : float or None - tol : float - stopping criterion - - Returns - ------- - x : approximate solution such that Ux = M - - Notes - ----- - We solve the following problem :math: `1/2 ||m - Ux ||_2^2 + \\lambda |x|_1` - - Reference - ---------- - [1] : Beck, A., & Teboulle, M. (2009). A fast iterative - shrinkage-thresholding algorithm for linear inverse problems. - SIAM journal on imaging sciences, 2(1), 183-202. 
- """ - if sparsity_coef is None: - sparsity_coef = 0 - - if x is None: - x = tl.zeros(tl.shape(UtM), **tl.context(UtM)) - if lr is None: - lr = 1 / (tl.truncated_svd(UtU)[1][0]) - # Parameters - momentum_old = tl.tensor(1.0) - norm_0 = 0.0 - x_update = tl.copy(x) - - for iteration in range(n_iter_max): - if isinstance(UtU, list): - x_gradient = ( - -UtM - + tl.tenalg.multi_mode_dot(x_update, UtU, transpose=False) - + sparsity_coef - ) - else: - x_gradient = -UtM + tl.dot(UtU, x_update) + sparsity_coef - - if non_negative is True: - x_gradient = tl.where(lr * x_gradient < x_update, x_gradient, x_update / lr) - - x_new = x_update - lr * x_gradient - momentum = (1 + tl.sqrt(1 + 4 * momentum_old**2)) / 2 - x_update = x_new + ((momentum_old - 1) / momentum) * (x_new - x) - momentum_old = momentum - x = tl.copy(x_new) - norm = tl.norm(lr * x_gradient) - if iteration == 1: - norm_0 = norm - if norm < tol * norm_0: - break - return x - - -def active_set_nnls(Utm, UtU, x=None, n_iter_max=100, tol=10e-8): - """ - Active set algorithm for non-negative least square solution. - - Computes an approximate non-negative solution for Ux=m linear system. - - Parameters - ---------- - Utm : vectorized ndarray - Pre-computed product of the transposed of U and m - UtU : ndarray - Pre-computed Kronecker product of the transposed of U and U - x : init - Default: None - n_iter_max : int - Maximum number of iteration - Default: 100 - tol : float - Early stopping criterion - - Returns - ------- - x : ndarray - - Notes - ----- - This function solves following problem: - .. math:: - \\begin{equation} - \\min_{x} ||Ux - m||^2 - \\end{equation} - - According to [1], non-negativity-constrained least square estimation problem becomes: - .. math:: - \\begin{equation} - x' = (Utm) - (UTU)\\times x - \\end{equation} - - Reference - ---------- - [1] : Bro, R., & De Jong, S. (1997). A fast non‐negativity‐constrained - least squares algorithm. Journal of Chemometrics: A Journal of - the Chemometrics Society, 11(5), 393-401. - """ - if tl.get_backend() == "tensorflow": - raise ValueError( - "Active set is not supported with the tensorflow backend. Consider using fista method with tensorflow." 
- ) - - if x is None: - x_vec = tl.zeros(tl.shape(UtU)[1], **tl.context(UtU)) - else: - x_vec = tl.base.tensor_to_vec(x) - - x_gradient = Utm - tl.dot(UtU, x_vec) - passive_set = x_vec > 0 - active_set = x_vec <= 0 - support_vec = tl.zeros(tl.shape(x_vec), **tl.context(x_vec)) - - for iteration in range(n_iter_max): - if iteration > 0 or tl.all(x_vec == 0): - indice = tl.argmax(x_gradient) - passive_set = tl.index_update(passive_set, tl.index[indice], True) - active_set = tl.index_update(active_set, tl.index[indice], False) - # To avoid singularity error when initial x exists - try: - passive_solution = tl.solve( - UtU[passive_set, :][:, passive_set], Utm[passive_set] - ) - indice_list = [] - for i in range(tl.shape(support_vec)[0]): - if passive_set[i]: - indice_list.append(i) - support_vec = tl.index_update( - support_vec, - tl.index[int(i)], - passive_solution[len(indice_list) - 1], - ) - else: - support_vec = tl.index_update(support_vec, tl.index[int(i)], 0) - # Start from zeros if solve is not achieved - except: - x_vec = tl.zeros(tl.shape(UtU)[1]) - support_vec = tl.zeros(tl.shape(x_vec), **tl.context(x_vec)) - passive_set = x_vec > 0 - active_set = x_vec <= 0 - if tl.any(active_set): - indice = tl.argmax(x_gradient) - passive_set = tl.index_update(passive_set, tl.index[indice], True) - active_set = tl.index_update(active_set, tl.index[indice], False) - passive_solution = tl.solve( - UtU[passive_set, :][:, passive_set], Utm[passive_set] - ) - indice_list = [] - for i in range(tl.shape(support_vec)[0]): - if passive_set[i]: - indice_list.append(i) - support_vec = tl.index_update( - support_vec, - tl.index[int(i)], - passive_solution[len(indice_list) - 1], - ) - else: - support_vec = tl.index_update(support_vec, tl.index[int(i)], 0) - - # update support vector if it is necessary - if tl.min(support_vec[passive_set]) <= 0: - for i in range(len(passive_set)): - alpha = tl.min( - x_vec[passive_set][support_vec[passive_set] <= 0] - / ( - x_vec[passive_set][support_vec[passive_set] <= 0] - - support_vec[passive_set][support_vec[passive_set] <= 0] - ) - ) - update = alpha * (support_vec - x_vec) - x_vec = x_vec + update - passive_set = x_vec > 0 - active_set = x_vec <= 0 - passive_solution = tl.solve( - UtU[passive_set, :][:, passive_set], Utm[passive_set] - ) - indice_list = [] - for i in range(tl.shape(support_vec)[0]): - if passive_set[i]: - indice_list.append(i) - support_vec = tl.index_update( - support_vec, - tl.index[int(i)], - passive_solution[len(indice_list) - 1], - ) - else: - support_vec = tl.index_update(support_vec, tl.index[int(i)], 0) - - if tl.any(passive_set) != True or tl.min(support_vec[passive_set]) > 0: - break - # set x to s - x_vec = tl.clip(support_vec, 0, tl.max(support_vec)) - - # gradient update - x_gradient = Utm - tl.dot(UtU, x_vec) - - if tl.any(active_set) != True or tl.max(x_gradient[active_set]) <= tol: - break - - return x_vec - - -def admm( - UtM, - UtU, - x, - dual_var, - n_iter_max=100, - n_const=None, - order=None, - non_negative=None, - l1_reg=None, - l2_reg=None, - l2_square_reg=None, - unimodality=None, - normalize=None, - simplex=None, - normalized_sparsity=None, - soft_sparsity=None, - smoothness=None, - monotonicity=None, - hard_sparsity=None, - tol=1e-4, -): - """ - Alternating direction method of multipliers (ADMM) algorithm to minimize a quadratic function under convex constraints. - - Parameters - ---------- - UtM: ndarray - Pre-computed product of the transposed of U and M. - UtU: ndarray - Pre-computed product of the transposed of U and U. 
- x: init - Default: None - dual_var : ndarray - Dual variable to update x - n_iter_max : int - Maximum number of iteration - Default: 100 - n_const : int - Number of constraints. If it is None, function solves least square problem without proximity operator - If ADMM function is used with a constraint apart from constrained parafac decomposition, - n_const value should be changed to '1'. - Default : None - order : int - Specifies which constraint to implement if several constraints are selected as input - Default : None - non_negative : bool or dictionary - This constraint is clipping negative values to '0'. - If it is True, non-negative constraint is applied to all modes. - l1_reg : float or list or dictionary, optional - Penalizes the factor with the l1 norm using the input value as regularization parameter. - l2_reg : float or list or dictionary, optional - Penalizes the factor with the l2 norm using the input value as regularization parameter. - l2_square_reg : float or list or dictionary, optional - Penalizes the factor with the l2 square norm using the input value as regularization parameter. - unimodality : bool or dictionary, optional - If it is True, unimodality constraint is applied to all modes. - Applied to each column seperately. - normalize : bool or dictionary, optional - This constraint divides all the values by maximum value of the input array. - If it is True, normalize constraint is applied to all modes. - simplex : float or list or dictionary, optional - Projects on the simplex with the given parameter - Applied to each column seperately. - normalized_sparsity : float or list or dictionary, optional - Normalizes with the norm after hard thresholding - soft_sparsity : float or list or dictionary, optional - Impose that the columns of factors have L1 norm bounded by a user-defined threshold. - smoothness : float or list or dictionary, optional - Optimizes the factors by solving a banded system - monotonicity : bool or dictionary, optional - Projects columns to monotonically decreasing distrbution - Applied to each column seperately. - If it is True, monotonicity constraint is applied to all modes. - hard_sparsity : float or list or dictionary, optional - Hard thresholding with the given threshold - tol : float - - Returns - ------- - x : Updated ndarray - x_split : Updated ndarray - dual_var : Updated ndarray - - Notes - ----- - ADMM solves the convex optimization problem :math:`\\min_ f(x) + g(z)` where :math: A(x_split) + Bx = c. - - Following updates are iterated to solve the problem:: - - .. math:: - \\begin{equation} - x_split = argmin_(x_split) f(x_split) + (rho/2)||A(x_split) + Bx - c||_2^2 - x = argmin_x g(x) + (rho/2)||A(x_split) + Bx - c||_2^2 - dual_var = dual_var + (Ax + B(x_split) - c) - \\end{equation} - - where rho is a constant defined by the user. - - Let us define a least square problem such as :math:`\\||Ux - M||^2 + r(x)`. - - ADMM can be adapted to this least square problem as following:: - - .. math:: - \\begin{equation} - x_split = (UtU + rho\times I)\times(UtM + rho\times(x + dual_var)^T) - x = argmin r(x) + (rho/2)||x - x_split^T + dual_var||_2^2 - dual_var = dual_var + x - x_split^T - \\end{equation} - where r is the regularization operator. Here, x can be updated by using proximity operator - of :math:`x_split^T - dual_var`. - - References - ---------- - .. [1] Huang, Kejun, Nicholas D. Sidiropoulos, and Athanasios P. Liavas. - "A flexible and efficient algorithmic framework for constrained matrix and tensor factorization." 
- IEEE Transactions on Signal Processing 64.19 (2016): 5052-5065. - """ - rho = tl.trace(UtU) / tl.shape(x)[1] - for iteration in range(n_iter_max): - x_old = tl.copy(x) - x_split = tl.solve( - tl.transpose(UtU + rho * tl.eye(tl.shape(UtU)[1])), - tl.transpose(UtM + rho * (x + dual_var)), - ) - x = proximal_operator( - tl.transpose(x_split) - dual_var, - non_negative=non_negative, - l1_reg=l1_reg, - l2_reg=l2_reg, - l2_square_reg=l2_square_reg, - unimodality=unimodality, - normalize=normalize, - simplex=simplex, - normalized_sparsity=normalized_sparsity, - soft_sparsity=soft_sparsity, - smoothness=smoothness, - monotonicity=monotonicity, - hard_sparsity=hard_sparsity, - n_const=n_const, - order=order, - ) - if n_const is None: - x = tl.transpose(tl.solve(tl.transpose(UtU), tl.transpose(UtM))) - return x, x_split, dual_var - dual_var = dual_var + x - tl.transpose(x_split) - - dual_residual = x - tl.transpose(x_split) - primal_residual = x - x_old - - if tl.norm(dual_residual) < tol * tl.norm(x) and tl.norm( - primal_residual - ) < tol * tl.norm(dual_var): - break - return x, x_split, dual_var
    @@ -1587,7 +876,7 @@

    Source code for tensorly.tenalg.proximal

             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/tenalg/svd.html b/stable/_modules/tensorly/tenalg/svd.html
index 94baf5f9b..dcd130448 100644
--- a/stable/_modules/tensorly/tenalg/svd.html
+++ b/stable/_modules/tensorly/tenalg/svd.html
@@ -146,6 +145,7 @@

    Source code for tensorly.tenalg.svd

     import warnings
    +from typing import Literal
     import tensorly as tl
     from .proximal import soft_thresholding
     
    @@ -186,7 +186,9 @@ 

    Source code for tensorly.tenalg.svd

             )
             U = U * signs
             if tl.shape(V)[0] > tl.shape(U)[1]:
    -            signs = tl.concatenate((signs, tl.ones(tl.shape(V)[0] - tl.shape(U)[1])))
    +            signs = tl.concatenate(
    +                (signs, tl.ones(tl.shape(V)[0] - tl.shape(U)[1], **tl.context(V)))
    +            )
             V = V * signs[: tl.shape(V)[0]][:, None]
         else:
             # rows of V, columns of U
    @@ -199,7 +201,9 @@ 

    Source code for tensorly.tenalg.svd

             )
             V = V * signs[:, None]
             if tl.shape(U)[1] > tl.shape(V)[0]:
    -            signs = tl.concatenate((signs, tl.ones(tl.shape(U)[1] - tl.shape(V)[0])))
    +            signs = tl.concatenate(
    +                (signs, tl.ones(tl.shape(U)[1] - tl.shape(V)[0], **tl.context(V)))
    +            )
             U = U * signs[: tl.shape(U)[1]]
     
         return U, V
    @@ -230,7 +234,7 @@ 

    Source code for tensorly.tenalg.svd

         W = tl.index_update(W, tl.index[:, 0], tl.sqrt(S[0]) * tl.abs(U[:, 0]))
         H = tl.index_update(H, tl.index[0, :], tl.sqrt(S[0]) * tl.abs(V[0, :]))
     
    -    for j in range(1, tl.shape(U)[1]):
    +    for j in range(1, min(tl.shape(U)[1], tl.shape(V)[0])):
             x, y = U[:, j], V[j, :]
     
             # extract positive and negative parts of column vectors
    @@ -262,15 +266,17 @@ 

    Source code for tensorly.tenalg.svd

     
         if nntype == "nndsvd":
             W = soft_thresholding(W, eps)
    +        H = soft_thresholding(H, eps)
         elif nntype == "nndsvda":
             avg = tl.mean(tensor)
             W = tl.where(W < eps, tl.ones(tl.shape(W), **tl.context(W)) * avg, W)
    +        H = tl.where(H < eps, tl.ones(tl.shape(H), **tl.context(H)) * avg, H)
         else:
             raise ValueError(
                 f'Invalid nntype parameter: got {nntype} instead of one of ("nndsvd", "nndsvda")'
             )
     
    -    return W
    +    return W, H
     
     
     def randomized_range_finder(A, n_dims, n_iter=2, random_state=None):
    @@ -369,7 +375,6 @@ 

    Source code for tensorly.tenalg.svd

         """
         n_eigenvecs, min_dim, _ = svd_checks(matrix, n_eigenvecs=n_eigenvecs)
         full_matrices = True if n_eigenvecs > min_dim else False
    -
         U, S, V = tl.svd(matrix, full_matrices=full_matrices)
         return U[:, :n_eigenvecs], S[:n_eigenvecs], V[:n_eigenvecs, :]
     
    @@ -499,9 +504,12 @@ 

    Source code for tensorly.tenalg.svd

     
     
     SVD_FUNS = ["truncated_svd", "symeig_svd", "randomized_svd"]
    +SVD_TYPES = Literal["truncated_svd", "symeig_svd", "randomized_svd"]
     
     
    -
    [docs]def svd_interface( +
    +[docs] +def svd_interface( matrix, method="truncated_svd", n_eigenvecs=None, @@ -534,6 +542,8 @@

    Source code for tensorly.tenalg.svd

         mask : tensor, default is None.
             Array of booleans with the same shape as ``matrix``. Should be 0 where
             the values are missing and 1 everywhere else. None if nothing is missing.
    +        Imputation is done by iterative low rank approximation, so n_eigenvecs should be provided
    +        and be lower than the rank of the matrix.
         n_iter_mask_imputation : int, default is 5
             Number of repetitions to apply in missing value imputation.
         **kwargs : optional
    @@ -555,7 +565,7 @@ 

    Source code for tensorly.tenalg.svd

             svd_fun = symeig_svd
         elif method == "randomized_svd":
             svd_fun = randomized_svd
    -    elif callable(method):
    +    elif callable(method):
             svd_fun = method
         else:
             raise ValueError(
    @@ -564,18 +574,24 @@ 

    Source code for tensorly.tenalg.svd

     
         U, S, V = svd_fun(matrix, n_eigenvecs=n_eigenvecs, **kwargs)
     
    -    if mask is not None:
    +    if mask is not None and n_eigenvecs is not None:
             for _ in range(n_iter_mask_imputation):
    -            matrix = matrix * mask + (U @ tl.diag(S) @ V) * (1 - mask)
    +            # Workaround to avoid needing fill_diagonal
    +            St = tl.eye(tl.shape(U)[1], tl.shape(V)[0], **tl.context(matrix))
    +            for i in range(tl.shape(S)[0]):
    +                St = tl.index_update(St, tl.index[i, i], S[i])
    +
    +            matrix = matrix * mask + (U @ St @ V) * (1 - mask)
                 U, S, V = svd_fun(matrix, n_eigenvecs=n_eigenvecs, **kwargs)
     
         if flip_sign:
             U, V = svd_flip(U, V, u_based_decision=u_based_flip_sign)
     
         if non_negative is not False and non_negative is not None:
    -        U = make_svd_non_negative(matrix, U, S, V, non_negative)
    +        U, V = make_svd_non_negative(matrix, U, S, V, non_negative)
     
         return U, S, V
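A sketch of the masked usage described above: zeros in the mask mark missing entries, which are imputed by iterating the truncated SVD (the tensorly.tenalg.svd_interface alias is assumed):

>>> import numpy as np
>>> import tensorly as tl
>>> from tensorly.tenalg import svd_interface
>>> M = tl.tensor(np.random.rand(10, 8))
>>> mask = tl.tensor((np.random.rand(10, 8) > 0.2).astype(float))   # 0 where values are missing
>>> U, S, V = svd_interface(M * mask, n_eigenvecs=3, mask=mask, n_iter_mask_imputation=5)
>>> U.shape, S.shape, V.shape
((10, 3), (3,), (3, 8))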
    +
    @@ -585,7 +601,7 @@

    Source code for tensorly.tenalg.svd

             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/tt_matrix.html b/stable/_modules/tensorly/tt_matrix.html
index b789ff5fc..33f534992 100644
--- a/stable/_modules/tensorly/tt_matrix.html
+++ b/stable/_modules/tensorly/tt_matrix.html
@@ -260,7 +259,9 @@

    Source code for tensorly.tt_matrix

         return tl.reshape(tt_matrix_to_tensor(tt_matrix), (np.prod(in_shape), -1))
     
     
    -
    [docs]def tt_matrix_to_unfolded(tt_matrix, mode): +
    +[docs] +def tt_matrix_to_unfolded(tt_matrix, mode): """Returns the unfolding matrix of a tensor given in TT-Matrix format Reassembles a full tensor from 'factors' and returns its unfolding matrix @@ -281,7 +282,10 @@

    Source code for tensorly.tt_matrix

         return tl.unfold(tt_matrix_to_tensor(tt_matrix), mode)
    -
    [docs]def tt_matrix_to_vec(tt_matrix): + +
    +[docs] +def tt_matrix_to_vec(tt_matrix): """Returns the tensor defined by its TT-Matrix format ('factors') into its vectorized format @@ -298,6 +302,7 @@

    Source code for tensorly.tt_matrix

         return tl.tensor_to_vec(tt_matrix_to_tensor(tt_matrix))
    + def _validate_tt_matrix(tt_tensor): factors = tt_tensor n_factors = len(factors) @@ -407,7 +412,7 @@

    Source code for tensorly.tt_matrix

             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
diff --git a/stable/_modules/tensorly/tt_tensor.html b/stable/_modules/tensorly/tt_tensor.html
index 6b05c6908..fd2b8a52e 100644
--- a/stable/_modules/tensorly/tt_tensor.html
+++ b/stable/_modules/tensorly/tt_tensor.html
@@ -151,9 +150,7 @@

    Source code for tensorly.tt_tensor

     
     import tensorly as tl
     from ._factorized_tensor import FactorizedTensor
    -from .utils import DefineDeprecated
     import numpy as np
    -from scipy.optimize import brentq
     import warnings
     
     
    @@ -207,7 +204,9 @@ 

    Source code for tensorly.tt_tensor

         return tuple(shape), tuple(rank)
     
     
    -
    [docs]def tt_to_tensor(factors): +
    +[docs] +def tt_to_tensor(factors): """Returns the full tensor whose TT decomposition is given by 'factors' Re-assembles 'factors', which represent a tensor in TT/Matrix-Product-State format @@ -238,7 +237,10 @@

    Source code for tensorly.tt_tensor

         return tl.reshape(full_tensor, full_shape)
    -
    [docs]def tt_to_unfolded(factors, mode): + +
    +[docs] +def tt_to_unfolded(factors, mode): """Returns the unfolding matrix of a tensor given in TT (or Tensor-Train) format Reassembles a full tensor from 'factors' and returns its unfolding matrix @@ -259,7 +261,10 @@

    Source code for tensorly.tt_tensor

         return tl.unfold(tt_to_tensor(factors), mode)
    -
    [docs]def tt_to_vec(factors): + +
    +[docs] +def tt_to_vec(factors): """Returns the tensor defined by its TT format ('factors') into its vectorized format @@ -276,6 +281,7 @@

         return tl.tensor_to_vec(tt_to_tensor(factors))
    + def _tt_n_param(tensor_shape, rank): """Number of parameters of a MPS decomposition for a given `rank` and full `tensor_shape`. @@ -409,14 +415,10 @@

     
             # Initialization
             if rank[0] != 1:
    -            message = "Provided rank[0] == {} but boundary conditions dictate rank[0] == rank[-1] == 1.".format(
    -                rank[0]
    -            )
    +            message = f"Provided rank[0] == {rank[0]} but boundary conditions dictate rank[0] == rank[-1] == 1."
                 raise ValueError(message)
             if rank[-1] != 1:
    -            message = "Provided rank[-1] == {} but boundary conditions dictate rank[0] == rank[-1] == 1.".format(
    -                rank[-1]
    -            )
    +            message = f"Provided rank[-1] == {rank[-1]} but boundary conditions dictate rank[0] == rank[-1] == 1."
                 raise ValueError(message)
     
         if allow_overparametrization:
    @@ -470,7 +472,9 @@ 

             return tt_to_vec(self)
     
     
    -
    [docs]def pad_tt_rank(factor_list, n_padding=1, pad_boundaries=False): +
    +[docs] +def pad_tt_rank(factor_list, n_padding=1, pad_boundaries=False): """Pads the factors of a Tensor-Train so as to increase its rank without changing its reconstruction The tensor-train (ring) will be padded with 0s to increase its rank only but not the underlying tensor it represents. @@ -506,17 +510,6 @@

     
         return new_factors
    - -mps_to_tensor = DefineDeprecated( - deprecated_name="mps_to_tensor", use_instead=tt_to_tensor -) -mps_to_unfolded = DefineDeprecated( - deprecated_name="mps_to_unfolded", use_instead=tt_to_unfolded -) -mps_to_vec = DefineDeprecated(deprecated_name="mps_to_vec", use_instead=tt_to_vec) -_validate_mps_tensor = DefineDeprecated( - deprecated_name="_validate_mps_tensor", use_instead=_validate_tt_tensor -)
    @@ -526,7 +519,7 @@

             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
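The tt_tensor hunks above mostly adjust the ``[docs]`` anchors, reword the boundary-condition errors (``rank[0] == rank[-1] == 1``) as f-strings, and remove the deprecated ``mps_*`` aliases in favour of the ``tt_*`` names. A short sketch of the round trip those functions describe, assuming ``tensorly.random.random_tt`` behaves as in recent releases:

.. code-block:: Python

    import tensorly as tl
    from tensorly.random import random_tt
    from tensorly.tt_tensor import tt_to_tensor, tt_to_vec, pad_tt_rank

    # TT cores have shape (rank[k], dim[k], rank[k+1]) with boundary ranks equal
    # to 1, which is exactly what the reworded ValueError enforces.
    tt = random_tt((4, 5, 6), rank=(1, 3, 3, 1), random_state=0)
    full = tt_to_tensor(tt)   # use tt_to_tensor; the mps_to_tensor alias is gone
    vec = tt_to_vec(tt)
    print(full.shape, vec.shape)  # (4, 5, 6) (120,)

    # pad_tt_rank pads the cores with zeros: higher TT-rank, same underlying tensor.
    padded = pad_tt_rank(tt, n_padding=1, pad_boundaries=False)
    print(tl.norm(tt_to_tensor(padded) - full))  # ~0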
diff --git a/stable/_modules/tensorly/tucker_tensor.html b/stable/_modules/tensorly/tucker_tensor.html
index f8bf69765..db1530b9a 100644
--- a/stable/_modules/tensorly/tucker_tensor.html
+++ b/stable/_modules/tensorly/tucker_tensor.html
@@ -194,7 +193,9 @@

         return tuple(shape), tuple(rank)
     
     
    -
    [docs]def tucker_to_tensor(tucker_tensor, skip_factor=None, transpose_factors=False): +
    +[docs] +def tucker_to_tensor(tucker_tensor, skip_factor=None, transpose_factors=False): """Converts the Tucker tensor into a full tensor Parameters @@ -216,6 +217,7 @@

         return multi_mode_dot(core, factors, skip=skip_factor, transpose=transpose_factors)
    + def tucker_normalize(tucker_tensor): """Returns tucker_tensor with factors normalised to unit length with the normalizing constants absorbed into `core`. @@ -243,7 +245,9 @@

         return TuckerTensor((core, normalized_factors))
     
     
    -
    [docs]def tucker_to_unfolded( +
    +[docs] +def tucker_to_unfolded( tucker_tensor, mode=0, skip_factor=None, transpose_factors=False ): """Converts the Tucker decomposition into an unfolded tensor (i.e. a matrix) @@ -272,7 +276,10 @@

         )
    -
    [docs]def tucker_to_vec(tucker_tensor, skip_factor=None, transpose_factors=False): + +
    +[docs] +def tucker_to_vec(tucker_tensor, skip_factor=None, transpose_factors=False): """Converts a Tucker decomposition into a vectorised tensor Parameters @@ -305,7 +312,10 @@

         )
    -
    [docs]def tucker_mode_dot(tucker_tensor, matrix_or_vector, mode, keep_dim=False, copy=False): + +
    +[docs] +def tucker_mode_dot(tucker_tensor, matrix_or_vector, mode, keep_dim=False, copy=False): """n-mode product of a Tucker tensor and a matrix or vector at the specified mode Parameters @@ -365,6 +375,7 @@

         return TuckerTensor((core, factors))
    + class TuckerTensor(FactorizedTensor): def __init__(self, tucker_tensor): super().__init__() @@ -453,6 +464,13 @@

                 self, matrix_or_vector, mode, keep_dim=keep_dim, copy=copy
             )
     
    +    def normalize(self):
    +        """
    +        Transforms the tucker_tensor with `self.factors` normalised to unit length.
    +        The normalizing constants absorbed into `self.core`.
    +        """
    +        self.core, self.factors = tucker_normalize(self)
    +
     
     def _tucker_n_param(tensor_shape, rank):
         """Number of parameters of a Tucker decomposition for a given `rank` and full `tensor_shape`.
    @@ -588,7 +606,7 @@ 

             
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
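The tucker_tensor hunks add a ``normalize()`` method to ``TuckerTensor`` that delegates to ``tucker_normalize``: factor columns are rescaled to unit length and the scaling constants are absorbed into the core, so the tensor it represents is unchanged. A small usage sketch, assuming ``tensorly.random.random_tucker`` returns a ``TuckerTensor`` as in recent releases:

.. code-block:: Python

    import tensorly as tl
    from tensorly.random import random_tucker

    tucker = random_tucker((4, 5, 6), rank=(2, 3, 2), random_state=0)
    before = tl.tucker_to_tensor(tucker)

    tucker.normalize()  # in place: unit-norm factor columns, rescaled core
    after = tl.tucker_to_tensor(tucker)

    print(tl.norm(tucker.factors[0], axis=0))         # ~[1., 1.]
    print(tl.norm(before - after) / tl.norm(before))  # ~0, reconstruction unchanged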
    diff --git a/stable/_sources/about.rst.txt b/stable/_sources/about.rst.txt index 882196e9d..7e29691d9 100644 --- a/stable/_sources/about.rst.txt +++ b/stable/_sources/about.rst.txt @@ -13,7 +13,7 @@ later published as a `JMLR paper `_ titl by `Jean Kossaifi`_, `Yannis Panagakis`_, `Anima Anandkumar`_ and `Maja Pantic`_. Originally, TensorLy was built on top of NumPy and SciPy only. In order to combine tensor methods with deep learning and run them on multiple devices, CPU and GPU, a flexible backend system was added. -This allows algorithms written in TensorLy to be ran with any major framework such as PyTorch, MXNet, TensorFlow, CuPy and JAX. +This allows algorithms written in TensorLy to be ran with any major framework such as PyTorch, TensorFlow, CuPy, JAX and Paddle. Core developers ----------------- diff --git a/stable/_sources/auto_examples/applications/index.rst.txt b/stable/_sources/auto_examples/applications/index.rst.txt index b7fa29bc0..a51ce8b60 100644 --- a/stable/_sources/auto_examples/applications/index.rst.txt +++ b/stable/_sources/auto_examples/applications/index.rst.txt @@ -13,15 +13,16 @@ See how you can use TensorLy on practical applications and datasets.
    +.. thumbnail-parent-div-open .. raw:: html -
    +
    .. only:: html .. image:: /auto_examples/applications/images/thumb/sphx_glr_plot_image_compression_thumb.png - :alt: Image compression via tensor decomposition + :alt: :ref:`sphx_glr_auto_examples_applications_plot_image_compression.py` @@ -33,12 +34,12 @@ See how you can use TensorLy on practical applications and datasets. .. raw:: html -
    +
    .. only:: html .. image:: /auto_examples/applications/images/thumb/sphx_glr_plot_IL2_thumb.png - :alt: Non-negative PARAFAC Decomposition of IL-2 Response Data + :alt: :ref:`sphx_glr_auto_examples_applications_plot_IL2.py` @@ -55,7 +56,7 @@ See how you can use TensorLy on practical applications and datasets. .. only:: html .. image:: /auto_examples/applications/images/thumb/sphx_glr_plot_covid_thumb.png - :alt: COVID-19 Serology Dataset Analysis with CP + :alt: :ref:`sphx_glr_auto_examples_applications_plot_covid.py` @@ -65,6 +66,8 @@ See how you can use TensorLy on practical applications and datasets.
    +.. thumbnail-parent-div-close + .. raw:: html
    diff --git a/stable/_sources/auto_examples/applications/plot_IL2.rst.txt b/stable/_sources/auto_examples/applications/plot_IL2.rst.txt index 699700226..a37bb671d 100644 --- a/stable/_sources/auto_examples/applications/plot_IL2.rst.txt +++ b/stable/_sources/auto_examples/applications/plot_IL2.rst.txt @@ -10,8 +10,8 @@ .. note:: :class: sphx-glr-download-link-note - Click :ref:`here ` - to download the full example code + :ref:`Go to the end ` + to download the full example code. .. rst-class:: sphx-glr-example-title @@ -30,7 +30,7 @@ To do this, we will work with a tensor of experimentally measured cell signaling .. GENERATED FROM PYTHON SOURCE LINES 12-19 -.. code-block:: default +.. code-block:: Python import numpy as np @@ -48,18 +48,18 @@ To do this, we will work with a tensor of experimentally measured cell signaling .. GENERATED FROM PYTHON SOURCE LINES 20-50 -Here we will load a tensor of experimentally measured cellular responses to -IL-2 stimulation. IL-2 is a naturally occurring immune signaling molecule -which has been engineered by pharmaceutical companies and drug designers +Here we will load a tensor of experimentally measured cellular responses to +IL-2 stimulation. IL-2 is a naturally occurring immune signaling molecule +which has been engineered by pharmaceutical companies and drug designers in attempts to act as an effective immunotherapy. In order to make effective IL-2 therapies, pharmaceutical engineer have altered IL-2's signaling activity in order to -increase or decrease its interactions with particular cell types. +increase or decrease its interactions with particular cell types. -IL-2 signals through the Jak/STAT pathway and transmits a signal into immune cells by -phosphorylating STAT5 (pSTAT5). When phosphorylated, STAT5 will cause various immune -cell types to proliferate, and depending on whether regulatory (regulatory T cells, or Tregs) +IL-2 signals through the Jak/STAT pathway and transmits a signal into immune cells by +phosphorylating STAT5 (pSTAT5). When phosphorylated, STAT5 will cause various immune +cell types to proliferate, and depending on whether regulatory (regulatory T cells, or Tregs) or effector cells (helper T cells, natural killer cells, and cytotoxic T cells, -or Thelpers, NKs, and CD8+ cells) respond, IL-2 signaling can result in +or Thelpers, NKs, and CD8+ cells) respond, IL-2 signaling can result in immunosuppression or immunostimulation respectively. Thus, when designing a drug meant to repress the immune system, potentially for the treatment of autoimmune diseases, IL-2 which primarily enacts a response in Tregs is desirable. Conversely, @@ -67,21 +67,21 @@ when designing a drug that is meant to stimulate the immune system, potentially the treatment of cancer, IL-2 which primarily enacts a response in effector cells is desirable. In order to achieve either signaling bias, IL-2 variants with altered affinity for it's various receptors (IL2Rα or IL2Rβ) have been designed. Furthermore -IL-2 variants with multiple binding domains have been designed as multivalent +IL-2 variants with multiple binding domains have been designed as multivalent IL-2 may act as a more effective therapeutic. In order to understand how these mutations -and alterations affect which cells respond to an IL-2 mutant, we will perform +and alterations affect which cells respond to an IL-2 mutant, we will perform non-negative PARAFAC tensor decomposition on our cell response data tensor. 
-Here, our data contains the responses of 8 different cell types to 13 different +Here, our data contains the responses of 8 different cell types to 13 different IL-2 mutants, at 4 different timepoints, at 12 standardized IL-2 concentrations. Therefore, our tensor will have shape (13 x 4 x 12 x 8), with dimensions representing IL-2 mutant, stimulation time, dose, and cell type respectively. Each -measured quantity represents the amount of phosphorlyated STAT5 (pSTAT5) in a +measured quantity represents the amount of phosphorlyated STAT5 (pSTAT5) in a given cell population following stimulation with the specified IL-2 mutant. .. GENERATED FROM PYTHON SOURCE LINES 50-55 -.. code-block:: default +.. code-block:: Python response_data = load_IL2data() @@ -103,17 +103,17 @@ given cell population following stimulation with the specified IL-2 mutant. .. GENERATED FROM PYTHON SOURCE LINES 56-63 -Now we will run non-negative PARAFAC tensor decomposition to reduce the dimensionality -of our tensor. We will use 3 components, and normalize our resulting tensor to aid in +Now we will run non-negative PARAFAC tensor decomposition to reduce the dimensionality +of our tensor. We will use 3 components, and normalize our resulting tensor to aid in future comparisons of correlations across components. -First we must preprocess our tensor to ready it for factorization. Our data has a +First we must preprocess our tensor to ready it for factorization. Our data has a few missing values, and so we must first generate a mask to mark where those values occur. .. GENERATED FROM PYTHON SOURCE LINES 63-66 -.. code-block:: default +.. code-block:: Python tensor_mask = np.isfinite(response_data.tensor) @@ -127,12 +127,12 @@ occur. .. GENERATED FROM PYTHON SOURCE LINES 67-69 -Now that we've marked where those non-finite values occur, we can regenerate our +Now that we've marked where those non-finite values occur, we can regenerate our tensor without including non-finite values, allowing it to be factorized. .. GENERATED FROM PYTHON SOURCE LINES 69-72 -.. code-block:: default +.. code-block:: Python response_data_fin = np.nan_to_num(response_data.tensor) @@ -150,12 +150,20 @@ Using this mask, and finite-value only tensor, we can decompose our signaling da three components. We will also normalize this tensor, which will allow for easier comparisons to be made between the meanings, and magnitudes of our resulting components. -.. GENERATED FROM PYTHON SOURCE LINES 76-80 +.. GENERATED FROM PYTHON SOURCE LINES 76-88 -.. code-block:: default +.. code-block:: Python - sig_tensor_fact = non_negative_parafac(response_data_fin, init='random', rank=3, mask=tensor_mask, n_iter_max=5000, tol=1e-9, random_state=1) + sig_tensor_fact = non_negative_parafac( + response_data_fin, + init="random", + rank=3, + mask=tensor_mask, + n_iter_max=5000, + tol=1e-9, + random_state=1, + ) sig_tensor_fact = cp_normalize(sig_tensor_fact) @@ -165,18 +173,18 @@ comparisons to be made between the meanings, and magnitudes of our resulting com -.. GENERATED FROM PYTHON SOURCE LINES 81-87 +.. GENERATED FROM PYTHON SOURCE LINES 89-95 -Now we will load the names of our cell types and IL-2 mutants, in the order in which -they are present in our original tensor. IL-2 mutant names refer to the specific -mutations made to their amino acid sequence, as well as their valency +Now we will load the names of our cell types and IL-2 mutants, in the order in which +they are present in our original tensor. 
IL-2 mutant names refer to the specific +mutations made to their amino acid sequence, as well as their valency format (monovalent or bivalent). Finally, we label, plot, and analyze our factored tensor of data. -.. GENERATED FROM PYTHON SOURCE LINES 87-120 +.. GENERATED FROM PYTHON SOURCE LINES 95-132 -.. code-block:: default +.. code-block:: Python f, ax = plt.subplots(1, 2, figsize=(9, 4.5)) @@ -188,9 +196,9 @@ Finally, we label, plot, and analyze our factored tensor of data. ligands = IL2mutants x_lig = np.arange(len(ligands)) - lig_rects_comp1 = ax[0].bar(x_lig - width, lig_facs[:, 0], width, label='Component 1') - lig_rects_comp2 = ax[0].bar(x_lig, lig_facs[:, 1], width, label='Component 2') - lig_rects_comp3 = ax[0].bar(x_lig + width, lig_facs[:, 2], width, label='Component 3') + lig_rects_comp1 = ax[0].bar(x_lig - width, lig_facs[:, 0], width, label="Component 1") + lig_rects_comp2 = ax[0].bar(x_lig, lig_facs[:, 1], width, label="Component 2") + lig_rects_comp3 = ax[0].bar(x_lig + width, lig_facs[:, 2], width, label="Component 3") ax[0].set(xlabel="Ligand", ylabel="Component Weight", ylim=(0, 1)) ax[0].set_xticks(x_lig, ligands) ax[0].set_xticklabels(ax[0].get_xticklabels(), rotation=60, ha="right", fontsize=9) @@ -200,9 +208,13 @@ Finally, we label, plot, and analyze our factored tensor of data. cell_facs = sig_tensor_fact[1][3] x_cell = np.arange(len(cells)) - cell_rects_comp1 = ax[1].bar(x_cell - width, cell_facs[:, 0], width, label='Component 1') - cell_rects_comp2 = ax[1].bar(x_cell, cell_facs[:, 1], width, label='Component 2') - cell_rects_comp3 = ax[1].bar(x_cell + width, cell_facs[:, 2], width, label='Component 3') + cell_rects_comp1 = ax[1].bar( + x_cell - width, cell_facs[:, 0], width, label="Component 1" + ) + cell_rects_comp2 = ax[1].bar(x_cell, cell_facs[:, 1], width, label="Component 2") + cell_rects_comp3 = ax[1].bar( + x_cell + width, cell_facs[:, 2], width, label="Component 3" + ) ax[1].set(xlabel="Cell", ylabel="Component Weight", ylim=(0, 1)) ax[1].set_xticks(x_cell, cells) ax[1].set_xticklabels(ax[1].get_xticklabels(), rotation=45, ha="right") @@ -223,26 +235,26 @@ Finally, we label, plot, and analyze our factored tensor of data. -.. GENERATED FROM PYTHON SOURCE LINES 121-134 +.. GENERATED FROM PYTHON SOURCE LINES 133-146 -Here we observe the correlations which both ligands and cell types have with each of -our three components - we can interepret our tensor factorization for looking for -patterns among these correlations. +Here we observe the correlations which both ligands and cell types have with each of +our three components - we can interepret our tensor factorization for looking for +patterns among these correlations. For example, we can see that bivalent mutants generally have higher correlations with -component two, as do regulatory T cells. Thus we can infer that bivalent ligands -activate regulatory T cells more than monovalent ligands. We also see that this +component two, as do regulatory T cells. Thus we can infer that bivalent ligands +activate regulatory T cells more than monovalent ligands. We also see that this relationship is strengthened by the availability of IL2Rα, one subunit of the IL-2 receptor. -This is just one example of an insight we can make using tensor factorization. -By plotting the correlations which time and dose have with each component, we -could additionally make inferences as to the dynamics and dose dependence of how mutations +This is just one example of an insight we can make using tensor factorization. 
+By plotting the correlations which time and dose have with each component, we +could additionally make inferences as to the dynamics and dose dependence of how mutations affect IL-2 signaling in immune cells. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 2.337 seconds) + **Total running time of the script:** (0 minutes 1.412 seconds) .. _sphx_glr_download_auto_examples_applications_plot_IL2.py: @@ -251,14 +263,17 @@ affect IL-2 signaling in immune cells. .. container:: sphx-glr-footer sphx-glr-footer-example + .. container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: plot_IL2.ipynb ` .. container:: sphx-glr-download sphx-glr-download-python :download:`Download Python source code: plot_IL2.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-zip - :download:`Download Jupyter notebook: plot_IL2.ipynb ` + :download:`Download zipped: plot_IL2.zip ` .. only:: html diff --git a/stable/_sources/auto_examples/applications/plot_covid.rst.txt b/stable/_sources/auto_examples/applications/plot_covid.rst.txt index e296c8d4e..73d611acf 100644 --- a/stable/_sources/auto_examples/applications/plot_covid.rst.txt +++ b/stable/_sources/auto_examples/applications/plot_covid.rst.txt @@ -10,8 +10,8 @@ .. note:: :class: sphx-glr-download-link-note - Click :ref:`here ` - to download the full example code + :ref:`Go to the end ` + to download the full example code. .. rst-class:: sphx-glr-example-title @@ -25,7 +25,7 @@ Apply CP decomposition to COVID-19 Serology Dataset .. GENERATED FROM PYTHON SOURCE LINES 7-10 -.. code-block:: default +.. code-block:: Python # sphinx_gallery_thumbnail_number = 2 @@ -49,7 +49,7 @@ Systems serology is a new technology that examines the antibodies from a patient to comprehensively profile the interactions between the antibodies and `Fc receptors `_ alongside other types of immunological and demographic data. Here, we will apply CP decomposition to a -`COVID-19 system serology dataset `_. +`COVID-19 system serology dataset `_. In this dataset, serum antibodies of 438 samples collected from COVID-19 patients were systematically profiled by their binding behavior to SARS-CoV-2 (the virus that causes COVID-19) antigens and Fc receptors activities. Samples are @@ -63,7 +63,7 @@ We first import this dataset of a panel of COVID-19 patients: .. GENERATED FROM PYTHON SOURCE LINES 32-42 -.. code-block:: default +.. code-block:: Python import numpy as np @@ -88,14 +88,18 @@ Apply CP decomposition to this dataset with Tensorly ---------------------------------------------------- Now we apply CP decomposition to this dataset. -.. GENERATED FROM PYTHON SOURCE LINES 46-51 +.. GENERATED FROM PYTHON SOURCE LINES 46-55 -.. code-block:: default +.. code-block:: Python comps = np.arange(1, 7) - CMTFfacs = [parafac(data.tensor, cc, tol=1e-10, n_iter_max=1000, - linesearch=True, orthogonalise=2) for cc in comps] + CMTFfacs = [ + parafac( + data.tensor, cc, tol=1e-10, n_iter_max=1000, linesearch=True, orthogonalise=2 + ) + for cc in comps + ] @@ -104,23 +108,25 @@ Now we apply CP decomposition to this dataset. -.. GENERATED FROM PYTHON SOURCE LINES 52-54 +.. GENERATED FROM PYTHON SOURCE LINES 56-58 To evaluate how well CP decomposition explains the variance in the dataset, we plot the percent variance reconstructed (R2X) for a range of ranks. -.. GENERATED FROM PYTHON SOURCE LINES 54-71 +.. GENERATED FROM PYTHON SOURCE LINES 58-77 + +.. 
code-block:: Python -.. code-block:: default def reconstructed_variance(tFac, tIn=None): - """ This function calculates the amount of variance captured (R2X) by the tensor method. """ + """This function calculates the amount of variance captured (R2X) by the tensor method.""" tMask = np.isfinite(tIn) vTop = np.sum(np.square(tl.cp_to_tensor(tFac) * tMask - np.nan_to_num(tIn))) vBottom = np.sum(np.square(np.nan_to_num(tIn))) return 1.0 - vTop / vBottom + fig1 = plt.figure() CMTFR2X = np.array([reconstructed_variance(f, data.tensor) for f in CMTFfacs]) plt.plot(comps, CMTFR2X, "bo") @@ -148,7 +154,7 @@ variance reconstructed (R2X) for a range of ranks. -.. GENERATED FROM PYTHON SOURCE LINES 72-77 +.. GENERATED FROM PYTHON SOURCE LINES 78-83 Inspect the biological insights from CP components -------------------------------------------------- @@ -156,9 +162,9 @@ Eventually, we wish CP decomposition can bring insights to this dataset. For exa case, revealing the underlying trend of COVID-19 serum-level immunity. To do this, we can inspect how each component looks like on weights. -.. GENERATED FROM PYTHON SOURCE LINES 77-104 +.. GENERATED FROM PYTHON SOURCE LINES 83-118 -.. code-block:: default +.. code-block:: Python tfac = CMTFfacs[1] @@ -167,8 +173,8 @@ how each component looks like on weights. tfac.factors[1][:, 0] *= -1 tfac.factors[2][:, 0] *= -1 - fig2, ax = plt.subplots(1, 3, figsize=(16,6)) - for ii in [0,1,2]: + fig2, ax = plt.subplots(1, 3, figsize=(16, 6)) + for ii in [0, 1, 2]: fac = tfac.factors[ii] scales = np.linalg.norm(fac, ord=np.inf, axis=0) fac /= scales @@ -178,12 +184,20 @@ how each component looks like on weights. ax[ii].set_xticklabels(["Comp. 1", "Comp. 2"]) ax[ii].set_yticks(range(len(data.ticks[ii]))) if ii == 0: - ax[0].set_yticklabels([data.ticks[0][i] if i==0 or data.ticks[0][i]!=data.ticks[0][i-1] - else "" for i in range(len(data.ticks[0]))]) + ax[0].set_yticklabels( + [ + ( + data.ticks[0][i] + if i == 0 or data.ticks[0][i] != data.ticks[0][i - 1] + else "" + ) + for i in range(len(data.ticks[0])) + ] + ) else: ax[ii].set_yticklabels(data.ticks[ii]) ax[ii].set_title(data.dims[ii]) - ax[ii].set_aspect('auto') + ax[ii].set_aspect("auto") fig2.colorbar(ScalarMappable(norm=plt.Normalize(-1, 1), cmap="PiYG")) @@ -200,12 +214,14 @@ how each component looks like on weights. .. code-block:: none + /home/runner/work/tensorly/tensorly/examples/applications/plot_covid.py:116: MatplotlibDeprecationWarning: Unable to determine Axes to steal space for Colorbar. Using gca(), but will raise in the future. Either provide the *cax* argument to use as the Axes for the Colorbar, provide the *ax* argument to steal space from it, or add *mappable* to an Axes. + fig2.colorbar(ScalarMappable(norm=plt.Normalize(-1, 1), cmap="PiYG")) - + -.. GENERATED FROM PYTHON SOURCE LINES 105-110 +.. GENERATED FROM PYTHON SOURCE LINES 119-124 From the results, we can see that serum COVID-19 immunity separates into two distinct signals, represented by two CP components: a clear acute response with IgG3, IgM, and IgA, and a long-term, @@ -213,7 +229,7 @@ IgG1-specific response. Samples from patients with different symptoms can be dis these two components. This indicates that CP decomposition is a great tool to find these biologically significant signals. -.. GENERATED FROM PYTHON SOURCE LINES 112-121 +.. GENERATED FROM PYTHON SOURCE LINES 126-135 References ---------- @@ -228,7 +244,7 @@ References .. 
rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 6.268 seconds) + **Total running time of the script:** (0 minutes 3.405 seconds) .. _sphx_glr_download_auto_examples_applications_plot_covid.py: @@ -237,14 +253,17 @@ References .. container:: sphx-glr-footer sphx-glr-footer-example + .. container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: plot_covid.ipynb ` .. container:: sphx-glr-download sphx-glr-download-python :download:`Download Python source code: plot_covid.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-zip - :download:`Download Jupyter notebook: plot_covid.ipynb ` + :download:`Download zipped: plot_covid.zip ` .. only:: html diff --git a/stable/_sources/auto_examples/applications/plot_image_compression.rst.txt b/stable/_sources/auto_examples/applications/plot_image_compression.rst.txt index 8c0601577..a202e1ad5 100644 --- a/stable/_sources/auto_examples/applications/plot_image_compression.rst.txt +++ b/stable/_sources/auto_examples/applications/plot_image_compression.rst.txt @@ -10,8 +10,8 @@ .. note:: :class: sphx-glr-download-link-note - Click :ref:`here ` - to download the full example code + :ref:`Go to the end ` + to download the full example code. .. rst-class:: sphx-glr-example-title @@ -23,7 +23,7 @@ Image compression via tensor decomposition Example on how to use :func:`tensorly.decomposition.parafac` and :func:`tensorly.decomposition.tucker` on images. -.. GENERATED FROM PYTHON SOURCE LINES 8-65 +.. GENERATED FROM PYTHON SOURCE LINES 7-68 @@ -37,10 +37,10 @@ Example on how to use :func:`tensorly.decomposition.parafac` and :func:`tensorly .. code-block:: none - /home/runner/work/tensorly/tensorly/examples/applications/plot_image_compression.py:21: DeprecationWarning: scipy.misc.face has been deprecated in SciPy v1.10.0; and will be completely removed in SciPy v1.12.0. Dataset methods have moved into the scipy.datasets module. Use scipy.datasets.face instead. + /home/runner/work/tensorly/tensorly/examples/applications/plot_image_compression.py:20: DeprecationWarning: scipy.misc.face has been deprecated in SciPy v1.10.0; and will be completely removed in SciPy v1.12.0. Dataset methods have moved into the scipy.datasets module. Use scipy.datasets.face instead. image = face() - /home/runner/work/tensorly/tensorly/examples/applications/plot_image_compression.py:22: DeprecationWarning: scipy.misc.face has been deprecated in SciPy v1.10.0; and will be completely removed in SciPy v1.12.0. Dataset methods have moved into the scipy.datasets module. Use scipy.datasets.face instead. - image = tl.tensor(zoom(face(), (0.3, 0.3, 1)), dtype='float64') + /home/runner/work/tensorly/tensorly/examples/applications/plot_image_compression.py:21: DeprecationWarning: scipy.misc.face has been deprecated in SciPy v1.10.0; and will be completely removed in SciPy v1.12.0. Dataset methods have moved into the scipy.datasets module. Use scipy.datasets.face instead. + image = tl.tensor(zoom(face(), (0.3, 0.3, 1)), dtype="float64") @@ -49,7 +49,7 @@ Example on how to use :func:`tensorly.decomposition.parafac` and :func:`tensorly | -.. code-block:: default +.. 
code-block:: Python import matplotlib.pyplot as plt @@ -65,7 +65,8 @@ Example on how to use :func:`tensorly.decomposition.parafac` and :func:`tensorly random_state = 12345 image = face() - image = tl.tensor(zoom(face(), (0.3, 0.3, 1)), dtype='float64') + image = tl.tensor(zoom(face(), (0.3, 0.3, 1)), dtype="float64") + def to_image(tensor): """A convenience function to convert from a float dtype back to uint8""" @@ -75,18 +76,21 @@ Example on how to use :func:`tensorly.decomposition.parafac` and :func:`tensorly im *= 255 return im.astype(np.uint8) + # Rank of the CP decomposition cp_rank = 25 # Rank of the Tucker decomposition tucker_rank = [100, 100, 2] # Perform the CP decomposition - weights, factors = parafac(image, rank=cp_rank, init='random', tol=10e-6) + weights, factors = parafac(image, rank=cp_rank, init="random", tol=10e-6) # Reconstruct the image from the factors cp_reconstruction = tl.cp_to_tensor((weights, factors)) # Tucker decomposition - core, tucker_factors = tucker(image, rank=tucker_rank, init='random', tol=10e-5, random_state=random_state) + core, tucker_factors = tucker( + image, rank=tucker_rank, init="random", tol=10e-5, random_state=random_state + ) tucker_reconstruction = tl.tucker_to_tensor((core, tucker_factors)) # Plotting the original and reconstruction from the decompositions @@ -94,17 +98,17 @@ Example on how to use :func:`tensorly.decomposition.parafac` and :func:`tensorly ax = fig.add_subplot(1, 3, 1) ax.set_axis_off() ax.imshow(to_image(image)) - ax.set_title('original') + ax.set_title("original") ax = fig.add_subplot(1, 3, 2) ax.set_axis_off() ax.imshow(to_image(cp_reconstruction)) - ax.set_title('CP') + ax.set_title("CP") ax = fig.add_subplot(1, 3, 3) ax.set_axis_off() ax.imshow(to_image(tucker_reconstruction)) - ax.set_title('Tucker') + ax.set_title("Tucker") plt.tight_layout() plt.show() @@ -112,7 +116,7 @@ Example on how to use :func:`tensorly.decomposition.parafac` and :func:`tensorly .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 3.039 seconds) + **Total running time of the script:** (0 minutes 1.443 seconds) .. _sphx_glr_download_auto_examples_applications_plot_image_compression.py: @@ -121,14 +125,17 @@ Example on how to use :func:`tensorly.decomposition.parafac` and :func:`tensorly .. container:: sphx-glr-footer sphx-glr-footer-example + .. container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: plot_image_compression.ipynb ` .. container:: sphx-glr-download sphx-glr-download-python :download:`Download Python source code: plot_image_compression.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-zip - :download:`Download Jupyter notebook: plot_image_compression.ipynb ` + :download:`Download zipped: plot_image_compression.zip ` .. only:: html diff --git a/stable/_sources/auto_examples/applications/sg_execution_times.rst.txt b/stable/_sources/auto_examples/applications/sg_execution_times.rst.txt index d92a9792c..77c0275c6 100644 --- a/stable/_sources/auto_examples/applications/sg_execution_times.rst.txt +++ b/stable/_sources/auto_examples/applications/sg_execution_times.rst.txt @@ -3,14 +3,41 @@ .. 
_sphx_glr_auto_examples_applications_sg_execution_times: + Computation times ================= -**00:11.645** total execution time for **auto_examples_applications** files: - -+------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_applications_plot_covid.py` (``plot_covid.py``) | 00:06.268 | 0.0 MB | -+------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_applications_plot_image_compression.py` (``plot_image_compression.py``) | 00:03.039 | 0.0 MB | -+------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_applications_plot_IL2.py` (``plot_IL2.py``) | 00:02.337 | 0.0 MB | -+------------------------------------------------------------------------------------------------------+-----------+--------+ +**00:06.260** total execution time for 3 files **from auto_examples/applications**: + +.. container:: + + .. raw:: html + + + + + + + + .. list-table:: + :header-rows: 1 + :class: table table-striped sg-datatable + + * - Example + - Time + - Mem (MB) + * - :ref:`sphx_glr_auto_examples_applications_plot_covid.py` (``plot_covid.py``) + - 00:03.405 + - 0.0 + * - :ref:`sphx_glr_auto_examples_applications_plot_image_compression.py` (``plot_image_compression.py``) + - 00:01.443 + - 0.0 + * - :ref:`sphx_glr_auto_examples_applications_plot_IL2.py` (``plot_IL2.py``) + - 00:01.412 + - 0.0 diff --git a/stable/_sources/auto_examples/decomposition/index.rst.txt b/stable/_sources/auto_examples/decomposition/index.rst.txt index 435e27707..69e1c1e45 100644 --- a/stable/_sources/auto_examples/decomposition/index.rst.txt +++ b/stable/_sources/auto_examples/decomposition/index.rst.txt @@ -12,6 +12,7 @@ Tensor decomposition
    +.. thumbnail-parent-div-open .. raw:: html @@ -20,7 +21,7 @@ Tensor decomposition .. only:: html .. image:: /auto_examples/decomposition/images/thumb/sphx_glr_plot_permute_factors_thumb.png - :alt: Permuting CP factors + :alt: :ref:`sphx_glr_auto_examples_decomposition_plot_permute_factors.py` @@ -32,12 +33,12 @@ Tensor decomposition .. raw:: html -
    +
    .. only:: html .. image:: /auto_examples/decomposition/images/thumb/sphx_glr_plot_cp_line_search_thumb.png - :alt: Using line search with PARAFAC + :alt: :ref:`sphx_glr_auto_examples_decomposition_plot_cp_line_search.py` @@ -54,7 +55,7 @@ Tensor decomposition .. only:: html .. image:: /auto_examples/decomposition/images/thumb/sphx_glr_plot_guide_for_constrained_cp_thumb.png - :alt: Constrained CP decomposition in Tensorly >=0.7 + :alt: :ref:`sphx_glr_auto_examples_decomposition_plot_guide_for_constrained_cp.py` @@ -71,7 +72,7 @@ Tensor decomposition .. only:: html .. image:: /auto_examples/decomposition/images/thumb/sphx_glr_plot_nn_tucker_thumb.png - :alt: Non-negative Tucker decomposition + :alt: :ref:`sphx_glr_auto_examples_decomposition_plot_nn_tucker.py` @@ -88,7 +89,7 @@ Tensor decomposition .. only:: html .. image:: /auto_examples/decomposition/images/thumb/sphx_glr_plot_nn_cp_hals_thumb.png - :alt: Non-negative CP decomposition in Tensorly >=0.6 + :alt: :ref:`sphx_glr_auto_examples_decomposition_plot_nn_cp_hals.py` @@ -98,6 +99,23 @@ Tensor decomposition
    +.. raw:: html + +
    + +.. only:: html + + .. image:: /auto_examples/decomposition/images/thumb/sphx_glr_plot_parafac2_compression_thumb.png + :alt: + + :ref:`sphx_glr_auto_examples_decomposition_plot_parafac2_compression.py` + +.. raw:: html + +
    Speeding up PARAFAC2 with SVD compression
    +
    + + .. raw:: html
    @@ -105,7 +123,7 @@ Tensor decomposition .. only:: html .. image:: /auto_examples/decomposition/images/thumb/sphx_glr_plot_parafac2_thumb.png - :alt: Demonstration of PARAFAC2 + :alt: :ref:`sphx_glr_auto_examples_decomposition_plot_parafac2.py` @@ -115,6 +133,8 @@ Tensor decomposition
    +.. thumbnail-parent-div-close + .. raw:: html
    @@ -128,5 +148,6 @@ Tensor decomposition /auto_examples/decomposition/plot_guide_for_constrained_cp /auto_examples/decomposition/plot_nn_tucker /auto_examples/decomposition/plot_nn_cp_hals + /auto_examples/decomposition/plot_parafac2_compression /auto_examples/decomposition/plot_parafac2 diff --git a/stable/_sources/auto_examples/decomposition/plot_cp_line_search.rst.txt b/stable/_sources/auto_examples/decomposition/plot_cp_line_search.rst.txt index 938c3a737..e08a04c0b 100644 --- a/stable/_sources/auto_examples/decomposition/plot_cp_line_search.rst.txt +++ b/stable/_sources/auto_examples/decomposition/plot_cp_line_search.rst.txt @@ -10,8 +10,8 @@ .. note:: :class: sphx-glr-download-link-note - Click :ref:`here ` - to download the full example code + :ref:`Go to the end ` + to download the full example code. .. rst-class:: sphx-glr-example-title @@ -23,7 +23,7 @@ Using line search with PARAFAC Example on how to use :func:`tensorly.decomposition.parafac` with line search to accelerate convergence. -.. GENERATED FROM PYTHON SOURCE LINES 7-53 +.. GENERATED FROM PYTHON SOURCE LINES 7-55 @@ -36,7 +36,8 @@ Example on how to use :func:`tensorly.decomposition.parafac` with line search to -.. code-block:: default +.. code-block:: Python + import matplotlib.pyplot as plt @@ -58,7 +59,7 @@ Example on how to use :func:`tensorly.decomposition.parafac` with line search to err_min = tl.norm(tl.cp_to_tensor(fac) - tensor) for ii, toll in enumerate(tol): - # Run PARAFAC decomposition without line search and time + # Run PARAFAC decomposition without line search and time start = time() cp = CP(rank=3, n_iter_max=2000000, tol=toll, linesearch=False) fac = cp.fit_transform(tensor) @@ -78,17 +79,18 @@ Example on how to use :func:`tensorly.decomposition.parafac` with line search to fig = plt.figure() ax = fig.add_subplot(1, 1, 1) - ax.loglog(tt, err - err_min, '.', label="No line search") - ax.loglog(tt_ls, err_ls - err_min, '.r', label="Line search") + ax.loglog(tt, err - err_min, ".", label="No line search") + ax.loglog(tt_ls, err_ls - err_min, ".r", label="Line search") ax.legend() ax.set_ylabel("Time") ax.set_xlabel("Error") plt.show() + .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 8.314 seconds) + **Total running time of the script:** (0 minutes 3.835 seconds) .. _sphx_glr_download_auto_examples_decomposition_plot_cp_line_search.py: @@ -97,14 +99,17 @@ Example on how to use :func:`tensorly.decomposition.parafac` with line search to .. container:: sphx-glr-footer sphx-glr-footer-example + .. container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: plot_cp_line_search.ipynb ` .. container:: sphx-glr-download sphx-glr-download-python :download:`Download Python source code: plot_cp_line_search.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-zip - :download:`Download Jupyter notebook: plot_cp_line_search.ipynb ` + :download:`Download zipped: plot_cp_line_search.zip ` .. only:: html diff --git a/stable/_sources/auto_examples/decomposition/plot_guide_for_constrained_cp.rst.txt b/stable/_sources/auto_examples/decomposition/plot_guide_for_constrained_cp.rst.txt index dac9481b7..4786ae4b5 100644 --- a/stable/_sources/auto_examples/decomposition/plot_guide_for_constrained_cp.rst.txt +++ b/stable/_sources/auto_examples/decomposition/plot_guide_for_constrained_cp.rst.txt @@ -10,8 +10,8 @@ .. 
note:: :class: sphx-glr-download-link-note - Click :ref:`here ` - to download the full example code + :ref:`Go to the end ` + to download the full example code. .. rst-class:: sphx-glr-example-title @@ -27,10 +27,10 @@ On this page, you will find examples showing how to use constrained CP/Parafac. Introduction ----------------------- Since version 0.7, Tensorly includes constrained CP decomposition which penalizes or -constrains factors as chosen by the user. The proposed implementation of constrained CP uses the +constrains factors as chosen by the user. The proposed implementation of constrained CP uses the Alternating Optimization Alternating Direction Method of Multipliers (AO-ADMM) algorithm from [1] which solves alternatively convex optimization problem using primal-dual optimization. In constrained CP -decomposition, an auxilliary factor is introduced which is constrained or regularized using an operator called the +decomposition, an auxilliary factor is introduced which is constrained or regularized using an operator called the proximal operator. The proximal operator may therefore change according to the selected constraint or penalization. Tensorly provides several constraints and their corresponding proximal operators, each can apply to one or all factors in the CP decomposition: @@ -86,7 +86,7 @@ constraints for all mode (or factors) or using different constraints for differe .. GENERATED FROM PYTHON SOURCE LINES 67-79 -.. code-block:: default +.. code-block:: Python import numpy as np @@ -112,7 +112,7 @@ constraints for all mode (or factors) or using different constraints for differe Using one constraint for all modes -------------------------------------------- Constraints are inputs of the constrained_parafac function, which itself uses the -``tensorly.tenalg.proximal.validate_constraints`` function in order to process the input +``tensorly.solver.proximal.validate_constraints`` function in order to process the input of the user. If a user wants to use the same constraint for all modes, an input (bool or a scalar value or list of scalar values) should be given to this constraint. Assume, one wants to use unimodality constraint for all modes. Since it does not require @@ -120,7 +120,7 @@ any scalar input, unimodality can be imposed by writing `True` for `unimodality` .. GENERATED FROM PYTHON SOURCE LINES 88-91 -.. code-block:: default +.. code-block:: Python _, factors = constrained_parafac(tensor, rank=rank, unimodality=True) @@ -138,13 +138,13 @@ This constraint imposes that each column of all the factors in the CP decomposit .. GENERATED FROM PYTHON SOURCE LINES 93-99 -.. code-block:: default +.. code-block:: Python fig = plt.figure() for i in range(rank): plt.plot(factors[0][:, i]) - plt.legend(['1. column', '2. column', '3. column'], loc='upper left') + plt.legend(["1. column", "2. column", "3. column"], loc="upper left") @@ -164,7 +164,7 @@ Constraints requiring a scalar input can be used similarly as follows: .. GENERATED FROM PYTHON SOURCE LINES 101-103 -.. code-block:: default +.. code-block:: Python _, factors = constrained_parafac(tensor, rank=rank, l1_reg=0.05) @@ -181,14 +181,14 @@ The same regularization coefficient l1_reg is used for all the modes. Here the l .. GENERATED FROM PYTHON SOURCE LINES 105-113 -.. code-block:: default +.. code-block:: Python fig = plt.figure() - plt.title('Histogram of 1. factor') + plt.title("Histogram of 1. factor") _, _, _ = plt.hist(factors[0].flatten()) fig = plt.figure() - plt.title('Histogram of 2. 
factor') + plt.title("Histogram of 2. factor") _, _, _ = plt.hist(factors[1].flatten()) @@ -224,7 +224,7 @@ a python dictionary: .. GENERATED FROM PYTHON SOURCE LINES 118-123 -.. code-block:: default +.. code-block:: Python _, factors = constrained_parafac(tensor, rank=rank, non_negative={0: True, 2: True}) @@ -240,21 +240,21 @@ a python dictionary: .. code-block:: none 1. factor - [[6.38 1.89 0.95] - [3.99 0. 0.11] - [5.58 1.26 1.09] - [4.62 0.27 1.33] - [5.32 1.62 0.15] - [3.57 0. 0. ]] + [[5.31 0. 1.22] + [4.9 0.56 1.23] + [5.03 2.09 1.21] + [4.59 1.25 0.6 ] + [6.32 1.62 1.87] + [4.57 0.01 1.1 ]] 2. factor - [[ 0.39 -0.35 -0.31] - [ 0.42 -0.6 0.04] - [ 0.47 -0.69 -0.55] - [ 0.34 -0.09 -0.3 ] - [ 0.41 -0.28 -0.56] - [ 0.41 -0.34 -0.44] - [ 0.34 -0.39 -0.14] - [ 0.42 -0.38 -0.48]] + [[ 0.43 0.26 -0.72] + [ 0.45 -0.18 -0.54] + [ 0.4 0.35 -0.77] + [ 0.39 -0.27 -0.25] + [ 0.43 -0.32 -0.33] + [ 0.3 0.43 -0.38] + [ 0.44 -0.5 -0.33] + [ 0.38 -0.02 -0.29]] @@ -272,21 +272,21 @@ using a list structure: .. GENERATED FROM PYTHON SOURCE LINES 132-147 -.. code-block:: default +.. code-block:: Python _, factors = constrained_parafac(tensor, rank=rank, l1_reg=[0.01, 0.02, 0.03]) fig = plt.figure() - plt.title('Histogram of 1. factor') + plt.title("Histogram of 1. factor") _, _, _ = plt.hist(factors[0].flatten()) fig = plt.figure() - plt.title('Histogram of 2. factor') + plt.title("Histogram of 2. factor") _, _, _ = plt.hist(factors[1].flatten()) fig = plt.figure() - plt.title('Histogram of 3. factor') + plt.title("Histogram of 3. factor") _, _, _ = plt.hist(factors[2].flatten()) @@ -327,13 +327,14 @@ Using different constraints for each mode To use different constraint for different modes, the dictionary structure should be preferred: -.. GENERATED FROM PYTHON SOURCE LINES 152-156 +.. GENERATED FROM PYTHON SOURCE LINES 152-157 -.. code-block:: default +.. code-block:: Python - _, factors = constrained_parafac(tensor, rank=rank, non_negative={1:True}, l1_reg={0: 0.01}, - l2_square_reg={2: 0.01}) + _, factors = constrained_parafac( + tensor, rank=rank, non_negative={1: True}, l1_reg={0: 0.01}, l2_square_reg={2: 0.01} + ) @@ -342,14 +343,14 @@ should be preferred: -.. GENERATED FROM PYTHON SOURCE LINES 157-159 +.. GENERATED FROM PYTHON SOURCE LINES 158-160 In the dictionary, `key` is the selected mode and `value` is a scalar value or only `True` depending on the selected constraint. -.. GENERATED FROM PYTHON SOURCE LINES 159-164 +.. GENERATED FROM PYTHON SOURCE LINES 160-165 -.. code-block:: default +.. code-block:: Python print("1. factor\n", factors[0]) @@ -365,43 +366,43 @@ only `True` depending on the selected constraint. .. code-block:: none 1. factor - [[ 18.33 -7.86 -12.27] - [ 16.41 -4.47 6.37] - [ 17.27 0. -9.37] - [ 16.26 2.41 -2.98] - [ 15.92 -13.49 -4.31] - [ 13.07 -21.45 19.31]] + [[ 14.9 -4.94 -7.41] + [ 15.11 -15.42 -1.25] + [ 12.98 2.37 -4.7 ] + [ 13.17 12.1 -3.71] + [ 14.26 -4.74 -15.63] + [ 11.75 5.05 -5.64]] 2. factor - [[0.38 0. 0. ] - [0.38 0.65 1.1 ] - [0.36 0.81 0.96] - [0.35 0.5 0.16] - [0.39 0.19 0. ] - [0.39 0.26 0.1 ] - [0.34 0.02 0.43] - [0.38 0.57 0. ]] + [[0.36 0. 0.48] + [0.33 0. 0.9 ] + [0.33 0.74 0.54] + [0.39 0.54 0.16] + [0.38 0. 0.45] + [0.37 1.09 0. ] + [0.38 0.58 0.24] + [0.42 0. 0. ]] 3. factor - [[ 0.07 -0.02 0.02] - [ 0.09 0.01 0. ] - [ 0.09 0.01 -0.01] - [ 0.09 0.02 0.02] - [ 0.08 -0.02 -0.03] - [ 0.08 -0.03 0. ] - [ 0.07 -0.02 -0.01] - [ 0.09 0.02 0.02] - [ 0.08 0. 0.02] - [ 0.06 -0.04 -0.01]] + [[ 0.07 -0.01 -0.02] + [ 0.1 0.02 -0.04] + [ 0.08 0. 
-0.04] + [ 0.11 0.03 0.03] + [ 0.1 -0. 0.01] + [ 0.1 -0.02 0.01] + [ 0.07 0.02 -0.05] + [ 0.09 0.01 -0.03] + [ 0.08 0.01 -0.03] + [ 0.09 0.03 -0.01]] -.. GENERATED FROM PYTHON SOURCE LINES 165-168 +.. GENERATED FROM PYTHON SOURCE LINES 166-169 Thus, first factor will be non-negative, second factor will be regularized by :math:`0.01` with :math:`l_1` and last factor will be regularized by :math:`0.01` with :math:`l_2^2`. -.. GENERATED FROM PYTHON SOURCE LINES 170-179 +.. GENERATED FROM PYTHON SOURCE LINES 171-180 References ---------- @@ -416,7 +417,7 @@ IEEE Transactions on Signal Processing 64.19 (2016): 5052-5065. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 4.068 seconds) + **Total running time of the script:** (0 minutes 3.250 seconds) .. _sphx_glr_download_auto_examples_decomposition_plot_guide_for_constrained_cp.py: @@ -425,14 +426,17 @@ IEEE Transactions on Signal Processing 64.19 (2016): 5052-5065. .. container:: sphx-glr-footer sphx-glr-footer-example + .. container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: plot_guide_for_constrained_cp.ipynb ` .. container:: sphx-glr-download sphx-glr-download-python :download:`Download Python source code: plot_guide_for_constrained_cp.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-zip - :download:`Download Jupyter notebook: plot_guide_for_constrained_cp.ipynb ` + :download:`Download zipped: plot_guide_for_constrained_cp.zip ` .. only:: html diff --git a/stable/_sources/auto_examples/decomposition/plot_nn_cp_hals.rst.txt b/stable/_sources/auto_examples/decomposition/plot_nn_cp_hals.rst.txt index f87793c9d..332181928 100644 --- a/stable/_sources/auto_examples/decomposition/plot_nn_cp_hals.rst.txt +++ b/stable/_sources/auto_examples/decomposition/plot_nn_cp_hals.rst.txt @@ -10,8 +10,8 @@ .. note:: :class: sphx-glr-download-link-note - Click :ref:`here ` - to download the full example code + :ref:`Go to the end ` + to download the full example code. .. rst-class:: sphx-glr-example-title @@ -39,7 +39,7 @@ obtained from a non-negative tensor. .. GENERATED FROM PYTHON SOURCE LINES 20-29 -.. code-block:: default +.. code-block:: Python import numpy as np @@ -66,7 +66,7 @@ Here we chose to generate a random from the sequence of integers from 1 to 24000 .. GENERATED FROM PYTHON SOURCE LINES 34-38 -.. code-block:: default +.. code-block:: Python # Tensor generation @@ -87,16 +87,18 @@ using these algorithms, we can use Tensorly to produce a good initial guess for our NCP. In fact, in order to compare both algorithmic options in a fair way, it is a good idea to use same initialized factors in decomposition algorithms. We make use of the ``initialize_cp`` function to initialize the -factors of the NCP (setting the ``non_negative`` option to `True`) +factors of the NCP (setting the ``non_negative`` option to `True`) and transform these factors (and factors weights) into an instance of the CPTensor class: -.. GENERATED FROM PYTHON SOURCE LINES 48-53 +.. GENERATED FROM PYTHON SOURCE LINES 48-55 -.. code-block:: default +.. code-block:: Python - weights_init, factors_init = initialize_cp(tensor, non_negative=True, init='random', rank=10) + weights_init, factors_init = initialize_cp( + tensor, non_negative=True, init="random", rank=10 + ) cp_init = CPTensor((weights_init, factors_init)) @@ -107,7 +109,7 @@ an instance of the CPTensor class: -.. GENERATED FROM PYTHON SOURCE LINES 54-59 +.. 
GENERATED FROM PYTHON SOURCE LINES 56-61 Non-negative Parafac ----------------------- @@ -115,15 +117,17 @@ From now on, we can use the same ``cp_init`` tensor as the initial tensor when we use decomposition functions. Now let us first use the algorithm based on Multiplicative Update, which can be called as follows: -.. GENERATED FROM PYTHON SOURCE LINES 59-65 +.. GENERATED FROM PYTHON SOURCE LINES 61-69 -.. code-block:: default +.. code-block:: Python tic = time.time() - tensor_mu, errors_mu = non_negative_parafac(tensor, rank=10, init=deepcopy(cp_init), return_errors=True) + tensor_mu, errors_mu = non_negative_parafac( + tensor, rank=10, init=deepcopy(cp_init), return_errors=True + ) cp_reconstruction_mu = tl.cp_to_tensor(tensor_mu) - time_mu = time.time()-tic + time_mu = time.time() - tic @@ -132,7 +136,7 @@ Multiplicative Update, which can be called as follows: -.. GENERATED FROM PYTHON SOURCE LINES 66-71 +.. GENERATED FROM PYTHON SOURCE LINES 70-75 Here, we also compute the output tensor from the decomposed factors by using the cp_to_tensor function. The tensor cp_reconstruction_mu is therefore a @@ -140,13 +144,13 @@ low-rank non-negative approximation of the input tensor; looking at the first few values of both tensors shows that this is indeed the case but the approximation is quite coarse. -.. GENERATED FROM PYTHON SOURCE LINES 71-75 +.. GENERATED FROM PYTHON SOURCE LINES 75-79 -.. code-block:: default +.. code-block:: Python - print('reconstructed tensor\n', cp_reconstruction_mu[10:12, 10:12, 10:12], '\n') - print('input data tensor\n', tensor[10:12, 10:12, 10:12]) + print("reconstructed tensor\n", cp_reconstruction_mu[10:12, 10:12, 10:12], "\n") + print("input data tensor\n", tensor[10:12, 10:12, 10:12]) @@ -157,11 +161,11 @@ the case but the approximation is quite coarse. .. code-block:: none reconstructed tensor - [[[8254.52 8207.2 ] - [8318.66 8264.93]] + [[[8237.63 8286.92] + [8336.44 8180.15]] - [[9098.08 9131.76] - [9117.1 9137.99]]] + [[9057.33 9233.85] + [9068.9 8992.75]]] input data tensor [[[8210. 8211.] @@ -173,22 +177,24 @@ the case but the approximation is quite coarse. -.. GENERATED FROM PYTHON SOURCE LINES 76-80 +.. GENERATED FROM PYTHON SOURCE LINES 80-84 Non-negative Parafac with HALS ------------------------------ Our second (new) option to compute NCP is the HALS algorithm, which can be used as follows: -.. GENERATED FROM PYTHON SOURCE LINES 80-86 +.. GENERATED FROM PYTHON SOURCE LINES 84-92 -.. code-block:: default +.. code-block:: Python tic = time.time() - tensor_hals, errors_hals = non_negative_parafac_hals(tensor, rank=10, init=deepcopy(cp_init), return_errors=True) + tensor_hals, errors_hals = non_negative_parafac_hals( + tensor, rank=10, init=deepcopy(cp_init), return_errors=True + ) cp_reconstruction_hals = tl.cp_to_tensor(tensor_hals) - time_hals = time.time()-tic + time_hals = time.time() - tic @@ -197,17 +203,17 @@ used as follows: -.. GENERATED FROM PYTHON SOURCE LINES 87-88 +.. GENERATED FROM PYTHON SOURCE LINES 93-94 Again, we can look at the reconstructed tensor entries. -.. GENERATED FROM PYTHON SOURCE LINES 88-92 +.. GENERATED FROM PYTHON SOURCE LINES 94-98 -.. code-block:: default +.. 
code-block:: Python - print('reconstructed tensor\n',cp_reconstruction_hals[10:12, 10:12, 10:12], '\n') - print('input data tensor\n', tensor[10:12, 10:12, 10:12]) + print("reconstructed tensor\n", cp_reconstruction_hals[10:12, 10:12, 10:12], "\n") + print("input data tensor\n", tensor[10:12, 10:12, 10:12]) @@ -218,11 +224,11 @@ Again, we can look at the reconstructed tensor entries. .. code-block:: none reconstructed tensor - [[[8180.72 8216.15] - [8216.23 8245.29]] + [[[8210.48 8210.36] + [8230.42 8230.47]] - [[8983.51 9015.66] - [9017.52 9043.95]]] + [[9010.47 9010.4 ] + [9030.41 9030.5 ]]] input data tensor [[[8210. 8211.] @@ -234,7 +240,7 @@ Again, we can look at the reconstructed tensor entries. -.. GENERATED FROM PYTHON SOURCE LINES 93-103 +.. GENERATED FROM PYTHON SOURCE LINES 99-109 Non-negative Parafac with Exact HALS ------------------------------------ @@ -247,15 +253,17 @@ the input data, but will need longer to reach convergence. Exact subroutine solution option can be used simply choosing exact as True in the function: -.. GENERATED FROM PYTHON SOURCE LINES 103-109 +.. GENERATED FROM PYTHON SOURCE LINES 109-117 -.. code-block:: default +.. code-block:: Python tic = time.time() - tensorhals_exact, errors_exact = non_negative_parafac_hals(tensor, rank=10, init=deepcopy(cp_init), return_errors=True, exact=True) + tensorhals_exact, errors_exact = non_negative_parafac_hals( + tensor, rank=10, init=deepcopy(cp_init), return_errors=True, exact=True + ) cp_reconstruction_exact_hals = tl.cp_to_tensor(tensorhals_exact) - time_exact_hals = time.time()-tic + time_exact_hals = time.time() - tic @@ -264,20 +272,20 @@ in the function: -.. GENERATED FROM PYTHON SOURCE LINES 110-113 +.. GENERATED FROM PYTHON SOURCE LINES 118-121 Comparison ----------------------- First comparison option is processing time for each algorithm: -.. GENERATED FROM PYTHON SOURCE LINES 113-118 +.. GENERATED FROM PYTHON SOURCE LINES 121-126 -.. code-block:: default +.. code-block:: Python - print(str("{:.2f}".format(time_mu)) + ' ' + 'seconds') - print(str("{:.2f}".format(time_hals)) + ' ' + 'seconds') - print(str("{:.2f}".format(time_exact_hals)) + ' ' + 'seconds') + print(str(f"{time_mu:.2f}") + " " + "seconds") + print(str(f"{time_hals:.2f}") + " " + "seconds") + print(str(f"{time_exact_hals:.2f}") + " " + "seconds") @@ -287,14 +295,14 @@ First comparison option is processing time for each algorithm: .. code-block:: none - 0.19 seconds - 0.26 seconds - 535.92 seconds + 0.04 seconds + 0.56 seconds + 199.95 seconds -.. GENERATED FROM PYTHON SOURCE LINES 119-126 +.. GENERATED FROM PYTHON SOURCE LINES 127-134 As it is expected, the exact solution takes much longer than the approximate solution, while the gain in performance is often void. Therefore we recommend @@ -304,12 +312,13 @@ However, a closer look suggest they are indeed behaving quite differently. Computing the error between the output and the input tensor tells that story better. In Tensorly, we provide a function to calculate Root Mean Square Error (RMSE): -.. GENERATED FROM PYTHON SOURCE LINES 126-132 +.. GENERATED FROM PYTHON SOURCE LINES 134-141 -.. code-block:: default +.. code-block:: Python from tensorly.metrics.regression import RMSE + print(RMSE(tensor, cp_reconstruction_mu)) print(RMSE(tensor, cp_reconstruction_hals)) print(RMSE(tensor, cp_reconstruction_exact_hals)) @@ -322,37 +331,39 @@ In Tensorly, we provide a function to calculate Root Mean Square Error (RMSE): .. 
code-block:: none - 215.7982 - 42.694946 - 0.33535954 + 220.3588 + 3.8795087 + 1.2040633 -.. GENERATED FROM PYTHON SOURCE LINES 133-137 +.. GENERATED FROM PYTHON SOURCE LINES 142-146 According to the RMSE results, HALS is better than the multiplicative update with both exact and approximate solution. In particular, HALS converged to a much lower reconstruction error than MU. We can better appreciate the difference in convergence speed on the following error per iteration plot: -.. GENERATED FROM PYTHON SOURCE LINES 137-151 +.. GENERATED FROM PYTHON SOURCE LINES 146-162 -.. code-block:: default +.. code-block:: Python import matplotlib.pyplot as plt - def each_iteration(a,b,c,title): - fig=plt.figure() + + + def each_iteration(a, b, c, title): + fig = plt.figure() fig.set_size_inches(10, fig.get_figheight(), forward=True) plt.plot(a) plt.plot(b) plt.plot(c) plt.title(str(title)) - plt.legend(['MU', 'HALS', 'Exact HALS'], loc='upper left') + plt.legend(["MU", "HALS", "Exact HALS"], loc="upper left") - each_iteration(errors_mu, errors_hals, errors_exact, 'Error for each iteration') + each_iteration(errors_mu, errors_hals, errors_exact, "Error for each iteration") @@ -366,14 +377,14 @@ in convergence speed on the following error per iteration plot: -.. GENERATED FROM PYTHON SOURCE LINES 152-156 +.. GENERATED FROM PYTHON SOURCE LINES 163-167 In conclusion, on this quick test, it appears that the HALS algorithm gives much better results than the MU original Tensorly methods. Our recommendation is to use HALS as a default, and only resort to MU in specific cases (only encountered by expert users most likely). -.. GENERATED FROM PYTHON SOURCE LINES 158-165 +.. GENERATED FROM PYTHON SOURCE LINES 169-176 References ---------- @@ -386,7 +397,7 @@ Neural computation, 24(4), 1085-1105. (Link) .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 8 minutes 56.458 seconds) + **Total running time of the script:** (3 minutes 20.624 seconds) .. _sphx_glr_download_auto_examples_decomposition_plot_nn_cp_hals.py: @@ -395,14 +406,17 @@ Neural computation, 24(4), 1085-1105. (Link) .. container:: sphx-glr-footer sphx-glr-footer-example + .. container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: plot_nn_cp_hals.ipynb ` .. container:: sphx-glr-download sphx-glr-download-python :download:`Download Python source code: plot_nn_cp_hals.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-zip - :download:`Download Jupyter notebook: plot_nn_cp_hals.ipynb ` + :download:`Download zipped: plot_nn_cp_hals.zip ` .. only:: html diff --git a/stable/_sources/auto_examples/decomposition/plot_nn_tucker.rst.txt b/stable/_sources/auto_examples/decomposition/plot_nn_tucker.rst.txt index ddcb3fe47..4c4a413e4 100644 --- a/stable/_sources/auto_examples/decomposition/plot_nn_tucker.rst.txt +++ b/stable/_sources/auto_examples/decomposition/plot_nn_tucker.rst.txt @@ -10,8 +10,8 @@ .. note:: :class: sphx-glr-download-link-note - Click :ref:`here ` - to download the full example code + :ref:`Go to the end ` + to download the full example code. .. rst-class:: sphx-glr-example-title @@ -65,7 +65,7 @@ decomposition in Tensorly. .. GENERATED FROM PYTHON SOURCE LINES 45-54 -.. code-block:: default +.. code-block:: Python import numpy as np @@ -93,12 +93,12 @@ Here we chose to generate a random tensor from the sequence of integers from .. GENERATED FROM PYTHON SOURCE LINES 60-65 -.. code-block:: default +.. 
code-block:: Python # tensor generation array = np.random.randint(1000, size=(10, 30, 40)) - tensor = tl.tensor(array, dtype='float') + tensor = tl.tensor(array, dtype="float") @@ -113,15 +113,17 @@ Non-negative Tucker ----------------------- First, multiplicative update can be implemented as: -.. GENERATED FROM PYTHON SOURCE LINES 69-75 +.. GENERATED FROM PYTHON SOURCE LINES 69-77 -.. code-block:: default +.. code-block:: Python tic = time.time() - tensor_mu, error_mu = non_negative_tucker(tensor, rank=[5, 5, 5], tol=1e-12, n_iter_max=100, return_errors=True) + tensor_mu, error_mu = non_negative_tucker( + tensor, rank=[5, 5, 5], tol=1e-12, n_iter_max=100, return_errors=True + ) tucker_reconstruction_mu = tl.tucker_to_tensor(tensor_mu) - time_mu = time.time()-tic + time_mu = time.time() - tic @@ -130,27 +132,29 @@ First, multiplicative update can be implemented as: -.. GENERATED FROM PYTHON SOURCE LINES 76-79 +.. GENERATED FROM PYTHON SOURCE LINES 78-81 Here, we also compute the output tensor from the decomposed factors by using the ``tucker_to_tensor`` function. The tensor ``tucker_reconstruction_mu`` is therefore a low-rank non-negative approximation of the input tensor ``tensor``. -.. GENERATED FROM PYTHON SOURCE LINES 81-84 +.. GENERATED FROM PYTHON SOURCE LINES 83-86 Non-negative Tucker with HALS and FISTA --------------------------------------- HALS algorithm with FISTA can be calculated as: -.. GENERATED FROM PYTHON SOURCE LINES 84-90 +.. GENERATED FROM PYTHON SOURCE LINES 86-94 -.. code-block:: default +.. code-block:: Python ticnew = time.time() - tensor_hals_fista, error_fista = non_negative_tucker_hals(tensor, rank=[5, 5, 5], algorithm='fista', return_errors=True) + tensor_hals_fista, error_fista = non_negative_tucker_hals( + tensor, rank=[5, 5, 5], algorithm="fista", return_errors=True + ) tucker_reconstruction_fista = tl.tucker_to_tensor(tensor_hals_fista) - time_fista = time.time()-ticnew + time_fista = time.time() - ticnew @@ -159,21 +163,23 @@ HALS algorithm with FISTA can be calculated as: -.. GENERATED FROM PYTHON SOURCE LINES 91-94 +.. GENERATED FROM PYTHON SOURCE LINES 95-98 Non-negative Tucker with HALS and Active Set -------------------------------------------- As a second option, HALS algorithm with Active Set can be called as follows: -.. GENERATED FROM PYTHON SOURCE LINES 94-100 +.. GENERATED FROM PYTHON SOURCE LINES 98-106 -.. code-block:: default +.. code-block:: Python ticnew = time.time() - tensor_hals_as, error_as = non_negative_tucker_hals(tensor, rank=[5, 5, 5], algorithm='active_set', return_errors=True) + tensor_hals_as, error_as = non_negative_tucker_hals( + tensor, rank=[5, 5, 5], algorithm="active_set", return_errors=True + ) tucker_reconstruction_as = tl.tucker_to_tensor(tensor_hals_as) - time_as = time.time()-ticnew + time_as = time.time() - ticnew @@ -182,21 +188,21 @@ As a second option, HALS algorithm with Active Set can be called as follows: -.. GENERATED FROM PYTHON SOURCE LINES 101-105 +.. GENERATED FROM PYTHON SOURCE LINES 107-111 Comparison ----------------------- To compare the various methods, first we may look at each algorithm processing time: -.. GENERATED FROM PYTHON SOURCE LINES 105-110 +.. GENERATED FROM PYTHON SOURCE LINES 111-116 -.. code-block:: default +.. 
code-block:: Python - print('time for tensorly nntucker:'+' ' + str("{:.2f}".format(time_mu))) - print('time for HALS with fista:'+' ' + str("{:.2f}".format(time_fista))) - print('time for HALS with as:'+' ' + str("{:.2f}".format(time_as))) + print("time for tensorly nntucker:" + " " + str(f"{time_mu:.2f}")) + print("time for HALS with fista:" + " " + str(f"{time_fista:.2f}")) + print("time for HALS with as:" + " " + str(f"{time_as:.2f}")) @@ -206,14 +212,14 @@ processing time: .. code-block:: none - time for tensorly nntucker: 0.08 - time for HALS with fista: 1.56 - time for HALS with as: 0.37 + time for tensorly nntucker: 0.09 + time for HALS with fista: 4.55 + time for HALS with as: 3.65 -.. GENERATED FROM PYTHON SOURCE LINES 111-116 +.. GENERATED FROM PYTHON SOURCE LINES 117-122 All algorithms should run with about the same number of iterations on our example, so at first glance the MU algorithm is faster (i.e. has lower @@ -221,14 +227,16 @@ per-iteration complexity). A second way to compare methods is to compute the error between the output and input tensor. In Tensorly, there is a function to compute Root Mean Square Error (RMSE): -.. GENERATED FROM PYTHON SOURCE LINES 116-121 +.. GENERATED FROM PYTHON SOURCE LINES 122-129 -.. code-block:: default +.. code-block:: Python - print('RMSE tensorly nntucker:'+' ' + str(RMSE(tensor, tucker_reconstruction_mu))) - print('RMSE for hals with fista:'+' ' + str(RMSE(tensor, tucker_reconstruction_fista))) - print('RMSE for hals with as:'+' ' + str(RMSE(tensor, tucker_reconstruction_as))) + print("RMSE tensorly nntucker:" + " " + str(RMSE(tensor, tucker_reconstruction_mu))) + print( + "RMSE for hals with fista:" + " " + str(RMSE(tensor, tucker_reconstruction_fista)) + ) + print("RMSE for hals with as:" + " " + str(RMSE(tensor, tucker_reconstruction_as))) @@ -238,36 +246,36 @@ to compute Root Mean Square Error (RMSE): .. code-block:: none - RMSE tensorly nntucker: 286.13168034349553 - RMSE for hals with fista: 282.8068342601711 - RMSE for hals with as: 283.59144315085064 + RMSE tensorly nntucker: 286.4866417762188 + RMSE for hals with fista: 281.96963702154653 + RMSE for hals with as: 280.74724069116087 -.. GENERATED FROM PYTHON SOURCE LINES 122-125 +.. GENERATED FROM PYTHON SOURCE LINES 130-133 According to the RMSE results, HALS is better than the multiplicative update with both FISTA and active set core update options. We can better appreciate the difference in convergence speed on the following error per iteration plot: -.. GENERATED FROM PYTHON SOURCE LINES 125-139 +.. GENERATED FROM PYTHON SOURCE LINES 133-147 -.. code-block:: default +.. code-block:: Python - def each_iteration(a,b,c,title): - fig=plt.figure() + def each_iteration(a, b, c, title): + fig = plt.figure() fig.set_size_inches(10, fig.get_figheight(), forward=True) plt.plot(a) plt.plot(b) plt.plot(c) plt.title(str(title)) - plt.legend(['MU', 'HALS + Fista', 'HALS + AS'], loc='upper right') + plt.legend(["MU", "HALS + Fista", "HALS + AS"], loc="upper right") - each_iteration(error_mu, error_fista, error_as, 'Error for each iteration') + each_iteration(error_mu, error_fista, error_as, "Error for each iteration") @@ -281,7 +289,7 @@ the difference in convergence speed on the following error per iteration plot: -.. GENERATED FROM PYTHON SOURCE LINES 140-147 +.. GENERATED FROM PYTHON SOURCE LINES 148-155 In conclusion, on this quick test, it appears that the HALS algorithm gives much better results than the MU original Tensorly methods. 
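As a quick aside, the RMSE values reported above can be sanity-checked by hand.
Assuming the standard root-mean-square definition (this is a plain NumPy sketch, not
the ``tensorly.metrics.regression.RMSE`` implementation itself), the hypothetical
helper below should reproduce the reported numbers up to floating-point noise when
applied to ``tensor`` and one of the reconstructions from this example:

.. code-block:: Python

    import numpy as np
    import tensorly as tl

    def manual_rmse(reference, estimate):
        # Root mean square error: sqrt of the mean squared elementwise difference
        diff = tl.to_numpy(reference) - tl.to_numpy(estimate)
        return np.sqrt(np.mean(diff**2))

    # Expected to agree with tensorly.metrics.regression.RMSE up to rounding
    print(manual_rmse(tensor, tucker_reconstruction_mu))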
Our recommendation @@ -291,20 +299,20 @@ FISTA and active set give very similar results, however active set may last longer when it is used with higher ranks according to our experience. Therefore, we recommend to use FISTA with high rank decomposition. -.. GENERATED FROM PYTHON SOURCE LINES 149-155 +.. GENERATED FROM PYTHON SOURCE LINES 157-164 References ---------- Gillis, N., & Glineur, F. (2012). Accelerated multiplicative updates and hierarchical ALS algorithms for nonnegative matrix factorization. -Neural computation, 24(4), 1085-1105. +Neural computation, 24(4), 1085-1105. `(Link) https://direct.mit.edu/neco/article/24/4/1085/7755/Accelerated-Multiplicative-Updates-and>`_ .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 2.098 seconds) + **Total running time of the script:** (0 minutes 8.372 seconds) .. _sphx_glr_download_auto_examples_decomposition_plot_nn_tucker.py: @@ -313,14 +321,17 @@ Neural computation, 24(4), 1085-1105. .. container:: sphx-glr-footer sphx-glr-footer-example + .. container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: plot_nn_tucker.ipynb ` .. container:: sphx-glr-download sphx-glr-download-python :download:`Download Python source code: plot_nn_tucker.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-zip - :download:`Download Jupyter notebook: plot_nn_tucker.ipynb ` + :download:`Download zipped: plot_nn_tucker.zip ` .. only:: html diff --git a/stable/_sources/auto_examples/decomposition/plot_parafac2.rst.txt b/stable/_sources/auto_examples/decomposition/plot_parafac2.rst.txt index 21461db4d..bdbeeaec5 100644 --- a/stable/_sources/auto_examples/decomposition/plot_parafac2.rst.txt +++ b/stable/_sources/auto_examples/decomposition/plot_parafac2.rst.txt @@ -10,8 +10,8 @@ .. note:: :class: sphx-glr-download-link-note - Click :ref:`here ` - to download the full example code + :ref:`Go to the end ` + to download the full example code. .. rst-class:: sphx-glr-example-title @@ -23,9 +23,9 @@ Demonstration of PARAFAC2 Example of how to use the PARAFAC2 algorithm. -.. GENERATED FROM PYTHON SOURCE LINES 9-18 +.. GENERATED FROM PYTHON SOURCE LINES 8-17 -.. code-block:: default +.. code-block:: Python import numpy as np @@ -43,12 +43,12 @@ Example of how to use the PARAFAC2 algorithm. -.. GENERATED FROM PYTHON SOURCE LINES 19-32 +.. GENERATED FROM PYTHON SOURCE LINES 18-31 Create synthetic tensor ----------------------- Here, we create a random tensor that follows the PARAFAC2 constraints found -inx `(Kiers et al 1999)`_. +in `(Kiers et al 1999)`_. This particular tensor, :math:`\mathcal{X} \in \mathbb{R}^{I\times J \times K}`, is a shifted @@ -59,9 +59,9 @@ CP tensor, that is, a tensor on the form: where :math:`\sigma_i` is a cyclic permutation of :math:`J` elements. -.. GENERATED FROM PYTHON SOURCE LINES 32-64 +.. GENERATED FROM PYTHON SOURCE LINES 31-65 -.. code-block:: default +.. code-block:: Python @@ -77,21 +77,23 @@ where :math:`\sigma_i` is a cyclic permutation of :math:`J` elements. 
C_factor_matrix = np.random.uniform(size=(K, true_rank)) # Normalised factor matrices - A_normalised = A_factor_matrix/la.norm(A_factor_matrix, axis=0) - B_normalised = B_factor_matrix/la.norm(B_factor_matrix, axis=0) - C_normalised = C_factor_matrix/la.norm(C_factor_matrix, axis=0) + A_normalised = A_factor_matrix / la.norm(A_factor_matrix, axis=0) + B_normalised = B_factor_matrix / la.norm(B_factor_matrix, axis=0) + C_normalised = C_factor_matrix / la.norm(C_factor_matrix, axis=0) # Generate the shifted factor matrix B_factor_matrices = [np.roll(B_factor_matrix, shift=i, axis=0) for i in range(I)] Bs_normalised = [np.roll(B_normalised, shift=i, axis=0) for i in range(I)] # Construct the tensor - tensor = np.einsum('ir,ijr,kr->ijk', A_factor_matrix, B_factor_matrices, C_factor_matrix) + tensor = np.einsum( + "ir,ijr,kr->ijk", A_factor_matrix, B_factor_matrices, C_factor_matrix + ) # Add noise noise = np.random.standard_normal(tensor.shape) noise /= np.linalg.norm(noise) - noise *= noise_rate*np.linalg.norm(tensor) + noise *= noise_rate * np.linalg.norm(tensor) tensor += noise @@ -102,16 +104,16 @@ where :math:`\sigma_i` is a cyclic permutation of :math:`J` elements. -.. GENERATED FROM PYTHON SOURCE LINES 65-69 +.. GENERATED FROM PYTHON SOURCE LINES 66-70 Fit a PARAFAC2 tensor --------------------- To avoid local minima, we initialise and fit 10 models and choose the one with the lowest error -.. GENERATED FROM PYTHON SOURCE LINES 69-87 +.. GENERATED FROM PYTHON SOURCE LINES 70-95 -.. code-block:: default +.. code-block:: Python @@ -119,16 +121,23 @@ with the lowest error decomposition = None for run in range(10): - print(f'Training model {run}...') - trial_decomposition, trial_errs = parafac2(tensor, true_rank, return_errors=True, tol=1e-8, n_iter_max=500, random_state=run) - print(f'Number of iterations: {len(trial_errs)}') - print(f'Final error: {trial_errs[-1]}') + print(f"Training model {run}...") + trial_decomposition, trial_errs = parafac2( + tensor, + true_rank, + return_errors=True, + tol=1e-8, + n_iter_max=500, + random_state=run, + ) + print(f"Number of iterations: {len(trial_errs)}") + print(f"Final error: {trial_errs[-1]}") if best_err > trial_errs[-1]: best_err = trial_errs[-1] err = trial_errs decomposition = trial_decomposition - print('-------------------------------') - print(f'Best model error: {best_err}') + print("-------------------------------") + print(f"Best model error: {best_err}") @@ -140,55 +149,55 @@ with the lowest error .. code-block:: none Training model 0... - Number of iterations: 500 - Final error: 0.09204720575424472 + Number of iterations: 80 + Final error: 0.09204695261872768 ------------------------------- Training model 1... - Number of iterations: 500 - Final error: 0.09204726856012718 + Number of iterations: 81 + Final error: 0.09204698427747768 ------------------------------- Training model 2... - Number of iterations: 500 - Final error: 0.09269711804187236 + Number of iterations: 70 + Final error: 0.092697248568492 ------------------------------- Training model 3... - Number of iterations: 392 - Final error: 0.09204692795621944 + Number of iterations: 44 + Final error: 0.09204719323465736 ------------------------------- Training model 4... - Number of iterations: 415 - Final error: 0.09204692959223097 + Number of iterations: 46 + Final error: 0.09204725131428858 ------------------------------- Training model 5... 
- Number of iterations: 500 - Final error: 0.09291065541285955 + Number of iterations: 129 + Final error: 0.09290580705038641 ------------------------------- Training model 6... - Number of iterations: 364 - Final error: 0.09204692769766268 + Number of iterations: 47 + Final error: 0.09204716605012422 ------------------------------- Training model 7... - Number of iterations: 424 - Final error: 0.09204692883956121 + Number of iterations: 47 + Final error: 0.09204714361493882 ------------------------------- Training model 8... - Number of iterations: 481 - Final error: 0.09204693125447479 + Number of iterations: 47 + Final error: 0.0920475342964699 ------------------------------- Training model 9... - Number of iterations: 500 - Final error: 0.0920563578975846 + Number of iterations: 98 + Final error: 0.09204700880318421 ------------------------------- - Best model error: 0.09204692769766268 + Best model error: 0.09204695261872768 -.. GENERATED FROM PYTHON SOURCE LINES 88-119 +.. GENERATED FROM PYTHON SOURCE LINES 96-127 -A decomposition is a wrapper object for three variables: the *weights*, +A decomposition is a wrapper object for three variables: the *weights*, the *factor matrices* and the *projection matrices*. The weights are similar -to the output of a CP decomposition. The factor matrices and projection +to the output of a CP decomposition. The factor matrices and projection matrices are somewhat different. For a CP decomposition, we only have the weights and the factor matrices. However, since the PARAFAC2 factor matrices for the second mode is given by @@ -196,17 +205,17 @@ for the second mode is given by .. math:: B_i = P_i B, -where :math:`B` is an :math:`R \times R` matrix and :math:`P_i` is an +where :math:`B` is an :math:`R \times R` matrix and :math:`P_i` is an :math:`I \times R` projection matrix, we cannot store the factor matrices the same as for a CP decomposition. -Instead, we store the factor matrix along the first mode (:math:`A`), the -"blueprint" matrix for the second mode (:math:`B`) and the factor matrix +Instead, we store the factor matrix along the first mode (:math:`A`), the +"blueprint" matrix for the second mode (:math:`B`) and the factor matrix along the third mode (:math:`C`) in one tuple and the projection matrices, :math:`P_i`, in a separate tuple. If we wish to extract the informative :math:`B_i` factor matrices, then we -use the ``tensorly.parafac2_tensor.apply_projection_matrices`` function on +use the ``tensorly.parafac2_tensor.apply_projection_matrices`` function on the PARAFAC2 tensor instance to get another wrapper object for two variables: *weights* and *factor matrices*. However, now, the second element of the factor matrices tuple is now a list of factor matrices, one for each @@ -215,18 +224,19 @@ frontal slice of the tensor. Likewise, if we wish to construct the tensor or the frontal slices, then we can use the ``tensorly.parafac2_tensor.parafac2_to_tensor`` function. If the decomposed dataset consisted of uneven-length frontal slices, then we can -use the ``tensorly.parafac2_tensor.parafac2_to_slices`` function to get a +use the ``tensorly.parafac2_tensor.parafac2_to_slices`` function to get a list of frontal slices. -.. GENERATED FROM PYTHON SOURCE LINES 119-125 - -.. code-block:: default +.. GENERATED FROM PYTHON SOURCE LINES 127-134 +.. 
code-block:: Python est_tensor = tl.parafac2_tensor.parafac2_to_tensor(decomposition) - est_weights, (est_A, est_B, est_C) = tl.parafac2_tensor.apply_parafac2_projections(decomposition) + est_weights, (est_A, est_B, est_C) = tl.parafac2_tensor.apply_parafac2_projections( + decomposition + ) @@ -235,44 +245,54 @@ list of frontal slices. -.. GENERATED FROM PYTHON SOURCE LINES 126-128 +.. GENERATED FROM PYTHON SOURCE LINES 135-137 Compute performance metrics --------------------------- -.. GENERATED FROM PYTHON SOURCE LINES 128-158 +.. GENERATED FROM PYTHON SOURCE LINES 137-177 -.. code-block:: default +.. code-block:: Python reconstruction_error = la.norm(est_tensor - tensor) - recovery_rate = 1 - reconstruction_error/la.norm(tensor) + recovery_rate = 1 - reconstruction_error / la.norm(tensor) - print(f'{recovery_rate:2.0%} of the data is explained by the model, which is expected with noise rate: {noise_rate}') + print( + f"{recovery_rate:2.0%} of the data is explained by the model, which is expected with noise rate: {noise_rate}" + ) # To evaluate how well the original structure is recovered, we calculate the tucker congruence coefficient. - est_A, est_projected_Bs, est_C = tl.parafac2_tensor.apply_parafac2_projections(decomposition)[1] + est_A, est_projected_Bs, est_C = tl.parafac2_tensor.apply_parafac2_projections( + decomposition + )[1] sign = np.sign(est_A) est_A = np.abs(est_A) - est_projected_Bs = sign[:, np.newaxis]*est_projected_Bs + est_projected_Bs = sign[:, np.newaxis] * est_projected_Bs - est_A_normalised = est_A/la.norm(est_A, axis=0) - est_Bs_normalised = [est_B/la.norm(est_B, axis=0) for est_B in est_projected_Bs] - est_C_normalised = est_C/la.norm(est_C, axis=0) + est_A_normalised = est_A / la.norm(est_A, axis=0) + est_Bs_normalised = [est_B / la.norm(est_B, axis=0) for est_B in est_projected_Bs] + est_C_normalised = est_C / la.norm(est_C, axis=0) - B_corr = np.array(est_Bs_normalised).reshape(-1, true_rank).T@np.array(Bs_normalised).reshape(-1, true_rank)/len(est_Bs_normalised) - A_corr = est_A_normalised.T@A_normalised - C_corr = est_C_normalised.T@C_normalised + B_corr = ( + np.array(est_Bs_normalised).reshape(-1, true_rank).T + @ np.array(Bs_normalised).reshape(-1, true_rank) + / len(est_Bs_normalised) + ) + A_corr = est_A_normalised.T @ A_normalised + C_corr = est_C_normalised.T @ C_normalised - corr = A_corr*B_corr*C_corr - permutation = linear_sum_assignment(-corr) # Old versions of scipy does not support maximising, from scipy v1.4, you can pass `corr` and `maximize=True` instead of `-corr` to maximise the sum. + corr = A_corr * B_corr * C_corr + permutation = linear_sum_assignment( + -corr + ) # Old versions of scipy does not support maximising, from scipy v1.4, you can pass `corr` and `maximize=True` instead of `-corr` to maximise the sum. congruence_coefficient = np.mean(corr[permutation]) - print(f'Average tucker congruence coefficient: {congruence_coefficient}') + print(f"Average tucker congruence coefficient: {congruence_coefficient}") @@ -283,62 +303,62 @@ Compute performance metrics .. code-block:: none 91% of the data is explained by the model, which is expected with noise rate: 0.1 - Average tucker congruence coefficient: 0.9947046512423608 + Average tucker congruence coefficient: 0.9945618721597652 -.. GENERATED FROM PYTHON SOURCE LINES 159-161 +.. GENERATED FROM PYTHON SOURCE LINES 178-180 Visualize the components ------------------------ -.. GENERATED FROM PYTHON SOURCE LINES 161-204 +.. GENERATED FROM PYTHON SOURCE LINES 180-223 -.. 
code-block:: default +.. code-block:: Python # Find the best permutation so that we can plot the estimated components on top of the true components - permutation = np.argmax(A_corr*B_corr*C_corr, axis=0) + permutation = np.argmax(A_corr * B_corr * C_corr, axis=0) # Create plots of each component vector for each mode # (We just look at one of the B_i matrices) - fig, axes = plt.subplots(true_rank, 3, figsize=(15, 3*true_rank+1)) - i = 0 # What slice, B_i, we look at for the B mode + fig, axes = plt.subplots(true_rank, 3, figsize=(15, 3 * true_rank + 1)) + i = 0 # What slice, B_i, we look at for the B mode for r in range(true_rank): - + # Plot true and estimated components for mode A - axes[r][0].plot((A_normalised[:, r]), label='True') - axes[r][0].plot((est_A_normalised[:, permutation[r]]),'--', label='Estimated') - + axes[r][0].plot((A_normalised[:, r]), label="True") + axes[r][0].plot((est_A_normalised[:, permutation[r]]), "--", label="Estimated") + # Labels for the different components - axes[r][0].set_ylabel(f'Component {r}') + axes[r][0].set_ylabel(f"Component {r}") # Plot true and estimated components for mode C axes[r][2].plot(C_normalised[:, r]) - axes[r][2].plot(est_C_normalised[:, permutation[r]], '--') + axes[r][2].plot(est_C_normalised[:, permutation[r]], "--") # Plot true components for mode B axes[r][1].plot(Bs_normalised[i][:, r]) - + # Get the signs so that we can flip the B mode factor matrices A_sign = np.sign(est_A_normalised) - + # Plot estimated components for mode B (after sign correction) - axes[r][1].plot(A_sign[i, r]*est_Bs_normalised[i][:, permutation[r]], '--') + axes[r][1].plot(A_sign[i, r] * est_Bs_normalised[i][:, permutation[r]], "--") # Titles for the different modes - axes[0][0].set_title('A mode') - axes[0][2].set_title('C mode') - axes[0][1].set_title(f'B mode (slice {i})') + axes[0][0].set_title("A mode") + axes[0][2].set_title("C mode") + axes[0][1].set_title(f"B mode (slice {i})") - # Create a legend for the entire figure - handles, labels = axes[r][0].get_legend_handles_labels() - fig.legend(handles, labels, loc='upper center', ncol=2) + # Create a legend for the entire figure + handles, labels = axes[r][0].get_legend_handles_labels() + fig.legend(handles, labels, loc="upper center", ncol=2) @@ -354,11 +374,11 @@ Visualize the components .. code-block:: none - + -.. GENERATED FROM PYTHON SOURCE LINES 205-211 +.. GENERATED FROM PYTHON SOURCE LINES 224-230 Inspect the convergence rate ---------------------------- @@ -367,17 +387,20 @@ converged to a stationary point. We skip the first iteration since the initial loss often dominate the rest of the plot, making it difficult to check for convergence. -.. GENERATED FROM PYTHON SOURCE LINES 211-226 +.. GENERATED FROM PYTHON SOURCE LINES 230-247 -.. code-block:: default +.. 
code-block:: Python - loss_fig, loss_ax = plt.subplots(figsize=(9, 9/1.6)) + loss_fig, loss_ax = plt.subplots(figsize=(9, 9 / 1.6)) loss_ax.plot(range(1, len(err)), err[1:]) - loss_ax.set_xlabel('Iteration number') - loss_ax.set_ylabel('Relative reconstruction error') + loss_ax.set_xlabel("Iteration number") + loss_ax.set_ylabel("Relative reconstruction error") mathematical_expression_of_loss = r"$\frac{\left|\left|\hat{\mathcal{X}}\right|\right|_F}{\left|\left|\mathcal{X}\right|\right|_F}$" - loss_ax.set_title(f'Loss plot: {mathematical_expression_of_loss} \n (starting after first iteration)', fontsize=16) + loss_ax.set_title( + f"Loss plot: {mathematical_expression_of_loss} \n (starting after first iteration)", + fontsize=16, + ) xticks = loss_ax.get_xticks() loss_ax.set_xticks([1] + list(xticks[1:])) loss_ax.set_xlim(1, len(err)) @@ -388,7 +411,6 @@ to check for convergence. - .. image-sg:: /auto_examples/decomposition/images/sphx_glr_plot_parafac2_002.png :alt: Loss plot: $\frac{\left|\left|\hat{\mathcal{X}}\right|\right|_F}{\left|\left|\mathcal{X}\right|\right|_F}$ (starting after first iteration) :srcset: /auto_examples/decomposition/images/sphx_glr_plot_parafac2_002.png @@ -398,14 +420,14 @@ to check for convergence. -.. GENERATED FROM PYTHON SOURCE LINES 227-237 +.. GENERATED FROM PYTHON SOURCE LINES 248-258 References ---------- .. _(Kiers et al 1999): -Kiers HA, Ten Berge JM, Bro R. *PARAFAC2—Part I. +Kiers HA, Ten Berge JM, Bro R. *PARAFAC2—Part I. A direct fitting algorithm for the PARAFAC2 model.* **Journal of Chemometrics: A Journal of the Chemometrics Society.** 1999 May;13(3‐4):275-94. `(Online version) @@ -414,7 +436,7 @@ A direct fitting algorithm for the PARAFAC2 model.* .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 34.545 seconds) + **Total running time of the script:** (0 minutes 7.688 seconds) .. _sphx_glr_download_auto_examples_decomposition_plot_parafac2.py: @@ -423,14 +445,17 @@ A direct fitting algorithm for the PARAFAC2 model.* .. container:: sphx-glr-footer sphx-glr-footer-example + .. container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: plot_parafac2.ipynb ` .. container:: sphx-glr-download sphx-glr-download-python :download:`Download Python source code: plot_parafac2.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-zip - :download:`Download Jupyter notebook: plot_parafac2.ipynb ` + :download:`Download zipped: plot_parafac2.zip ` .. only:: html diff --git a/stable/_sources/auto_examples/decomposition/plot_parafac2_compression.rst.txt b/stable/_sources/auto_examples/decomposition/plot_parafac2_compression.rst.txt new file mode 100644 index 000000000..eea00dedc --- /dev/null +++ b/stable/_sources/auto_examples/decomposition/plot_parafac2_compression.rst.txt @@ -0,0 +1,521 @@ + +.. DO NOT EDIT. +.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY. +.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE: +.. "auto_examples/decomposition/plot_parafac2_compression.py" +.. LINE NUMBERS ARE GIVEN BELOW. + +.. only:: html + + .. note:: + :class: sphx-glr-download-link-note + + :ref:`Go to the end ` + to download the full example code. + +.. rst-class:: sphx-glr-example-title + +.. _sphx_glr_auto_examples_decomposition_plot_parafac2_compression.py: + + +Speeding up PARAFAC2 with SVD compression +========================================= + +PARAFAC2 can be very time-consuming to fit. 
However, if the number of rows greatly +exceeds the number of columns or the data matrices are approximately low-rank, we can +compress the data before fitting the PARAFAC2 model to considerably speed up the fitting +procedure. + +The compression works by first computing the SVD of the tensor slices and fitting the +PARAFAC2 model to the right singular vectors multiplied by the singular values. Then, +after we fit the model, we left-multiply the :math:`B_i`-matrices with the left singular +vectors to recover the decompressed model. Fitting to compressed data and then +decompressing is mathematically equivalent to fitting to the original uncompressed data. + +For more information about why this works, see the documentation of +:py:meth:`tensorly.decomposition.preprocessing.svd_compress_tensor_slices`. + +.. GENERATED FROM PYTHON SOURCE LINES 19-26 + +.. code-block:: Python + + + from time import monotonic + import tensorly as tl + from tensorly.decomposition import parafac2 + import tensorly.preprocessing as preprocessing + + + + + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 27-32 + +Function to create synthetic data +--------------------------------- + +Here, we create a function that constructs a random tensor from a PARAFAC2 +decomposition with noise + +.. GENERATED FROM PYTHON SOURCE LINES 32-50 + +.. code-block:: Python + + + rng = tl.check_random_state(0) + + + def create_random_data(shape, rank, noise_level): + I, J, K = shape # noqa: E741 + pf2 = tl.random.random_parafac2( + [(J, K) for i in range(I)], rank=rank, random_state=rng + ) + + X = pf2.to_tensor() + X_norm = [tl.norm(Xi) for Xi in X] + + noise = [rng.standard_normal((J, K)) for i in range(I)] + noise = [noise_level * X_norm[i] / tl.norm(E_i) for i, E_i in enumerate(noise)] + return [X_i + E_i for X_i, E_i in zip(X, noise)] + + + + + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 51-55 + +Compressing data with many rows and few columns +----------------------------------------------- + +Here, we set up for a case where we have many rows compared to columns + +.. GENERATED FROM PYTHON SOURCE LINES 55-63 + +.. code-block:: Python + + + n_inits = 5 + rank = 3 + shape = (10, 10_000, 15) # 10 matrices/tensor slices, each of size 10_000 x 15. + noise_level = 0.33 + + uncompressed_data = create_random_data(shape, rank=rank, noise_level=noise_level) + + + + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 64-70 + +Fitting without compression +^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +As a baseline, we see how long time it takes to fit models without compression. +Since PARAFAC2 is very prone to local minima, we fit five models and select the model +with the lowest reconstruction error. + +.. GENERATED FROM PYTHON SOURCE LINES 70-91 + +.. code-block:: Python + + + print("Fitting PARAFAC2 model without compression...") + t1 = monotonic() + lowest_error = float("inf") + for i in range(n_inits): + pf2, errs = parafac2( + uncompressed_data, + rank, + n_iter_max=1000, + nn_modes=[0], + random_state=rng, + return_errors=True, + ) + if errs[-1] < lowest_error: + pf2_full, errs_full = pf2, errs + t2 = monotonic() + print( + f"It took {t2 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} " + + "without compression" + ) + + + + + +.. rst-class:: sphx-glr-script-out + + .. code-block:: none + + Fitting PARAFAC2 model without compression... + It took 212.8s to fit a PARAFAC2 model a tensor of shape (10, 10000, 15) without compression + + + + +.. 
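Before moving on to the compressed fits, here is a minimal NumPy sketch of the idea
behind the compression (a toy illustration, not the actual
``svd_compress_tensor_slices`` implementation): each tall slice is replaced by the
product of its singular values and right singular vectors, and anything fitted to the
compressed slice can be mapped back by left-multiplying with the left singular
vectors.

.. code-block:: Python

    import numpy as np

    X_i = np.random.standard_normal((10_000, 15))  # one tall tensor slice

    # Economy SVD: U_i is 10_000 x 15, S_i has 15 entries, Vt_i is 15 x 15
    U_i, S_i, Vt_i = np.linalg.svd(X_i, full_matrices=False)

    # The compressed "scores" slice is only 15 x 15
    compressed_slice = S_i[:, np.newaxis] * Vt_i

    # Left-multiplying by U_i recovers the original slice (up to rounding), which is
    # why fitting to the compressed slices and decompressing afterwards is lossless.
    print(np.allclose(U_i @ compressed_slice, X_i))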
GENERATED FROM PYTHON SOURCE LINES 92-103 + +Fitting with lossless compression +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Since the tensor slices have many rows compared to columns, we should be able to save +a lot of time by compressing the data. By compressing the matrices, we only need to +fit the PARAFAC2 model to a set of 10 matrices, each of size 15 x 15, not 10_000 x 15. + +The main bottleneck here is the SVD computation at the beginning of the fitting +procedure, but luckily, this is independent of the initialisations, so we only need +to compute this once. Also, if we are performing a grid search for the rank, then +we just need to perform the compression once for the whole grid search as well. + +.. GENERATED FROM PYTHON SOURCE LINES 103-130 + +.. code-block:: Python + + + print("Fitting PARAFAC2 model with SVD compression...") + t1 = monotonic() + lowest_error = float("inf") + scores, loadings = preprocessing.svd_compress_tensor_slices(uncompressed_data) + t2 = monotonic() + for i in range(n_inits): + pf2, errs = parafac2( + scores, + rank, + n_iter_max=1000, + nn_modes=[0], + random_state=rng, + return_errors=True, + ) + if errs[-1] < lowest_error: + pf2_compressed, errs_compressed = pf2, errs + pf2_decompressed = preprocessing.svd_decompress_parafac2_tensor( + pf2_compressed, loadings + ) + t3 = monotonic() + print( + f"It took {t3 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} " + + "with lossless SVD compression" + ) + print(f"The compression took {t2 - t1:.1f}s and the fitting took {t3 - t2:.1f}s") + + + + + +.. rst-class:: sphx-glr-script-out + + .. code-block:: none + + Fitting PARAFAC2 model with SVD compression... + It took 121.5s to fit a PARAFAC2 model a tensor of shape (10, 10000, 15) with lossless SVD compression + The compression took 0.0s and the fitting took 121.4s + + + + +.. GENERATED FROM PYTHON SOURCE LINES 131-132 + +We see that we saved a lot of time by compressing the data before fitting the model. + +.. GENERATED FROM PYTHON SOURCE LINES 134-141 + +Fitting with lossy compression +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +We can try to speed the process up even further by accepting a slight discrepancy +between the model obtained from compressed data and a model obtained from uncompressed +data. Specifically, we can truncate the singular values at some threshold, essentially +removing the parts of the data matrices that have a very low "signal strength". + +.. GENERATED FROM PYTHON SOURCE LINES 141-170 + +.. code-block:: Python + + + print("Fitting PARAFAC2 model with lossy SVD compression...") + t1 = monotonic() + lowest_error = float("inf") + scores, loadings = preprocessing.svd_compress_tensor_slices(uncompressed_data, 1e-5) + t2 = monotonic() + for i in range(n_inits): + pf2, errs = parafac2( + scores, + rank, + n_iter_max=1000, + nn_modes=[0], + random_state=rng, + return_errors=True, + ) + if errs[-1] < lowest_error: + pf2_compressed_lossy, errs_compressed_lossy = pf2, errs + pf2_decompressed_lossy = preprocessing.svd_decompress_parafac2_tensor( + pf2_compressed_lossy, loadings + ) + t3 = monotonic() + print( + f"It took {t3 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} " + + "with lossy SVD compression" + ) + print( + f"Of which the compression took {t2 - t1:.1f}s and the fitting took {t3 - t2:.1f}s" + ) + + + + + +.. rst-class:: sphx-glr-script-out + + .. code-block:: none + + Fitting PARAFAC2 model with lossy SVD compression... 
+ It took 120.9s to fit a PARAFAC2 model a tensor of shape (10, 10000, 15) with lossy SVD compression + Of which the compression took 0.0s and the fitting took 120.8s + + + + +.. GENERATED FROM PYTHON SOURCE LINES 171-175 + +We see that we didn't save much, if any, time in this case (compared to using +lossless compression). This is because the main bottleneck now is the CP-part of +the PARAFAC2 procedure, so reducing the tensor size from 10 x 15 x 15 to 10 x 4 x 15 +(which is typically what we would get here) will have a negligible effect. + +.. GENERATED FROM PYTHON SOURCE LINES 178-182 + +Compressing data that is approximately low-rank +----------------------------------------------- + +Here, we simulate data with many rows and columns but an approximately low rank. + +.. GENERATED FROM PYTHON SOURCE LINES 182-189 + +.. code-block:: Python + + + rank = 3 + shape = (10, 2_000, 2_000) + noise_level = 0.33 + + uncompressed_data = create_random_data(shape, rank=rank, noise_level=noise_level) + + + + + + + + +.. GENERATED FROM PYTHON SOURCE LINES 190-194 + +Fitting without compression +^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Again, we start by fitting without compression as a baseline. + +.. GENERATED FROM PYTHON SOURCE LINES 194-215 + +.. code-block:: Python + + + print("Fitting PARAFAC2 model without compression...") + t1 = monotonic() + lowest_error = float("inf") + for i in range(n_inits): + pf2, errs = parafac2( + uncompressed_data, + rank, + n_iter_max=1000, + nn_modes=[0], + random_state=rng, + return_errors=True, + ) + if errs[-1] < lowest_error: + pf2_full, errs_full = pf2, errs + t2 = monotonic() + print( + f"It took {t2 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} " + + "without compression" + ) + + + + + +.. rst-class:: sphx-glr-script-out + + .. code-block:: none + + Fitting PARAFAC2 model without compression... + It took 263.0s to fit a PARAFAC2 model a tensor of shape (10, 2000, 2000) without compression + + + + +.. GENERATED FROM PYTHON SOURCE LINES 216-220 + +Fitting with lossless compression +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Next, we fit with lossless compression. + +.. GENERATED FROM PYTHON SOURCE LINES 220-249 + +.. code-block:: Python + + + print("Fitting PARAFAC2 model with SVD compression...") + t1 = monotonic() + lowest_error = float("inf") + scores, loadings = preprocessing.svd_compress_tensor_slices(uncompressed_data) + t2 = monotonic() + for i in range(n_inits): + pf2, errs = parafac2( + scores, + rank, + n_iter_max=1000, + nn_modes=[0], + random_state=rng, + return_errors=True, + ) + if errs[-1] < lowest_error: + pf2_compressed, errs_compressed = pf2, errs + pf2_decompressed = preprocessing.svd_decompress_parafac2_tensor( + pf2_compressed, loadings + ) + t3 = monotonic() + print( + f"It took {t3 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} " + + "with lossless SVD compression" + ) + print( + f"Of which the compression took {t2 - t1:.1f}s and the fitting took {t3 - t2:.1f}s" + ) + + + + + +.. rst-class:: sphx-glr-script-out + + .. code-block:: none + + Fitting PARAFAC2 model with SVD compression... + It took 346.0s to fit a PARAFAC2 model a tensor of shape (10, 2000, 2000) with lossless SVD compression + Of which the compression took 0.0s and the fitting took 346.0s + + + + +.. GENERATED FROM PYTHON SOURCE LINES 250-253 + +We see that the lossless compression no effect for this data. This is because the +number ofrows is equal to the number of columns, so we cannot compress the data +losslessly with the SVD. + +.. 
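The shape argument above can be seen directly from the economy SVD: the compressed
slice has ``min(n_rows, n_columns)`` rows, so it is only smaller than the original
slice when there are more rows than columns. A small illustration with plain NumPy
(again just a sketch, not the library routine):

.. code-block:: Python

    import numpy as np

    for n_rows, n_columns in [(10_000, 15), (2_000, 2_000)]:
        X_i = np.random.standard_normal((n_rows, n_columns))
        U, S, Vt = np.linalg.svd(X_i, full_matrices=False)
        compressed = S[:, np.newaxis] * Vt
        # Tall slices shrink a lot; square slices keep exactly the same shape,
        # so lossless compression cannot help in the square case.
        print(X_i.shape, "->", compressed.shape)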
GENERATED FROM PYTHON SOURCE LINES 255-259 + +Fitting with lossy compression +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Finally, we fit with lossy SVD compression. + +.. GENERATED FROM PYTHON SOURCE LINES 259-289 + +.. code-block:: Python + + + print("Fitting PARAFAC2 model with lossy SVD compression...") + t1 = monotonic() + lowest_error = float("inf") + scores, loadings = preprocessing.svd_compress_tensor_slices(uncompressed_data, 1e-5) + t2 = monotonic() + for i in range(n_inits): + pf2, errs = parafac2( + scores, + rank, + n_iter_max=1000, + nn_modes=[0], + random_state=rng, + return_errors=True, + ) + if errs[-1] < lowest_error: + pf2_compressed_lossy, errs_compressed_lossy = pf2, errs + pf2_decompressed_lossy = preprocessing.svd_decompress_parafac2_tensor( + pf2_compressed_lossy, loadings + ) + t3 = monotonic() + print( + f"It took {t3 - t1:.1f}s to fit a PARAFAC2 model a tensor of shape {shape} " + + "with lossy SVD compression" + ) + print( + f"Of which the compression took {t2 - t1:.1f}s and the fitting took {t3 - t2:.1f}s" + ) + + + + + + +.. rst-class:: sphx-glr-script-out + + .. code-block:: none + + Fitting PARAFAC2 model with lossy SVD compression... + It took 115.5s to fit a PARAFAC2 model a tensor of shape (10, 2000, 2000) with lossy SVD compression + Of which the compression took 12.9s and the fitting took 102.5s + + + + +.. GENERATED FROM PYTHON SOURCE LINES 290-295 + +Here we see a large speedup. This is because the data is approximately low rank so +the compressed tensor slices will have shape R x 2_000, where R is typically below 10 +in this example. If your tensor slices are large in both modes, you might want to plot +the singular values of your dataset to see if lossy compression could speed up +PARAFAC2. + + +.. rst-class:: sphx-glr-timing + + **Total running time of the script:** (19 minutes 40.649 seconds) + + +.. _sphx_glr_download_auto_examples_decomposition_plot_parafac2_compression.py: + +.. only:: html + + .. container:: sphx-glr-footer sphx-glr-footer-example + + .. container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: plot_parafac2_compression.ipynb ` + + .. container:: sphx-glr-download sphx-glr-download-python + + :download:`Download Python source code: plot_parafac2_compression.py ` + + .. container:: sphx-glr-download sphx-glr-download-zip + + :download:`Download zipped: plot_parafac2_compression.zip ` + + +.. only:: html + + .. rst-class:: sphx-glr-signature + + `Gallery generated by Sphinx-Gallery `_ diff --git a/stable/_sources/auto_examples/decomposition/plot_permute_factors.rst.txt b/stable/_sources/auto_examples/decomposition/plot_permute_factors.rst.txt index 4a41ce32c..101cff676 100644 --- a/stable/_sources/auto_examples/decomposition/plot_permute_factors.rst.txt +++ b/stable/_sources/auto_examples/decomposition/plot_permute_factors.rst.txt @@ -10,8 +10,8 @@ .. note:: :class: sphx-glr-download-link-note - Click :ref:`here ` - to download the full example code + :ref:`Go to the end ` + to download the full example code. .. rst-class:: sphx-glr-example-title @@ -34,7 +34,7 @@ Tensorly CPTensor should be used as an input to permute their factors and weight .. GENERATED FROM PYTHON SOURCE LINES 15-21 -.. code-block:: default +.. code-block:: Python import tensorly as tl @@ -57,7 +57,7 @@ Here, we create a random tensor, then we permute its factors manually. .. GENERATED FROM PYTHON SOURCE LINES 25-45 -.. code-block:: default +.. 
code-block:: Python @@ -98,7 +98,7 @@ It should be noted that, reference CPTensor won't be included among the output C .. GENERATED FROM PYTHON SOURCE LINES 53-56 -.. code-block:: default +.. code-block:: Python cp_tensors, permutation = cp_permute_factors(cp_tensor_1, [cp_tensor_2, cp_tensor_3]) @@ -117,7 +117,7 @@ col_order_2 above. .. GENERATED FROM PYTHON SOURCE LINES 59-62 -.. code-block:: default +.. code-block:: Python print(permutation) @@ -142,7 +142,7 @@ before and after permuting. .. GENERATED FROM PYTHON SOURCE LINES 65-75 -.. code-block:: default +.. code-block:: Python fig, axs = plt.subplots(1, 3) @@ -175,7 +175,7 @@ before and after permuting. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.168 seconds) + **Total running time of the script:** (0 minutes 0.158 seconds) .. _sphx_glr_download_auto_examples_decomposition_plot_permute_factors.py: @@ -184,14 +184,17 @@ before and after permuting. .. container:: sphx-glr-footer sphx-glr-footer-example + .. container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: plot_permute_factors.ipynb ` .. container:: sphx-glr-download sphx-glr-download-python :download:`Download Python source code: plot_permute_factors.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-zip - :download:`Download Jupyter notebook: plot_permute_factors.ipynb ` + :download:`Download zipped: plot_permute_factors.zip ` .. only:: html diff --git a/stable/_sources/auto_examples/decomposition/sg_execution_times.rst.txt b/stable/_sources/auto_examples/decomposition/sg_execution_times.rst.txt index 6a970f98f..8d2488e5a 100644 --- a/stable/_sources/auto_examples/decomposition/sg_execution_times.rst.txt +++ b/stable/_sources/auto_examples/decomposition/sg_execution_times.rst.txt @@ -3,20 +3,53 @@ .. 
_sphx_glr_auto_examples_decomposition_sg_execution_times: + Computation times ================= -**09:45.651** total execution time for **auto_examples_decomposition** files: - -+---------------------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_decomposition_plot_nn_cp_hals.py` (``plot_nn_cp_hals.py``) | 08:56.458 | 0.0 MB | -+---------------------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_decomposition_plot_parafac2.py` (``plot_parafac2.py``) | 00:34.545 | 0.0 MB | -+---------------------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_decomposition_plot_cp_line_search.py` (``plot_cp_line_search.py``) | 00:08.314 | 0.0 MB | -+---------------------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_decomposition_plot_guide_for_constrained_cp.py` (``plot_guide_for_constrained_cp.py``) | 00:04.068 | 0.0 MB | -+---------------------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_decomposition_plot_nn_tucker.py` (``plot_nn_tucker.py``) | 00:02.098 | 0.0 MB | -+---------------------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_decomposition_plot_permute_factors.py` (``plot_permute_factors.py``) | 00:00.168 | 0.0 MB | -+---------------------------------------------------------------------------------------------------------------------+-----------+--------+ +**23:24.575** total execution time for 7 files **from auto_examples/decomposition**: + +.. container:: + + .. raw:: html + + + + + + + + .. list-table:: + :header-rows: 1 + :class: table table-striped sg-datatable + + * - Example + - Time + - Mem (MB) + * - :ref:`sphx_glr_auto_examples_decomposition_plot_parafac2_compression.py` (``plot_parafac2_compression.py``) + - 19:40.649 + - 0.0 + * - :ref:`sphx_glr_auto_examples_decomposition_plot_nn_cp_hals.py` (``plot_nn_cp_hals.py``) + - 03:20.624 + - 0.0 + * - :ref:`sphx_glr_auto_examples_decomposition_plot_nn_tucker.py` (``plot_nn_tucker.py``) + - 00:08.372 + - 0.0 + * - :ref:`sphx_glr_auto_examples_decomposition_plot_parafac2.py` (``plot_parafac2.py``) + - 00:07.688 + - 0.0 + * - :ref:`sphx_glr_auto_examples_decomposition_plot_cp_line_search.py` (``plot_cp_line_search.py``) + - 00:03.835 + - 0.0 + * - :ref:`sphx_glr_auto_examples_decomposition_plot_guide_for_constrained_cp.py` (``plot_guide_for_constrained_cp.py``) + - 00:03.250 + - 0.0 + * - :ref:`sphx_glr_auto_examples_decomposition_plot_permute_factors.py` (``plot_permute_factors.py``) + - 00:00.158 + - 0.0 diff --git a/stable/_sources/auto_examples/index.rst.txt b/stable/_sources/auto_examples/index.rst.txt index 16b798da5..92a25b65d 100644 --- a/stable/_sources/auto_examples/index.rst.txt +++ b/stable/_sources/auto_examples/index.rst.txt @@ -18,6 +18,7 @@ Examples of tensor usage.
    +.. thumbnail-parent-div-open .. raw:: html @@ -26,7 +27,7 @@ Examples of tensor usage. .. only:: html .. image:: /auto_examples/images/thumb/sphx_glr_plot_tensor_thumb.png - :alt: Basic tensor operations + :alt: :ref:`sphx_glr_auto_examples_plot_tensor.py` @@ -36,6 +37,8 @@ Examples of tensor usage.
    +.. thumbnail-parent-div-close + .. raw:: html
    @@ -57,15 +60,16 @@ See how you can use TensorLy on practical applications and datasets.
    +.. thumbnail-parent-div-open .. raw:: html -
    +
    .. only:: html .. image:: /auto_examples/applications/images/thumb/sphx_glr_plot_image_compression_thumb.png - :alt: Image compression via tensor decomposition + :alt: :ref:`sphx_glr_auto_examples_applications_plot_image_compression.py` @@ -77,12 +81,12 @@ See how you can use TensorLy on practical applications and datasets. .. raw:: html -
    +
    .. only:: html .. image:: /auto_examples/applications/images/thumb/sphx_glr_plot_IL2_thumb.png - :alt: Non-negative PARAFAC Decomposition of IL-2 Response Data + :alt: :ref:`sphx_glr_auto_examples_applications_plot_IL2.py` @@ -99,7 +103,7 @@ See how you can use TensorLy on practical applications and datasets. .. only:: html .. image:: /auto_examples/applications/images/thumb/sphx_glr_plot_covid_thumb.png - :alt: COVID-19 Serology Dataset Analysis with CP + :alt: :ref:`sphx_glr_auto_examples_applications_plot_covid.py` @@ -109,6 +113,8 @@ See how you can use TensorLy on practical applications and datasets.
    +.. thumbnail-parent-div-close + .. raw:: html
    @@ -123,6 +129,7 @@ Tensor decomposition
    +.. thumbnail-parent-div-open .. raw:: html @@ -131,7 +138,7 @@ Tensor decomposition .. only:: html .. image:: /auto_examples/decomposition/images/thumb/sphx_glr_plot_permute_factors_thumb.png - :alt: Permuting CP factors + :alt: :ref:`sphx_glr_auto_examples_decomposition_plot_permute_factors.py` @@ -143,12 +150,12 @@ Tensor decomposition .. raw:: html -
    +
    .. only:: html .. image:: /auto_examples/decomposition/images/thumb/sphx_glr_plot_cp_line_search_thumb.png - :alt: Using line search with PARAFAC + :alt: :ref:`sphx_glr_auto_examples_decomposition_plot_cp_line_search.py` @@ -165,7 +172,7 @@ Tensor decomposition .. only:: html .. image:: /auto_examples/decomposition/images/thumb/sphx_glr_plot_guide_for_constrained_cp_thumb.png - :alt: Constrained CP decomposition in Tensorly >=0.7 + :alt: :ref:`sphx_glr_auto_examples_decomposition_plot_guide_for_constrained_cp.py` @@ -182,7 +189,7 @@ Tensor decomposition .. only:: html .. image:: /auto_examples/decomposition/images/thumb/sphx_glr_plot_nn_tucker_thumb.png - :alt: Non-negative Tucker decomposition + :alt: :ref:`sphx_glr_auto_examples_decomposition_plot_nn_tucker.py` @@ -199,7 +206,7 @@ Tensor decomposition .. only:: html .. image:: /auto_examples/decomposition/images/thumb/sphx_glr_plot_nn_cp_hals_thumb.png - :alt: Non-negative CP decomposition in Tensorly >=0.6 + :alt: :ref:`sphx_glr_auto_examples_decomposition_plot_nn_cp_hals.py` @@ -209,6 +216,23 @@ Tensor decomposition
    +.. raw:: html + +
    + +.. only:: html + + .. image:: /auto_examples/decomposition/images/thumb/sphx_glr_plot_parafac2_compression_thumb.png + :alt: + + :ref:`sphx_glr_auto_examples_decomposition_plot_parafac2_compression.py` + +.. raw:: html + +
    Speeding up PARAFAC2 with SVD compression
    +
    + + .. raw:: html
    @@ -216,7 +240,7 @@ Tensor decomposition .. only:: html .. image:: /auto_examples/decomposition/images/thumb/sphx_glr_plot_parafac2_thumb.png - :alt: Demonstration of PARAFAC2 + :alt: :ref:`sphx_glr_auto_examples_decomposition_plot_parafac2.py` @@ -226,6 +250,8 @@ Tensor decomposition
    +.. thumbnail-parent-div-close + .. raw:: html
    @@ -240,15 +266,16 @@ Tensor regression with tensorly
    +.. thumbnail-parent-div-open .. raw:: html -
    +
    .. only:: html .. image:: /auto_examples/regression/images/thumb/sphx_glr_plot_cp_regression_thumb.png - :alt: CP tensor regression + :alt: :ref:`sphx_glr_auto_examples_regression_plot_cp_regression.py` @@ -260,12 +287,12 @@ Tensor regression with tensorly .. raw:: html -
    +
    .. only:: html .. image:: /auto_examples/regression/images/thumb/sphx_glr_plot_tucker_regression_thumb.png - :alt: Tucker tensor regression + :alt: :ref:`sphx_glr_auto_examples_regression_plot_tucker_regression.py` @@ -275,6 +302,8 @@ Tensor regression with tensorly
    +.. thumbnail-parent-div-close + .. raw:: html
    @@ -284,6 +313,7 @@ Tensor regression with tensorly :hidden: :includehidden: + /auto_examples/applications/index.rst /auto_examples/decomposition/index.rst /auto_examples/regression/index.rst diff --git a/stable/_sources/auto_examples/plot_tensor.rst.txt b/stable/_sources/auto_examples/plot_tensor.rst.txt index edd6bfc66..a9e6169df 100644 --- a/stable/_sources/auto_examples/plot_tensor.rst.txt +++ b/stable/_sources/auto_examples/plot_tensor.rst.txt @@ -10,8 +10,8 @@ .. note:: :class: sphx-glr-download-link-note - Click :ref:`here ` - to download the full example code + :ref:`Go to the end ` + to download the full example code. .. rst-class:: sphx-glr-example-title @@ -23,9 +23,9 @@ Basic tensor operations Example on how to use :mod:`tensorly` to perform basic tensor operations. -.. GENERATED FROM PYTHON SOURCE LINES 9-13 +.. GENERATED FROM PYTHON SOURCE LINES 8-12 -.. code-block:: default +.. code-block:: Python import numpy as np import tensorly as tl @@ -38,16 +38,16 @@ Example on how to use :mod:`tensorly` to perform basic tensor operations. -.. GENERATED FROM PYTHON SOURCE LINES 14-15 +.. GENERATED FROM PYTHON SOURCE LINES 13-14 A tensor is simply a numpy array -.. GENERATED FROM PYTHON SOURCE LINES 15-18 +.. GENERATED FROM PYTHON SOURCE LINES 14-17 -.. code-block:: default +.. code-block:: Python tensor = tl.tensor(np.arange(24).reshape((3, 4, 2))) - print('* original tensor:\n{}'.format(tensor)) + print(f"* original tensor:\n{tensor}") @@ -76,16 +76,16 @@ A tensor is simply a numpy array -.. GENERATED FROM PYTHON SOURCE LINES 19-20 +.. GENERATED FROM PYTHON SOURCE LINES 18-19 Unfolding a tensor is easy -.. GENERATED FROM PYTHON SOURCE LINES 20-23 +.. GENERATED FROM PYTHON SOURCE LINES 19-22 -.. code-block:: default +.. code-block:: Python for mode in range(tensor.ndim): - print('* mode-{} unfolding:\n{}'.format(mode, tl.unfold(tensor, mode))) + print(f"* mode-{mode} unfolding:\n{tl.unfold(tensor, mode)}") @@ -111,13 +111,13 @@ Unfolding a tensor is easy -.. GENERATED FROM PYTHON SOURCE LINES 24-25 +.. GENERATED FROM PYTHON SOURCE LINES 23-24 Re-folding the tensor is as easy: -.. GENERATED FROM PYTHON SOURCE LINES 25-29 +.. GENERATED FROM PYTHON SOURCE LINES 24-28 -.. code-block:: default +.. code-block:: Python for mode in range(tensor.ndim): unfolding = tl.unfold(tensor, mode) @@ -133,7 +133,7 @@ Re-folding the tensor is as easy: .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.006 seconds) + **Total running time of the script:** (0 minutes 0.005 seconds) .. _sphx_glr_download_auto_examples_plot_tensor.py: @@ -142,14 +142,17 @@ Re-folding the tensor is as easy: .. container:: sphx-glr-footer sphx-glr-footer-example + .. container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: plot_tensor.ipynb ` .. container:: sphx-glr-download sphx-glr-download-python :download:`Download Python source code: plot_tensor.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-zip - :download:`Download Jupyter notebook: plot_tensor.ipynb ` + :download:`Download zipped: plot_tensor.zip ` .. only:: html diff --git a/stable/_sources/auto_examples/regression/index.rst.txt b/stable/_sources/auto_examples/regression/index.rst.txt index bbc2c172a..ce942ca21 100644 --- a/stable/_sources/auto_examples/regression/index.rst.txt +++ b/stable/_sources/auto_examples/regression/index.rst.txt @@ -12,15 +12,16 @@ Tensor regression with tensorly
    +.. thumbnail-parent-div-open .. raw:: html -
    +
    .. only:: html .. image:: /auto_examples/regression/images/thumb/sphx_glr_plot_cp_regression_thumb.png - :alt: CP tensor regression + :alt: :ref:`sphx_glr_auto_examples_regression_plot_cp_regression.py` @@ -32,12 +33,12 @@ Tensor regression with tensorly .. raw:: html -
    +
    .. only:: html .. image:: /auto_examples/regression/images/thumb/sphx_glr_plot_tucker_regression_thumb.png - :alt: Tucker tensor regression + :alt: :ref:`sphx_glr_auto_examples_regression_plot_tucker_regression.py` @@ -47,6 +48,8 @@ Tensor regression with tensorly
    +.. thumbnail-parent-div-close + .. raw:: html
    diff --git a/stable/_sources/auto_examples/regression/plot_cp_regression.rst.txt b/stable/_sources/auto_examples/regression/plot_cp_regression.rst.txt index e8d72494e..51d95e94c 100644 --- a/stable/_sources/auto_examples/regression/plot_cp_regression.rst.txt +++ b/stable/_sources/auto_examples/regression/plot_cp_regression.rst.txt @@ -10,8 +10,8 @@ .. note:: :class: sphx-glr-download-link-note - Click :ref:`here ` - to download the full example code + :ref:`Go to the end ` + to download the full example code. .. rst-class:: sphx-glr-example-title @@ -23,7 +23,7 @@ CP tensor regression Example on how to use :class:`tensorly.regression.cp_regression.CPRegressor` to perform tensor regression. -.. GENERATED FROM PYTHON SOURCE LINES 7-66 +.. GENERATED FROM PYTHON SOURCE LINES 7-74 @@ -36,7 +36,7 @@ Example on how to use :class:`tensorly.regression.cp_regression.CPRegressor` to -.. code-block:: default +.. code-block:: Python import matplotlib.pyplot as plt @@ -49,7 +49,7 @@ Example on how to use :class:`tensorly.regression.cp_regression.CPRegressor` to image_height = 25 image_width = 25 # shape of the images - patterns = ['rectangle', 'swiss', 'circle'] + patterns = ["rectangle", "swiss", "circle"] # ranks to test ranks = [1, 2, 3, 4, 5] @@ -67,33 +67,41 @@ Example on how to use :class:`tensorly.regression.cp_regression.CPRegressor` to for i, pattern in enumerate(patterns): # Generate the original image - weight_img = gen_image(region=pattern, image_height=image_height, image_width=image_width) + weight_img = gen_image( + region=pattern, image_height=image_height, image_width=image_width + ) weight_img = tl.tensor(weight_img) # Generate the labels y = tl.dot(partial_tensor_to_vec(X, skip_begin=1), tensor_to_vec(weight_img)) # Plot the original weights - ax = fig.add_subplot(n_rows, n_columns, i*n_columns + 1) - ax.imshow(tl.to_numpy(weight_img), cmap=plt.cm.OrRd, interpolation='nearest') + ax = fig.add_subplot(n_rows, n_columns, i * n_columns + 1) + ax.imshow(tl.to_numpy(weight_img), cmap=plt.cm.OrRd, interpolation="nearest") ax.set_axis_off() if i == 0: - ax.set_title('Original\nweights') + ax.set_title("Original\nweights") for j, rank in enumerate(ranks): # Create a tensor Regressor estimator - estimator = CPRegressor(weight_rank=rank, tol=10e-7, n_iter_max=100, reg_W=1, verbose=0) + estimator = CPRegressor( + weight_rank=rank, tol=10e-7, n_iter_max=100, reg_W=1, verbose=0 + ) # Fit the estimator to the data estimator.fit(X, y) - ax = fig.add_subplot(n_rows, n_columns, i*n_columns + j + 2) - ax.imshow(tl.to_numpy(estimator.weight_tensor_), cmap=plt.cm.OrRd, interpolation='nearest') + ax = fig.add_subplot(n_rows, n_columns, i * n_columns + j + 2) + ax.imshow( + tl.to_numpy(estimator.weight_tensor_), + cmap=plt.cm.OrRd, + interpolation="nearest", + ) ax.set_axis_off() if i == 0: - ax.set_title('Learned\nrank = {}'.format(rank)) + ax.set_title(f"Learned\nrank = {rank}") plt.suptitle("CP tensor regression") plt.show() @@ -101,7 +109,7 @@ Example on how to use :class:`tensorly.regression.cp_regression.CPRegressor` to .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 4.491 seconds) + **Total running time of the script:** (0 minutes 4.573 seconds) .. _sphx_glr_download_auto_examples_regression_plot_cp_regression.py: @@ -110,14 +118,17 @@ Example on how to use :class:`tensorly.regression.cp_regression.CPRegressor` to .. container:: sphx-glr-footer sphx-glr-footer-example + .. 
container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: plot_cp_regression.ipynb ` .. container:: sphx-glr-download sphx-glr-download-python :download:`Download Python source code: plot_cp_regression.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-zip - :download:`Download Jupyter notebook: plot_cp_regression.ipynb ` + :download:`Download zipped: plot_cp_regression.zip ` .. only:: html diff --git a/stable/_sources/auto_examples/regression/plot_tucker_regression.rst.txt b/stable/_sources/auto_examples/regression/plot_tucker_regression.rst.txt index 2f45308e0..d20b19623 100644 --- a/stable/_sources/auto_examples/regression/plot_tucker_regression.rst.txt +++ b/stable/_sources/auto_examples/regression/plot_tucker_regression.rst.txt @@ -10,8 +10,8 @@ .. note:: :class: sphx-glr-download-link-note - Click :ref:`here ` - to download the full example code + :ref:`Go to the end ` + to download the full example code. .. rst-class:: sphx-glr-example-title @@ -23,7 +23,7 @@ Tucker tensor regression Example on how to use :class:`tensorly.regression.tucker_regression.TuckerRegressor` to perform tensor regression. -.. GENERATED FROM PYTHON SOURCE LINES 7-68 +.. GENERATED FROM PYTHON SOURCE LINES 7-76 @@ -63,7 +63,7 @@ Example on how to use :class:`tensorly.regression.tucker_regression.TuckerRegres | -.. code-block:: default +.. code-block:: Python import matplotlib.pyplot as plt @@ -76,7 +76,7 @@ Example on how to use :class:`tensorly.regression.tucker_regression.TuckerRegres image_height = 25 image_width = 25 # shape of the images - patterns = ['rectangle', 'swiss', 'circle'] + patterns = ["rectangle", "swiss", "circle"] # ranks to test ranks = [1, 2, 3, 4, 5] @@ -92,37 +92,45 @@ Example on how to use :class:`tensorly.regression.tucker_regression.TuckerRegres for i, pattern in enumerate(patterns): - print('fitting pattern n.{}'.format(i)) + print(f"fitting pattern n.{i}") # Generate the original image - weight_img = gen_image(region=pattern, image_height=image_height, image_width=image_width) + weight_img = gen_image( + region=pattern, image_height=image_height, image_width=image_width + ) weight_img = tl.tensor(weight_img) # Generate the labels y = tl.dot(partial_tensor_to_vec(X, skip_begin=1), tensor_to_vec(weight_img)) # Plot the original weights - ax = fig.add_subplot(n_rows, n_columns, i*n_columns + 1) - ax.imshow(tl.to_numpy(weight_img), cmap=plt.cm.OrRd, interpolation='nearest') + ax = fig.add_subplot(n_rows, n_columns, i * n_columns + 1) + ax.imshow(tl.to_numpy(weight_img), cmap=plt.cm.OrRd, interpolation="nearest") ax.set_axis_off() if i == 0: - ax.set_title('Original\nweights') + ax.set_title("Original\nweights") for j, rank in enumerate(ranks): - print('fitting for rank = {}'.format(rank)) + print(f"fitting for rank = {rank}") # Create a tensor Regressor estimator - estimator = TuckerRegressor(weight_ranks=[rank, rank], tol=10e-7, n_iter_max=100, reg_W=1, verbose=0) + estimator = TuckerRegressor( + weight_ranks=[rank, rank], tol=10e-7, n_iter_max=100, reg_W=1, verbose=0 + ) # Fit the estimator to the data estimator.fit(X, y) - ax = fig.add_subplot(n_rows, n_columns, i*n_columns + j + 2) - ax.imshow(tl.to_numpy(estimator.weight_tensor_), cmap=plt.cm.OrRd, interpolation='nearest') + ax = fig.add_subplot(n_rows, n_columns, i * n_columns + j + 2) + ax.imshow( + tl.to_numpy(estimator.weight_tensor_), + cmap=plt.cm.OrRd, + interpolation="nearest", + ) ax.set_axis_off() if i == 0: - 
ax.set_title('Learned\nrank = {}'.format(rank)) + ax.set_title(f"Learned\nrank = {rank}") plt.suptitle("Tucker tensor regression") plt.show() @@ -130,7 +138,7 @@ Example on how to use :class:`tensorly.regression.tucker_regression.TuckerRegres .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 1.130 seconds) + **Total running time of the script:** (0 minutes 1.173 seconds) .. _sphx_glr_download_auto_examples_regression_plot_tucker_regression.py: @@ -139,14 +147,17 @@ Example on how to use :class:`tensorly.regression.tucker_regression.TuckerRegres .. container:: sphx-glr-footer sphx-glr-footer-example + .. container:: sphx-glr-download sphx-glr-download-jupyter + + :download:`Download Jupyter notebook: plot_tucker_regression.ipynb ` .. container:: sphx-glr-download sphx-glr-download-python :download:`Download Python source code: plot_tucker_regression.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-zip - :download:`Download Jupyter notebook: plot_tucker_regression.ipynb ` + :download:`Download zipped: plot_tucker_regression.zip ` .. only:: html diff --git a/stable/_sources/auto_examples/regression/sg_execution_times.rst.txt b/stable/_sources/auto_examples/regression/sg_execution_times.rst.txt index 68e19c93e..229ea6801 100644 --- a/stable/_sources/auto_examples/regression/sg_execution_times.rst.txt +++ b/stable/_sources/auto_examples/regression/sg_execution_times.rst.txt @@ -3,12 +3,38 @@ .. _sphx_glr_auto_examples_regression_sg_execution_times: + Computation times ================= -**00:05.621** total execution time for **auto_examples_regression** files: +**00:05.746** total execution time for 2 files **from auto_examples/regression**: + +.. container:: + + .. raw:: html + + + + + + + + .. list-table:: + :header-rows: 1 + :class: table table-striped sg-datatable -+----------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_regression_plot_cp_regression.py` (``plot_cp_regression.py``) | 00:04.491 | 0.0 MB | -+----------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_regression_plot_tucker_regression.py` (``plot_tucker_regression.py``) | 00:01.130 | 0.0 MB | -+----------------------------------------------------------------------------------------------------+-----------+--------+ + * - Example + - Time + - Mem (MB) + * - :ref:`sphx_glr_auto_examples_regression_plot_cp_regression.py` (``plot_cp_regression.py``) + - 00:04.573 + - 0.0 + * - :ref:`sphx_glr_auto_examples_regression_plot_tucker_regression.py` (``plot_tucker_regression.py``) + - 00:01.173 + - 0.0 diff --git a/stable/_sources/auto_examples/sg_execution_times.rst.txt b/stable/_sources/auto_examples/sg_execution_times.rst.txt index ee46d28cc..9f079b605 100644 --- a/stable/_sources/auto_examples/sg_execution_times.rst.txt +++ b/stable/_sources/auto_examples/sg_execution_times.rst.txt @@ -3,10 +3,35 @@ .. _sphx_glr_auto_examples_sg_execution_times: + Computation times ================= -**00:00.006** total execution time for **auto_examples** files: +**00:00.005** total execution time for 1 file **from auto_examples**: + +.. container:: + + .. raw:: html + + + + + + + + .. 
list-table:: + :header-rows: 1 + :class: table table-striped sg-datatable -+-------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_plot_tensor.py` (``plot_tensor.py``) | 00:00.006 | 0.0 MB | -+-------------------------------------------------------------------+-----------+--------+ + * - Example + - Time + - Mem (MB) + * - :ref:`sphx_glr_auto_examples_plot_tensor.py` (``plot_tensor.py``) + - 00:00.005 + - 0.0 diff --git a/stable/_sources/development_guide/backend_system.rst.txt b/stable/_sources/development_guide/backend_system.rst.txt index c2cc87a78..933e0f7fc 100644 --- a/stable/_sources/development_guide/backend_system.rst.txt +++ b/stable/_sources/development_guide/backend_system.rst.txt @@ -3,7 +3,7 @@ Backend System ============== The TensorLy backend system allows for switching between multiple backends in -a thread-local way. You can obtain the back that is currently being used with the +a thread-local way. You can obtain the backend that is currently being used with the ``get_backend()`` function:: >>> import tensorly as tl @@ -23,7 +23,7 @@ from the thread that spawned them (which is typically the main thread). Globally setting the backend supports interactive usage. Additionally, we provide a context manager ``backend_context`` -for convenience, whcih may be used to +for convenience, which may be used to safely use a backend only for limited context:: >>> with tl.backend_context('pytorch'): diff --git a/stable/_sources/development_guide/contributing.rst.txt b/stable/_sources/development_guide/contributing.rst.txt index a3dd5949a..17c8c0d3d 100644 --- a/stable/_sources/development_guide/contributing.rst.txt +++ b/stable/_sources/development_guide/contributing.rst.txt @@ -34,7 +34,7 @@ To contribute code to the TensorLy code-base, you must ensure compatibility with .. important:: We want algorithms to run transparently with all the TensorLy backends - (NumPy, MXNet, PyTorch, TensorLy, JAX, CuPy) and any other backend added later on! + (NumPy, PyTorch, TensorLy, JAX, CuPy, Paddle) and any other backend added later on! This means you should only use TensorLy functions, never directly a function from the backend e.g. use ``tl.mean``, **not** ``numpy.mean`` or ``torch.mean``. @@ -68,7 +68,7 @@ Practically, **use the wrapped functions**. For instance: The reason is that you do not want your code to be restricted to any of the backends. -You might be using NumPy but another user might be using MXNet and calling a NumPy function on an MXNet NDArray will most likely fail. +You might be using NumPy but another user might be using JAX and calling a NumPy function on an JAX NDArray will most likely fail. Context of a tensor @@ -88,7 +88,7 @@ Check-out the page on :doc:`../user_guide/backend` for more on this. Index assignment ("NumPy style") ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -In NumPy, PyTorch and MXNet, you can combined indexing and assignment in a convenient way, +In NumPy and PyTorch, you can combined indexing and assignment in a convenient way, e.g. if you have a tensor `t`, you can update its values for given indices using the expression ``t[indices] = values``. 
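A minimal backend-agnostic sketch of the same update, assuming ``tl.index_update`` and ``tl.index`` are available (as in recent TensorLy releases); the shapes and values are arbitrary::

    >>> import tensorly as tl
    >>> t = tl.zeros((3, 4))
    >>> # Instead of t[0, :] = 1, which relies on NumPy-style in-place assignment,
    >>> # build an updated tensor so the same code runs on every backend.
    >>> t = tl.index_update(t, tl.index[0, :], 1.0)
    >>> m = tl.mean(t)  # likewise, prefer tl.mean over numpy.mean or torch.mean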
diff --git a/stable/_sources/modules/api.rst.txt b/stable/_sources/modules/api.rst.txt index 24e2f1979..497bbf318 100644 --- a/stable/_sources/modules/api.rst.txt +++ b/stable/_sources/modules/api.rst.txt @@ -5,7 +5,7 @@ API reference Unified backend interface (:mod:`tensorly`) =========================================== -There are several libraries for multi-dimensional array computation, including NumPy, PyTorch, MXNet, TensorFlow, JAX and CuPy. +There are several libraries for multi-dimensional array computation, including NumPy, PyTorch, TensorFlow, JAX, CuPy and Paddle. They all have strenghs and weaknesses, e.g. some are better on CPU, some better on GPU etc. Therefore, in TensorLy we enable you to use our algorithm (and any code you write using the library), with any of these libraries. @@ -290,14 +290,14 @@ Other tensor algebraic functionalities: kronecker mode_dot multi_mode_dot - proximal.soft_thresholding - proximal.svd_thresholding - proximal.procrustes inner outer batched_outer tensordot higher_order_moment + proximal.soft_thresholding + proximal.svd_thresholding + proximal.procrustes Tensor Algebra Backend ---------------------- @@ -376,6 +376,23 @@ Functions constrained_parafac +Preprocessing (:mod:`tensorly.preprocessing`) +============================================= + +.. automodule:: tensorly.preprocessing + :no-members: + :no-inherited-members: + +.. currentmodule:: tensorly.preprocessing + +.. autosummary:: + :toctree: generated/ + :template: function.rst + + svd_compress_tensor_slices + svd_decompress_parafac2_tensor + + Tensor Regression (:mod:`tensorly.regression`) ============================================== @@ -394,6 +411,26 @@ Tensor Regression (:mod:`tensorly.regression`) CP_PLSR +Solvers (:mod:`tensorly.solvers`) +================================= + +Tensorly provides with efficient solvers for nonnegative least squares problems which are crucial to nonnegative tensor decomposition, as well as a generic admm solver useful for constrained decompositions. Several proximal (projection) operators are located in tenalg. + +.. automodule:: tensorly.solvers + :no-members: + :no-inherited-members: + +.. currentmodule:: tensorly.solvers + +.. autosummary:: + :toctree: generated/ + :template: function.rst + + nnls.hals_nnls + nnls.fista + nnls.active_set_nnls + admm.admm + Performance measures (:mod:`tensorly.metrics`) ============================================== @@ -508,5 +545,3 @@ Currently, the following decomposition methods are supported (for the NumPy back sparse.decomposition.parafac sparse.decomposition.non_negative_parafac sparse.decomposition.symmetric_parafac_power_iteration - - diff --git a/stable/_sources/modules/generated/tensorly.preprocessing.svd_compress_tensor_slices.rst.txt b/stable/_sources/modules/generated/tensorly.preprocessing.svd_compress_tensor_slices.rst.txt new file mode 100644 index 000000000..da4f872b7 --- /dev/null +++ b/stable/_sources/modules/generated/tensorly.preprocessing.svd_compress_tensor_slices.rst.txt @@ -0,0 +1,10 @@ +:mod:`tensorly.preprocessing`.svd_compress_tensor_slices +===================================================================== + +.. currentmodule:: tensorly.preprocessing + +.. autofunction:: svd_compress_tensor_slices + +.. raw:: html + +
    \ No newline at end of file diff --git a/stable/_sources/modules/generated/tensorly.preprocessing.svd_decompress_parafac2_tensor.rst.txt b/stable/_sources/modules/generated/tensorly.preprocessing.svd_decompress_parafac2_tensor.rst.txt new file mode 100644 index 000000000..52639bae2 --- /dev/null +++ b/stable/_sources/modules/generated/tensorly.preprocessing.svd_decompress_parafac2_tensor.rst.txt @@ -0,0 +1,10 @@ +:mod:`tensorly.preprocessing`.svd_decompress_parafac2_tensor +========================================================================= + +.. currentmodule:: tensorly.preprocessing + +.. autofunction:: svd_decompress_parafac2_tensor + +.. raw:: html + +
    \ No newline at end of file diff --git a/stable/_sources/modules/generated/tensorly.solvers.admm.admm.rst.txt b/stable/_sources/modules/generated/tensorly.solvers.admm.admm.rst.txt new file mode 100644 index 000000000..cc1a1e439 --- /dev/null +++ b/stable/_sources/modules/generated/tensorly.solvers.admm.admm.rst.txt @@ -0,0 +1,10 @@ +:mod:`tensorly.solvers.admm`.admm +============================================== + +.. currentmodule:: tensorly.solvers.admm + +.. autofunction:: admm + +.. raw:: html + +
    \ No newline at end of file diff --git a/stable/_sources/modules/generated/tensorly.solvers.nnls.active_set_nnls.rst.txt b/stable/_sources/modules/generated/tensorly.solvers.nnls.active_set_nnls.rst.txt new file mode 100644 index 000000000..660213b76 --- /dev/null +++ b/stable/_sources/modules/generated/tensorly.solvers.nnls.active_set_nnls.rst.txt @@ -0,0 +1,10 @@ +:mod:`tensorly.solvers.nnls`.active_set_nnls +========================================================= + +.. currentmodule:: tensorly.solvers.nnls + +.. autofunction:: active_set_nnls + +.. raw:: html + +
    \ No newline at end of file diff --git a/stable/_sources/modules/generated/tensorly.solvers.nnls.fista.rst.txt b/stable/_sources/modules/generated/tensorly.solvers.nnls.fista.rst.txt new file mode 100644 index 000000000..36331677a --- /dev/null +++ b/stable/_sources/modules/generated/tensorly.solvers.nnls.fista.rst.txt @@ -0,0 +1,10 @@ +:mod:`tensorly.solvers.nnls`.fista +=============================================== + +.. currentmodule:: tensorly.solvers.nnls + +.. autofunction:: fista + +.. raw:: html + +
    \ No newline at end of file diff --git a/stable/_sources/modules/generated/tensorly.solvers.nnls.hals_nnls.rst.txt b/stable/_sources/modules/generated/tensorly.solvers.nnls.hals_nnls.rst.txt new file mode 100644 index 000000000..b2e31cc4f --- /dev/null +++ b/stable/_sources/modules/generated/tensorly.solvers.nnls.hals_nnls.rst.txt @@ -0,0 +1,10 @@ +:mod:`tensorly.solvers.nnls`.hals_nnls +=================================================== + +.. currentmodule:: tensorly.solvers.nnls + +.. autofunction:: hals_nnls + +.. raw:: html + +
    \ No newline at end of file diff --git a/stable/_sources/sg_execution_times.rst.txt b/stable/_sources/sg_execution_times.rst.txt new file mode 100644 index 000000000..2cf3a7be5 --- /dev/null +++ b/stable/_sources/sg_execution_times.rst.txt @@ -0,0 +1,73 @@ + +:orphan: + +.. _sphx_glr_sg_execution_times: + + +Computation times +================= +**23:36.587** total execution time for 13 files **from all galleries**: + +.. container:: + + .. raw:: html + + + + + + + + .. list-table:: + :header-rows: 1 + :class: table table-striped sg-datatable + + * - Example + - Time + - Mem (MB) + * - :ref:`sphx_glr_auto_examples_decomposition_plot_parafac2_compression.py` (``../examples/decomposition/plot_parafac2_compression.py``) + - 19:40.649 + - 0.0 + * - :ref:`sphx_glr_auto_examples_decomposition_plot_nn_cp_hals.py` (``../examples/decomposition/plot_nn_cp_hals.py``) + - 03:20.624 + - 0.0 + * - :ref:`sphx_glr_auto_examples_decomposition_plot_nn_tucker.py` (``../examples/decomposition/plot_nn_tucker.py``) + - 00:08.372 + - 0.0 + * - :ref:`sphx_glr_auto_examples_decomposition_plot_parafac2.py` (``../examples/decomposition/plot_parafac2.py``) + - 00:07.688 + - 0.0 + * - :ref:`sphx_glr_auto_examples_regression_plot_cp_regression.py` (``../examples/regression/plot_cp_regression.py``) + - 00:04.573 + - 0.0 + * - :ref:`sphx_glr_auto_examples_decomposition_plot_cp_line_search.py` (``../examples/decomposition/plot_cp_line_search.py``) + - 00:03.835 + - 0.0 + * - :ref:`sphx_glr_auto_examples_applications_plot_covid.py` (``../examples/applications/plot_covid.py``) + - 00:03.405 + - 0.0 + * - :ref:`sphx_glr_auto_examples_decomposition_plot_guide_for_constrained_cp.py` (``../examples/decomposition/plot_guide_for_constrained_cp.py``) + - 00:03.250 + - 0.0 + * - :ref:`sphx_glr_auto_examples_applications_plot_image_compression.py` (``../examples/applications/plot_image_compression.py``) + - 00:01.443 + - 0.0 + * - :ref:`sphx_glr_auto_examples_applications_plot_IL2.py` (``../examples/applications/plot_IL2.py``) + - 00:01.412 + - 0.0 + * - :ref:`sphx_glr_auto_examples_regression_plot_tucker_regression.py` (``../examples/regression/plot_tucker_regression.py``) + - 00:01.173 + - 0.0 + * - :ref:`sphx_glr_auto_examples_decomposition_plot_permute_factors.py` (``../examples/decomposition/plot_permute_factors.py``) + - 00:00.158 + - 0.0 + * - :ref:`sphx_glr_auto_examples_plot_tensor.py` (``../examples/plot_tensor.py``) + - 00:00.005 + - 0.0 diff --git a/stable/_sources/user_guide/backend.rst.txt b/stable/_sources/user_guide/backend.rst.txt index 66fb47db3..555154476 100644 --- a/stable/_sources/user_guide/backend.rst.txt +++ b/stable/_sources/user_guide/backend.rst.txt @@ -6,21 +6,21 @@ TensorLy's backend system .. note:: In short, you can write your code using TensorLy and you can transparently combine it and execute with any of the backends. - Currently we support NumPy PyTorch, MXNet, JAX, TensorFlow and CuPy as backends. + Currently we support NumPy PyTorch, JAX, TensorFlow, CuPy and Paddle as backends. Backend? -------- -To represent tensors and for numerical computation, TensorLy supports several backends transparently: the ubiquitous NumPy (the default), MXNet, and PyTorch. +To represent tensors and for numerical computation, TensorLy supports several backends transparently: the ubiquitous NumPy (the default), JAX, PyTorch and Paddle. For the end user, the interface is exactly the same, but under the hood, a different library is used to represent multi-dimensional arrays and perform computations on these. 
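As a concrete sketch of this point (using only core calls -- ``set_backend``, ``tensor``, ``mean``, ``unfold`` -- that exist in any recent TensorLy version), the snippet below runs unchanged whichever backend is active; only the ``set_backend`` line changes::

    import tensorly as tl

    tl.set_backend('numpy')        # or 'pytorch', 'jax', ... with the rest untouched
    t = tl.tensor([[1.0, 2.0], [3.0, 4.0]])
    print(tl.mean(t))              # 2.5
    print(tl.unfold(t, 0).shape)   # (2, 2)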
-In other words, you write your code using TensorLy and can then decide whether the computation is done using NumPy, PyTorch or MXNet. +In other words, you write your code using TensorLy and can then decide whether the computation is done using NumPy, PyTorch, JAX or Paddle. Why backends? ------------- The goal of TensorLy is to make tensor methods accessible. -While NumPy needs no introduction, other backends such as MXNet and PyTorch backends are especially useful as they allows to perform transparently computation on CPU or GPU. -Last but not least, using MXNet or PyTorch as a backend, we are able to combine tensor methods and deep learning easily! +While NumPy needs no introduction, other backends such as JAX and PyTorch backends are especially useful as they allows to perform transparently computation on CPU or GPU. +Last but not least, using JAX or PyTorch as a backend, we are able to combine tensor methods and deep learning easily! @@ -32,14 +32,14 @@ Alternatively during the execution, assuming you have imported TensorLy as ``imp .. important:: NumPy is installed by default with TensorLy if you haven't already installed it. - However, to keep dependencies as minimal as possible, and to not complexify installation, neither MXNet nor PyTorch are installed. If you want to use them as backend, you will have to install them first. + However, to keep dependencies as minimal as possible, and to not complexify installation, no backend is installed. If you want to use them as backend, you will have to install them first. It is easy however, simply refer to their respective installation instructions: * `PyTorch `_ - * `MXNet `_ * `JAX `_ * `CuPy `_ * `TensorFlow `_ + * `Paddle` `_ Once you change the backend, all the computation is done using that backend. @@ -47,7 +47,7 @@ Once you change the backend, all the computation is done using that backend. Context of a tensor ------------------- -Different backends have different parameters associated with the tensors. For instance, in NumPy we traditionally set the dtype when creating an ndarray, while in mxnet we also have to change the *context* (GPU or CPU), with the `ctx` argument. Similarly, in PyTorch, we might want to create a FloatTensor for CPU and a cuda.FloatTensor for GPU. +Different backends have different parameters associated with the tensors. For instance, in NumPy we traditionally set the dtype when creating an ndarray. Similarly, in PyTorch, we might want to create a FloatTensor for CPU and a cuda.FloatTensor for GPU. To handle this difference, we implemented a `context` function, that, given a tensor, returns a dictionary of values characterising that tensor. A function getting a tensor as input and creating a new tensor should use that context to create the new tensor. @@ -115,7 +115,7 @@ Now, let's create a random tensor using the :mod:`tensorly.random` module: # tensor is a PyTorch Tensor! We can decompose it easily, here using a Tucker decomposition: -First, we reate a decomposition instance, which keeps the number of parameters the same +First, we create a decomposition instance, which keeps the number of parameters the same and with a random initialization. We then fit it to our tensor. .. code:: python @@ -160,12 +160,12 @@ The rest is exactly the same, nothing more to do! Using static dispatching ------------------------ -We optimized the dynammical dispatch so the overhead is negligeable. +We optimized the dynamical dispatch so the overhead is negligeable. 
However, if you only want to use one backend, you can first set it and then switch to static dispatching: >>> tl.use_static_dispatch() -And you can switch back to dynammical dispatching just as easily: +And you can switch back to dynamical dispatching just as easily: >>> tl.use_dynamic_dispatch() diff --git a/stable/_sources/user_guide/quickstart.rst.txt b/stable/_sources/user_guide/quickstart.rst.txt index 5f3ce4d3f..c74ce86dd 100644 --- a/stable/_sources/user_guide/quickstart.rst.txt +++ b/stable/_sources/user_guide/quickstart.rst.txt @@ -1,12 +1,12 @@ Quick-Start =========== -A short overview of TensorLy to get started quickly and get familiar with the organization of TensorLY. +A short overview of TensorLy to get started quickly and get familiar with the organization of TensorLy. Organization of TensorLy ------------------------- -TensorLy is organized in several submodule: +TensorLy is organized in several submodules: ================================= ================================ Module Description @@ -32,22 +32,22 @@ TensorLy Backend ---------------- Earlier, we mentioned that all function for manipulating arrays can be accessed through :mod:`tensorly` or `tensorly.backend`. -For instance, if you have a tensor ``t``, to take its mean, you should use ``tensorly.mean(t)``, **not**, for instance, ``numpy.mean(t)`` (or torch, mxnet, etc). +For instance, if you have a tensor ``t``, to take its mean, you should use ``tensorly.mean(t)``, **not**, for instance, ``numpy.mean(t)`` (or torch, JAX, etc). Why is that? .. important:: This is because we support several backends: the code you write in TensorLy can be *transparently* executed with several frameworks, without having to change anything in your code! - For instance, you can execute your code normally using NumPy, but you can also have it run on GPU or multiple machines, using PyTorch, TensorFlow, CuPy, MXNet or JAX. Without having to adapt your code! + For instance, you can execute your code normally using NumPy, but you can also have it run on GPU or multiple machines, using PyTorch, TensorFlow, CuPy, JAX or Paddle. Without having to adapt your code! This is why you should always manipulate tensors using tensorly backend functions only. -For instance, `tensorly.max` calls either the MXNet, NumPy or PyTorch version depending on the backend you selected. There are other subtlties that are handled by the backend to allow a common API regardless of the backend use. +For instance, `tensorly.max` calls either the NumPy or PyTorch version depending on the backend you selected. There are other subtleties that are handled by the backend to allow a common API regardless of the backend use. .. note:: By default, the backend is set to NumPy. You can change the backend using ``tensorly.set_backend``. - For instance, to switch to pytorch, simply type ``tensorly.set_backend('pytorch')``. + For instance, to switch to PyTorch, simply type ``tensorly.set_backend('pytorch')``. For more information on the backend, refer to :doc:`./backend`. @@ -127,7 +127,7 @@ Tensor algebra -------------- More '*advanced*' tensor algebra functions are located in the aptly named :py:mod:`tensorly.tenalg` module. -This includes for instance, n-mode product, kronecker product, etc. +This includes for instance, n-mode product, Kronecker product, etc. We now provide a backend system for tensor algebra, which allows to either use our "hand-crafter" implementations or to dispatch all the operations to einsum. By default, we use the hand-crafted implementations. 
To switch to einsum, or change the tenalg backend: diff --git a/stable/_sources/user_guide/sparse_backend.rst.txt b/stable/_sources/user_guide/sparse_backend.rst.txt index b1dcd8e2f..53b68ea50 100644 --- a/stable/_sources/user_guide/sparse_backend.rst.txt +++ b/stable/_sources/user_guide/sparse_backend.rst.txt @@ -123,7 +123,7 @@ much memory. This is how much memory the sparse array takes up, vs. how much it would take -up if it were represented densly. +up if it were represented densely. >>> tensor.nbytes / 1e9 # Actual memory usage in GB 0.000161408 diff --git a/stable/_sources/user_guide/tensor_basics.rst.txt b/stable/_sources/user_guide/tensor_basics.rst.txt index 52acc077e..9c70642d2 100644 --- a/stable/_sources/user_guide/tensor_basics.rst.txt +++ b/stable/_sources/user_guide/tensor_basics.rst.txt @@ -57,7 +57,7 @@ Also called **matrization**, **unfolding** a tensor is done by reading the eleme For a tensor of size :math:`(I_0, I_1, \cdots, I_N)`, the n-mode unfolding of this tensor will be of size :math:`(I_n, I_0 \times I_1 \times \cdots \times I_{n-1} \times I_{n+1} \cdots \times I_N)`. .. important:: - In tensorly we use an unfolding different from the classical one as defined in [1]_ for better performance. + In TensorLy we use an unfolding different from the classical one as defined in [1]_ for better performance. Given a tensor :math:`\tilde X \in \mathbb{R}^{I_0, I_1 \times I_2 \times \cdots \times I_N}`, the mode-n unfolding of :math:`\tilde X` is a matrix :math:`\mathbf{X}_{[n]} \in \mathbb{R}^{I_n, I_M}`, @@ -72,7 +72,7 @@ For a tensor of size :math:`(I_0, I_1, \cdots, I_N)`, the n-mode unfolding of th Traditionally, mode-1 unfolding denotes the unfolding along the first dimension. However, to be consistent with the Python indexing that always starts at zero, - in tensorly, unfolding also starts at zero! + in TensorLy, unfolding also starts at zero! Therefore ``unfold(tensor, 0)`` will unfold said tensor along its first dimension! @@ -112,7 +112,7 @@ Finally, the 2-mode unfolding is the unfolding along the last axis: \end{matrix} \right] -In tensorly: +In TensorLy: .. code-block:: python diff --git a/stable/_sources/user_guide/tensor_decomposition.rst.txt b/stable/_sources/user_guide/tensor_decomposition.rst.txt index 1d61da890..c32a4922a 100644 --- a/stable/_sources/user_guide/tensor_decomposition.rst.txt +++ b/stable/_sources/user_guide/tensor_decomposition.rst.txt @@ -10,7 +10,7 @@ Refer to [1]_ for more information on tensor decomposition. CP form of a tensor ------------------------ -The idea is to express the tensor as a sum of rank one tensors. That is, a sum of outer product of vectors. +The idea is to express the tensor as a sum of rank one tensors. That is, a sum of outer products of vectors. Such representation can be obtained by applying Canonical Polyadic Decomposition (also known as CANDECOMP-PARAFAC, CP, or PARAFAC decomposition). CANDECOMP-PARAFAC decomposition @@ -126,7 +126,7 @@ Note that some coefficients are almost zero (10e-16) but not exactly due to nume Matrix-Product-State / Tensor-Train Decomposition -------------------------------------------------- -The tensor-train decomposition, also known as matrix product state in physics community, is a way of decompositing high order tensors into third order ones. For a order d tensor A[i1,...,id], it splits each dimension into a order 3 sub-tensor, which we called factors or cores. 
One of the dimension of the sub-tensor is the real physical dimension, while the other two are edges connecting the cores before and after it. +The tensor-train decomposition, also known as matrix product state in physics community, is a way of decompositing high order tensors into third order ones. For an order d tensor A[i1,...,id], it splits each dimension into an order 3 sub-tensor, which we called factors or cores. One of the dimension of the sub-tensor is the real physical dimension, while the other two are edges connecting the cores before and after it. .. math:: @@ -137,7 +137,7 @@ The advantage of the TT/tensor-train decomposition is that both of its number of Implementations +++++++++++++++ -Two versions tensor train decompositions are available in TensorLy: and SVD-based decomposition method (:func:`tensorly.decomposition.mps_decomposition` and a cross approximation-based method :func:`tensorly.contrib.mps_decomposition_cross`). +Two versions tensor train decompositions are available in TensorLy: and SVD-based decomposition method (:func:`tensorly.decomposition.tensor_train` and a cross approximation-based method :func:`tensorly.contrib.tensor_train_cross`). Using the same tensor as previously, we will perform a rank [1,2,1]-decomposition of the shape (12,12) `tensor` meaning the first core has shape (1,12,2) and the second has (2,12,1).: diff --git a/stable/_sources/user_guide/tensor_regression.rst.txt b/stable/_sources/user_guide/tensor_regression.rst.txt index ed119280a..d8f250834 100644 --- a/stable/_sources/user_guide/tensor_regression.rst.txt +++ b/stable/_sources/user_guide/tensor_regression.rst.txt @@ -19,7 +19,7 @@ For a detailed explanation on tensor regression, please refer to [1]_. TensorLy implements both types of tensor regression as scikit-learn-like estimators. -For instance, Krusal regression is available through the :class:`tensorly.regression.CPRegression` object. This implements a fit method that takes as parameters `X`, the data tensor which first dimension is the number of samples, and `y`, the corresponding vector of labels. +For instance, Krusal regression is available through the :class:`tensorly.regression.CPRegression` object. This implements a fit method that takes as parameters `X`, the data tensor whose first dimension is the number of samples, and `y`, the corresponding vector of labels. Given a set of testing samples, you can use the predict method to obtain the corresponding predictions from the model. diff --git a/stable/_static/basic.css b/stable/_static/basic.css index 7577acb1a..7ebbd6d07 100644 --- a/stable/_static/basic.css +++ b/stable/_static/basic.css @@ -1,12 +1,5 @@ /* - * basic.css - * ~~~~~~~~~ - * * Sphinx stylesheet -- basic theme. - * - * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. - * :license: BSD, see LICENSE for details. 
- * */ /* -- main layout ----------------------------------------------------------- */ @@ -115,15 +108,11 @@ img { /* -- search page ----------------------------------------------------------- */ ul.search { - margin: 10px 0 0 20px; - padding: 0; + margin-top: 10px; } ul.search li { - padding: 5px 0 5px 20px; - background-image: url(file.png); - background-repeat: no-repeat; - background-position: 0 7px; + padding: 5px 0; } ul.search li a { @@ -237,6 +226,10 @@ a.headerlink { visibility: hidden; } +a:visited { + color: #551A8B; +} + h1:hover > a.headerlink, h2:hover > a.headerlink, h3:hover > a.headerlink, @@ -670,6 +663,16 @@ dd { margin-left: 30px; } +.sig dd { + margin-top: 0px; + margin-bottom: 0px; +} + +.sig dl { + margin-top: 0px; + margin-bottom: 0px; +} + dl > dd:last-child, dl > dd:last-child > :last-child { margin-bottom: 0; @@ -738,6 +741,14 @@ abbr, acronym { cursor: help; } +.translated { + background-color: rgba(207, 255, 207, 0.2) +} + +.untranslated { + background-color: rgba(255, 207, 207, 0.2) +} + /* -- code displays --------------------------------------------------------- */ pre { diff --git a/stable/_static/doctools.js b/stable/_static/doctools.js index d06a71d75..0398ebb9f 100644 --- a/stable/_static/doctools.js +++ b/stable/_static/doctools.js @@ -1,12 +1,5 @@ /* - * doctools.js - * ~~~~~~~~~~~ - * * Base JavaScript utilities for all Sphinx HTML documentation. - * - * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. - * :license: BSD, see LICENSE for details. - * */ "use strict"; diff --git a/stable/_static/documentation_options.js b/stable/_static/documentation_options.js index b3ec1c429..bb137617a 100644 --- a/stable/_static/documentation_options.js +++ b/stable/_static/documentation_options.js @@ -1,6 +1,5 @@ -var DOCUMENTATION_OPTIONS = { - URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'), - VERSION: '0.8.1', +const DOCUMENTATION_OPTIONS = { + VERSION: '0.9.0', LANGUAGE: 'en', COLLAPSE_INDEX: false, BUILDER: 'html', diff --git a/stable/_static/jupyterlite_badge_logo.svg b/stable/_static/jupyterlite_badge_logo.svg new file mode 100644 index 000000000..5de36d7fd --- /dev/null +++ b/stable/_static/jupyterlite_badge_logo.svg @@ -0,0 +1,3 @@ + + +launchlaunchlitelite \ No newline at end of file diff --git a/stable/_static/language_data.js b/stable/_static/language_data.js index 250f5665f..c7fe6c6fa 100644 --- a/stable/_static/language_data.js +++ b/stable/_static/language_data.js @@ -1,19 +1,12 @@ /* - * language_data.js - * ~~~~~~~~~~~~~~~~ - * * This script contains the language-specific data used by searchtools.js, * namely the list of stopwords, stemmer, scorer and splitter. - * - * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. - * :license: BSD, see LICENSE for details. 
- * */ var stopwords = ["a", "and", "are", "as", "at", "be", "but", "by", "for", "if", "in", "into", "is", "it", "near", "no", "not", "of", "on", "or", "such", "that", "the", "their", "then", "there", "these", "they", "this", "to", "was", "will", "with"]; -/* Non-minified version is copied as a separate JS file, is available */ +/* Non-minified version is copied as a separate JS file, if available */ /** * Porter Stemmer diff --git a/stable/_static/pygments.css b/stable/_static/pygments.css index 691aeb82d..0d49244ed 100644 --- a/stable/_static/pygments.css +++ b/stable/_static/pygments.css @@ -17,6 +17,7 @@ span.linenos.special { color: #000000; background-color: #ffffc0; padding-left: .highlight .cs { color: #408090; background-color: #fff0f0 } /* Comment.Special */ .highlight .gd { color: #A00000 } /* Generic.Deleted */ .highlight .ge { font-style: italic } /* Generic.Emph */ +.highlight .ges { font-weight: bold; font-style: italic } /* Generic.EmphStrong */ .highlight .gr { color: #FF0000 } /* Generic.Error */ .highlight .gh { color: #000080; font-weight: bold } /* Generic.Heading */ .highlight .gi { color: #00A000 } /* Generic.Inserted */ diff --git a/stable/_static/searchtools.js b/stable/_static/searchtools.js index 97d56a74d..2c774d17a 100644 --- a/stable/_static/searchtools.js +++ b/stable/_static/searchtools.js @@ -1,12 +1,5 @@ /* - * searchtools.js - * ~~~~~~~~~~~~~~~~ - * * Sphinx JavaScript utilities for the full-text search. - * - * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. - * :license: BSD, see LICENSE for details. - * */ "use strict"; @@ -20,7 +13,7 @@ if (typeof Scorer === "undefined") { // and returns the new score. /* score: result => { - const [docname, title, anchor, descr, score, filename] = result + const [docname, title, anchor, descr, score, filename, kind] = result return score }, */ @@ -47,6 +40,14 @@ if (typeof Scorer === "undefined") { }; } +// Global search result kind enum, used by themes to style search results. +class SearchResultKind { + static get index() { return "index"; } + static get object() { return "object"; } + static get text() { return "text"; } + static get title() { return "title"; } +} + const _removeChildren = (element) => { while (element && element.lastChild) element.removeChild(element.lastChild); }; @@ -57,16 +58,20 @@ const _removeChildren = (element) => { const _escapeRegExp = (string) => string.replace(/[.*+\-?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string -const _displayItem = (item, searchTerms) => { +const _displayItem = (item, searchTerms, highlightTerms) => { const docBuilder = DOCUMENTATION_OPTIONS.BUILDER; - const docUrlRoot = DOCUMENTATION_OPTIONS.URL_ROOT; const docFileSuffix = DOCUMENTATION_OPTIONS.FILE_SUFFIX; const docLinkSuffix = DOCUMENTATION_OPTIONS.LINK_SUFFIX; const showSearchSummary = DOCUMENTATION_OPTIONS.SHOW_SEARCH_SUMMARY; + const contentRoot = document.documentElement.dataset.content_root; - const [docName, title, anchor, descr, score, _filename] = item; + const [docName, title, anchor, descr, score, _filename, kind] = item; let listItem = document.createElement("li"); + // Add a class representing the item's type: + // can be used by a theme's CSS selector for styling + // See SearchResultKind for the class names. 
+ listItem.classList.add(`kind-${kind}`); let requestUrl; let linkUrl; if (docBuilder === "dirhtml") { @@ -75,28 +80,35 @@ const _displayItem = (item, searchTerms) => { if (dirname.match(/\/index\/$/)) dirname = dirname.substring(0, dirname.length - 6); else if (dirname === "index/") dirname = ""; - requestUrl = docUrlRoot + dirname; + requestUrl = contentRoot + dirname; linkUrl = requestUrl; } else { // normal html builders - requestUrl = docUrlRoot + docName + docFileSuffix; + requestUrl = contentRoot + docName + docFileSuffix; linkUrl = docName + docLinkSuffix; } let linkEl = listItem.appendChild(document.createElement("a")); linkEl.href = linkUrl + anchor; linkEl.dataset.score = score; linkEl.innerHTML = title; - if (descr) + if (descr) { listItem.appendChild(document.createElement("span")).innerHTML = " (" + descr + ")"; + // highlight search terms in the description + if (SPHINX_HIGHLIGHT_ENABLED) // set in sphinx_highlight.js + highlightTerms.forEach((term) => _highlightText(listItem, term, "highlighted")); + } else if (showSearchSummary) fetch(requestUrl) .then((responseData) => responseData.text()) .then((data) => { if (data) listItem.appendChild( - Search.makeSearchSummary(data, searchTerms) + Search.makeSearchSummary(data, searchTerms, anchor) ); + // highlight search terms in the summary + if (SPHINX_HIGHLIGHT_ENABLED) // set in sphinx_highlight.js + highlightTerms.forEach((term) => _highlightText(listItem, term, "highlighted")); }); Search.output.appendChild(listItem); }; @@ -108,27 +120,46 @@ const _finishSearch = (resultCount) => { "Your search did not match any documents. Please make sure that all words are spelled correctly and that you've selected enough categories." ); else - Search.status.innerText = _( - `Search finished, found ${resultCount} page(s) matching the search query.` - ); + Search.status.innerText = Documentation.ngettext( + "Search finished, found one page matching the search query.", + "Search finished, found ${resultCount} pages matching the search query.", + resultCount, + ).replace('${resultCount}', resultCount); }; const _displayNextItem = ( results, resultCount, - searchTerms + searchTerms, + highlightTerms, ) => { // results left, load the summary and display it // this is intended to be dynamic (don't sub resultsCount) if (results.length) { - _displayItem(results.pop(), searchTerms); + _displayItem(results.pop(), searchTerms, highlightTerms); setTimeout( - () => _displayNextItem(results, resultCount, searchTerms), + () => _displayNextItem(results, resultCount, searchTerms, highlightTerms), 5 ); } // search finished, update title and status message else _finishSearch(resultCount); }; +// Helper function used by query() to order search results. +// Each input is an array of [docname, title, anchor, descr, score, filename, kind]. +// Order the results by score (in opposite order of appearance, since the +// `_displayNextItem` function uses pop() to retrieve items) and then alphabetically. +const _orderResultsByScoreThenName = (a, b) => { + const leftScore = a[4]; + const rightScore = b[4]; + if (leftScore === rightScore) { + // same score: sort alphabetically + const leftTitle = a[1].toLowerCase(); + const rightTitle = b[1].toLowerCase(); + if (leftTitle === rightTitle) return 0; + return leftTitle > rightTitle ? -1 : 1; // inverted is intentional + } + return leftScore > rightScore ? 1 : -1; +}; /** * Default splitQuery function. 
Can be overridden in ``sphinx.search`` with a @@ -152,13 +183,26 @@ const Search = { _queued_query: null, _pulse_status: -1, - htmlToText: (htmlString) => { + htmlToText: (htmlString, anchor) => { const htmlElement = new DOMParser().parseFromString(htmlString, 'text/html'); - htmlElement.querySelectorAll(".headerlink").forEach((el) => { el.remove() }); + for (const removalQuery of [".headerlink", "script", "style"]) { + htmlElement.querySelectorAll(removalQuery).forEach((el) => { el.remove() }); + } + if (anchor) { + const anchorContent = htmlElement.querySelector(`[role="main"] ${anchor}`); + if (anchorContent) return anchorContent.textContent; + + console.warn( + `Anchored content block not found. Sphinx search tries to obtain it via DOM query '[role=main] ${anchor}'. Check your theme or template.` + ); + } + + // if anchor not specified or not found, fall back to main content const docContent = htmlElement.querySelector('[role="main"]'); - if (docContent !== undefined) return docContent.textContent; + if (docContent) return docContent.textContent; + console.warn( - "Content block not found. Sphinx search tries to obtain it via '[role=main]'. Could you check your theme or template." + "Content block not found. Sphinx search tries to obtain it via DOM query '[role=main]'. Check your theme or template." ); return ""; }, @@ -211,6 +255,7 @@ const Search = { searchSummary.classList.add("search-summary"); searchSummary.innerText = ""; const searchList = document.createElement("ul"); + searchList.setAttribute("role", "list"); searchList.classList.add("search"); const out = document.getElementById("search-results"); @@ -231,16 +276,7 @@ const Search = { else Search.deferQuery(query); }, - /** - * execute search (requires search index to be loaded) - */ - query: (query) => { - const filenames = Search._index.filenames; - const docNames = Search._index.docnames; - const titles = Search._index.titles; - const allTitles = Search._index.alltitles; - const indexEntries = Search._index.indexentries; - + _parseQuery: (query) => { // stem the search terms and add them to the correct list const stemmer = new Stemmer(); const searchTerms = new Set(); @@ -276,22 +312,40 @@ const Search = { // console.info("required: ", [...searchTerms]); // console.info("excluded: ", [...excludedTerms]); - // array of [docname, title, anchor, descr, score, filename] - let results = []; + return [query, searchTerms, excludedTerms, highlightTerms, objectTerms]; + }, + + /** + * execute search (requires search index to be loaded) + */ + _performSearch: (query, searchTerms, excludedTerms, highlightTerms, objectTerms) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + const allTitles = Search._index.alltitles; + const indexEntries = Search._index.indexentries; + + // Collect multiple result groups to be sorted separately and then ordered. + // Each is an array of [docname, title, anchor, descr, score, filename, kind]. 
+ const normalResults = []; + const nonMainIndexResults = []; + _removeChildren(document.getElementById("search-progress")); - const queryLower = query.toLowerCase(); + const queryLower = query.toLowerCase().trim(); for (const [title, foundTitles] of Object.entries(allTitles)) { - if (title.toLowerCase().includes(queryLower) && (queryLower.length >= title.length/2)) { + if (title.toLowerCase().trim().includes(queryLower) && (queryLower.length >= title.length/2)) { for (const [file, id] of foundTitles) { - let score = Math.round(100 * queryLower.length / title.length) - results.push([ + const score = Math.round(Scorer.title * queryLower.length / title.length); + const boost = titles[file] === title ? 1 : 0; // add a boost for document titles + normalResults.push([ docNames[file], titles[file] !== title ? `${titles[file]} > ${title}` : title, id !== null ? "#" + id : "", null, - score, + score + boost, filenames[file], + SearchResultKind.title, ]); } } @@ -300,46 +354,48 @@ const Search = { // search for explicit entries in index directives for (const [entry, foundEntries] of Object.entries(indexEntries)) { if (entry.includes(queryLower) && (queryLower.length >= entry.length/2)) { - for (const [file, id] of foundEntries) { - let score = Math.round(100 * queryLower.length / entry.length) - results.push([ + for (const [file, id, isMain] of foundEntries) { + const score = Math.round(100 * queryLower.length / entry.length); + const result = [ docNames[file], titles[file], id ? "#" + id : "", null, score, filenames[file], - ]); + SearchResultKind.index, + ]; + if (isMain) { + normalResults.push(result); + } else { + nonMainIndexResults.push(result); + } } } } // lookup as object objectTerms.forEach((term) => - results.push(...Search.performObjectSearch(term, objectTerms)) + normalResults.push(...Search.performObjectSearch(term, objectTerms)) ); // lookup as search terms in fulltext - results.push(...Search.performTermsSearch(searchTerms, excludedTerms)); + normalResults.push(...Search.performTermsSearch(searchTerms, excludedTerms)); // let the scorer override scores with a custom scoring function - if (Scorer.score) results.forEach((item) => (item[4] = Scorer.score(item))); - - // now sort the results by score (in opposite order of appearance, since the - // display function below uses pop() to retrieve items) and then - // alphabetically - results.sort((a, b) => { - const leftScore = a[4]; - const rightScore = b[4]; - if (leftScore === rightScore) { - // same score: sort alphabetically - const leftTitle = a[1].toLowerCase(); - const rightTitle = b[1].toLowerCase(); - if (leftTitle === rightTitle) return 0; - return leftTitle > rightTitle ? -1 : 1; // inverted is intentional - } - return leftScore > rightScore ? 1 : -1; - }); + if (Scorer.score) { + normalResults.forEach((item) => (item[4] = Scorer.score(item))); + nonMainIndexResults.forEach((item) => (item[4] = Scorer.score(item))); + } + + // Sort each group of results by score and then alphabetically by name. + normalResults.sort(_orderResultsByScoreThenName); + nonMainIndexResults.sort(_orderResultsByScoreThenName); + + // Combine the result groups in (reverse) order. + // Non-main index entries are typically arbitrary cross-references, + // so display them after other results. 
+ let results = [...nonMainIndexResults, ...normalResults]; // remove duplicate search results // note the reversing of results, so that in the case of duplicates, the highest-scoring entry is kept @@ -353,14 +409,19 @@ const Search = { return acc; }, []); - results = results.reverse(); + return results.reverse(); + }, + + query: (query) => { + const [searchQuery, searchTerms, excludedTerms, highlightTerms, objectTerms] = Search._parseQuery(query); + const results = Search._performSearch(searchQuery, searchTerms, excludedTerms, highlightTerms, objectTerms); // for debugging //Search.lastresults = results.slice(); // a copy // console.info("search results:", Search.lastresults); // print the results - _displayNextItem(results, results.length, searchTerms); + _displayNextItem(results, results.length, searchTerms, highlightTerms); }, /** @@ -424,6 +485,7 @@ const Search = { descr, score, filenames[match[0]], + SearchResultKind.object, ]); }; Object.keys(objects).forEach((prefix) => @@ -458,14 +520,18 @@ const Search = { // add support for partial matches if (word.length > 2) { const escapedWord = _escapeRegExp(word); - Object.keys(terms).forEach((term) => { - if (term.match(escapedWord) && !terms[word]) - arr.push({ files: terms[term], score: Scorer.partialTerm }); - }); - Object.keys(titleTerms).forEach((term) => { - if (term.match(escapedWord) && !titleTerms[word]) - arr.push({ files: titleTerms[word], score: Scorer.partialTitle }); - }); + if (!terms.hasOwnProperty(word)) { + Object.keys(terms).forEach((term) => { + if (term.match(escapedWord)) + arr.push({ files: terms[term], score: Scorer.partialTerm }); + }); + } + if (!titleTerms.hasOwnProperty(word)) { + Object.keys(titleTerms).forEach((term) => { + if (term.match(escapedWord)) + arr.push({ files: titleTerms[term], score: Scorer.partialTitle }); + }); + } } // no match but word was a required one @@ -488,9 +554,8 @@ const Search = { // create the mapping files.forEach((file) => { - if (fileMap.has(file) && fileMap.get(file).indexOf(word) === -1) - fileMap.get(file).push(word); - else fileMap.set(file, [word]); + if (!fileMap.has(file)) fileMap.set(file, [word]); + else if (fileMap.get(file).indexOf(word) === -1) fileMap.get(file).push(word); }); }); @@ -531,6 +596,7 @@ const Search = { null, score, filenames[file], + SearchResultKind.text, ]); } return results; @@ -541,8 +607,8 @@ const Search = { * search summary for a given text. keywords is a list * of stemmed words. 
*/ - makeSearchSummary: (htmlText, keywords) => { - const text = Search.htmlToText(htmlText); + makeSearchSummary: (htmlText, keywords, anchor) => { + const text = Search.htmlToText(htmlText, anchor); if (text === "") return null; const textLower = text.toLowerCase(); diff --git a/stable/_static/sg_gallery-binder.css b/stable/_static/sg_gallery-binder.css index a33aa4204..420005d22 100644 --- a/stable/_static/sg_gallery-binder.css +++ b/stable/_static/sg_gallery-binder.css @@ -4,3 +4,8 @@ div.binder-badge { margin: 1em auto; vertical-align: middle; } + +div.lite-badge { + margin: 1em auto; + vertical-align: middle; +} diff --git a/stable/_static/sg_gallery-dataframe.css b/stable/_static/sg_gallery-dataframe.css index 25be73092..fac74c43b 100644 --- a/stable/_static/sg_gallery-dataframe.css +++ b/stable/_static/sg_gallery-dataframe.css @@ -19,6 +19,7 @@ table.dataframe { color: var(--sg-text-color); font-size: 12px; table-layout: fixed; + width: auto; } table.dataframe thead { border-bottom: 1px solid var(--sg-text-color); diff --git a/stable/_static/sg_gallery.css b/stable/_static/sg_gallery.css index 72227837d..9bcd33c8a 100644 --- a/stable/_static/sg_gallery.css +++ b/stable/_static/sg_gallery.css @@ -178,23 +178,44 @@ thumbnail with its default link Background color */ max-height: 112px; max-width: 160px; } -.sphx-glr-thumbcontainer[tooltip]:hover:after { - background: var(--sg-tooltip-background); + +.sphx-glr-thumbcontainer[tooltip]::before { + content: ""; + position: absolute; + pointer-events: none; + top: 0; + left: 0; + width: 100%; + height: 100%; + z-index: 97; + background-color: var(--sg-tooltip-background); + backdrop-filter: blur(3px); + opacity: 0; + transition: opacity 0.3s; +} + +.sphx-glr-thumbcontainer[tooltip]:hover::before { + opacity: 1; +} + +.sphx-glr-thumbcontainer[tooltip]:hover::after { -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; color: var(--sg-tooltip-foreground); content: attr(tooltip); - padding: 10px; + padding: 10px 10px 5px; z-index: 98; width: 100%; - height: 100%; + max-height: 100%; position: absolute; pointer-events: none; top: 0; box-sizing: border-box; overflow: hidden; - backdrop-filter: blur(3px); + display: -webkit-box; + -webkit-box-orient: vertical; + -webkit-line-clamp: 6; } .sphx-glr-script-out { @@ -283,6 +304,10 @@ div.sphx-glr-download a:hover { background-color: var(--sg-download-a-hover-background-color); } +div.sphx-glr-sidebar-item img { + max-height: 20px; +} + .sphx-glr-example-title:target::before { display: block; content: ""; diff --git a/stable/_static/sphinx_highlight.js b/stable/_static/sphinx_highlight.js index aae669d7e..8a96c69a1 100644 --- a/stable/_static/sphinx_highlight.js +++ b/stable/_static/sphinx_highlight.js @@ -29,14 +29,19 @@ const _highlight = (node, addItems, text, className) => { } span.appendChild(document.createTextNode(val.substr(pos, text.length))); + const rest = document.createTextNode(val.substr(pos + text.length)); parent.insertBefore( span, parent.insertBefore( - document.createTextNode(val.substr(pos + text.length)), + rest, node.nextSibling ) ); node.nodeValue = val.substr(0, pos); + /* There may be more occurrences of search term in this node. So call this + * function recursively on the remaining fragment. 
+ */ + _highlight(rest, addItems, text, className); if (isInSVG) { const rect = document.createElementNS( @@ -140,5 +145,10 @@ const SphinxHighlight = { }, }; -_ready(SphinxHighlight.highlightSearchWords); -_ready(SphinxHighlight.initEscapeListener); +_ready(() => { + /* Do not call highlightSearchWords() when we are on the search page. + * It will highlight words from the *previous* search query. + */ + if (typeof Search === "undefined") SphinxHighlight.highlightSearchWords(); + SphinxHighlight.initEscapeListener(); +}); diff --git a/stable/_static/tensorly-pyramid.png b/stable/_static/tensorly-pyramid.png index dd4207745..34ee9f415 100644 Binary files a/stable/_static/tensorly-pyramid.png and b/stable/_static/tensorly-pyramid.png differ diff --git a/stable/about.html b/stable/about.html index 0a303879a..0233a5230 100644 --- a/stable/about.html +++ b/stable/about.html @@ -1,10 +1,9 @@ - - + - + About us — TensorLy: Tensor Learning in Python @@ -16,17 +15,17 @@ - - - - - - + + + + + + - - - + + + @@ -204,7 +203,7 @@

    Origin

    “TensorLy: Tensor Learning in Python”, by Jean Kossaifi, Yannis Panagakis, Anima Anandkumar and Maja Pantic.

    Originally, TensorLy was built on top of NumPy and SciPy only. In order to combine tensor methods with deep learning and run them on multiple devices, CPU and GPU, a flexible backend system was added. -This allows algorithms written in TensorLy to be ran with any major framework such as PyTorch, MXNet, TensorFlow, CuPy and JAX.

+This allows algorithms written in TensorLy to be run with any major framework such as PyTorch, TensorFlow, CuPy, JAX and Paddle.

    Core developers

    @@ -228,18 +227,24 @@

    Core developers

    Supporters

    The TensorLy project is and has been supported by various organizations and universities:

    -NVIDIA +NVIDIA +
    -INRIA +INRIA +

    INRIA is funding a full-time engineer to work on TensorLy.


    -Imperial College London +Imperial College London +
    -California Institute of Technology +California Institute of Technology +
    -National and Kapodistrian University of Athens +National and Kapodistrian University of Athens +
    -UCLA +UCLA +
    @@ -265,7 +270,7 @@

    Supporters

    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
    diff --git a/stable/auto_examples/applications/index.html b/stable/auto_examples/applications/index.html index 5a232a19e..f2f783da4 100644 --- a/stable/auto_examples/applications/index.html +++ b/stable/auto_examples/applications/index.html @@ -1,10 +1,9 @@ - - + - + Practical applications of tensor methods — TensorLy: Tensor Learning in Python @@ -16,17 +15,17 @@ - - - - - - + + + + + + - - - + + + @@ -197,13 +196,13 @@

    Practical applications of tensor methods

    See how you can use TensorLy on practical applications and datasets.

    -
    Image compression via tensor decomposition +

    Image compression via tensor decomposition

    Image compression via tensor decomposition
    -
    Non-negative PARAFAC Decomposition of IL-2 Response Data +

    Non-negative PARAFAC Decomposition of IL-2 Response Data

    Non-negative PARAFAC Decomposition of IL-2 Response Data
    -
    COVID-19 Serology Dataset Analysis with CP +

    COVID-19 Serology Dataset Analysis with CP

    COVID-19 Serology Dataset Analysis with CP
    @@ -238,7 +237,7 @@
    - © Copyright 2016 - 2023, TensorLy Developers.
    + © Copyright 2016 - 2024, TensorLy Developers.
    diff --git a/stable/auto_examples/applications/plot_IL2.html b/stable/auto_examples/applications/plot_IL2.html index 40ec372c2..b171e74d7 100644 --- a/stable/auto_examples/applications/plot_IL2.html +++ b/stable/auto_examples/applications/plot_IL2.html @@ -1,10 +1,9 @@ - - + - + Non-negative PARAFAC Decomposition of IL-2 Response Data — TensorLy: Tensor Learning in Python @@ -16,17 +15,17 @@ - - - - - - + + + + + + - - - + + + @@ -196,8 +195,8 @@

    Non-negative PARAFAC Decomposition of IL-2 Response Data

@@ -206,7 +205,7 @@ of a tensor of experimental data, and then gain insights into the underlying structure of that data.

    To do this, we will work with a tensor of experimentally measured cell signaling data.

    -
    import numpy as np
    +
    import numpy as np
     import matplotlib.pyplot as plt
     from tensorly.datasets import load_IL2data
     from tensorly.decomposition import non_negative_parafac
    @@ -241,7 +240,7 @@
     representing IL-2 mutant, stimulation time, dose, and cell type respectively. Each
 measured quantity represents the amount of phosphorylated STAT5 (pSTAT5) in a
     given cell population following stimulation with the specified IL-2 mutant.

    -
    response_data = load_IL2data()
    +
    response_data = load_IL2data()
     IL2mutants, cells = response_data.ticks[0], response_data.ticks[3]
     print(response_data.tensor.shape, response_data.dims)
     
    @@ -255,18 +254,26 @@

    First we must preprocess our tensor to ready it for factorization. Our data has a few missing values, and so we must first generate a mask to mark where those values occur.

    -
    tensor_mask = np.isfinite(response_data.tensor)
    +
    tensor_mask = np.isfinite(response_data.tensor)
     

Now that we’ve marked where those non-finite values occur, we can replace them (with zeros) to obtain a tensor that can be factorized.

    -
    response_data_fin = np.nan_to_num(response_data.tensor)
    +
    response_data_fin = np.nan_to_num(response_data.tensor)
     

Using this mask and the finite-valued tensor, we can decompose our signaling data into three components. We will also normalize the resulting factorization, which will allow for easier comparisons to be made between the meanings and magnitudes of our resulting components.

    -
    sig_tensor_fact = non_negative_parafac(response_data_fin, init='random', rank=3, mask=tensor_mask, n_iter_max=5000, tol=1e-9, random_state=1)
    +
    sig_tensor_fact = non_negative_parafac(
    +    response_data_fin,
    +    init="random",
    +    rank=3,
    +    mask=tensor_mask,
    +    n_iter_max=5000,
    +    tol=1e-9,
    +    random_state=1,
    +)
     sig_tensor_fact = cp_normalize(sig_tensor_fact)
     
    @@ -275,7 +282,7 @@ mutations made to their amino acid sequence, as well as their valency format (monovalent or bivalent).

    Finally, we label, plot, and analyze our factored tensor of data.

    -
    f, ax = plt.subplots(1, 2, figsize=(9, 4.5))
    +
    f, ax = plt.subplots(1, 2, figsize=(9, 4.5))
     
     components = [1, 2, 3]
     width = 0.25
    @@ -284,9 +291,9 @@
     ligands = IL2mutants
     x_lig = np.arange(len(ligands))
     
    -lig_rects_comp1 = ax[0].bar(x_lig - width, lig_facs[:, 0], width, label='Component 1')
    -lig_rects_comp2 = ax[0].bar(x_lig, lig_facs[:, 1], width, label='Component 2')
    -lig_rects_comp3 = ax[0].bar(x_lig + width, lig_facs[:, 2], width, label='Component 3')
    +lig_rects_comp1 = ax[0].bar(x_lig - width, lig_facs[:, 0], width, label="Component 1")
    +lig_rects_comp2 = ax[0].bar(x_lig, lig_facs[:, 1], width, label="Component 2")
    +lig_rects_comp3 = ax[0].bar(x_lig + width, lig_facs[:, 2], width, label="Component 3")
     ax[0].set(xlabel="Ligand", ylabel="Component Weight", ylim=(0, 1))
     ax[0].set_xticks(x_lig, ligands)
     ax[0].set_xticklabels(ax[0].get_xticklabels(), rotation=60, ha="right", fontsize=9)
    @@ -296,9 +303,13 @@
     cell_facs = sig_tensor_fact[1][3]
     x_cell = np.arange(len(cells))
     
    -cell_rects_comp1 = ax[1].bar(x_cell - width, cell_facs[:, 0], width, label='Component 1')
    -cell_rects_comp2 = ax[1].bar(x_cell, cell_facs[:, 1], width, label='Component 2')
    -cell_rects_comp3 = ax[1].bar(x_cell + width, cell_facs[:, 2], width, label='Component 3')
    +cell_rects_comp1 = ax[1].bar(
    +    x_cell - width, cell_facs[:, 0], width, label="Component 1"
    +)
    +cell_rects_comp2 = ax[1].bar(x_cell, cell_facs[:, 1], width, label="Component 2")
    +cell_rects_comp3 = ax[1].bar(
    +    x_cell + width, cell_facs[:, 2], width, label="Component 3"
    +)
     ax[1].set(xlabel="Cell", ylabel="Component Weight", ylim=(0, 1))
     ax[1].set_xticks(x_cell, cells)
     ax[1].set_xticklabels(ax[1].get_xticklabels(), rotation=45, ha="right")
    @@ -319,13 +330,16 @@
     By plotting the correlations which time and dose have with each component, we
     could additionally make inferences as to the dynamics and dose dependence of how mutations
     affect IL-2 signaling in immune cells.
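
A possible follow-up, sketched here rather than taken from the example itself, is to plot the time- and dose-mode factors directly. This assumes the mode ordering (IL-2 mutant, stimulation time, dose, cell type) stated above, so that `sig_tensor_fact[1][1]` and `sig_tensor_fact[1][2]` are the time and dose factor matrices and `response_data.ticks[1]` / `response_data.ticks[2]` hold the corresponding axis labels:

```python
import matplotlib.pyplot as plt

# Assumed mode order: (IL-2 mutant, stimulation time, dose, cell type).
time_facs = sig_tensor_fact[1][1]  # factor matrix for the time mode
dose_facs = sig_tensor_fact[1][2]  # factor matrix for the dose mode
times, doses = response_data.ticks[1], response_data.ticks[2]

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
for comp in range(time_facs.shape[1]):
    axes[0].plot(times, time_facs[:, comp], marker="o", label=f"Component {comp + 1}")
    axes[1].plot(doses, dose_facs[:, comp], marker="o", label=f"Component {comp + 1}")
axes[0].set(xlabel="Stimulation time", ylabel="Component Weight")
axes[1].set(xlabel="Dose", ylabel="Component Weight")
axes[0].legend()
fig.tight_layout()
plt.show()
```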

    -

    Total running time of the script: ( 0 minutes 2.337 seconds)

    +

    Total running time of the script: (0 minutes 1.412 seconds)