Commit 5490fa8

Github action: auto-update.

github-actions[bot] committed Apr 10, 2024
1 parent b7976cb commit 5490fa8

Showing 58 changed files with 329 additions and 335 deletions.
Binary file modified dev/_images/sphx_glr_plot_cp_line_search_001.png
Binary file modified dev/_images/sphx_glr_plot_cp_line_search_thumb.png
Binary file modified dev/_images/sphx_glr_plot_guide_for_constrained_cp_001.png
Binary file modified dev/_images/sphx_glr_plot_guide_for_constrained_cp_002.png
Binary file modified dev/_images/sphx_glr_plot_guide_for_constrained_cp_003.png
Binary file modified dev/_images/sphx_glr_plot_guide_for_constrained_cp_004.png
Binary file modified dev/_images/sphx_glr_plot_guide_for_constrained_cp_005.png
Binary file modified dev/_images/sphx_glr_plot_guide_for_constrained_cp_006.png
Binary file modified dev/_images/sphx_glr_plot_guide_for_constrained_cp_thumb.png
Binary file modified dev/_images/sphx_glr_plot_image_compression_001.png
Binary file modified dev/_images/sphx_glr_plot_image_compression_thumb.png
Binary file modified dev/_images/sphx_glr_plot_nn_cp_hals_001.png
Binary file modified dev/_images/sphx_glr_plot_nn_cp_hals_thumb.png
Binary file modified dev/_images/sphx_glr_plot_nn_tucker_001.png
Binary file modified dev/_images/sphx_glr_plot_nn_tucker_thumb.png
Binary file modified dev/_images/sphx_glr_plot_permute_factors_001.png
Binary file modified dev/_images/sphx_glr_plot_permute_factors_thumb.png
86 changes: 54 additions & 32 deletions dev/_modules/tensorly/tenalg/core_tenalg/mttkrp.html
@@ -145,7 +145,9 @@
 Source code for tensorly.tenalg.core_tenalg.mttkrp

 from .n_mode_product import multi_mode_dot
+from ._khatri_rao import khatri_rao
 from ... import backend as T
+from ...base import unfold

 # Author: Jean Kossaifi

@@ -171,24 +173,61 @@

     Notes
     -----
-    This is a variant of::
+    Default unfolding_dot_khatri_rao implementation.

-        unfolded = unfold(tensor, mode)
-        kr_factors = khatri_rao(factors, skip_matrix=mode)
-        mttkrp2 = tl.dot(unfolded, kr_factors)
+    Implemented as the product between an unfolded tensor
+    and a Khatri-Rao product explicitly formed. Due to matrix-matrix
+    products being extremely efficient operations, this is a
+    simple yet hard-to-beat implementation of MTTKRP.

-    Multiplying with the Khatri-Rao product is equivalent to multiplying,
-    for each rank, with the kronecker product of each factor.
-    In code::
+    If working with sparse tensors, or when the CP-rank of the CP-tensor is comparable to, or larger than,
+    the dimensions of the input tensor, however, this method requires a lot
+    of memory, which can be harmful when dealing with large tensors. In this
+    case, please use the memory-efficient version of MTTKRP.

-        mttkrp_parts = []
-        for r in range(rank):
-            component = tl.tenalg.multi_mode_dot(tensor, [f[:, r] for f in factors], skip=mode)
-            mttkrp_parts.append(component)
-        mttkrp = tl.stack(mttkrp_parts, axis=1)
-        return mttkrp
+    To use the slower memory efficient version, run

-    This can be done by taking n-mode-product with the full factors
+    >>> from tensorly.tenalg.core_tenalg.mttkrp import unfolding_dot_khatri_rao_memory
+    >>> tl.tenalg.register_backend_method("unfolding_dot_khatri_rao", unfolding_dot_khatri_rao_memory)
+    >>> tl.tenalg.use_dynamic_dispatch()

+    """
+    weights, factors = cp_tensor
+    kr_factors = khatri_rao(factors, weights=weights, skip_matrix=mode)
+    mttkrp = T.dot(unfold(tensor, mode), T.conj(kr_factors))
+    return mttkrp
+
+
+def unfolding_dot_khatri_rao_memory(tensor, cp_tensor, mode):
+    """mode-n unfolding times khatri-rao product of factors
+
+    Parameters
+    ----------
+    tensor : tl.tensor
+        tensor to unfold
+    factors : tl.tensor list
+        list of matrices of which to the khatri-rao product
+    mode : int
+        mode on which to unfold `tensor`
+
+    Returns
+    -------
+    mttkrp
+        dot(unfold(tensor, mode), khatri-rao(factors))
+
+    Notes
+    -----
+    Implemented as a sequence of Tensor-times-vectors products between a tensor
+    and a Khatri-Rao product. The Khatri-Rao product is never computed explicitly,
+    rather each column in the Khatri-Rao product is contracted with the tensor. This
+    operation is implemented in Python and without making of use of parallelism, and it
+    is therefore in general slower than the naive MTTKRP product.
+    When the CP-rank of the CP-tensor is comparable to, or larger than,
+    the dimensions of the input tensor, this method however requires much less
+    memory.
+
+    This method can also be implemented by taking n-mode-product with the full factors
     (faster but more memory consuming)::

         projected = multi_mode_dot(tensor, factors, skip=mode, transpose=True)
@@ -198,22 +237,6 @@
             index = tuple([slice(None) if k == mode else i for k in range(ndims)])
             res.append(projected[index])
         return T.stack(res, axis=-1)
-
-
-    The same idea could be expressed using einsum::
-
-        ndims = tl.ndim(tensor)
-        tensor_idx = ''.join(chr(ord('a') + i) for i in range(ndims))
-        rank = chr(ord('a') + ndims + 1)
-        op = tensor_idx
-        for i in range(ndims):
-            if i != mode:
-                op += ',' + ''.join([tensor_idx[i], rank])
-            else:
-                result = ''.join([tensor_idx[i], rank])
-        op += '->' + result
-        factors = [f for (i, f) in enumerate(factors) if i != mode]
-        return tl_einsum(op, tensor, *factors)
     """
     mttkrp_parts = []
     weights, factors = cp_tensor
@@ -227,8 +250,7 @@
     if weights is None:
         return T.stack(mttkrp_parts, axis=1)
     else:
        return T.stack(mttkrp_parts, axis=1) * T.reshape(weights, (1, -1))
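For orientation, a minimal usage sketch of the two implementations touched by this diff is given below. It is illustrative only and not part of the commit; the registration calls are copied from the new docstring, while `random_cp`, `cp_to_tensor`, and `tl.shape` are assumed from tensorly's public API.

    import tensorly as tl
    from tensorly.random import random_cp
    from tensorly.tenalg import unfolding_dot_khatri_rao

    # Random rank-3 CP tensor and its dense reconstruction
    cp_tensor = random_cp((10, 11, 12), rank=3)
    tensor = tl.cp_to_tensor(cp_tensor)

    # Default MTTKRP: dot(unfold(tensor, mode), khatri_rao(factors, skip_matrix=mode))
    mttkrp = unfolding_dot_khatri_rao(tensor, cp_tensor, mode=0)
    print(tl.shape(mttkrp))  # (10, 3): one column per CP component

    # Optional: switch to the memory-efficient variant, as the new docstring describes
    from tensorly.tenalg.core_tenalg.mttkrp import unfolding_dot_khatri_rao_memory
    tl.tenalg.register_backend_method("unfolding_dot_khatri_rao", unfolding_dot_khatri_rao_memory)
    tl.tenalg.use_dynamic_dispatch()
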
2 changes: 1 addition & 1 deletion dev/_sources/auto_examples/applications/plot_IL2.rst.txt
@@ -242,7 +242,7 @@ affect IL-2 signaling in immune cells.

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 1.723 seconds)
+**Total running time of the script:** (0 minutes 1.444 seconds)


.. _sphx_glr_download_auto_examples_applications_plot_IL2.py:
4 changes: 2 additions & 2 deletions dev/_sources/auto_examples/applications/plot_covid.rst.txt
@@ -201,7 +201,7 @@ how each component looks like on weights.
.. code-block:: none
-    <matplotlib.colorbar.Colorbar object at 0x7f884c658e20>
+    <matplotlib.colorbar.Colorbar object at 0x7f5568575690>
@@ -228,7 +228,7 @@ References

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 6.886 seconds)
+**Total running time of the script:** (0 minutes 4.097 seconds)


.. _sphx_glr_download_auto_examples_applications_plot_covid.py:
@@ -112,7 +112,7 @@ Example on how to use :func:`tensorly.decomposition.parafac` and :func:`tensorly
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 3.524 seconds)
+**Total running time of the script:** (0 minutes 1.435 seconds)


.. _sphx_glr_download_auto_examples_applications_plot_image_compression.py:
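The page above demonstrates CP- and Tucker-based image compression; the sketch below shows only the general CP pattern. The image array, rank, and keyword values are made-up placeholders, and only `parafac` and `cp_to_tensor` from tensorly's documented API are used.

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    # Stand-in image tensor (height x width x channels); values are arbitrary
    image = tl.tensor(np.random.random_sample((256, 256, 3)))

    # A low-rank CP model acts as the "compressed" representation
    cp_approx = parafac(image, rank=50, init="random", tol=1e-6)

    # Reconstruct to inspect the compression quality
    reconstruction = tl.cp_to_tensor(cp_approx)
    print(tl.shape(reconstruction))  # same shape as the input image
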
@@ -6,7 +6,7 @@

Computation times
=================
-**00:12.133** total execution time for 3 files **from auto_examples/applications**:
+**00:06.976** total execution time for 3 files **from auto_examples/applications**:

.. container::

@@ -33,11 +33,11 @@ Computation times
- Time
- Mem (MB)
   * - :ref:`sphx_glr_auto_examples_applications_plot_covid.py` (``plot_covid.py``)
-    - 00:06.886
-    - 0.0
-  * - :ref:`sphx_glr_auto_examples_applications_plot_image_compression.py` (``plot_image_compression.py``)
-    - 00:03.524
+    - 00:04.097
     - 0.0
   * - :ref:`sphx_glr_auto_examples_applications_plot_IL2.py` (``plot_IL2.py``)
-    - 00:01.723
+    - 00:01.444
     - 0.0
+  * - :ref:`sphx_glr_auto_examples_applications_plot_image_compression.py` (``plot_image_compression.py``)
+    - 00:01.435
+    - 0.0
@@ -88,7 +88,7 @@ Example on how to use :func:`tensorly.decomposition.parafac` with line search to
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 9.561 seconds)
+**Total running time of the script:** (0 minutes 5.194 seconds)


.. _sphx_glr_download_auto_examples_decomposition_plot_cp_line_search.py:
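The timing change above comes from the gallery example comparing plain ALS with line-search-accelerated ALS; a compact sketch of that comparison follows. The `linesearch` keyword is assumed to be the flag the example toggles, so verify it against the rendered page.

    import numpy as np
    import tensorly as tl
    from tensorly.random import random_cp
    from tensorly.decomposition import parafac

    # Synthetic low-rank tensor with a little noise
    tensor = tl.cp_to_tensor(random_cp((30, 40, 50), rank=5))
    tensor = tensor + 1e-3 * tl.tensor(np.random.random_sample(tl.shape(tensor)))

    # Plain ALS
    cp_plain, err_plain = parafac(tensor, rank=5, tol=1e-8, n_iter_max=500, return_errors=True)

    # ALS with line search (assumed keyword)
    cp_ls, err_ls = parafac(tensor, rank=5, tol=1e-8, n_iter_max=500, linesearch=True, return_errors=True)

    # Line search typically reaches the tolerance in fewer iterations
    print(len(err_plain), len(err_ls))
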
@@ -240,21 +240,21 @@ a python dictionary:
.. code-block:: none
1. factor
-   [[3.83 0. 0.6 ]
-    [3.57 0. 0.07]
-    [4.63 1.55 0.86]
-    [6.27 0.97 2.23]
-    [5.64 0.96 1.35]
-    [7.02 0.9 2.72]]
+   [[4.6 0.65 0.98]
+    [4.79 0.78 0.56]
+    [4.68 0.87 0.87]
+    [3.63 1.77 1.97]
+    [4.16 1.63 1.81]
+    [4.48 1.1 0.86]]
    2. factor
-   [[ 0.41 -0.17 -0.33]
-    [ 0.4 0.06 -0.42]
-    [ 0.42 0.1 -0.5 ]
-    [ 0.37 0.66 -0.58]
-    [ 0.44 -0.6 -0.39]
-    [ 0.43 0.27 -0.64]
-    [ 0.41 -0.31 -0.51]
-    [ 0.38 0.18 -0.55]]
+   [[ 0.22 0.67 -0.18]
+    [ 0.37 -0.04 0.13]
+    [ 0.33 -0.24 0.52]
+    [ 0.38 -0.85 0.86]
+    [ 0.29 -0.02 0.31]
+    [ 0.32 0.64 -0.4 ]
+    [ 0.28 0.61 -0.33]
+    [ 0.4 -0.76 0.51]]
Expand Down Expand Up @@ -365,32 +365,32 @@ only `True` depending on the selected constraint.
.. code-block:: none
1. factor
-   [[ 18.97 -17.93 -7.8 ]
-    [ 19.67 -24.39 -1.47]
-    [ 19.5 -8.36 -5.33]
-    [ 9.71 15.28 16.51]
-    [ 16.16 -6.45 10.39]
-    [ 12.05 6.65 13.63]]
+   [[15.09 -4.49 5.42]
+    [14.94 -3.67 5.59]
+    [15.03 10.82 4.88]
+    [11.44 -3.01 14.39]
+    [13.79 7.82 9.56]
+    [14.17 13.23 5.75]]
    2. factor
-   [[0.44 1.22 0.42]
-    [0.39 0.55 0.62]
-    [0.38 0. 0.57]
-    [0.4 0.81 0. ]
-    [0.39 0.42 0.43]
-    [0.36 0.37 0.53]
-    [0.31 0. 0.53]
-    [0.33 0.08 0.36]]
+   [[0.34 1.43 0.52]
+    [0.42 0. 0.29]
+    [0.36 0.3 0.7 ]
+    [0.33 0. 0.63]
+    [0.34 0. 0.61]
+    [0.44 0.18 0.05]
+    [0.38 0. 0.3 ]
+    [0.35 0. 0.08]]
    3. factor
-   [[ 0.08 -0.01 0.03]
-    [ 0.08 0.02 0.03]
-    [ 0.08 0.02 -0.02]
-    [ 0.07 -0.01 0.02]
-    [ 0.08 0.01 0.02]
-    [ 0.08 0.01 0.01]
-    [ 0.06 -0.01 0.04]
-    [ 0.08 0.02 0.01]
-    [ 0.08 0.02 0. ]
-    [ 0.07 0.01 0.05]]
+   [[ 0.06 0. 0.07]
+    [ 0.09 0.02 -0. ]
+    [ 0.09 -0.01 0. ]
+    [ 0.08 0.01 0.04]
+    [ 0.06 0.02 0.05]
+    [ 0.08 0.03 0.01]
+    [ 0.12 -0.02 -0.01]
+    [ 0.07 -0.01 0.04]
+    [ 0.07 -0.02 0.02]
+    [ 0.1 0. -0.01]]
@@ -416,7 +416,7 @@

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 4.667 seconds)
+**Total running time of the script:** (0 minutes 4.547 seconds)


.. _sphx_glr_download_auto_examples_decomposition_plot_guide_for_constrained_cp.py:
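The factor matrices printed above come from the constrained CP guide; a heavily simplified sketch of invoking the decomposition is shown here. The `non_negative` keyword (passed either as `True` or as a per-mode python dictionary, echoing the text above) is an assumption about the exact signature, so check the guide before reusing it.

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import constrained_parafac

    tensor = tl.tensor(np.random.random_sample((6, 8, 10)))

    # Constraint applied to every mode via a plain flag (assumed keyword)
    cp_all = constrained_parafac(tensor, rank=3, non_negative=True)

    # Constraint applied only to selected modes via a dictionary (assumed keyword)
    cp_some = constrained_parafac(tensor, rank=3, non_negative={0: True, 2: True})

    weights, factors = cp_all
    print([tl.shape(f) for f in factors])  # [(6, 3), (8, 3), (10, 3)]
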
30 changes: 15 additions & 15 deletions dev/_sources/auto_examples/decomposition/plot_nn_cp_hals.rst.txt
@@ -157,11 +157,11 @@ the case but the approximation is quite coarse.
.. code-block:: none
reconstructed tensor
-   [[[8122.51 8527.78]
-     [7602.15 8553.97]]
+   [[[8339.46 8385.85]
+     [8482.02 8314.91]]

-    [[8856.23 9085.41]
-     [8355.52 9077.47]]]
+    [[9022.14 9120.01]
+     [9121.18 9020.65]]]
input data tensor
[[[8210. 8211.]
@@ -218,11 +218,11 @@ Again, we can look at the reconstructed tensor entries.
.. code-block:: none
reconstructed tensor
-   [[[8199.7 8201.79]
-     [8218. 8208.66]]
+   [[[8211.58 8210.53]
+     [8232.58 8234.39]]

-    [[9000.56 9002.48]
-     [9018.87 9010.79]]]
+    [[9011.52 9010.65]
+     [9032.35 9034.05]]]
input data tensor
[[[8210. 8211.]
@@ -287,9 +287,9 @@ First comparison option is processing time for each algorithm:

.. code-block:: none
-   0.21 seconds
-   0.35 seconds
-   116.74 seconds
+   0.04 seconds
+   0.04 seconds
+   116.34 seconds
@@ -322,9 +322,9 @@ In Tensorly, we provide a function to calculate Root Mean Square Error (RMSE):

.. code-block:: none
-   229.19572
-   17.95952
-   0.4071443
+   226.07481
+   4.026286
+   0.7840668
@@ -386,7 +386,7 @@ Neural computation, 24(4), 1085-1105. (Link)

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (1 minutes 57.373 seconds)
+**Total running time of the script:** (1 minutes 56.486 seconds)


.. _sphx_glr_download_auto_examples_decomposition_plot_nn_cp_hals.py:
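The timings and RMSE values above compare several non-negative CP solvers from the gallery example; a hedged sketch of that comparison is below. The specific solver calls, and in particular the `exact` flag on `non_negative_parafac_hals`, are assumptions about which options drive the slow third run, so treat the snippet as illustrative only.

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import non_negative_parafac, non_negative_parafac_hals
    from tensorly.metrics.regression import RMSE

    tensor = tl.tensor(np.random.random_sample((20, 20, 20)))
    rank = 3

    cp_mu = non_negative_parafac(tensor, rank=rank)                      # multiplicative updates
    cp_hals = non_negative_parafac_hals(tensor, rank=rank)               # HALS
    cp_exact = non_negative_parafac_hals(tensor, rank=rank, exact=True)  # assumed flag

    # Root Mean Square Error of each reconstruction against the input tensor
    for cp in (cp_mu, cp_hals, cp_exact):
        print(RMSE(tensor, tl.cp_to_tensor(cp)))
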
10 changes: 5 additions & 5 deletions dev/_sources/auto_examples/decomposition/plot_nn_tucker.rst.txt
@@ -207,8 +207,8 @@ processing time:
.. code-block:: none
time for tensorly nntucker: 0.09
-   time for HALS with fista: 1.26
-   time for HALS with as: 0.31
+   time for HALS with fista: 1.24
+   time for HALS with as: 0.34
@@ -238,9 +238,9 @@ to compute Root Mean Square Error (RMSE):

.. code-block:: none
-   RMSE tensorly nntucker: 285.0544765115807
-   RMSE for hals with fista: 281.90474318437265
-   RMSE for hals with as: 282.5328542921823
+   RMSE tensorly nntucker: 285.80975994977814
+   RMSE for hals with fista: 281.61298947939366
+   RMSE for hals with as: 281.92985708859294
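For the non-negative Tucker comparison above, a sketch along these lines reproduces the three runs. The `algorithm` keyword used to switch the HALS variant between FISTA and active-set updates is an assumption about the signature of `non_negative_tucker_hals`, so confirm it against the gallery source before relying on it.

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import non_negative_tucker, non_negative_tucker_hals

    tensor = tl.tensor(np.random.random_sample((12, 12, 12)))
    rank = [4, 4, 4]

    core_mu, factors_mu = non_negative_tucker(tensor, rank=rank)  # multiplicative updates
    core_fista, factors_fista = non_negative_tucker_hals(tensor, rank=rank, algorithm="fista")      # assumed keyword
    core_as, factors_as = non_negative_tucker_hals(tensor, rank=rank, algorithm="active_set")       # assumed keyword
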
0 comments on commit 5490fa8