Prerequisite knowledge about Python

Object-oriented programming language

  • R: S3 methods

model <- lm(y ~ x, data)
print(model)
coef(model)
predict(model, newdata)

  • Python: Member functions, attributes

model = sklearn.linear_model.LinearRegression()
model.fit(x, y)          # member function
model.predict(newdata)   # member function
model.coef_              # attribute
model.intercept_         # attribute
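As a minimal runnable sketch (with made-up toy data), the same workflow looks like this:

import numpy as np
from sklearn.linear_model import LinearRegression

x = np.array([[1.0], [2.0], [3.0]])  # toy predictor: n samples, 1 feature
y = np.array([2.0, 4.0, 6.0])        # toy response
model = LinearRegression()
model.fit(x, y)                      # member function: estimates the coefficients
model.predict(np.array([[4.0]]))     # member function: returns array([8.])
model.coef_                          # attribute: array([2.])
model.intercept_                     # attribute: approximately 0.0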

Integrated development environment (IDE)


  • PyCharm: heavyweight; most useful for large Python projects
  • Spyder: easy to use and debug; well suited to data analysis
  • Jupyter (recommended): lightweight and convenient for data analysis

Packages

  • R:

library(stats)  # provides glm()

  • Python:

import sklearn.linear_model

NOTE:

import numpy as np
import pandas as pd
import tensorly as tl

    • import makes the entire package available.
    • from ... import brings in only part of the package.
    • Structure: Package -> Module -> Function

from tensorly.decomposition import parafac
from tensorly import random

In [46]:
import numpy as np
import pandas as pd
import tensorly as tl

Useful packages in data analysis

  • numpy: multi-dimensional container of generic data
  • pandas: data analysis and manipulation tool
  • scipy: scientific computing and technical computing
  • sklearn: machine learning library
  • matplotlib: plotting library
  • seaborn: data visualization library (fancier plotting library)
  • keras: neural-network library

Data structures

  • R:

a1 <- c(1, 2, 3)                     # vector
a2 <- matrix(1:24, 4, 6)             # matrix
a3 <- array(1:24, dim = c(2, 3, 4))  # array
a4 <- data.frame(a1, a1)             # data frame

  • Python:

    • Built-in structures

      a1 = [1, 2, 3]               # list
      a2 = [[1, 2, 3], [4, 5, 6]]  # nested list
      a3 = (1, 2, 3)               # tuple

    • Advanced structure imported from packages

      import numpy as np
      import pandas as pd
      a1 = np.array([[1, 2, 3], [4, 5, 6]])  # matrix (array)
      a1.shape
      a1.ndim
      a1.transpose()
      a2 = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=['a', 'b', 'c'])  # data frame

In [47]:
a1 = np.array([[1,2,3], [4,5,6]])

Indexing

  • R: starts from 1

a <- matrix(1:24, 4, 6)
a[1, 1]  # 1: the first row, first column

  • Python: starts from 0

a = np.arange(24).reshape(4, 6)
a[1, 1]  # 7: the second row, second column (zero-based)
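For instance, the first element in Python is at index [0, 0], and negative indices count from the end (a small illustrative sketch):

a = np.arange(24).reshape(4, 6)
a[0, 0]    # 0: first row, first column (the analogue of R's a[1,1])
a[-1, -1]  # 23: last row, last column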

Arrangement of elements in an array

  • R: elements are stored in column-major order: the first index varies fastest.

a <- array(1:24, dim = c(2, 3, 4))
a[2, 1, 1]  # 2: the second stored element

  • Python: NumPy uses row-major (C) order by default: the last index varies fastest.

a = np.arange(24).reshape(2, 3, 4)
a[1, 0, 0]  # 12: incrementing the first index jumps 3*4 = 12 positions
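NumPy can also mimic R's column-major layout through the order argument of reshape (a small sketch):

a_c = np.arange(24).reshape(2, 3, 4)             # default row-major (C) order
a_f = np.arange(24).reshape(2, 3, 4, order='F')  # column-major (Fortran/R) order
a_f[1, 0, 0]                                     # 1: the second stored element, as in R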

Syntax: Indentation

In [48]:
# Fibonacci numbers: indentation (not braces) delimits blocks
a = [0, 1]
i = 0
while True:
    a.append(a[-1] + a[-2])  # body of the while loop
    i += 1
    if i >= 10:
        break                # body of the if, one level deeper

Setup

Conda (optional)

  1. Features

    • Package manager that makes installing packages easy
    • Supports many languages: Python, R, Ruby, Lua, Scala, Java, JavaScript, C/C++, FORTRAN
    • Sets up separate environments to manage different language versions
  2. Install: Miniconda

  3. Set up a new environment (optional): conda create -n env_name python=3, then source activate env_name
  4. Install numpy: conda install numpy

Python & Jupyter Installation

  1. Python 2.x or Python 3.x: download from the official website

  2. Jupyter notebook:

    • Conda: conda install -c conda-forge notebook
    • Pip: pip install notebook

Note: TensorLy is written in Python 3, which is not compatible with Python 2.
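Two quick illustrations of the incompatibility (properties of the languages themselves):

print("hello")  # Python 3: print is a function; in Python 2 it was a statement
3 / 2           # Python 3: true division, returns 1.5 (Python 2 returned 1)
3 // 2          # floor division in both versions, returns 1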

Launch

  • macOS: open Terminal -> change to the startup folder: cd /some_folder_name -> jupyter notebook
  • Windows: Run -> cmd -> jupyter notebook

How to change the startup folder in Windows? Refer to the website.

Change modes in Jupyter

  • Markdown: press M (in command mode)
  • Code: press Y (in command mode)

Package installation

Two ways to install packages:

  • Pip (the easiest): pip install -U tensorly.
  • Anaconda (easier for managing different versions of Python): conda install -c tensorly tensorly.

Package import

In [49]:
import tensorly as tl
import numpy as np
from tensorly import random
from tensorly.decomposition import parafac, tucker, matrix_product_state
from scipy.misc import face
import matplotlib.pyplot as plt

Generate arrays with numpy

Create a $2\times 3 \times 4$ array containing the integers 0 to 23.

In [50]:
ary = np.arange(24).reshape(2, 3, 4)

More details about the package

For more details, please refer to the official API reference.

Data manipulation on tensor data

Data generation

Convert the array into a $2\times 3 \times 4$ tensor with tl.tensor.

In [51]:
tensor = tl.tensor(ary)
In [52]:
tensor.shape
Out[52]:
(2, 3, 4)
In [53]:
tensor
Out[53]:
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],

       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]]])
In [54]:
tensor[0,:,:]
Out[54]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])

Alternatively, slicing with an ellipsis (...) gives the same result (this is standard NumPy slicing, not exclusive to tensor objects):

In [55]:
tensor[0,...]
Out[55]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])

Matricization

In [56]:
unfolded = tl.unfold(tensor, mode=0)
In [57]:
unfolded.shape
Out[57]:
(2, 12)
In [58]:
unfolded
Out[58]:
array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
       [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]])

The ordering used in matricization differs from the one typically used in the literature and in R. Let $X \in \mathbb{R}^{p_1\times \ldots \times p_M}$ and let $X_{(n)}$ denote the mode-$n$ matricization of $X$. With zero-based indices, the element $(i_1, \ldots, i_M)$ of $X$ is mapped to the element $(i_n, j)$ of $X_{(n)}$, where $ j = \sum_{k=1, k\not=n}^M i_k \prod_{\ell > k,\, \ell \not= n} p_\ell. $

In [59]:
tensor[1,2,3]
Out[59]:
23
In [60]:
tensor[0,1,3]
Out[60]:
7
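A small sketch checking the index map above for mode 0 (zero-based indices):

i = (1, 2, 3)             # the element tensor[1,2,3] = 23
p = tensor.shape          # (2, 3, 4)
j = i[1] * p[2] + i[2]    # sum over modes k != 0: j = 2*4 + 3 = 11
unfolded[i[0], j]         # 23, matching tensor[1,2,3]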

Folding

In [61]:
tl.fold(unfolded, mode=0, shape=tensor.shape)
Out[61]:
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],

       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]]])
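Folding inverts unfolding; a quick sanity check over all three modes (a minimal sketch):

for m in range(tensor.ndim):
    refolded = tl.fold(tl.unfold(tensor, mode=m), mode=m, shape=tensor.shape)
    assert np.array_equal(refolded, tensor)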

Vectorization

In [62]:
tl.tensor_to_vec(tensor)
Out[62]:
array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16,
       17, 18, 19, 20, 21, 22, 23])

Let $X \in \mathbb{R}^{p_1\times \ldots \times p_M}$ and let $v$ be the vectorization of $X$. With zero-based indices, $v_j = X_{i_1, \ldots, i_M}$ where $ j = \sum_{k=1}^M i_k \prod_{\ell>k} p_\ell. $

In [63]:
tensor[1,2,3]
Out[63]:
23
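Checking the formula for the element at index (1, 2, 3) (a minimal sketch):

i = (1, 2, 3)
p = tensor.shape                              # (2, 3, 4)
j = i[0]*p[1]*p[2] + i[1]*p[2] + i[2]         # 1*12 + 2*4 + 3 = 23
tl.tensor_to_vec(tensor)[j]                   # 23, matching tensor[1,2,3]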

NOTE: Similarly, there are corresponding vectorization functions for the CP, Tucker, and TT forms.

CP, Tucker, TT form

Generation

  • R: NA
  • Matlab: ttensor, ktensor
  • Python: random_tucker, random_kruskal, random_mps (randomly generated)

Note: Only Python provides the Tensor-Train (TT) format.

In [64]:
cp_tensor = random.random_kruskal((10,8,6), rank=2, random_state=0)

tk_tensor = random.random_tucker((10,8,6), rank=2, random_state=0)

tt_tensor = random.random_mps((10,8,6), rank=(1,2,4,1), random_state=0)
In [65]:
tt_tensor[0]
Out[65]:
array([[[0.5488135 , 0.71518937],
        [0.60276338, 0.54488318],
        [0.4236548 , 0.64589411],
        [0.43758721, 0.891773  ],
        [0.96366276, 0.38344152],
        [0.79172504, 0.52889492],
        [0.56804456, 0.92559664],
        [0.07103606, 0.0871293 ],
        [0.0202184 , 0.83261985],
        [0.77815675, 0.87001215]]])
In [66]:
tt_tensor[1]
Out[66]:
array([[[0.97861834, 0.79915856, 0.46147936, 0.78052918],
        [0.11827443, 0.63992102, 0.14335329, 0.94466892],
        [0.52184832, 0.41466194, 0.26455561, 0.77423369],
        [0.45615033, 0.56843395, 0.0187898 , 0.6176355 ],
        [0.61209572, 0.616934  , 0.94374808, 0.6818203 ],
        [0.3595079 , 0.43703195, 0.6976312 , 0.06022547],
        [0.66676672, 0.67063787, 0.21038256, 0.1289263 ],
        [0.31542835, 0.36371077, 0.57019677, 0.43860151]],

       [[0.98837384, 0.10204481, 0.20887676, 0.16130952],
        [0.65310833, 0.2532916 , 0.46631077, 0.24442559],
        [0.15896958, 0.11037514, 0.65632959, 0.13818295],
        [0.19658236, 0.36872517, 0.82099323, 0.09710128],
        [0.83794491, 0.09609841, 0.97645947, 0.4686512 ],
        [0.97676109, 0.60484552, 0.73926358, 0.03918779],
        [0.28280696, 0.12019656, 0.2961402 , 0.11872772],
        [0.31798318, 0.41426299, 0.0641475 , 0.69247212]]])
In [67]:
tt_tensor[2]
Out[67]:
array([[[0.56660145],
        [0.26538949],
        [0.52324805],
        [0.09394051],
        [0.5759465 ],
        [0.9292962 ]],

       [[0.31856895],
        [0.66741038],
        [0.13179786],
        [0.7163272 ],
        [0.28940609],
        [0.18319136]],

       [[0.58651293],
        [0.02010755],
        [0.82894003],
        [0.00469548],
        [0.67781654],
        [0.27000797]],

       [[0.73519402],
        [0.96218855],
        [0.24875314],
        [0.57615733],
        [0.59204193],
        [0.57225191]]])

Converting to a full tensor

  • R: NA
  • Matlab: full(), tensor()
  • Python: kruskal_to_tensor, tucker_to_tensor, mps_to_tensor
In [68]:
tl.kruskal_to_tensor(cp_tensor).shape

tl.tucker_to_tensor(tk_tensor[0], tk_tensor[1]).shape

tl.mps_to_tensor(tt_tensor).shape
Out[68]:
(10, 8, 6)

Matricization and vectorization

  • R: NA
  • Matlab: NA
  • Python: kruskal_to_unfolded, kruskal_to_vec, ...
In [69]:
tl.kruskal_to_unfolded(cp_tensor, mode=0).shape

tl.kruskal_to_vec(cp_tensor).shape
Out[69]:
(480,)

Special functions in Tensorly

  • Partial matricization
  • Partial vectorization
In [70]:
tensor_4D = tl.tensor(np.arange(48).reshape((2,3,4,2))) 
In [71]:
tensor_4D
Out[71]:
array([[[[ 0,  1],
         [ 2,  3],
         [ 4,  5],
         [ 6,  7]],

        [[ 8,  9],
         [10, 11],
         [12, 13],
         [14, 15]],

        [[16, 17],
         [18, 19],
         [20, 21],
         [22, 23]]],


       [[[24, 25],
         [26, 27],
         [28, 29],
         [30, 31]],

        [[32, 33],
         [34, 35],
         [36, 37],
         [38, 39]],

        [[40, 41],
         [42, 43],
         [44, 45],
         [46, 47]]]])
In [72]:
tl.partial_unfold(tensor_4D, skip_begin=1, mode=0)
Out[72]:
array([[[ 0,  1,  2,  3,  4,  5,  6,  7],
        [ 8,  9, 10, 11, 12, 13, 14, 15],
        [16, 17, 18, 19, 20, 21, 22, 23]],

       [[24, 25, 26, 27, 28, 29, 30, 31],
        [32, 33, 34, 35, 36, 37, 38, 39],
        [40, 41, 42, 43, 44, 45, 46, 47]]])
In [73]:
tl.partial_tensor_to_vec(tensor_4D, skip_begin=1)
Out[73]:
array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15,
        16, 17, 18, 19, 20, 21, 22, 23],
       [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
        40, 41, 42, 43, 44, 45, 46, 47]])

This is the same as the matricization at the first mode:

In [76]:
tl.unfold(tensor_4D, mode=0)
Out[76]:
array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15,
        16, 17, 18, 19, 20, 21, 22, 23],
       [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
        40, 41, 42, 43, 44, 45, 46, 47]])

Tensor algebra

Tensor $\times$ vector(s)/matrix/matrices

  • R: ttm, ttl
  • Matlab: ttv, ttm
  • Python: mode_dot, multi_mode_dot
In [78]:
tensor = tl.tensor(np.random.random(1000).reshape((10,10,10)))
In [79]:
v1 = np.random.random(10)
v2 = np.random.random(10)
In [83]:
tl.tenalg.multi_mode_dot(tensor, matrix_or_vec_list=[v1, v2], modes=[1,2])
Out[83]:
array([16.64637615, 13.70395885, 14.38539706, 17.15135556, 13.59066707,
       15.79774032, 13.24020414, 16.11630755, 16.78667033, 13.61675607])
In [27]:
U1 = np.random.random(20).reshape(2,10)
U2 = np.random.random(30).reshape(3,10)
U3 = np.random.random(40).reshape(4,10)
In [28]:
tl.tenalg.multi_mode_dot(tensor, matrix_or_vec_list=[U1, U2, U3])
Out[28]:
array([[[74.94811597, 72.39795805, 51.21577296, 73.36920705],
        [86.61459514, 83.19362725, 58.14420655, 83.16531273],
        [72.1453675 , 69.94358717, 49.57866499, 71.00311864]],

       [[62.86524731, 60.96459118, 41.80225742, 60.62902616],
        [72.63058762, 69.65294601, 47.40830243, 68.48062897],
        [60.97180608, 58.02170576, 40.62732416, 58.21936613]]])
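For a single mode, mode_dot performs the same contraction; a minimal sketch (a vector removes the mode, a matrix resizes it):

tl.tenalg.mode_dot(tensor, v1, mode=1).shape  # (10, 10): mode 1 contracted away
tl.tenalg.mode_dot(tensor, U1, mode=0).shape  # (2, 10, 10): mode 0 resized to 2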

Note: Khatri-Rao product, Kronecker product, inner products are also provided.
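For example (a sketch, assuming the tl.tenalg names khatri_rao, kronecker, and inner):

A = np.random.random((4, 2))
B = np.random.random((5, 2))
tl.tenalg.khatri_rao([A, B]).shape  # (20, 2): column-wise Kronecker product
tl.tenalg.kronecker([A, B]).shape   # (20, 4): full Kronecker product
tl.tenalg.inner(tensor, tensor)     # scalar: inner product of the tensor with itself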

Decomposition

In [29]:
# from tensorly.decomposition import parafac, tucker, matrix_product_state

CP decomposition (via alternating least squares)

  • R: cp()
  • Matlab: parafac_als()
  • Python: parafac()
In [30]:
cp_tensor_full = tl.kruskal_to_tensor(cp_tensor) # Convert the cp decomposition to the full tensor.

Two ways of generating the initial factor matrices: random and svd.

In [94]:
parafac(tensor=cp_tensor_full, rank=2, init='random', n_iter_max=int(1e4), random_state=1)
Out[94]:
[array([[-0.10559977,  0.71252282],
        [-0.11598067,  0.54285221],
        [-0.08151734,  0.6434857 ],
        [-0.08419799,  0.88844728],
        [-0.18542341,  0.38201403],
        [-0.15233983,  0.52692421],
        [-0.10929999,  0.92214514],
        [-0.01366838,  0.08680446],
        [-0.0038898 ,  0.82951348],
        [-0.14972886,  0.86676871]]), array([[3.68095077, 0.83851125],
        [1.73579316, 0.81896234],
        [0.44486928, 0.67142952],
        [0.53919756, 0.99118248],
        [1.96286745, 0.43508105],
        [0.99508756, 0.81235602],
        [1.71575067, 0.59642409],
        [0.07066971, 0.64804628]]), array([[-0.84572911,  0.59018489],
        [-1.30397301,  0.65225839],
        [-0.49672949,  0.41808291],
        [-0.96391719,  0.05761552],
        [-0.92126771,  0.64156026],
        [-0.29068491,  0.12333646]])]
In [95]:
parafac(tensor=cp_tensor_full, rank=2, init='svd', n_iter_max=int(1e4), random_state=1)
Out[95]:
[array([[-2.35394906,  0.33328564],
        [-1.79341121,  0.36604901],
        [-2.12587236,  0.25727858],
        [-2.93514764,  0.26573906],
        [-1.26205284,  0.58521868],
        [-1.74079009,  0.48080292],
        [-3.04647466,  0.34496399],
        [-0.28677435,  0.04313907],
        [-2.74044917,  0.0122767 ],
        [-2.86352845,  0.4725624 ]]), array([[ 0.32934741, -1.54770474],
        [ 0.32166908, -0.72983737],
        [ 0.26372168, -0.18705126],
        [ 0.3893131 , -0.22671287],
        [ 0.17088956, -0.82531374],
        [ 0.31907428, -0.41839786],
        [ 0.2342613 , -0.72141022],
        [ 0.2545373 , -0.02971408]]), array([[-0.45482505, -0.63730835],
        [-0.50266188, -0.98262299],
        [-0.32219493, -0.37431591],
        [-0.04440128, -0.72637019],
        [-0.4944174 , -0.69423129],
        [-0.09504905, -0.21904876]])]
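Since cp_tensor_full comes from an exact rank-2 CP form, the fitted factors should reconstruct it almost perfectly. A quick check of the relative error (a minimal sketch):

factors = parafac(tensor=cp_tensor_full, rank=2, init='svd', n_iter_max=int(1e4), random_state=1)
tl.norm(cp_tensor_full - tl.kruskal_to_tensor(factors)) / tl.norm(cp_tensor_full)  # close to 0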

Tucker decomposition (HOOI)

  • R: tucker()
  • Matlab: tucker_als()
  • Python: tucker()

NOTE: The HOSVD algorithm is not included in Python.

In [33]:
tk_tensor_full = tl.tucker_to_tensor(tk_tensor[0], tk_tensor[1])
In [34]:
tucker(tk_tensor_full, ranks=(2,2,2), init='random', verbose=1, random_state=0)
reconsturction error=2.9124493967733347e-08, variation=-6.5647579469834255e-09.
converged in 2 iterations.
Out[34]:
(array([[[-1.29291262e+01,  2.45373253e-03],
         [-9.19237691e-04,  1.85159091e-01]],
 
        [[ 7.76285802e-04, -2.13659171e-01],
         [ 5.37001042e-01,  5.97032048e-02]]]),
 [array([[-0.32341508, -0.06443498],
         [-0.29977631,  0.11877355],
         [-0.27126999, -0.12363651],
         [-0.33194248, -0.304662  ],
         [-0.3669131 ,  0.57406501],
         [-0.35071827,  0.30316463],
         [-0.37744746, -0.21270022],
         [-0.0406078 , -0.00405343],
         [-0.19766141, -0.63751853],
         [-0.42538157,  0.02211216]]), array([[-0.59824489,  0.50814642],
         [-0.39262156, -0.05528546],
         [-0.22104867, -0.33552337],
         [-0.31420205, -0.53017488],
         [-0.31586875,  0.2793628 ],
         [-0.31453989, -0.27042064],
         [-0.33249122,  0.09371564],
         [-0.17637203, -0.43025096]]), array([[-0.43842971,  0.27359544],
         [-0.60919062, -0.06962722],
         [-0.27602428,  0.29694229],
         [-0.34028665, -0.8611019 ],
         [-0.4772423 ,  0.29547958],
         [-0.1300829 , -0.05760388]])])

Tensor-Train decomposition

  • R: NA
  • Matlab: NA
  • Python: matrix_product_state()
In [35]:
tt_tensor_full = tl.mps_to_tensor(tt_tensor)
In [36]:
tt_tensor_full.shape
Out[36]:
(10, 8, 6)
In [37]:
matrix_product_state(tt_tensor_full, rank=(1,2,4,1), verbose=1)
Out[37]:
[array([[[ 0.32326716, -0.06517303],
         [ 0.30004664,  0.11808899],
         [ 0.27098707, -0.12425538],
         [ 0.33124621, -0.30541888],
         [ 0.36822249,  0.57322601],
         [ 0.35140935,  0.3023633 ],
         [ 0.37696098, -0.21356122],
         [ 0.04059845, -0.00414611],
         [ 0.19620572, -0.63796804],
         [ 0.42543094,  0.02114114]]]),
 array([[[-4.31100679e-01,  5.15810192e-02,  7.89705449e-01,
          -4.05250954e-02],
         [-3.38459921e-01,  4.36136470e-01, -7.64760536e-02,
          -2.29711900e-02],
         [-2.91048244e-01,  2.12677105e-02, -2.15253766e-01,
          -2.60859687e-01],
         [-2.90729872e-01,  1.70861036e-01, -3.52817804e-01,
           2.91780629e-01],
         [-5.04502792e-01, -4.30916468e-01, -1.88722681e-01,
          -5.18221442e-01],
         [-3.54183528e-01, -4.53897105e-01, -1.11317350e-01,
           5.88616798e-01],
         [-2.23411128e-01,  1.80378275e-02,  1.37461224e-01,
           4.09027103e-01],
         [-3.12396141e-01,  4.55580185e-01, -1.59091474e-01,
          -8.63019757e-02]],
 
        [[-2.81019557e-02,  1.47666895e-01, -1.54560146e-01,
           8.96212302e-02],
         [ 3.83867457e-03,  2.19261498e-01, -1.16489716e-01,
           2.85211555e-02],
         [-2.05322067e-02,  1.62461142e-01,  1.12768304e-01,
          -7.13646816e-03],
         [-3.48089861e-05,  1.93459906e-01,  1.72076299e-01,
          -1.30720180e-02],
         [ 7.68781802e-03,  1.41627353e-01, -9.91201590e-02,
           1.50877150e-01],
         [ 3.65706255e-02,  4.32122059e-02, -1.10631962e-01,
          -5.14798921e-02],
         [-1.38133862e-02,  6.81893752e-02,  3.38941709e-02,
           1.42571550e-01],
         [ 8.82487592e-03, -1.32708137e-01, -6.70071248e-02,
           7.99757179e-03]]]),
 array([[[-11.23633361],
         [ -8.79730477],
         [ -9.36830937],
         [ -6.1913284 ],
         [-11.04109851],
         [-10.425839  ]],
 
        [[ -0.11218545],
         [  1.61409651],
         [ -1.33652367],
         [  1.2058251 ],
         [ -0.50939786],
         [ -0.21672365]],
 
        [[ -0.16619819],
         [ -0.07887306],
         [ -0.33661488],
         [ -0.21120459],
         [ -0.212313  ],
         [  0.898407  ]],
 
        [[ -0.17288672],
         [ -0.13153822],
         [  0.12244191],
         [  0.28878884],
         [ -0.04202463],
         [  0.06030523]]])]

Tensor predictor regression: CP tensor coefficient and Tucker tensor coefficient

Skipped

Example: image compression

In [62]:
# from scipy.misc import face
# import matplotlib.pyplot as plt
In [40]:
image = tl.tensor(face(), dtype='float64')
In [79]:
image.shape
Out[79]:
(768, 1024, 3)
In [42]:
def to_image(tensor):
    """A convenience function to convert from a float dtype back to uint8"""
    im = tl.to_numpy(tensor)
    im -= im.min()
    im /= im.max()
    im *= 255
    return im.astype(np.uint8)
In [96]:
random_state = 12345
  • CP decomposition
In [97]:
# Rank of the CP decomposition
cp_rank = 25

# Perform the CP decomposition
factors = parafac(image, rank=cp_rank, init='random', tol=10e-6, random_state=random_state)

# Reconstruct the image from the factors
cp_reconstruction = tl.kruskal_to_tensor(factors)
  • Tucker decomposition
In [98]:
# Rank of the Tucker decomposition
tucker_rank = [100, 100, 2]

# Tucker decomposition
core, tucker_factors = tucker(image, ranks=tucker_rank, init='random', tol=10e-5, random_state=random_state)

# Reconstruct the image from the factors
tucker_reconstruction = tl.tucker_to_tensor(core, tucker_factors)
  • Tensor-Train decomposition
In [103]:
# Tensor-Train decomposition
tt_rank = [1, 100, 2, 1]

tt_factors = matrix_product_state(image, rank=tt_rank)

tt_reconstruction = tl.mps_to_tensor(tt_factors)
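To see why these are compressions, count the parameters each format stores against the 768*1024*3 entries of the original image (simple arithmetic, a sketch):

n_original = 768 * 1024 * 3                       # 2,359,296 entries
n_cp = cp_rank * (768 + 1024 + 3)                 # three factor matrices
n_tucker = 100*100*2 + 768*100 + 1024*100 + 3*2   # core plus factor matrices
n_tt = 1*768*100 + 100*1024*2 + 2*3*1             # the three TT cores
for name, n in [('CP', n_cp), ('Tucker', n_tucker), ('TT', n_tt)]:
    print(name, n, round(n_original / n, 1))      # parameter count, compression ratio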
In [49]:
%timeit factors = parafac(image, rank=cp_rank, init='random', tol=10e-6, random_state = random_state)
18.5 s ± 1.87 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [60]:
%timeit core, tucker_factors = tucker(image, ranks=tucker_rank, init='random', tol=10e-5, random_state=random_state)
1.13 s ± 63.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [93]:
%timeit tt_factors = matrix_product_state(image, rank=tt_rank)
188 ms ± 15.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [101]:
fig = plt.figure(figsize=(40,30))
ax1 = plt.subplot(221)
ax1.set_axis_off()
ax1.imshow(to_image(image))
ax1.set_title("Original")
ax1.title.set_size(50)

ax2 = plt.subplot(222)
ax2.set_axis_off()
ax2.imshow(to_image(cp_reconstruction))
ax2.set_title("CP decomposition")
ax2.title.set_size(50)

ax3 = plt.subplot(223)
ax3.set_axis_off()
ax3.imshow(to_image(tucker_reconstruction))
ax3.set_title("Tucker decomposition")
ax3.title.set_size(50)

ax4 = plt.subplot(224)
ax4.set_axis_off()
ax4.imshow(to_image(tt_reconstruction))
ax4.set_title("Tensor-Train decomposition")
ax4.title.set_size(50)