Factorization
At a Glance
- Tensor factorization decomposes tensors into simpler components using a matrix factorization algorithm.
- Essential for compression, denoising, and tensor network algorithms.
- With matrix factorizations that produce three factors, the middle factor can be kept separate or absorbed into either of the other two factors.
- Supports truncation via tolerance (absolute error) or maximum bond dimension with rank-revealing factorizations.
Overview
Tensor factorization in Aleph decomposes a tensor into component arrays by specifying which dimensions form the "left" or "right" side of a matrix decomposition. The remaining dimensions form the other side. The tensor is conceptually reshaped into a matrix along this split and then factorized using a chosen algorithm.
Key concept: When you call factorize(T, [0, 1], options) on a tensor T with shape [d0, d1, d2, d3], dimensions 0 and 1 become the left side (size d0·d1) and dimensions 2 and 3 become the right side (size d2·d3), forming an effective d0·d1 × d2·d3 matrix. This matrix is then factorized using the specified algorithm.
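As a sketch of this grouping (options omitted to use the defaults; the comments follow the shapes described above):

```
// T has shape [2, 3, 4, 5]
var T = random_tensor([2, 3, 4, 5], as_real)
// Dimensions [0, 1] form the left side (2*3 = 6);
// dimensions [2, 3] form the right side (4*5 = 20).
// The effective matrix is 6 x 20, so the middle index has size at most min(6, 20) = 6.
var factors = factorize(T, [0, 1])
print(factors[1].size()) // 6: size of the middle index
```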
Key Concepts
Bond Dimension
The "bond dimension" is the size of the middle index created by factorization. It equals min(left size, right size) if no truncation occurs:
var T = random_tensor([6, 8, 10], as_real)
var factors = factorize(T, [0, 1]) // Left=48, Right=10
var S = factors[1]
print(S.size()) // 10 (the smaller dimension)
Truncation reduces the bond dimension further, enabling compression.
Truncation Error
When using tolerance-based truncation, small singular values are discarded. The error calculation is controlled by two options:
- measure: How errors accumulate ("Cumulative" sums errors across discarded values, "None" disables truncation)
- norm: Which norm to use ("SquareModulus" squares the singular values)
Currently, the default and only available combination is measure: "Cumulative" with norm: "SquareModulus", which sums the squares of discarded singular values.
Comparison: Absorption Options
| Option | Output | Use Case |
|---|---|---|
| "None" | [U, S, V] | Access singular values separately; most flexible |
| "Left" | [U.S, V] | Prepare for left-to-right sweeps in tensor networks |
| "Right" | [U, S.V] | Prepare for right-to-left sweeps in tensor networks |
| "Both" | [U.√S, √S.V] | Symmetric treatment; useful for some algorithms |
Absorption behavior is specific to factorizations that produce a diagonal matrix, such as the singular value decomposition. Other factorization algorithms may ignore this option.
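A quick way to see the difference between absorption options is to compare what comes back; this sketch reuses the shapes from the basic SVD example on this page:

```
var T = random_tensor([6, 8, 10], as_real)
// "None": three factors, singular values kept separate
var none = factorize(T, [0, 1], ["absorb": "None"])
// none[0] = U, none[1] = S, none[2] = V
// "Left": two factors, singular values folded into the left one
var left = factorize(T, [0, 1], ["absorb": "Left"])
print(left[0].shape()) // [6, 8, 10]: U with S multiplied in
print(left[1].shape()) // [10, 10]: V unchanged
```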
Why Factorization Matters: Information Compression
Factorization allows us to identify and retain the most important components while discarding noise and small contributions:
// Decompose a 3D tensor and compress
var T = random_tensor([100, 100, 50], as_real)
// Factorize with first two dimensions as left side
var options = ["absorb": "None", "tolerance": 1e-6]
var factors = factorize(T, [0, 1], options)
// factors[0]: left unitary (100 x 100 x bond_size)
// factors[1]: singular values (bond_size)
// factors[2]: right unitary (bond_size x 50)
This information compression property is the primary motivation for tensor network methods. Through the lens of tensor factorization, one can understand a tensor network as a compressed representation of a higher dimensional tensor. The network structure creates opportunities for efficient computation and clever approximations. Factorization is a common operation in tensor network algorithms such as the Density Matrix Renormalization Group.
A tensor network is a network of tensors in which each node is a tensor and each edge represents a contraction over a shared index between two tensors.
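Concretely, a two-node network joined by a single edge is evaluated with one contraction (a sketch using contract from the Contraction page):

```
// Two nodes sharing one bond index of size 3
var A = random_tensor([4, 3], as_real)
var B = random_tensor([3, 5], as_real)
// The edge is a contraction over the shared index
var AB = contract(A, B, [1], [0])
print(AB.shape()) // [4, 5]
```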
Singular Value Decomposition (SVD)
SVD decomposes an effective matrix (formed from grouped tensor dimensions) into orthogonal and diagonal components. It is the most versatile factorization and is currently the only tensor factorization algorithm available in Aleph.
Mathematical notation:
For a tensor reshaped along the chosen split into an effective matrix M:

M = U S V†

where U and V are unitary matrices, and S is diagonal with singular values σ₁ ≥ σ₂ ≥ … ≥ 0.
Code example:
// Basic SVD factorization
var T = random_tensor([6, 8, 10], as_real)
// Factorize with first two dimensions as left side
var factors = factorize(T, [0, 1])
// Returns three tensors: left unitary, singular values, right unitary
var U = factors[0] // [6, 8, 10]
var S = factors[1] // [10]
var V = factors[2] // [10, 10]
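To verify a factorization, it is easiest to absorb S first; with absorb "Left", a single contraction over the bond index rebuilds the original tensor (up to floating-point error):

```
var T = random_tensor([6, 8, 10], as_real)
var factors = factorize(T, [0, 1], ["absorb": "Left"])
var US = factors[0] // [6, 8, 10]
var V = factors[1]  // [10, 10]
// Contract the bond: last index of US with first index of V
var T2 = contract(US, V, [US.order()-1], [0])
print(T2.shape()) // [6, 8, 10], matching T
```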
Absorption of Singular Values
The absorb option controls where singular values go in the factorization output. This is specific to SVD and algorithms that produce diagonal factors.
No Absorption (absorb: "None")
Returns three tensors: left unitary, singular values, right unitary
var factors = factorize(T, [0, 1], ["absorb": "None"])
var U = factors[0]
var S = factors[1]
var V = factors[2]
Left Absorption (absorb: "Left")
Singular values are absorbed into the left factor:
var factors = factorize(T, [0, 1], ["absorb": "Left"])
var US = factors[0]
var V = factors[1]
print(US.shape()) // Singular values multiplied into left unitary
print(V.shape()) // Right unitary unchanged
// Reconstruction: contract(US, V, [US.order()-1], [0])
Right Absorption (absorb: "Right")
Singular values are absorbed into the right factor:
var factors = factorize(T, [0, 1], ["absorb": "Right"])
var U = factors[0]
var SV = factors[1]
print(U.shape()) // Left unitary unchanged
print(SV.shape()) // Singular values multiplied into right unitary
Both Absorption (absorb: "Both")
Singular values are split equally: √S is absorbed into each factor:
var factors = factorize(T, [0, 1], ["absorb": "Both"])
var U = factors[0]
var V = factors[1]
// Both contain square root of the singular values
// Reconstruction: contract(U, V, [U.order()-1], [0])
Truncation
Compression via Tolerance
The tolerance option discards small singular values, performing lossy compression. This is currently supported only for SVD; other algorithms may have different truncation strategies.
Truncation can be fully deactivated by setting "measure": "None", which keeps all singular values regardless of the tolerance parameter.
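A sketch of turning truncation off explicitly:

```
var T = random_tensor([100, 100, 50], as_real)
// measure "None" keeps every singular value, even though a tolerance is set
var options = ["absorb": "None", "tolerance": 1e-4, "measure": "None"]
var factors = factorize(T, [0, 1], options)
print(factors[1].size()) // 50: the full, untruncated bond dimension
```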
Mathematical notation:
With measure "Cumulative" (default) and norm "SquareModulus" (default), keep the first k singular values such that the discarded tail satisfies:

σ_{k+1}² + σ_{k+2}² + … ≤ tolerance

This is an absolute error threshold: the smallest singular values are discarded as long as the sum of their squares stays below the tolerance.
Code example:
var T = random_tensor([100, 100, 50], as_real)
// Keep only singular values that contribute significantly
// measure: "Cumulative" sums errors across discarded values
// norm: "SquareModulus" uses squared singular values for error calculation
var options = ["absorb": "None", "tolerance": 1e-4, "measure": "Cumulative", "norm": "SquareModulus"]
var factors = factorize(T, [0, 1], options)
var U = factors[0]
var S = factors[1]
var V = factors[2]
print(S.size()) // Fewer singular values due to truncation
// Compression achieved: smaller bond dimension
Maximum Bond Dimension
The max_size option limits the bond dimension after factorization. It is mostly intended as a protection against uncontrolled growth that can occur in some tensor network algorithms. Whenever this truncation method comes into play, there is a strong possibility that the accuracy of the final result is uncontrolled. If the end state of the algorithm is a strong attractor, the end result will still be reliable.
Code example:
var T = random_tensor([100, 100, 50], as_real)
// Keep at most 20 singular values
var options = ["absorb": "None", "max_size": 20]
var factors = factorize(T, [0, 1], options)
var U = factors[0]
var S = factors[1]
var V = factors[2]
print(S.size()) // At most 20
// If original had more than 20, keeps largest 20
Which Side Is Specified
By default, specified dimensions form the left side. Use side: "right" to reverse:
var T = random_tensor([2, 3, 4, 5], as_real)
// Factorize with dimensions [2, 3] as RIGHT side
var options = ["side": "right", "absorb": "None"]
var factors = factorize(T, [2, 3], options)
var U = factors[0]
var S = factors[1]
var V = factors[2]
// Now dimensions [0, 1] form left side, [2, 3] form right side
SVD on Higher-Dimensional Tensors
For tensors of any order, factorize() automatically groups dimensions appropriately.
Code example:
// 5D tensor
var T = random_tensor([2, 3, 4, 5, 6], as_real)
// Factorize with first three dimensions as left side
var factors = factorize(T, [0, 1, 2])
var U = factors[0]
var S = factors[1]
var V = factors[2]
print(U.shape()) // [2, 3, 4, bond_dim]
print(S.shape()) // [bond_dim]
print(V.shape()) // [bond_dim, 5, 6]
Related Topics
- Tensor: Introduction to tensors and their properties
- Contraction: Combining factorized tensors via contraction
- Tensor networks: MPS, PEPS, and factorization-based representations
Further Reading
- For tensor basics, see the Tensor page
- For tensor contraction operations, see the Contraction page