Linear Tensor Networks

At a Glance

  • Linear tensor networks represent high-dimensional tensors as chains of smaller tensors connected by contractions.
  • Matrix Product State (MPS) represents state vectors and wavefunctions in 1D quantum systems.
  • Matrix Product Operator (MPO) represents operators and Hamiltonians in 1D quantum systems.
  • Many interesting observables and Hamiltonians have an exact MPO representation.
  • The bond dimension controls the compression ratio and approximation quality.
  • Great tools for efficient simulation of low-dimensional quantum many-body systems.

Overview

A tensor network is a structured representation of high-dimensional tensors as chains of lower-order tensors. In 1D systems, this takes the form of either a Matrix Product State (MPS) for state vectors or a Matrix Product Operator (MPO) for operators.

Rather than storing a tensor of dimension $d_1 \times d_2 \times \cdots \times d_N$ directly, a linear tensor network decomposes it into $N$ smaller tensors connected by "bond" indices. This chain structure enables:

  • Compression: Approximations with reduced bond dimension dramatically lower memory usage. With bond dimensions $\chi_i$, the memory requirement becomes $\sum_i \chi_{i-1} d_i \chi_i$.
  • Efficient algorithms: Operations like variational optimization become tractable.
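To make the compression concrete, here is a small Python sketch (illustrative only, not Aleph code) that counts the stored entries for a full tensor versus an MPS with a uniform interior bond dimension:

```python
# Count stored entries for a full tensor vs. an MPS decomposition.
# Illustrative Python, independent of Aleph's API.
N = 20    # number of sites
d = 2     # physical dimension (e.g., spin-1/2)
chi = 16  # uniform interior bond dimension; boundary bonds have size 1

full_entries = d ** N  # the full tensor stores d^N numbers

# MPS storage: sum_i chi_{i-1} * d * chi_i
bonds = [1] + [chi] * (N - 1) + [1]
mps_entries = sum(bonds[i] * d * bonds[i + 1] for i in range(N))

print(full_entries)  # 1048576
print(mps_entries)   # 9280 -- over 100x smaller
```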
Tip: For an introduction to tensors and their representation in Aleph, see the Tensor concept page. For compression techniques using factorization, see Factorization.

The Key Concept: Bond Dimension

The structure of a linear tensor network is defined by its bond dimensions: at each connection between neighboring tensors there is a shared bond index of size $\chi_i$. These bond dimensions can vary across the chain:

  • Memory usage: With varying bond dimensions $\chi_i$, memory scales as $\sum_i \chi_{i-1} d_i \chi_i$, where $d_i$ is the physical dimension at site $i$. Alternatively, in terms of the maximum bond dimension $\chi_\text{max}$, memory is roughly $O(\chi_\text{max}^2 d_\text{phys} N)$, where $N$ is the number of sites.
  • Approximation quality: Larger bond dimensions capture more entanglement and allow better approximations
  • Computational cost: Operations typically scale with the maximum bond dimension $\chi_\text{max}$

For exact representations, $\chi_i$ equals the rank of the state viewed as a matrix across bond $i$ (its Schmidt rank). For approximations, bond dimensions are truncated to balance accuracy and efficiency. Importantly, optimal representations often have smaller bonds near the boundaries and larger bonds in the center.
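The claim that the exact bond dimension is a matrix rank can be checked numerically. The following NumPy sketch (not Aleph code) builds a 4-site state from an MPS with bond dimension 3 and verifies that the rank across the middle cut is limited by that bond:

```python
import numpy as np

# The exact bond dimension at a cut equals the rank of the state
# reshaped as a matrix across that cut (its Schmidt rank).
rng = np.random.default_rng(0)

# Random 4-site state with physical dimension 2, built from an MPS of
# bond dimension 3, so the middle cut has rank at most 3.
chi = 3
A = rng.standard_normal((1, 2, chi))
B = rng.standard_normal((chi, 2, chi))
C = rng.standard_normal((chi, 2, chi))
D = rng.standard_normal((chi, 2, 1))
psi = np.einsum('aib,bjc,ckd,dle->ijkl', A, B, C, D).reshape(4, 4)

# Singular values across the middle cut: at most chi are nonzero.
s = np.linalg.svd(psi, compute_uv=False)
rank = int(np.sum(s > 1e-12))
print(rank)  # 3: the Schmidt rank is limited by the bond dimension
```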

Code example:

// Create a random MPS with physical dimension 2 (e.g., spin-1/2)
// and bond dimension 16
var psi = make_random_mps(10, 2, 16, as_complex)
print(psi.size()) // Number of sites
print(psi.max_bond()) // Maximum bond dimension

Matrix Product States (MPS)

An MPS decomposes a quantum state vector ψ|\psi\rangle as a chain of tensors:

$$\ket{\psi} = \sum_{s_0, s_1, \ldots, s_{N-1}} A^{s_0}_{a_0,a_1} A^{s_1}_{a_1,a_2} \cdots A^{s_{N-1}}_{a_{N-1},a_N} \ket{s_0 s_1 \cdots s_{N-1}}$$

where:

  • Each $A^{s_i}$ is a site tensor with three indices: bond-left $(a_i)$, physical $(s_i)$, bond-right $(a_{i+1})$
  • Physical dimension $d_p$ is the local Hilbert space dimension (e.g., 2 for qubits)
  • Bond dimension $\chi_i$ is the size of bond index $a_i$, and can vary across the chain
  • The boundary tensors have bond dimension 1 on the open edges: $\chi_0 = \chi_N = 1$
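As a sanity check of the decomposition above, this NumPy sketch (illustrative, not Aleph code) contracts a chain of site tensors with shapes (bond-left, physical, bond-right) back into the full state vector of length $d^N$:

```python
import numpy as np

# Contract an MPS back into the full state vector.
# Each site tensor has shape (bond_left, physical, bond_right).
rng = np.random.default_rng(1)
N, d, chi = 4, 2, 3

sites = []
for i in range(N):
    left = 1 if i == 0 else chi
    right = 1 if i == N - 1 else chi
    sites.append(rng.standard_normal((left, d, right)))

# Sweep left to right, absorbing one site tensor at a time.
state = sites[0]  # shape (1, d, chi)
for A in sites[1:]:
    # (1, d^k, chi) x (chi, d, chi') -> (1, d^(k+1), chi')
    state = np.einsum('apb,bqc->apqc', state, A)
    state = state.reshape(1, -1, A.shape[2])

psi = state.reshape(-1)  # full vector of length d^N
print(psi.shape)         # (16,)
```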

Canonical Form

An MPS can be put into canonical form with an orthogonality center at position $\ell$:

  • All tensors to the left of $\ell$ are in left-canonical form (left-orthogonal)
  • All tensors to the right of $\ell$ are in right-canonical form (right-orthogonal)
  • The orthogonality center tensor contains all the weight of the state

This is useful because:

  • Operations at the orthogonality center act on an effective small problem
  • Left and right boundaries automatically satisfy certain constraints
  • Algorithms can sweep through the chain systematically
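The basic move behind these sweeps is a QR factorization that left-orthogonalizes one site and pushes the remaining factor into its right neighbor. A NumPy sketch (not Aleph's implementation) of one such step:

```python
import numpy as np

# Left-orthogonalize one MPS site tensor with a QR decomposition --
# the basic move used to shift the orthogonality center to the right.
rng = np.random.default_rng(2)
chi_l, d, chi_r = 3, 2, 3

A = rng.standard_normal((chi_l, d, chi_r))
next_A = rng.standard_normal((chi_r, d, chi_r))

# Merge (bond_left, physical) into rows and factor: A = Q R.
Q, R = np.linalg.qr(A.reshape(chi_l * d, chi_r))
A_left = Q.reshape(chi_l, d, Q.shape[1])

# Q becomes the new site tensor; R is absorbed into the next site,
# moving the orthogonality center one position to the right.
next_A = np.einsum('ab,bqc->aqc', R, next_A)

# Left-canonical condition: sum over (bond_left, physical) gives identity.
check = np.einsum('asb,asc->bc', A_left, A_left)
print(np.allclose(check, np.eye(Q.shape[1])))  # True
```

DMRG-style algorithms apply this step (and its mirror image with an LQ/RQ factorization) repeatedly while sweeping back and forth along the chain.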

Code example:

// Set orthogonality center to position 5
var psi = make_random_mps(10, 2, 16, as_complex)
psi.set_orthogonality_center(5)

// Check if in canonical form
if (psi.is_canonical()) {
    print("MPS is in canonical form")
}

// Compute inner product with another MPS
var phi = make_random_mps(10, 2, 16, as_complex)
var overlap = psi.dot(phi)
print(overlap) // Complex number

Creating and Manipulating MPS

// Create a random MPS with 10 sites, physical dimension 2, and bond dimension 16
var mps = make_random_mps(10, 2, 16, as_complex)

// Get the tensor at site 3
var A3 = mps.get_site(3)
print(A3.shape()) // [bond_left, physical, bond_right]
// Note: bond dimensions may vary; this shows actual dimensions for site 3

// Check maximum bond dimension across the chain
print(mps.max_bond()) // Returns largest bond dimension used

// Normalize the MPS to unit norm
mps.normalize()

// Access site tensors directly
var first_site = mps[0]
var last_site = mps[-1] // Negative indexing supported

Matrix Product Operators (MPO)

An MPO represents an operator O^\hat{O} in a similar chain structure:

$$\hat{O} = \sum_{s_0, s_0', \ldots} W^{s_0}_{s_0',a_0,a_1} W^{s_1}_{s_1',a_1,a_2} \cdots W^{s_{N-1}}_{s_{N-1}',a_{N-1},a_N} \ket{s_0' s_1' \cdots s_{N-1}'} \bra{s_0 s_1 \cdots s_{N-1}}$$

where each $W^{s_i}_{s_i'}$ is a site tensor with:

  • Physical index (out) $s_i'$
  • Physical index (in) $s_i$
  • Bond indices (left and right) $a_i$ and $a_{i+1}$

MPOs are commonly used for:

  • Hamiltonians in quantum simulations
  • Observables for computing expectation values
  • Projectors and other many-body operators
  • Time-evolution operators in algorithms like TEBD
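To see what the chain structure encodes, the following NumPy sketch (illustrative, not Aleph's implementation) contracts a small random MPO, with site tensors shaped (bond-left, physical-out, physical-in, bond-right), into a dense $d^N \times d^N$ matrix. The cost is exponential in $N$, so this is only feasible for small systems:

```python
import numpy as np

# Contract a small random MPO chain into a dense operator matrix.
rng = np.random.default_rng(3)
N, d, chi = 3, 2, 2

# Each site tensor W has shape (bond_left, phys_out, phys_in, bond_right).
sites = []
for i in range(N):
    left = 1 if i == 0 else chi
    right = 1 if i == N - 1 else chi
    sites.append(rng.standard_normal((left, d, d, right)))

op = sites[0]  # (1, d, d, chi)
for W in sites[1:]:
    # Accumulated out/in indices grow by a factor d per absorbed site.
    op = np.einsum('aijb,bklc->aikjlc', op, W)
    s = op.shape
    op = op.reshape(1, s[1] * s[2], s[3] * s[4], s[5])

matrix = op.reshape(d ** N, d ** N)
print(matrix.shape)  # (8, 8)
```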

Constructing Hamiltonians and Observables

For constructing Hamiltonians and observables, use the Model class which provides a high-level interface for defining quantum systems. This is more convenient than manually constructing MPO tensors.

Code example:

// Create a length 10 MPO, with local dimension 2 and uniform bond dimension 4
var mpo = make_random_mpo(10, 2, 4)

// Get maximum bond dimension
print(mpo.max_bond())

// Access and set site tensors
var W = mpo.get_site(0)
mpo.set_site(0, W)

// Convert to full matrix representation (expensive for large systems!)
var full_matrix = mpo.matrix()
print(full_matrix.shape()) // [2^10, 2^10] matrix

Contraction and Overlap Computations

The primary use of linear tensor networks is efficient contraction:

MPS-MPS Overlap (Inner Product)

Computing the overlap between two MPS is efficient:

$$\braket{\psi|\phi} = \sum_{s_0, \ldots, s_{N-1}} \bar{A}^{s_0}_{1,a_1} \bar{A}^{s_1}_{a_1,a_2} \cdots \bar{A}^{s_{N-1}}_{a_{N-1},1} \; B^{s_0}_{1,b_1} B^{s_1}_{b_1,b_2} \cdots B^{s_{N-1}}_{b_{N-1},1}$$

where the $\bar{A}$ are the conjugated site tensors of $\ket{\psi}$ and the $B$ are the site tensors of $\ket{\phi}$.

This can be computed in $O(N \chi_\text{max}^3 d_p)$ time by sweeping through the chain once, where $\chi_\text{max}$ is the maximum bond dimension. This is much better than the naive $O(d_p^N)$ scaling for the full tensor.
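The sweep can be written in a few lines of NumPy (again a sketch, not Aleph code): maintain a boundary "environment" matrix and absorb one site from each chain per step, then compare against the exponential-cost dense contraction:

```python
import numpy as np

# Overlap sweep: A holds the site tensors of psi, B those of phi,
# each shaped (bond_left, physical, bond_right).
rng = np.random.default_rng(4)
N, d, chi = 6, 2, 4

def random_chain():
    sites = []
    for i in range(N):
        left = 1 if i == 0 else chi
        right = 1 if i == N - 1 else chi
        sites.append(rng.standard_normal((left, d, right)))
    return sites

A, B = random_chain(), random_chain()

# E is the boundary environment; each step contracts one site from
# each chain: E[a,b] * conj(A)[a,s,c] * B[b,s,d] -> E'[c,d].
E = np.ones((1, 1))
for Ai, Bi in zip(A, B):
    E = np.einsum('ab,asc,bsd->cd', E, Ai.conj(), Bi)

# Check against the exponential-cost dense contraction.
def to_vector(sites):
    v = sites[0]
    for T in sites[1:]:
        v = np.einsum('apb,bqc->apqc', v, T).reshape(1, -1, T.shape[2])
    return v.reshape(-1)

dense = to_vector(A) @ to_vector(B)
print(np.isclose(E[0, 0], dense))  # True
```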

var psi = make_random_mps(10, 10, 16, as_complex)
var phi = make_random_mps(10, 10, 16, as_complex)
psi.set_orthogonality_center(0)
psi.normalize()
phi.set_orthogonality_center(-1)
phi.normalize()

// Efficient inner product computation
var overlap = psi.dot(phi)
print(overlap) // Complex scalar with magnitude at most 1, since both states are normalized

Why Linear Tensor Networks Matter

1D Quantum Systems

Linear tensor networks are the natural choice for 1D quantum systems because:

  • The entanglement in ground states of local Hamiltonians is limited (area law)
  • Bounded bond dimension can capture the physics exactly or with controlled approximation
  • Algorithms like DMRG can efficiently optimize the tensor structure

Dimensional Reduction

For high-dimensional tensors (e.g., $2^{100}$-dimensional state vectors in a 100-qubit system), direct storage is impossible. MPS provides a compressed representation that:

  • Reduces memory from exponential to polynomial in system size
  • Enables algorithms that would otherwise be intractable
  • Maintains mathematical structure for efficient computation

Variational Algorithms

The MPS structure enables powerful variational algorithms:

  • DMRG (Density Matrix Renormalization Group): sweeps through sites, optimizing tensors
  • Variational quantum algorithms: use MPS as an ansatz for classical simulation
  • Time evolution: algorithms for simulating quantum dynamics

Key Properties and Operations

Orthogonality Center

The orthogonality center position is crucial for algorithms:

var mps = make_random_mps(10, 2, 16, as_complex)

// Get current center
var center = mps.get_orthogonality_center()

// Move center to a new position (efficiently via orthogonalization)
mps.set_orthogonality_center(7)

// After sweeps, center might shift
mps.adjust_orthogonality_center(5)

// Check if still canonical
if (!mps.is_canonical()) {
    mps.set_orthogonality_center(0)
}

Normalization

var mps = make_random_mps(10, 2, 16, as_complex)

// Normalize in place
mps.normalize()

// Get a normalized copy without modifying original
var mps_normalized = mps.normalized()

Applications in Aleph

  • Ground state calculation with DMRG
  • State overlap and fidelity

See Also

  • Tensor: Introduction to tensors and their representation
  • Factorization: Decomposition techniques like SVD
  • Contraction: Combining tensors via contractions
  • DMRG: The Density Matrix Renormalization Group algorithm