
Symbolic Heisenberg Evolution

In the Heisenberg picture of quantum mechanics, operators evolve in time while states remain fixed. Let's see how to simulate this evolution in Aleph for a simple spin chain.

The Heisenberg equation of motion

For an operator A(t) evolving under a Hamiltonian H, the equation of motion is:

\frac{dA}{dt} = i[H, A(t)]

where [H, A] = HA - AH is the commutator. We'll solve this numerically using Euler's method and see how to manage the growth of operator expressions.
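Before turning to Aleph, the equation itself can be sanity-checked with a few lines of plain NumPy/SciPy (this is illustrative Python, not Aleph code): for a single qubit under H = hX, one Euler step of the Heisenberg equation should agree with the exact conjugation A(t) = e^{iHt} A e^{-iHt} up to an O(dt^2) error.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H = 0.5 * X          # single-qubit Hamiltonian h*X with h = 0.5
A = Z                # observable A(0) = Z
dt = 1e-4

# One Euler step of dA/dt = i[H, A]
A_euler = A + 1j * dt * (H @ A - A @ H)

# Exact Heisenberg evolution: A(dt) = e^{iH dt} A e^{-iH dt}
U = expm(-1j * H * dt)
A_exact = U.conj().T @ A @ U

# Euler's local error is O(dt^2), so the two agree very closely here
print(np.abs(A_euler - A_exact).max())
```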

Building the Hamiltonian

We consider the transverse-field Ising model. For N qubits with periodic boundary conditions, the Hamiltonian is

H = J \sum_{j=0}^{N-1} Z_j Z_{j+1} + h \sum_{j=0}^{N-1} X_j

where J is the coupling strength, h is the transverse field strength, and site indices are taken modulo N. We can build this in Aleph!

var N = 2;          // number of qubits
var J = 1.0;        // coupling strength
var h = 0.5;        // transverse field

var Hmt = operator_sum();   // empty ComplexOperatorSum

// ZZ interaction terms (periodic boundary conditions)
for (j : [0..N-1]) {
    var k_site = (j + 1) % N;
    Hmt += J * Z(j) * Z(k_site);
}

// X field terms
for (var m = 0; m < N; ++m) {
    Hmt += h * X(m);
}

// Simplify the Hamiltonian
Hmt = simplify_paulis(Hmt);

The simplify_paulis function is a specialized transformation for Pauli operators that combines reorder_by_site, flatten, and merge in one step.
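As a cross-check of the structure (not of the Aleph API), the same Hamiltonian can be assembled as a dense matrix with NumPy Kronecker products; the helper names below are our own:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on_site(op, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit tensor product."""
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def tfim_hamiltonian(n, J=1.0, h=0.5):
    """H = J sum_j Z_j Z_{j+1} + h sum_j X_j with periodic boundaries."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for j in range(n):
        H += J * op_on_site(Z, j, n) @ op_on_site(Z, (j + 1) % n, n)
        H += h * op_on_site(X, j, n)
    return H

H = tfim_hamiltonian(2)
# For N = 2 the periodic chain counts the single ZZ bond twice
assert np.allclose(H, 2.0 * np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X)))
assert np.allclose(H, H.conj().T)   # Hermitian, as a Hamiltonian must be
```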

Setting up the initial operator

We start with a simple observable, the Z operator on the first qubit:

var A_old = operator_sum(Z(0)); // ComplexOperatorSum

Time evolution with Euler's method

We now evolve the operator in time. Euler's method approximates the time derivative with a finite difference:

A(t + \Delta t) \approx A(t) + i\Delta t \, [H, A(t)]

Let's implement this step by step.

var Nt = 5;         // number of time steps
var dt = 0.1;       // time step size
var epsilon = 1e-3; // pruning threshold

for (step : [0..Nt]) {
    // Compute the commutator [H, A(t)]
    var rhs = commutator(Hmt, A_old);

    // Apply transformations to manage expression growth
    flatten(rhs);
    rhs = simplify_paulis(rhs);
    merge(rhs);

    // Euler step: A(t+dt) = A(t) + i*dt*[H,A(t)]
    var A_new = A_old + dt * 1i * rhs;

    // Clean up the new operator
    A_old = simplify_paulis(A_new);
    flatten(A_old);
    merge(A_old);
    prune(A_old, epsilon);

    print(A_old);
}

At each step, we see the printed result:

((1+0i) * Z(0) + (0.1+0i) * Y(0))
((0.99+0i) * Z(0) + (0.2+0i) * Y(0) + (0.02+0i) * X(0) * Z(1))
((0.97+0i) * Z(0) + (0.295+0i) * Y(0) + (0.06+0i) * X(0) * Z(1) + (0.002+0i) * X(0) * Y(1))
((0.9405+0i) * Z(0) + (0.38+0i) * Y(0) + (0.1188+0i) * X(0) * Z(1) + (0.008+0i) * X(0) * Y(1))
((0.9025+0i) * Z(0) + (0.45029+0i) * Y(0) + (0.194+0i) * X(0) * Z(1) + (0.01988+0i) * X(0) * Y(1))

Notice how the operator evolves:

  • Step 0: A Y(0) component appears due to the transverse field h*X(0)
  • Step 1: The coupling J*Z(0)*Z(1) creates the two-qubit term X(0)*Z(1)
  • Later steps: More complex terms emerge, showing operator spreading across the chain
  • Coefficients: The original Z(0) weight decreases as the operator delocalizes

Without transformations, the operator expression grows exponentially with each time step. Each commutator doubles the number of terms, and nested structures accumulate. The transformation pipeline manages this growth:

  • flatten expands nested products into a flat sum
  • simplify_paulis applies Pauli algebra rules (X^2 = I, XY = iZ, etc.)
  • merge combines identical operator terms
  • prune removes terms with coefficients below the threshold \varepsilon
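The Pauli algebra rules that simplify_paulis relies on are easy to verify numerically with 2×2 matrices (plain NumPy, for illustration):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Every product of two Paulis reduces to a single Pauli times a phase
assert np.allclose(X @ X, I2)       # X^2 = I
assert np.allclose(X @ Y, 1j * Z)   # XY = iZ
assert np.allclose(Y @ Z, 1j * X)   # YZ = iX
assert np.allclose(Z @ X, 1j * Y)   # ZX = iY
```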

Scaling to larger systems: While our 2-qubit, 5-step example remains manageable, the computational cost grows rapidly. For N = 10 qubits and 50 time steps, the expression can grow to millions of terms without pruning. The pruning threshold \varepsilon becomes critical for controlling this growth: smaller values preserve accuracy but increase memory usage, while larger values keep expressions compact at the cost of approximation error.
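To make the role of pruning concrete, here is a toy Python sketch that represents an operator as a map from Pauli strings to complex coefficients and drops small terms. This illustrates the idea only; it is not Aleph's internal representation, and the operator below is hypothetical:

```python
def prune(terms, epsilon):
    """Drop Pauli-string terms whose coefficient magnitude falls below epsilon."""
    return {p: c for p, c in terms.items() if abs(c) >= epsilon}

# Hypothetical operator: Pauli strings -> complex coefficients
A = {
    "Z0":       0.9025 + 0j,
    "Y0":       0.45029 + 0j,
    "X0 Z1":    0.194 + 0j,
    "X0 Y1":    0.01988 + 0j,
    "X0 Y1 Z2": 4e-4 + 0j,    # below the 1e-3 threshold, will be dropped
}

pruned = prune(A, 1e-3)
assert "X0 Y1 Z2" not in pruned and len(pruned) == 4
```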

Understanding the results

After evolution, A_old contains the time-evolved operator A(t) as a sum of Pauli strings with complex coefficients. You can:

  • Analyze support: See which qubits the operator acts on (it spreads from qubit 0)
  • Compute observables: Use the evolved operator to calculate expectation values

Key takeaways

  1. Heisenberg evolution tracks operator dynamics using \frac{dA}{dt} = i[H, A(t)]
  2. Euler's method provides a simple numerical integration scheme
  3. Transformations are mandatory to prevent exponential expression growth
  4. Pruning threshold \varepsilon controls the accuracy-performance tradeoff
  5. Benchmarking reveals how the simulation scales with system size

Benchmarking performance

Now that we've established all the pieces, we can combine them into a single function. This lets us easily test how performance scales with the number of qubits N.

By creating a single function that takes N as input, we can benchmark different system sizes:

var my_Heisenberg = fun(integer N)
{
    // Hamiltonian parameters
    var J = 1.0;
    var h = 0.5;

    // Time evolution parameters
    var Nt = 30;
    var dt = 0.1;
    var epsilon = 1e-3;

    // Build Hamiltonian
    var Hmt = operator_sum();

    // ZZ interaction terms
    for (j : [0..N-1]) {
        Hmt += J * Z(j) * Z((j + 1) % N);
    }

    // X field terms
    for (m : [0..N-1]) {
        Hmt += h * X(m);
    }

    Hmt = simplify_paulis(Hmt);

    // Setup initial operator
    var A_old = operator_sum(Z(0));

    var rhs = operator_sum();

    // Time evolution loop
    for (step : [0..Nt]) {
        rhs = commutator(Hmt, A_old);
        flatten(rhs);
        rhs = simplify_paulis(rhs);
        merge(rhs);

        var A_new = A_old + dt * 1i * rhs;

        A_old = simplify_paulis(A_new);
        flatten(A_old);
        merge(A_old);
        prune(A_old, epsilon);
    }
}

To run the benchmark for different system sizes, we can use the following for-loop:

// Run benchmark for various N
for (N : [2, 4, 6, 8, 10]) {
    var results = benchmark(bind(my_Heisenberg, N.integer()), 10);
    print(results.string());
}

This runs the full evolution 10 times for each system size (2, 4, 6, 8, and 10 qubits) and reports timing statistics. You'll see how the computational cost increases as you add more qubits.

BenchmarkResult(total=153.281 ms, mean=15.328 ms, best=5.952 ms, worst=39.383 ms, stddev=10.393 ms)
BenchmarkResult(total=496.970 ms, mean=49.697 ms, best=28.598 ms, worst=87.498 ms, stddev=20.349 ms)
BenchmarkResult(total=668.166 ms, mean=66.817 ms, best=59.923 ms, worst=78.522 ms, stddev=5.574 ms)
BenchmarkResult(total=959.134 ms, mean=95.913 ms, best=80.751 ms, worst=108.251 ms, stddev=9.061 ms)
BenchmarkResult(total=2.437 s, mean=243.659 ms, best=205.505 ms, worst=307.903 ms, stddev=31.664 ms)
Exit code: 0. Total Duration: 5.502 seconds / Script Duration: 4.947 seconds

Computing expectation values

Now that we have the time-evolved operator A(t), let's see how to compute expectation values for a given quantum state |\psi\rangle.

There are two ways to compute expectation values:

  1. Schrödinger picture: The state evolves in time while operators remain fixed: \langle A(t) \rangle = \langle \psi(t) | A | \psi(t) \rangle

  2. Heisenberg picture: Operators evolve in time while the state remains fixed: \langle A(t) \rangle = \langle \psi(0) | A(t) | \psi(0) \rangle

We'll implement both methods and compare their results to verify our Heisenberg evolution, tracking how observables evolve in many-body systems.
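Before implementing either picture in Aleph, we can confirm the equivalence numerically for the 2-qubit TFIM with dense matrices (plain NumPy/SciPy, for illustration; note the periodic N = 2 chain picks up the ZZ term twice):

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# 2-qubit TFIM with periodic boundaries (the single ZZ bond is counted twice)
J, h, t = 1.0, 0.5, 0.7
H = 2 * J * np.kron(Z, Z) + h * (np.kron(X, I2) + np.kron(I2, X))

A = np.kron(Z, I2)                  # observable Z(0)
psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                       # |00⟩

U = expm(-1j * H * t)

# Schrödinger picture: evolve the state, keep the operator fixed
psi_t = U @ psi0
ev_schrodinger = np.vdot(psi_t, A @ psi_t).real

# Heisenberg picture: evolve the operator, keep the state fixed
A_t = U.conj().T @ A @ U
ev_heisenberg = np.vdot(psi0, A_t @ psi0).real

assert np.isclose(ev_schrodinger, ev_heisenberg)
```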

info

Aleph provides a built-in expval(A, state) function that computes expectation values \langle \psi | A | \psi \rangle efficiently. See the expval documentation for details.

Constructing a state vector

First, we need to create a quantum state. For an N-qubit system, the state vector has dimension 2^N. Let's start with a simple example, the all-zeros computational basis state

|\psi\rangle = |00...0\rangle

// Create empty state vector container
var state_0 = zeros([2**N], as_matrix | as_complex);
var q = Qbit(N);    // |00...0⟩ → index 0
state_0[q] = 1.0;   // Set amplitude
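The indexing convention is worth spelling out: in the usual binary convention, a computational basis state |q_{N-1}...q_0⟩ maps to the integer with those bits, so |00...0⟩ sits at index 0. A NumPy sketch of the same construction:

```python
import numpy as np

N = 2
state_0 = np.zeros(2**N, dtype=complex)
state_0[0] = 1.0      # amplitude of |00...0⟩, which lives at index 0

assert np.isclose(np.linalg.norm(state_0), 1.0)   # properly normalized
```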

Schrödinger picture: Evolving the state

In the Schrödinger picture, we evolve the initial state |\psi(0)\rangle forward in time using the unitary operator U(t) = e^{-iHt}, then compute expectation values with the fixed operator A(0):

|\psi(t)\rangle = e^{-iHt}|\psi(0)\rangle, \qquad \langle A(t) \rangle = \langle \psi(t) | A(0) | \psi(t) \rangle

We'll use exact diagonalization to compute the time evolution efficiently:

def schrodingerPicture(N, J, h, dt, Nt, A_0, Hmt, state_0)
{
    // Diagonalize the Hamiltonian H = V D V†
    var Hmat = matrix(Hmt);
    var es = eigensolver(Hmat, ["is_hermitian" : true]);
    var evals = es.eigenvalues();   // Eigenvalues
    var evecs = es.eigenvectors();  // Eigenvectors

    var expecvals = [];

    // Time evolution loop
    for (var step = 0; step <= Nt; ++step) {
        var t = dt * step;

        // Compute e^(-iHt)
        var expdt_arr = exp(-1i * t * array(evals));
        var expdt_mat = matrix(expdt_arr);

        // Evolve the state |ψ(t)⟩ = e^(-iHt)|ψ(0)⟩
        var state_t = evecs * as_diagonal(expdt_mat) * evecs.adjointed() * state_0;

        // Compute expectation value ⟨ψ(t)|A_0|ψ(t)⟩
        var expecval_t = expval(A_0, state_t);

        expecvals.push_back(expecval_t.real());
    }

    return expecvals;
}
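The key numerical trick here, building e^{-iHt} from a single eigendecomposition, can be verified against a direct matrix exponential (plain NumPy/SciPy, with a random Hermitian matrix standing in for H):

```python
import numpy as np
from scipy.linalg import expm

# Random small Hermitian matrix standing in for the Hamiltonian
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2

t = 0.3
# Diagonalize once: H = V D V†
evals, V = np.linalg.eigh(H)
# Propagator from the eigendecomposition: e^{-iHt} = V e^{-iDt} V†
U_diag = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

assert np.allclose(U_diag, expm(-1j * H * t))
```

Once the eigendecomposition is computed, each new time t only costs a diagonal exponential and two matrix products, which is why the Aleph function diagonalizes H once outside the loop.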

Using the Schrödinger picture function:

We can call this function with our parameters to compute the time evolution of expectation values:

// Use the parameters and objects we defined earlier
var A_0 = operator_sum(Z(0));   // initial observable, same as A_old above
var expecvals_S = schrodingerPicture(N, J, h, dt, Nt, A_0, Hmt, state_0);

Heisenberg picture: Evolving the operator

In the Heisenberg picture, the state |\psi(0)\rangle remains fixed while the operator evolves according to the Heisenberg equation of motion. We compute the expectation value using the evolved operator and the initial state: \langle A(t) \rangle = \langle \psi(0) | A(t) | \psi(0) \rangle

Runge-Kutta 4th order (RK4) integration:

While we used Euler's method earlier for simplicity, the Runge-Kutta 4th order (RK4) method offers significantly better accuracy and stability, particularly for longer time evolutions [1, 2].

The RK4 method requires the derivative i[H, A] at four intermediate points:

k_1 = i\Delta t\,[H, A(t)]
k_2 = i\Delta t\,[H, A(t) + k_1/2]
k_3 = i\Delta t\,[H, A(t) + k_2/2]
k_4 = i\Delta t\,[H, A(t) + k_3]
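Since the right-hand side i[H, A] is the same linear map at every stage, a single RK4 step is easy to sketch in NumPy and compare against both Euler and the exact conjugation (illustrative Python, not Aleph):

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H = 0.5 * X
A0 = Z
dt = 0.05

def f(A):
    # Right-hand side of the Heisenberg equation: i[H, A]
    return 1j * (H @ A - A @ H)

# One RK4 step: k_i = dt * f(A + ...), combined with weights 1/6, 2/6, 2/6, 1/6
k1 = dt * f(A0)
k2 = dt * f(A0 + k1 / 2)
k3 = dt * f(A0 + k2 / 2)
k4 = dt * f(A0 + k3)
A_rk4 = A0 + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Exact one-step evolution A(dt) = e^{iH dt} A e^{-iH dt}
U = expm(-1j * H * dt)
A_exact = U.conj().T @ A0 @ U

# Euler step for comparison
A_euler = A0 + dt * f(A0)

err_rk4 = np.abs(A_rk4 - A_exact).max()
err_euler = np.abs(A_euler - A_exact).max()
assert err_rk4 < err_euler   # RK4 is far closer to the exact result
```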

First, we can create a helper function to compute each k stage. This function calculates i\Delta t\,[H, A] with proper simplification:

// Helper function to compute k stages: i*dt*[H, A]
def RK_kStage(A_k, Hmt, dt)
{
    var a_coeff = get_coefficients(A_k);
    var h_coeff = get_coefficients(Hmt);

    // Compute i*dt*[H, A] term by term
    var dA = operator_sum();

    for (var h_idx = 0; h_idx < h_coeff.size(); ++h_idx) {
        for (var a_idx = 0; a_idx < a_coeff.size(); ++a_idx) {

            // Compute commutator [H_term, A_term]
            var comm = commutator(Hmt[h_idx].second, A_k[a_idx].second);

            // Simplify the commutator
            var comm_sum_reorder = reorder_by_site(comm);
            var temp_sum = simplify_paulis(comm_sum_reorder);
            flatten(temp_sum);

            var comm_coeff = get_coefficients(temp_sum);

            // Assemble with proper coefficients
            for (var c_idx = 0; c_idx < comm_coeff.size(); ++c_idx) {
                var term_coeff = 1i * dt * h_coeff[h_idx] * a_coeff[a_idx] * comm_coeff[c_idx];
                dA += term_coeff * temp_sum[c_idx].second;
            }
        }
    }

    // Final simplification
    dA = simplify_paulis(dA);
    flatten(dA);
    merge(dA);

    return dA;
}

Then we combine them with weighted averaging:

A(t + \Delta t) = A(t) + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)

def evolveOneStep_RK4(N, dt, A, Hmt, epsilon)
{
    var k1 = operator_sum();
    k1 = RK_kStage(A, Hmt, dt);   // this k1 already includes dt

    k1 = simplify_paulis(k1);
    flatten(k1);
    merge(k1);

    var A_half1 = A + 0.5 * k1;
    var k2 = operator_sum();
    k2 = RK_kStage(A_half1, Hmt, dt);

    var A_half2 = A + 0.5 * k2;
    var k3 = operator_sum();
    k3 = RK_kStage(A_half2, Hmt, dt);

    var A_p_3 = A + k3;
    var k4 = operator_sum();
    k4 = RK_kStage(A_p_3, Hmt, dt);

    // clean up before adding more terms
    A = simplify_paulis(A);
    flatten(A);
    merge(A);

    A += (1.0 / 6) * (k1 + 2.0 * k2 + 2.0 * k3 + k4);
    merge(A);

    prune(A, epsilon);
    return A;
}

Complete Heisenberg picture function:

Using the RK4 helper functions we defined, here's the complete implementation:

def heisenbergPicture(N, J, h, dt, Nt, A_0, Hmt, state_0, epsilon)
{
    var expecvals = [];
    var A_t = A_0; // Current observable

    for (var step = 0; step <= Nt; ++step) {
        // Compute expectation value ⟨ψ(0)|A(t)|ψ(0)⟩
        var expecval_t = expval(A_t, state_0);
        expecvals.push_back(expecval_t.real());

        // Evolve operator to next time step
        if (step < Nt) {
            A_t = evolveOneStep_RK4(N, dt, A_t, Hmt, epsilon);
        }
    }

    return expecvals;
}
tip

If your system starts in a product state, you can pass a Qbit representing that initial state directly to expval. This keeps memory overhead minimal, allowing exact time evolution of expectation values for system sizes beyond the reach of advanced exact diagonalization methods.

Using the Heisenberg picture function:

Now we can call this function to compute the time evolution of expectation values using operator evolution:

var epsilon = 1e-5;  // Pruning threshold
var expecvals_H = heisenbergPicture(N, J, h, dt, Nt, A_0, Hmt, state_0, epsilon);

Key takeaways

  1. Two equivalent pictures: Schrödinger (evolving states) and Heisenberg (evolving operators) give identical physical predictions for expectation values
  2. Built-in expval function: Aleph provides expval(A, state) to compute \langle \psi | A | \psi \rangle. This works for dense states and product states (Qbit)
  3. Schrödinger picture uses exact diagonalization: Compute H = VDV^\dagger once, then evolve states with e^{-iHt}
    • exact but limited to small systems
  4. Heisenberg picture uses RK4 integration: Evolve operators via \frac{dA}{dt} = i[H, A(t)]
    • approximate but offers flexibility for larger systems through expression simplification and pruning
  5. RK4 over Euler time integration: The fourth-order method achieves O(\Delta t^4) global error compared to Euler's O(\Delta t), while maintaining numerical stability over longer time evolutions
  6. Pruning is essential: The epsilon parameter controls operator expression growth: smaller values preserve accuracy but increase computational cost
  7. Validation through comparison: Comparing both methods verifies implementation correctness and reveals numerical accuracy limits
  • Try different Hamiltonians (Ising model, XY model)
  • Explore adaptive pruning strategies
  • Export expectation values and visualize the comparison between both methods (see Data Files)

References

[1] Butcher, J. C. (2016). Numerical Methods for Ordinary Differential Equations (3rd ed.). John Wiley & Sons. DOI: 10.1002/9781119121534

[2] Griffiths, D. F., & Higham, D. J. (2010). Numerical Methods for Ordinary Differential Equations: Initial Value Problems. Springer. DOI: 10.1007/978-0-85729-148-6

Contributors

Eunji Yoo

Sebastien J. Avakian

Jonathon Riddell