API specification

The EMU-MPS API is based on a series of abstract base classes that are intended to generalize into a backend-independent API. These classes are currently defined in EMU-MPS, and they will be documented here until they are moved to a more general location, probably pulser-core. While they live in this project, their specification is documented on this page.

MPSBackend

Bases: Backend

A backend for emulating Pulser sequences using Matrix Product States (MPS), aka tensor trains.

run(sequence, mps_config)

Emulates the given sequence.

PARAMETER DESCRIPTION
sequence

a Pulser sequence to simulate

TYPE: Sequence

mps_config

the backend's config; should be of type MPSConfig

TYPE: BackendConfig

RETURNS DESCRIPTION
Results

the simulation results

Source code in emu_mps/mps_backend.py
def run(self, sequence: Sequence, mps_config: BackendConfig) -> Results:
    """
    Emulates the given sequence.

    Args:
        sequence: a Pulser sequence to simulate
        mps_config: the backends config. Should be of type MPSConfig

    Returns:
        the simulation results
    """
    assert isinstance(mps_config, MPSConfig)

    self.validate_sequence(sequence)

    impl = create_impl(sequence, mps_config)
    impl.init()  # This is separate from the constructor for testing purposes.

    while not impl.is_finished():
        impl.progress()

    return impl.results
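
A minimal usage sketch, assuming `seq` is a Pulser Sequence built elsewhere and that MPSBackend and MPSConfig are importable from the emu_mps package:

from emu_mps import MPSBackend, MPSConfig

backend = MPSBackend()
config = MPSConfig(dt=10, precision=1e-5)  # see MPSConfig below for the options
results = backend.run(seq, config)         # returns a Results object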

MPSConfig

Bases: BackendConfig

The configuration of the emu-mps MPSBackend. The kwargs passed to this class are passed on to the base class; see the API of that class for a list of available options.

PARAMETER DESCRIPTION
initial_state

the initial state to use in the simulation

TYPE: State | None DEFAULT: None

dt

the timestep size that the solver uses. Note that observables are only calculated if the evaluation_times are divisible by dt.

TYPE: int DEFAULT: 10

precision

up to what precision the state is truncated

TYPE: float DEFAULT: 1e-05

max_bond_dim

the maximum bond dimension that the state is allowed to have.

TYPE: int DEFAULT: 1024

max_krylov_dim

the maximum size of the Krylov subspace that the Lanczos algorithm builds

TYPE: int DEFAULT: 100

extra_krylov_tolerance

the Lanczos algorithm uses this * precision as its convergence tolerance

TYPE: float DEFAULT: 0.001

num_gpus_to_use

during the simulation, distribute the state over this many GPUs (0 = all factors on the CPU). As shown in the benchmarks, using multiple GPUs can alleviate memory pressure per GPU, but the runtime should be similar.

TYPE: int DEFAULT: DEVICE_COUNT

kwargs

arguments that are passed to the base class

TYPE: Any DEFAULT: {}

Examples:

>>> num_gpus_to_use = 2  # use 2 GPUs if available, otherwise 1 or the CPU
>>> dt = 1  # this will impact the runtime
>>> precision = 1e-6  # smaller dt generally requires better precision
>>> MPSConfig(num_gpus_to_use=num_gpus_to_use, dt=dt, precision=precision,
...     with_modulation=True)  # the last argument comes from the base class
Source code in emu_mps/mps_config.py
def __init__(
    self,
    *,
    initial_state: State | None = None,
    dt: int = 10,
    precision: float = 1e-5,
    max_bond_dim: int = 1024,
    max_krylov_dim: int = 100,
    extra_krylov_tolerance: float = 1e-3,
    num_gpus_to_use: int = DEVICE_COUNT,
    **kwargs: Any,
):
    super().__init__(**kwargs)
    self.initial_state = initial_state
    self.dt = dt
    self.precision = precision
    self.max_bond_dim = max_bond_dim
    self.max_krylov_dim = max_krylov_dim
    self.num_gpus_to_use = num_gpus_to_use
    self.extra_krylov_tolerance = extra_krylov_tolerance

    if self.noise_model is not None:
        if "doppler" in self.noise_model.noise_types:
            raise NotImplementedError("Unsupported noise type: doppler")
        if (
            "amplitude" in self.noise_model.noise_types
            and self.noise_model.amp_sigma != 0.0
        ):
            raise NotImplementedError("Unsupported noise type: amp_sigma")
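
An illustrative sketch of configuring the backend with a custom initial state built with MPS.make (documented below); the 4-qubit size is assumed to match the register of the sequence being emulated:

from emu_mps import MPS, MPSConfig

initial = MPS.make(4)  # the |gggg> product state
config = MPSConfig(
    initial_state=initial,
    dt=5,
    precision=1e-6,
    max_bond_dim=256,
)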

MPS

Bases: State

Matrix Product State, aka tensor train.

Each tensor has 3 dimensions ordered as such: (left bond, site, right bond).

Only qubits are supported.

This constructor creates an MPS directly from a list of tensors. It is for internal use only.

PARAMETER DESCRIPTION
factors

the tensors for each site. WARNING: for efficiency in many use cases, this list of tensors IS NOT DEEP-COPIED, so the new MPS object is not necessarily the exclusive owner of the list and its tensors. As a consequence, beware of potential external modifications affecting the list or the tensors. You are responsible for deciding whether to pass this constructor its own exclusive copy of the data, or shared objects.

TYPE: List[Tensor]

orthogonality_center

the orthogonality center of the MPS, or None (in which case it will be orthogonalized when needed)

TYPE: Optional[int] DEFAULT: None

precision

the precision with which to truncate, here or in TDVP

TYPE: float DEFAULT: DEFAULT_PRECISION

max_bond_dim

the maximum bond dimension to allow

TYPE: int DEFAULT: DEFAULT_MAX_BOND_DIM

num_gpus_to_use

distribute the factors over this many GPUs (0 = all factors on the CPU, None = keep the existing device assignment).

TYPE: Optional[int] DEFAULT: DEVICE_COUNT

Source code in emu_mps/mps.py
def __init__(
    self,
    factors: List[torch.Tensor],
    /,
    *,
    orthogonality_center: Optional[int] = None,
    precision: float = DEFAULT_PRECISION,
    max_bond_dim: int = DEFAULT_MAX_BOND_DIM,
    num_gpus_to_use: Optional[int] = DEVICE_COUNT,
):
    """
    This constructor creates a MPS directly from a list of tensors. It is for internal use only.

    Args:
        factors: the tensors for each site
            WARNING: for efficiency in a lot of use cases, this list of tensors
            IS NOT DEEP-COPIED. Therefore, the new MPS object is not necessarily
            the exclusive owner of the list and its tensors. As a consequence,
            beware of potential external modifications affecting the list or the tensors.
            You are responsible for deciding whether to pass its own exclusive copy
            of the data to this constructor, or some shared objects.
        orthogonality_center: the orthogonality center of the MPS, or None (in which case
            it will be orthogonalized when needed)
        precision: the precision with which to truncate here or in tdvp
        max_bond_dim: the maximum bond dimension to allow
        num_gpus_to_use: distribute the factors over this many GPUs
            0=all factors to cpu, None=keep the existing device assignment.
    """
    self.precision = precision
    self.max_bond_dim = max_bond_dim

    assert all(
        factors[i - 1].shape[2] == factors[i].shape[0] for i in range(1, len(factors))
    ), "The dimensions of consecutive tensors should match"
    assert (
        factors[0].shape[0] == 1 and factors[-1].shape[2] == 1
    ), "The dimension of the left (right) link of the first (last) tensor should be 1"

    self.factors = factors
    self.num_sites = len(factors)
    assert self.num_sites > 1  # otherwise, do state vector

    assert (orthogonality_center is None) or (
        0 <= orthogonality_center < self.num_sites
    ), "Invalid orthogonality center provided"
    self.orthogonality_center = orthogonality_center

    if num_gpus_to_use is not None:
        assign_devices(self.factors, min(DEVICE_COUNT, num_gpus_to_use))
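
An illustrative sketch (internal use, as noted above): a two-qubit product state |rg> built from explicit (left bond, site, right bond) tensors, assuming MPS is importable from emu_mps.

import torch
from emu_mps import MPS

r = torch.tensor([[[0.0], [1.0]]], dtype=torch.complex128)  # excited state |r>
g = torch.tensor([[[1.0], [0.0]]], dtype=torch.complex128)  # ground state |g>
state = MPS([r, g], orthogonality_center=0)  # any site of a product state is a valid center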

__add__(other)

Returns the sum of two MPSs, computed with a direct algorithm. The resulting MPS is orthogonalized on the first site and truncated up to self.precision.

PARAMETER DESCRIPTION
other

the other state

TYPE: State

RETURNS DESCRIPTION
MPS

the summed state

Source code in emu_mps/mps.py
def __add__(self, other: State) -> MPS:
    """
    Returns the sum of two MPSs, computed with a direct algorithm.
    The resulting MPS is orthogonalized on the first site and truncated
    up to `self.precision`.

    Args:
        other: the other state

    Returns:
        the summed state
    """
    assert isinstance(other, MPS), "Other state also needs to be an MPS"
    new_tt = add_factors(self.factors, other.factors)
    result = MPS(
        new_tt,
        precision=self.precision,
        max_bond_dim=self.max_bond_dim,
        num_gpus_to_use=None,
        orthogonality_center=None,  # Orthogonality is lost.
    )
    result.truncate()
    return result
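
An illustrative sketch: building an (unnormalized) superposition by summing two product states, then rescaling with __rmul__; both operands are assumed to have the same number of sites.

from emu_mps import MPS

ggg = MPS.make(3)  # |ggg>
rrr = MPS.from_state_string(basis=("r", "g"), nqubits=3, strings={"rrr": 1.0})
ghz = ggg + rrr               # orthogonalized on site 0 and truncated
ghz = (1 / ghz.norm()) * ghz  # normalize by hand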

__rmul__(scalar)

Multiply an MPS by a scalar.

PARAMETER DESCRIPTION
scalar

the scale factor

TYPE: complex

RETURNS DESCRIPTION
MPS

the scaled MPS

Source code in emu_mps/mps.py
def __rmul__(self, scalar: complex) -> MPS:
    """
    Multiply an MPS by a scalar.

    Args:
        scalar: the scale factor

    Returns:
        the scaled MPS
    """
    which = (
        self.orthogonality_center
        if self.orthogonality_center is not None
        else 0  # No need to orthogonalize for scaling.
    )
    factors = scale_factors(self.factors, scalar, which=which)
    return MPS(
        factors,
        precision=self.precision,
        max_bond_dim=self.max_bond_dim,
        num_gpus_to_use=None,
        orthogonality_center=self.orthogonality_center,
    )
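
An illustrative sketch: scaling a state by a scalar rescales its norm accordingly.

from emu_mps import MPS

state = MPS.make(3)
half = 0.5 * state    # calls MPS.__rmul__
print(half.norm())    # 0.5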

apply(qubit_index, single_qubit_operator)

Apply the given single-qubit operator to qubit qubit_index, leaving the MPS orthogonalized on that qubit.

Source code in emu_mps/mps.py
def apply(self, qubit_index: int, single_qubit_operator: torch.Tensor) -> None:
    """
    Apply given single qubit operator to qubit qubit_index, leaving the MPS
    orthogonalized on that qubit.
    """
    self.orthogonalize(qubit_index)

    self.factors[qubit_index] = torch.tensordot(
        self.factors[qubit_index],
        single_qubit_operator.to(self.factors[qubit_index].device),
        ([1], [1]),
    ).transpose(1, 2)
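
An illustrative sketch: applying a single-qubit Pauli-X to qubit 1 of a 3-qubit ground state.

import torch
from emu_mps import MPS

sigma_x = torch.tensor([[0.0, 1.0], [1.0, 0.0]], dtype=torch.complex128)
state = MPS.make(3)
state.apply(1, sigma_x)  # in place; the orthogonality center moves to qubit 1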

expect_batch(single_qubit_operators)

Computes expectation values for each qubit and each single qubit operator in the batched input tensor.

Returns a tensor T such that T[q, i] is the expectation value for qubit #q and operator single_qubit_operators[i].

Source code in emu_mps/mps.py
def expect_batch(self, single_qubit_operators: torch.Tensor) -> torch.Tensor:
    """
    Computes expectation values for each qubit and each single qubit operator in
    the batched input tensor.

    Returns a tensor T such that T[q, i] is the expectation value for qubit #q
    and operator single_qubit_operators[i].
    """
    orthogonality_center = (
        self.orthogonality_center
        if self.orthogonality_center is not None
        else self.orthogonalize(0)
    )

    result = torch.zeros(
        self.num_sites, single_qubit_operators.shape[0], dtype=torch.complex128
    )

    center_factor = self.factors[orthogonality_center]
    for qubit_index in range(orthogonality_center, self.num_sites):
        temp = torch.tensordot(center_factor.conj(), center_factor, ([0, 2], [0, 2]))

        result[qubit_index] = torch.tensordot(
            single_qubit_operators.to(temp.device), temp, dims=2
        )

        if qubit_index < self.num_sites - 1:
            _, r = torch.linalg.qr(center_factor.reshape(-1, center_factor.shape[2]))
            center_factor = torch.tensordot(
                r, self.factors[qubit_index + 1].to(r.device), dims=1
            )

    center_factor = self.factors[orthogonality_center]
    for qubit_index in range(orthogonality_center - 1, -1, -1):
        _, r = torch.linalg.qr(
            center_factor.reshape(center_factor.shape[0], -1).mT,
        )
        center_factor = torch.tensordot(
            self.factors[qubit_index],
            r.to(self.factors[qubit_index].device),
            ([2], [1]),
        )

        temp = torch.tensordot(center_factor.conj(), center_factor, ([0, 2], [0, 2]))

        result[qubit_index] = torch.tensordot(
            single_qubit_operators.to(temp.device), temp, dims=2
        )

    return result
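
An illustrative sketch: evaluating two single-qubit observables on every qubit in one batched call; the operators are stacked into a tensor of shape (2, 2, 2).

import torch
from emu_mps import MPS

n_op = torch.tensor([[0.0, 0.0], [0.0, 1.0]], dtype=torch.complex128)  # |r><r|
x_op = torch.tensor([[0.0, 1.0], [1.0, 0.0]], dtype=torch.complex128)
state = MPS.make(4)
values = state.expect_batch(torch.stack([n_op, x_op]))  # complex tensor of shape (4, 2)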

from_state_string(*, basis, nqubits, strings, **kwargs) staticmethod

See the base class.

PARAMETER DESCRIPTION
basis

A tuple containing the basis states (e.g., ('r', 'g')).

TYPE: Iterable[str]

nqubits

the number of qubits.

TYPE: int

strings

A dictionary mapping state strings to complex or float amplitudes.

TYPE: dict[str, complex]

RETURNS DESCRIPTION
MPS

The resulting MPS representation of the state.

Source code in emu_mps/mps.py
@staticmethod
def from_state_string(
    *,
    basis: Iterable[str],
    nqubits: int,
    strings: dict[str, complex],
    **kwargs: Any,
) -> MPS:
    """
    See the base class.

    Args:
        basis: A tuple containing the basis states (e.g., ('r', 'g')).
        nqubits: the number of qubits.
        strings: A dictionary mapping state strings to complex or float amplitudes.

    Returns:
        The resulting MPS representation of the state.
    """

    basis = set(basis)
    if basis == {"r", "g"}:
        one = "r"
    elif basis == {"0", "1"}:
        one = "1"
    else:
        raise ValueError("Unsupported basis provided")

    basis_0 = torch.tensor([[[1.0], [0.0]]], dtype=torch.complex128)  # ground state
    basis_1 = torch.tensor([[[0.0], [1.0]]], dtype=torch.complex128)  # excited state

    accum_mps = MPS(
        [torch.zeros((1, 2, 1), dtype=torch.complex128)] * nqubits,
        orthogonality_center=0,
        **kwargs,
    )

    for state, amplitude in strings.items():
        factors = [basis_1 if ch == one else basis_0 for ch in state]
        accum_mps += amplitude * MPS(factors, **kwargs)
    norm = accum_mps.norm()
    if not math.isclose(1.0, norm, rel_tol=1e-5, abs_tol=0.0):
        print("\nThe state is not normalized, normalizing it for you.")
        accum_mps *= 1 / norm

    return accum_mps
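
An illustrative sketch: a GHZ-like state from amplitudes; the input amplitudes are not normalized, so the method normalizes the result and prints a notice.

from emu_mps import MPS

ghz = MPS.from_state_string(
    basis=("r", "g"),
    nqubits=3,
    strings={"ggg": 1.0, "rrr": 1.0},
)
print(ghz.norm())  # 1.0 after the automatic normalization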

get_correlation_matrix(*, operator=n_operator)

Efficiently compute the symmetric correlation matrix C_ij = <self|operator_i operator_j|self> in the ("r", "g") basis.

PARAMETER DESCRIPTION
operator

a 2x2 Torch tensor to use

TYPE: Tensor DEFAULT: n_operator

RETURNS DESCRIPTION
list[list[float]]

the corresponding correlation matrix

Source code in emu_mps/mps.py
def get_correlation_matrix(
    self, *, operator: torch.Tensor = n_operator
) -> list[list[float]]:
    """
    Efficiently compute the symmetric correlation matrix
        C_ij = <self|operator_i operator_j|self>
    in basis ("r", "g").

    Args:
        operator: a 2x2 Torch tensor to use

    Returns:
        the corresponding correlation matrix
    """
    assert operator.shape == (2, 2)

    result = [[0.0 for _ in range(self.num_sites)] for _ in range(self.num_sites)]

    for left in range(0, self.num_sites):
        self.orthogonalize(left)
        accumulator = torch.tensordot(
            self.factors[left],
            operator.to(self.factors[left].device),
            dims=([1], [0]),
        )
        accumulator = torch.tensordot(
            accumulator, self.factors[left].conj(), dims=([0, 2], [0, 1])
        )
        result[left][left] = accumulator.trace().item().real
        for right in range(left + 1, self.num_sites):
            partial = torch.tensordot(
                accumulator.to(self.factors[right].device),
                self.factors[right],
                dims=([0], [0]),
            )
            partial = torch.tensordot(
                partial, self.factors[right].conj(), dims=([0], [0])
            )

            result[left][right] = (
                torch.tensordot(
                    partial, operator.to(partial.device), dims=([0, 2], [0, 1])
                )
                .trace()
                .item()
                .real
            )
            result[right][left] = result[left][right]
            accumulator = tensor_trace(partial, 0, 2)

    return result
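
An illustrative sketch: density-density correlations of a state; with the default operator, entry [i][j] is <n_i n_j>.

from emu_mps import MPS

state = MPS.from_state_string(
    basis=("r", "g"), nqubits=3, strings={"ggg": 0.5**0.5, "rrr": 0.5**0.5}
)
corr = state.get_correlation_matrix()
print(corr[0][1])  # 0.5 for this GHZ-like state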

get_max_bond_dim()

Return the max bond dimension of this MPS.

RETURNS DESCRIPTION
int

the largest bond dimension in the state

Source code in emu_mps/mps.py
def get_max_bond_dim(self) -> int:
    """
    Return the max bond dimension of this MPS.

    Returns:
        the largest bond dimension in the state
    """
    return max((x.shape[2] for x in self.factors), default=0)

get_memory_footprint()

Returns the amount of memory, in MB, used to store the state.

RETURNS DESCRIPTION
float

the memory in MB

Source code in emu_mps/mps.py
def get_memory_footprint(self) -> float:
    """
    Returns the number of MBs of memory occupied to store the state

    Returns:
        the memory in MBs
    """
    return (  # type: ignore[no-any-return]
        sum(factor.element_size() * factor.numel() for factor in self.factors) * 1e-6
    )

inner(other)

Compute the inner product between this state and other. Note that self is the left state in the inner product, so this function is linear in other and anti-linear in self.

PARAMETER DESCRIPTION
other

the other state

TYPE: State

RETURNS DESCRIPTION
float | complex

inner product

Source code in emu_mps/mps.py
def inner(self, other: State) -> float | complex:
    """
    Compute the inner product between this state and other.
    Note that self is the left state in the inner product,
    so this function is linear in other, and anti-linear in self

    Args:
        other: the other state

    Returns:
        inner product
    """
    assert isinstance(other, MPS), "Other state also needs to be an MPS"
    assert (
        self.num_sites == other.num_sites
    ), "States do not have the same number of sites"

    acc = torch.ones(1, 1, dtype=self.factors[0].dtype, device=self.factors[0].device)

    for i in range(self.num_sites):
        acc = acc.to(self.factors[i].device)
        acc = torch.tensordot(acc, other.factors[i].to(acc.device), dims=1)
        acc = torch.tensordot(self.factors[i].conj(), acc, dims=([0, 1], [0, 1]))

    return acc.item()  # type: ignore[no-any-return]
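
An illustrative sketch: the overlap of two states; self is the bra, so the result is anti-linear in it.

from emu_mps import MPS

ggg = MPS.make(3)
ghz = MPS.from_state_string(
    basis=("r", "g"), nqubits=3, strings={"ggg": 0.5**0.5, "rrr": 0.5**0.5}
)
overlap = ghz.inner(ggg)  # <ghz|ggg>, roughly 1/sqrt(2)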

make(num_sites, precision=DEFAULT_PRECISION, max_bond_dim=DEFAULT_MAX_BOND_DIM, num_gpus_to_use=DEVICE_COUNT) classmethod

Returns an MPS in the ground state |000...0>.

PARAMETER DESCRIPTION
num_sites

the number of qubits

TYPE: int

precision

the precision with which to truncate, here or in TDVP

TYPE: float DEFAULT: DEFAULT_PRECISION

max_bond_dim

the maximum bond dimension to allow

TYPE: int DEFAULT: DEFAULT_MAX_BOND_DIM

num_gpus_to_use

distribute the factors over this many GPUs (0 = all factors on the CPU)

TYPE: int DEFAULT: DEVICE_COUNT

Source code in emu_mps/mps.py
@classmethod
def make(
    cls,
    num_sites: int,
    precision: float = DEFAULT_PRECISION,
    max_bond_dim: int = DEFAULT_MAX_BOND_DIM,
    num_gpus_to_use: int = DEVICE_COUNT,
) -> MPS:
    """
    Returns a MPS in ground state |000..0>.

    Args:
        num_sites: the number of qubits
        precision: the precision with which to truncate here or in tdvp
        max_bond_dim: the maximum bond dimension to allow
        num_gpus_to_use: distribute the factors over this many GPUs
            0=all factors to cpu
    """
    if num_sites <= 1:
        raise ValueError("For 1 qubit states, do state vector")

    return cls(
        [
            torch.tensor([[[1.0], [0.0]]], dtype=torch.complex128)
            for _ in range(num_sites)
        ],
        precision=precision,
        max_bond_dim=max_bond_dim,
        num_gpus_to_use=num_gpus_to_use,
        orthogonality_center=0,  # Arbitrary: every qubit is an orthogonality center.
    )
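
An illustrative sketch: the 5-qubit all-ground product state, with all factors kept on the CPU.

from emu_mps import MPS

ground = MPS.make(5, num_gpus_to_use=0)
print(ground.get_max_bond_dim())  # 1 for a product state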

norm()

Computes the norm of the MPS.

Source code in emu_mps/mps.py
def norm(self) -> float:
    """Computes the norm of the MPS."""
    orthogonality_center = (
        self.orthogonality_center
        if self.orthogonality_center is not None
        else self.orthogonalize(0)
    )

    return float(
        torch.linalg.norm(self.factors[orthogonality_center].to("cpu")).item()
    )

orthogonalize(desired_orthogonality_center=0)

Orthogonalize the state on the given orthogonality center.

Returns the new orthogonality center index as an integer, which is convenient for type-checking purposes.

Source code in emu_mps/mps.py
def orthogonalize(self, desired_orthogonality_center: int = 0) -> int:
    """
    Orthogonalize the state on the given orthogonality center.

    Returns the new orthogonality center index as an integer,
    this is convenient for type-checking purposes.
    """
    assert (
        0 <= desired_orthogonality_center < self.num_sites
    ), f"Cannot move orthogonality center to nonexistent qubit #{desired_orthogonality_center}"

    lr_swipe_start = (
        self.orthogonality_center if self.orthogonality_center is not None else 0
    )

    for i in range(lr_swipe_start, desired_orthogonality_center):
        q, r = torch.linalg.qr(self.factors[i].reshape(-1, self.factors[i].shape[2]))
        self.factors[i] = q.reshape(self.factors[i].shape[0], 2, -1)
        self.factors[i + 1] = torch.tensordot(
            r.to(self.factors[i + 1].device), self.factors[i + 1], dims=1
        )

    rl_swipe_start = (
        self.orthogonality_center
        if self.orthogonality_center is not None
        else (self.num_sites - 1)
    )

    for i in range(rl_swipe_start, desired_orthogonality_center, -1):
        q, r = torch.linalg.qr(
            self.factors[i].reshape(self.factors[i].shape[0], -1).mT,
        )
        self.factors[i] = q.mT.reshape(-1, 2, self.factors[i].shape[2])
        self.factors[i - 1] = torch.tensordot(
            self.factors[i - 1], r.to(self.factors[i - 1].device), ([2], [1])
        )

    self.orthogonality_center = desired_orthogonality_center

    return desired_orthogonality_center

sample(num_shots, p_false_pos=0.0, p_false_neg=0.0)

Samples bitstrings, taking into account the specified error rates.

PARAMETER DESCRIPTION
num_shots

how many bitstrings to sample

TYPE: int

p_false_pos

the rate at which a 0 is read as a 1

TYPE: float DEFAULT: 0.0

p_false_neg

the rate at which a 1 is read as a 0

TYPE: float DEFAULT: 0.0

RETURNS DESCRIPTION
Counter[str]

the measured bitstrings, by count

Source code in emu_mps/mps.py
def sample(
    self, num_shots: int, p_false_pos: float = 0.0, p_false_neg: float = 0.0
) -> Counter[str]:
    """
    Samples bitstrings, taking into account the specified error rates.

    Args:
        num_shots: how many bitstrings to sample
        p_false_pos: the rate at which a 0 is read as a 1
        p_false_neg: the rate at which a 1 is read as a 0

    Returns:
        the measured bitstrings, by count
    """
    self.orthogonalize(0)

    num_qubits = len(self.factors)
    rnd_matrix = torch.rand(num_shots, num_qubits)
    bitstrings = Counter(
        self._sample_implementation(rnd_matrix[x, :]) for x in range(num_shots)
    )
    if p_false_neg > 0 or p_false_pos > 0:
        bitstrings = apply_measurement_errors(
            bitstrings,
            p_false_pos=p_false_pos,
            p_false_neg=p_false_neg,
        )
    return bitstrings
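
An illustrative sketch: sampling 1000 bitstrings from a state, with a 1% false-positive readout error.

from emu_mps import MPS

state = MPS.from_state_string(
    basis=("r", "g"), nqubits=3, strings={"ggg": 0.5**0.5, "rrr": 0.5**0.5}
)
counts = state.sample(1000, p_false_pos=0.01)
print(counts.most_common(3))  # the three most frequent bitstrings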

truncate()

SVD-based truncation of the state. Puts the orthogonality center at the first qubit. Calls orthogonalize on the last qubit, then sweeps a series of SVDs right to left. Uses self.precision and self.max_bond_dim to determine the accuracy. This is an in-place operation.

Source code in emu_mps/mps.py
def truncate(self) -> None:
    """
    SVD based truncation of the state. Puts the orthogonality center at the first qubit.
    Calls orthogonalize on the last qubit, and then sweeps a series of SVDs right-left.
    Uses self.precision and self.max_bond_dim for determining accuracy.
    An in-place operation.
    """
    self.orthogonalize(self.num_sites - 1)
    truncate_impl(
        self.factors,
        max_error=self.precision,
        max_rank=self.max_bond_dim,
    )
    self.orthogonality_center = 0

inner

Wrapper around MPS.inner.

PARAMETER DESCRIPTION
left

the anti-linear argument

TYPE: MPS

right

the linear argument

TYPE: MPS

RETURNS DESCRIPTION
float | complex

the inner product

Source code in emu_mps/mps.py
def inner(left: MPS, right: MPS) -> float | complex:
    """
    Wrapper around MPS.inner.

    Args:
        left: the anti-linear argument
        right: the linear argument

    Returns:
        the inner product
    """
    return left.inner(right)

MPO

Bases: Operator

Matrix Product Operator.

Each tensor has 4 dimensions ordered as such: (left bond, output, input, right bond).

PARAMETER DESCRIPTION
factors

the tensors making up the MPO

TYPE: List[Tensor]

Source code in emu_mps/mpo.py
def __init__(
    self, factors: List[torch.Tensor], /, num_gpus_to_use: Optional[int] = None
):
    self.factors = factors
    self.num_sites = len(factors)
    if not self.num_sites > 1:
        raise ValueError("For 1 qubit states, do state vector")
    if factors[0].shape[0] != 1 or factors[-1].shape[-1] != 1:
        raise ValueError(
            "The dimension of the left (right) link of the first (last) tensor should be 1"
        )
    assert all(
        factors[i - 1].shape[-1] == factors[i].shape[0]
        for i in range(1, self.num_sites)
    )

    if num_gpus_to_use is not None:
        assign_devices(self.factors, min(DEVICE_COUNT, num_gpus_to_use))
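
An illustrative sketch: the identity operator on 3 qubits, built from explicit (left bond, output, input, right bond) factors, assuming MPO is importable from emu_mps.

import torch
from emu_mps import MPO

eye = torch.eye(2, dtype=torch.complex128).reshape(1, 2, 2, 1)
identity = MPO([eye.clone() for _ in range(3)])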

__add__(other)

Returns the sum of two MPOs, computed with a direct algorithm. The result is currently not truncated.

PARAMETER DESCRIPTION
other

the other operator

TYPE: Operator

RETURNS DESCRIPTION
MPO

the summed operator

Source code in emu_mps/mpo.py
def __add__(self, other: Operator) -> MPO:
    """
    Returns the sum of two MPOs, computed with a direct algorithm.
    The result is currently not truncated

    Args:
        other: the other operator

    Returns:
        the summed operator
    """
    assert isinstance(other, MPO), "MPO can only be added to another MPO"
    sum_factors = add_factors(self.factors, other.factors)
    return MPO(sum_factors)

__matmul__(other)

Compose two operators. The ordering is that self is applied after other.

PARAMETER DESCRIPTION
other

the operator to compose with self

TYPE: Operator

RETURNS DESCRIPTION
MPO

the composed operator

Source code in emu_mps/mpo.py
def __matmul__(self, other: Operator) -> MPO:
    """
    Compose two operators. The ordering is that
    self is applied after other.

    Args:
        other: the operator to compose with self

    Returns:
        the composed operator
    """
    assert isinstance(other, MPO), "MPO can only be applied to another MPO"
    factors = zip_right(self.factors, other.factors)
    return MPO(factors)

__mul__(other)

Applies this MPO to the given MPS. The returned MPS is:

- orthogonal on the first site
- truncated up to `other.precision`
- distributed over the same devices as `other`
PARAMETER DESCRIPTION
other

the state to apply this operator to

TYPE: State

RETURNS DESCRIPTION
MPS

the resulting state

Source code in emu_mps/mpo.py
def __mul__(self, other: State) -> MPS:
    """
    Applies this MPO to the given MPS.
    The returned MPS is:

        - orthogonal on the first site
        - truncated up to `other.precision`
        - distributed over the same devices as `other`

    Args:
        other: the state to apply this operator to

    Returns:
        the resulting state
    """
    assert isinstance(other, MPS), "MPO can only be multiplied with MPS"
    factors = zip_right(
        self.factors,
        other.factors,
        max_error=other.precision,
        max_rank=other.max_bond_dim,
    )
    return MPS(factors, orthogonality_center=0)
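
An illustrative sketch: applying an operator to a state; the result inherits the state's precision and maximum bond dimension.

import torch
from emu_mps import MPO, MPS

eye = torch.eye(2, dtype=torch.complex128).reshape(1, 2, 2, 1)
identity = MPO([eye.clone() for _ in range(3)])
state = MPS.make(3)
new_state = identity * state  # an MPS, orthogonalized on site 0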

__rmul__(scalar)

Multiply an MPO by a scalar. Assumes the orthogonality center is on the first factor.

PARAMETER DESCRIPTION
scalar

the scale factor to multiply with

TYPE: complex

RETURNS DESCRIPTION
MPO

the scaled MPO

Source code in emu_mps/mpo.py
def __rmul__(self, scalar: complex) -> MPO:
    """
    Multiply an MPO by scalar.
    Assumes the orthogonal centre is on the first factor.

    Args:
        scalar: the scale factor to multiply with

    Returns:
        the scaled MPO
    """
    factors = scale_factors(self.factors, scalar, which=0)
    return MPO(factors)

expect(state)

Compute the expectation value of self on the given state.

PARAMETER DESCRIPTION
state

the state with which to compute

TYPE: State

RETURNS DESCRIPTION
float | complex

the expectation

Source code in emu_mps/mpo.py
def expect(self, state: State) -> float | complex:
    """
    Compute the expectation value of self on the given state.

    Args:
        state: the state with which to compute

    Returns:
        the expectation
    """
    assert isinstance(
        state, MPS
    ), "currently, only expectation values of MPSs are \
    supported"
    acc = torch.ones(
        1, 1, 1, dtype=state.factors[0].dtype, device=state.factors[0].device
    )
    n = len(self.factors) - 1
    for i in range(n):
        acc = new_left_bath(acc, state.factors[i], self.factors[i]).to(
            state.factors[i + 1].device
        )
    acc = new_left_bath(acc, state.factors[n], self.factors[n])
    return acc.item()  # type: ignore [no-any-return]
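
An illustrative sketch: the expectation value of an operator on a state.

import torch
from emu_mps import MPO, MPS

eye = torch.eye(2, dtype=torch.complex128).reshape(1, 2, 2, 1)
identity = MPO([eye.clone() for _ in range(3)])
value = identity.expect(MPS.make(3))  # approximately 1 for the identity on a normalized state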

from_operator_string(basis, nqubits, operations, operators={}, /, **kwargs) staticmethod

See the base class.

PARAMETER DESCRIPTION
basis

the eigenstates of the basis to use, e.g. ('r', 'g')

TYPE: Iterable[str]

nqubits

how many qubits there are in the state

TYPE: int

operations

which single-qubit operators act on which qubits, and with what weights

TYPE: FullOp

operators

additional symbols to be used in operations

TYPE: dict[str, QuditOp] DEFAULT: {}

RETURNS DESCRIPTION
MPO

the operator in MPO form.

Source code in emu_mps/mpo.py
@staticmethod
def from_operator_string(
    basis: Iterable[str],
    nqubits: int,
    operations: FullOp,
    operators: dict[str, QuditOp] = {},
    /,
    **kwargs: Any,
) -> MPO:
    """
    See the base class

    Args:
        basis: the eigenstates in the basis to use e.g. ('r', 'g')
        nqubits: how many qubits there are in the state
        operations: which bitstrings make up the state with what weight
        operators: additional symbols to be used in operations

    Returns:
        the operator in MPO form.
    """

    _validate_operator_targets(operations, nqubits)

    basis = set(basis)
    if basis == {"r", "g"}:
        # operators will now contain the basis for single qubit ops, and potentially
        # user defined strings in terms of these
        operators |= {
            "gg": torch.tensor(
                [[1.0, 0.0], [0.0, 0.0]], dtype=torch.complex128
            ).reshape(1, 2, 2, 1),
            "gr": torch.tensor(
                [[0.0, 0.0], [1.0, 0.0]], dtype=torch.complex128
            ).reshape(1, 2, 2, 1),
            "rg": torch.tensor(
                [[0.0, 1.0], [0.0, 0.0]], dtype=torch.complex128
            ).reshape(1, 2, 2, 1),
            "rr": torch.tensor(
                [[0.0, 0.0], [0.0, 1.0]], dtype=torch.complex128
            ).reshape(1, 2, 2, 1),
        }
    elif basis == {"0", "1"}:
        # operators will now contain the basis for single qubit ops, and potentially
        # user defined strings in terms of these
        operators |= {
            "00": torch.tensor(
                [[1.0, 0.0], [0.0, 0.0]], dtype=torch.complex128
            ).reshape(1, 2, 2, 1),
            "01": torch.tensor(
                [[0.0, 0.0], [1.0, 0.0]], dtype=torch.complex128
            ).reshape(1, 2, 2, 1),
            "10": torch.tensor(
                [[0.0, 1.0], [0.0, 0.0]], dtype=torch.complex128
            ).reshape(1, 2, 2, 1),
            "11": torch.tensor(
                [[0.0, 0.0], [0.0, 1.0]], dtype=torch.complex128
            ).reshape(1, 2, 2, 1),
        }
    else:
        raise ValueError("Unsupported basis provided")

    mpos = []
    for coeff, tensorop in operations:
        # this function will recurse through the operators, and replace any definitions
        # in terms of strings by the computed tensor
        def replace_operator_string(op: QuditOp | torch.Tensor) -> torch.Tensor:
            if isinstance(op, dict):
                for opstr, coeff in op.items():
                    tensor = replace_operator_string(operators[opstr])
                    operators[opstr] = tensor
                    op[opstr] = tensor * coeff
                op = sum(cast(list[torch.Tensor], op.values()))
            return op

        factors = [
            torch.eye(2, 2, dtype=torch.complex128).reshape(1, 2, 2, 1)
        ] * nqubits

        for i, op in enumerate(tensorop):
            tensorop[i] = (replace_operator_string(op[0]), op[1])

        for op in tensorop:
            for i in op[1]:
                factors[i] = op[0]
        mpos.append(coeff * MPO(factors, **kwargs))
    return sum(mpos[1:], start=mpos[0])
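
An illustrative sketch, assuming the FullOp layout shown in the source above (a list of (coefficient, list of (operator, target qubits)) terms): the total Rydberg occupation, the sum over qubits of n_i, on 3 qubits, expressed with the built-in "rr" symbol (|r><r|).

from emu_mps import MPO

total_n = MPO.from_operator_string(
    ("r", "g"),
    3,
    [(1.0, [({"rr": 1.0}, [i])]) for i in range(3)],  # one weighted term per qubit
)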