API specification
The EMU-MPS API is based on a series of abstract base classes, which are intended to generalize into a backend-independent API. Currently these classes are defined in EMU-MPS, and they will be documented here until they are moved to a more general location, probably pulser-core. While they live in this project, see the specification here.
MPSBackend
Bases: Backend
A backend for emulating Pulser sequences using Matrix Product States (MPS), aka tensor trains.
run(sequence, mps_config)
Emulates the given sequence.
PARAMETER | DESCRIPTION
---|---
sequence | a Pulser sequence to simulate
mps_config | the backend's config; should be of type MPSConfig

RETURNS | DESCRIPTION
---|---
Results | the simulation results
Source code in emu_mps/mps_backend.py
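For illustration, a minimal usage sketch. Only `MPSBackend.run(sequence, mps_config)` comes from this page; the register, pulse, and device below are placeholders, and the no-argument `MPSBackend()` constructor is an assumption:

```python
from pulser import Pulse, Register, Sequence
from pulser.devices import MockDevice

from emu_mps import MPSBackend, MPSConfig

# Illustrative sequence: 4 qubits, one constant global pulse.
reg = Register.square(2, spacing=6.0, prefix="q")
seq = Sequence(reg, MockDevice)
seq.declare_channel("rydberg", "rydberg_global")
seq.add(Pulse.ConstantPulse(1000, amplitude=2.0, detuning=0.0, phase=0.0), "rydberg")

backend = MPSBackend()
config = MPSConfig(dt=10, precision=1e-5)
results = backend.run(seq, config)
```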
MPSConfig
Bases: BackendConfig
The configuration of the emu-mps MPSBackend. The kwargs passed to this class are passed on to the base class; see the API of that class for a list of available options.
PARAMETER | DESCRIPTION
---|---
initial_state | the initial state to use in the simulation
dt | the timestep size that the solver uses. Note that observables are only calculated if the evaluation_times are divisible by dt.
precision | up to what precision the state is truncated
max_bond_dim | the maximum bond dimension that the state is allowed to have
max_krylov_dim | the maximum size of the Krylov subspace that the Lanczos algorithm builds
extra_krylov_tolerance | the Lanczos algorithm uses extra_krylov_tolerance * precision as its convergence tolerance
num_gpus_to_use | distribute the state over this many GPUs during the simulation; 0 puts all factors on the CPU. As shown in the benchmarks, using multiple GPUs can alleviate memory pressure per GPU, but the runtime should be similar.
kwargs | arguments that are passed to the base class
Examples:
>>> num_gpus_to_use = 2  # use 2 GPUs if available, otherwise 1 or the CPU
>>> dt = 1  # this will impact the runtime
>>> precision = 1e-6  # smaller dt generally requires better precision
>>> MPSConfig(num_gpus_to_use=num_gpus_to_use, dt=dt, precision=precision,
...     with_modulation=True)  # the last argument comes from the base class
Source code in emu_mps/mps_config.py
MPS
Bases: State
Matrix Product State, aka tensor train.
Each tensor has 3 dimensions ordered as such: (left bond, site, right bond).
Only qubits are supported.
This constructor creates an MPS directly from a list of tensors. It is for internal use only.
PARAMETER | DESCRIPTION
---|---
factors | the tensors for each site. WARNING: for efficiency, in many use cases this list of tensors IS NOT DEEP-COPIED, so the new MPS object is not necessarily the exclusive owner of the list and its tensors. Beware of potential external modifications affecting the list or the tensors. You are responsible for deciding whether to pass an exclusive copy of the data to this constructor, or shared objects.
orthogonality_center | the orthogonality center of the MPS, or None (in which case it will be orthogonalized when needed)
precision | the precision with which to truncate, here or in TDVP
max_bond_dim | the maximum bond dimension to allow
num_gpus_to_use | distribute the factors over this many GPUs; 0 puts all factors on the CPU, None keeps the existing device assignment
Source code in emu_mps/mps.py
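Since the constructor is internal, the snippet below is only a sketch of the factor convention (shapes `(left bond, site, right bond)`); user code should prefer `MPS.make` or `MPS.from_state_string`. Which physical index corresponds to which basis state is an assumption here.

```python
import torch
from emu_mps import MPS

# One bond-dimension-1 factor per site, shape (1, 2, 1), amplitude on index 0.
factor = torch.zeros(1, 2, 1, dtype=torch.complex128)
factor[0, 0, 0] = 1.0

# The list is not deep-copied, so pass fresh tensors you are willing to share.
state = MPS([factor.clone(), factor.clone(), factor.clone()])
```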
__add__(other)
Returns the sum of two MPSs, computed with a direct algorithm. The resulting MPS is orthogonalized on the first site and truncated up to `self.precision`.
PARAMETER | DESCRIPTION
---|---
other | the other state

RETURNS | DESCRIPTION
---|---
MPS | the summed state
Source code in emu_mps/mps.py
__rmul__(scalar)
Multiply an MPS by a scalar.
PARAMETER | DESCRIPTION
---|---
scalar | the scale factor

RETURNS | DESCRIPTION
---|---
MPS | the scaled MPS
Source code in emu_mps/mps.py
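A small sketch of the arithmetic described above (`MPS.make` is documented further down):

```python
from emu_mps import MPS

a = MPS.make(4)        # |0000>
b = 2.0 * MPS.make(4)  # __rmul__: scaled copy
summed = a + b         # __add__: orthogonalized on site 0, truncated to a.precision
print(summed.norm())   # 3.0 for this trivial case
```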
apply(qubit_index, single_qubit_operator)
Apply the given single-qubit operator to qubit qubit_index, leaving the MPS orthogonalized on that qubit.
Source code in emu_mps/mps.py
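A sketch, assuming the operator is passed as a plain 2x2 torch tensor (the dtype below is a guess to match a complex state):

```python
import torch
from emu_mps import MPS

state = MPS.make(3)
sigma_x = torch.tensor([[0, 1], [1, 0]], dtype=torch.complex128)
state.apply(0, sigma_x)  # in place; leaves the orthogonality center on qubit 0
```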
expect_batch(single_qubit_operators)
Computes expectation values for each qubit and each single qubit operator in the batched input tensor.
Returns a tensor T such that T[q, i] is the expectation value for qubit #q and operator single_qubit_operators[i].
Source code in emu_mps/mps.py
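A sketch, assuming the batched input is a tensor of shape `(num_operators, 2, 2)`; the operator definitions below are illustrative:

```python
import torch
from emu_mps import MPS

state = MPS.make(3)
n_op = torch.tensor([[0, 0], [0, 1]], dtype=torch.complex128)  # assumed number operator
x_op = torch.tensor([[0, 1], [1, 0]], dtype=torch.complex128)
values = state.expect_batch(torch.stack([n_op, x_op]))
print(values[0, 0], values[0, 1])  # expectation of n_op and x_op on qubit 0
```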
from_state_string(*, basis, nqubits, strings, **kwargs)
staticmethod
See the base class.
PARAMETER | DESCRIPTION
---|---
basis | a tuple containing the basis states (e.g. ('r', 'g'))
nqubits | the number of qubits
strings | a dictionary mapping state strings to complex or float amplitudes

RETURNS | DESCRIPTION
---|---
MPS | the resulting MPS representation of the state
Source code in emu_mps/mps.py
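For example, a two-qubit superposition (whether the amplitudes are normalized for you is not stated here, hence the norm check):

```python
from emu_mps import MPS

state = MPS.from_state_string(
    basis=("r", "g"),
    nqubits=2,
    strings={"rr": 1.0, "gg": 1.0},
)
print(state.norm())
```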
get_correlation_matrix(*, operator=n_operator)
Efficiently compute the symmetric correlation matrix C_ij = ⟨ψ| O_i O_j |ψ⟩, where O is the given single-qubit operator (by default the number operator n) acting on qubits i and j.
PARAMETER | DESCRIPTION
---|---
operator | a 2x2 Torch tensor to use

RETURNS | DESCRIPTION
---|---
list[list[float]] | the corresponding correlation matrix
Source code in emu_mps/mps.py
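A short sketch using the default operator (assumed to be the Rydberg number operator):

```python
from emu_mps import MPS

state = MPS.from_state_string(
    basis=("r", "g"), nqubits=2, strings={"rr": 1.0, "gg": 1.0}
)
corr = state.get_correlation_matrix()
print(corr[0][1])  # density-density correlation between qubits 0 and 1
```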
get_max_bond_dim()
Return the max bond dimension of this MPS.
RETURNS | DESCRIPTION
---|---
int | the largest bond dimension in the state
get_memory_footprint()
Returns the memory occupied to store the state, in MB.
RETURNS | DESCRIPTION
---|---
float | the memory in MB
Source code in emu_mps/mps.py
inner(other)
Compute the inner product between this state and other. Note that self is the left state in the inner product, so this function is linear in other and anti-linear in self.
PARAMETER | DESCRIPTION
---|---
other | the other state

RETURNS | DESCRIPTION
---|---
float \| complex | the inner product
Source code in emu_mps/mps.py
make(num_sites, precision=DEFAULT_PRECISION, max_bond_dim=DEFAULT_MAX_BOND_DIM, num_gpus_to_use=DEVICE_COUNT)
classmethod
Returns an MPS in the ground state |000..0>.
PARAMETER | DESCRIPTION
---|---
num_sites | the number of qubits
precision | the precision with which to truncate, here or in TDVP
max_bond_dim | the maximum bond dimension to allow
num_gpus_to_use | distribute the factors over this many GPUs; 0 puts all factors on the CPU
Source code in emu_mps/mps.py
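For example:

```python
from emu_mps import MPS

state = MPS.make(5)              # |00000>
print(state.get_max_bond_dim())  # 1 for a product state
```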
norm()
Computes the norm of the MPS.
Source code in emu_mps/mps.py
orthogonalize(desired_orthogonality_center=0)
Orthogonalize the state on the given orthogonality center.
Returns the new orthogonality center index as an integer; this is convenient for type-checking purposes.
Source code in emu_mps/mps.py
sample(num_shots, p_false_pos=0.0, p_false_neg=0.0)
Samples bitstrings, taking into account the specified error rates.
PARAMETER | DESCRIPTION
---|---
num_shots | how many bitstrings to sample
p_false_pos | the rate at which a 0 is read as a 1
p_false_neg | the rate at which a 1 is read as a 0

RETURNS | DESCRIPTION
---|---
Counter[str] | the measured bitstrings, by count
Source code in emu_mps/mps.py
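A sketch with hypothetical error rates:

```python
from emu_mps import MPS

state = MPS.make(4)
counts = state.sample(1000, p_false_pos=0.01, p_false_neg=0.02)
print(counts.most_common(3))  # Counter[str] of measured bitstrings
```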
truncate()
SVD-based truncation of the state. Puts the orthogonality center at the first qubit: calls orthogonalize on the last qubit, then sweeps a series of SVDs from right to left. Uses self.precision and self.max_bond_dim to determine the accuracy. This is an in-place operation.
Source code in emu_mps/mps.py
inner
Wrapper around MPS.inner.
PARAMETER | DESCRIPTION
---|---
left | the anti-linear argument
right | the linear argument

RETURNS | DESCRIPTION
---|---
float \| complex | the inner product
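A sketch, assuming `inner` is importable from the emu_mps package like the classes above:

```python
from emu_mps import MPS, inner

a = MPS.from_state_string(basis=("r", "g"), nqubits=2, strings={"gg": 1.0})
b = MPS.from_state_string(basis=("r", "g"), nqubits=2, strings={"rr": 1.0, "gg": 1.0})
print(inner(a, b))  # equivalent to a.inner(b); a enters anti-linearly
```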
MPO
Bases: Operator
Matrix Product Operator.
Each tensor has 4 dimensions ordered as such: (left bond, output, input, right bond).
PARAMETER | DESCRIPTION
---|---
factors | the tensors making up the MPO
Source code in emu_mps/mpo.py
__add__(other)
Returns the sum of two MPOs, computed with a direct algorithm. The result is currently not truncated.
PARAMETER | DESCRIPTION
---|---
other | the other operator

RETURNS | DESCRIPTION
---|---
MPO | the summed operator
Source code in emu_mps/mpo.py
__matmul__(other)
Compose two operators. The ordering is that self is applied after other.
PARAMETER | DESCRIPTION
---|---
other | the operator to compose with self

RETURNS | DESCRIPTION
---|---
MPO | the composed operator
Source code in emu_mps/mpo.py
__mul__(other)
Applies this MPO to the given MPS. The returned MPS is:
- orthogonal on the first site
- truncated up to `other.precision`
- distributed on the same devices as `other`
PARAMETER | DESCRIPTION
---|---
other | the state to apply this operator to

RETURNS | DESCRIPTION
---|---
MPS | the resulting state
Source code in emu_mps/mpo.py
__rmul__(scalar)
Multiply an MPO by a scalar. Assumes the orthogonality center is on the first factor.
PARAMETER | DESCRIPTION
---|---
scalar | the scale factor to multiply with

RETURNS | DESCRIPTION
---|---
MPO | the scaled MPO
Source code in emu_mps/mpo.py
expect(state)
Compute the expectation value of self on the given state.
PARAMETER | DESCRIPTION
---|---
state | the state with which to compute

RETURNS | DESCRIPTION
---|---
float \| complex | the expectation
Source code in emu_mps/mpo.py
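A sketch tying `__mul__` and `expect` together with a trivial bond-dimension-1 MPO (the identity on every qubit, for illustration); the factor layout follows the `(left bond, output, input, right bond)` convention above, and the bare `MPO(factors)` call is an assumption:

```python
import torch
from emu_mps import MPO, MPS

identity_factor = torch.eye(2, dtype=torch.complex128).reshape(1, 2, 2, 1)
op = MPO([identity_factor.clone() for _ in range(3)])

state = MPS.make(3)
new_state = op * state   # __mul__: apply the MPO to the MPS
print(op.expect(state))  # 1.0 for the identity on a normalized state
```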
from_operator_string(basis, nqubits, operations, operators={}, /, **kwargs)
staticmethod
See the base class.
PARAMETER | DESCRIPTION
---|---
basis | the eigenstates in the basis to use, e.g. ('r', 'g')
nqubits | how many qubits there are in the state
operations | which operations make up the operator, with what weight
operators | additional symbols to be used in operations

RETURNS | DESCRIPTION
---|---
MPO | the operator in MPO form
Source code in emu_mps/mpo.py