API specification
The emu-mps API is based on the specification here. Concretely, the classes are as follows:
MPSBackend
Bases: EmulatorBackend
A backend for emulating Pulser sequences using Matrix Product States (MPS), aka tensor trains.
Source code in pulser/backend/abc.py
resume(autosave_file)
staticmethod
Resume simulation from autosave file. Only resume simulations from data you trust! Unpickling of untrusted data is not safe.
Source code in emu_mps/mps_backend.py
run()
Emulates the given sequence.
RETURNS | DESCRIPTION
---|---
Results | the simulation results
Source code in emu_mps/mps_backend.py
MPSConfig
Bases: EmulationConfig
The configuration of the emu-mps MPSBackend. The kwargs passed to this class are passed on to the base class. See the API for that class for a list of available options.
PARAMETER | DESCRIPTION
---|---
dt | the timestep size that the solver uses. Note that observables are only calculated if the evaluation_times are divisible by dt.
precision | up to what precision the state is truncated
max_bond_dim | the maximum bond dimension that the state is allowed to have
max_krylov_dim | the maximum size of the Krylov subspace that the Lanczos algorithm builds
extra_krylov_tolerance | the Lanczos algorithm uses extra_krylov_tolerance * precision as its convergence tolerance
num_gpus_to_use | during the simulation, distribute the state over this many GPUs; 0 means all factors on the CPU. As shown in the benchmarks, using multiple GPUs may alleviate memory pressure per GPU, but the runtime should be similar.
autosave_prefix | filename prefix for autosaving the simulation state to file
autosave_dt | minimum time interval in seconds between two autosaves. Saving the simulation state is only possible at specific times, therefore this interval is only a lower bound.
kwargs | arguments that are passed to the base class
Examples:
>>> num_gpus_to_use = 2  # use 2 GPUs if available, otherwise 1 or the CPU
>>> dt = 1  # this will impact the runtime
>>> precision = 1e-6  # a smaller dt generally requires a better precision
>>> MPSConfig(num_gpus_to_use=num_gpus_to_use, dt=dt, precision=precision,
...     with_modulation=True)  # the last argument is taken from the base class
Source code in emu_mps/mps_config.py
MPS
Bases: State[complex, Tensor]
Matrix Product State, aka tensor train.
Each tensor has 3 dimensions ordered as such: (left bond, site, right bond).
Only qubits are supported.
This constructor creates a MPS directly from a list of tensors. It is for internal use only.
PARAMETER | DESCRIPTION
---|---
factors | the tensors for each site. WARNING: for efficiency in many use cases, this list of tensors IS NOT DEEP-COPIED. The new MPS object is therefore not necessarily the exclusive owner of the list and its tensors, so beware of potential external modifications affecting them. You are responsible for deciding whether to pass this constructor its own exclusive copy of the data, or shared objects.
orthogonality_center | the orthogonality center of the MPS, or None (in which case it will be orthogonalized when needed)
config | the emu-mps config object passed to the run method
num_gpus_to_use | distribute the factors over this many GPUs; 0 means all factors on the CPU, None keeps the existing device assignment
Source code in emu_mps/mps.py
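To illustrate the factor layout described above, here is a minimal NumPy sketch (illustrative only, not emu-mps code): it builds the product state |000> as a tensor train of three (left bond, site, right bond) factors with all bond dimensions equal to 1, then contracts them back into the full state vector.

```python
import numpy as np

# A single-qubit |0> factor with shape (left bond, site, right bond) = (1, 2, 1).
zero = np.zeros((1, 2, 1), dtype=complex)
zero[0, 0, 0] = 1.0
factors = [zero.copy() for _ in range(3)]

# Contract the train left to right: merge each right bond with the next
# factor's left bond, accumulating one site axis per step.
state = factors[0]
for f in factors[1:]:
    state = np.tensordot(state, f, axes=([state.ndim - 1], [0]))
state = state.reshape(-1)  # full 2**3 state vector

# The only nonzero amplitude is that of |000>.
```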
n_qudits
property
The number of qudits in the state.
__add__(other)
Returns the sum of two MPSs, computed with a direct algorithm. The resulting MPS is orthogonalized on the first site and truncated up to self.config.precision.
PARAMETER | DESCRIPTION
---|---
other | the other state

RETURNS | DESCRIPTION
---|---
MPS | the summed state
Source code in emu_mps/mps.py
__rmul__(scalar)
Multiply an MPS by a scalar.
PARAMETER | DESCRIPTION
---|---
scalar | the scale factor

RETURNS | DESCRIPTION
---|---
MPS | the scaled MPS
Source code in emu_mps/mps.py
apply(qubit_index, single_qubit_operator)
Apply the given single-qubit operator to qubit qubit_index, leaving the MPS orthogonalized on that qubit.
Source code in emu_mps/mps.py
expect_batch(single_qubit_operators)
Computes expectation values for each qubit and each single qubit operator in the batched input tensor.
Returns a tensor T such that T[q, i] is the expectation value for qubit #q and operator single_qubit_operators[i].
Source code in emu_mps/mps.py
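The T[q, i] contract can be mimicked densely for a small state vector. The following NumPy sketch (a hypothetical helper, not the emu-mps implementation, which never builds the full vector) computes the same batched expectation values by applying each operator to each qubit in turn:

```python
import numpy as np

def expect_batch_dense(psi, ops, n):
    """T[q, i] = <psi| ops[i] acting on qubit q |psi>, computed densely."""
    psi = psi.reshape([2] * n)
    T = np.empty((n, len(ops)), dtype=complex)
    for q in range(n):
        for i, op in enumerate(ops):
            # Apply op to qubit q, then restore the axis ordering.
            applied = np.tensordot(op, psi, axes=([1], [q]))
            applied = np.moveaxis(applied, 0, q)
            T[q, i] = np.vdot(psi, applied)  # vdot conjugates psi
    return T

n_op = np.array([[0, 0], [0, 1]], dtype=complex)  # |1><1|
x_op = np.array([[0, 1], [1, 0]], dtype=complex)
psi = np.zeros(4, dtype=complex)
psi[1] = 1.0  # the state |01>: qubit 0 is down, qubit 1 is up
T = expect_batch_dense(psi, [n_op, x_op], n=2)
```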
get_correlation_matrix(*, operator=n_operator)
Efficiently compute the symmetric correlation matrix \(C_{ij} = \langle self | O_i O_j | self \rangle\), where \(O\) is the given operator applied to qubits \(i\) and \(j\).
PARAMETER | DESCRIPTION
---|---
operator | a 2x2 Torch tensor to use

RETURNS | DESCRIPTION
---|---
Tensor | the corresponding correlation matrix
Source code in emu_mps/mps.py
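For a small state the same quantity can be checked densely. This NumPy sketch (illustrative, not the MPS algorithm) computes C_ij with the number operator, the stated default, for the state |11>, where every correlation equals 1:

```python
import numpy as np

n_op = np.array([[0, 0], [0, 1]], dtype=complex)  # number operator |1><1|

def apply_on(op, psi, q):
    """Apply a single-qubit operator on axis q of a dense n-qubit tensor."""
    out = np.tensordot(op, psi, axes=([1], [q]))
    return np.moveaxis(out, 0, q)

def correlation_matrix(psi, n):
    """C[i, j] = <psi| n_i n_j |psi>, computed densely."""
    psi = psi.reshape([2] * n)
    C = np.empty((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            C[i, j] = np.vdot(psi, apply_on(n_op, apply_on(n_op, psi, i), j))
    return C

psi = np.zeros(4, dtype=complex)
psi[3] = 1.0  # the state |11>
C = correlation_matrix(psi, 2)
```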
get_max_bond_dim()
Return the max bond dimension of this MPS.
RETURNS | DESCRIPTION
---|---
int | the largest bond dimension in the state
get_memory_footprint()
Returns the memory occupied to store the state, in MB.
RETURNS | DESCRIPTION
---|---
float | the memory in MB
Source code in emu_mps/mps.py
inner(other)
Compute the inner product between this state and other. Note that self is the left state in the inner product, so this function is linear in other, and anti-linear in self.
PARAMETER | DESCRIPTION
---|---
other | the other state

RETURNS | DESCRIPTION
---|---
Tensor | the inner product
Source code in emu_mps/mps.py
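The stated convention (linear in other, anti-linear in self) matches NumPy's np.vdot, which conjugates its first argument. A quick dense check of both properties:

```python
import numpy as np

a = np.array([1 + 1j, 0], dtype=complex)
b = np.array([2j, 1], dtype=complex)

# Scaling the left (self) argument conjugates the scalar: anti-linear.
assert np.vdot(2j * a, b) == np.conj(2j) * np.vdot(a, b)
# Scaling the right (other) argument does not: linear.
assert np.vdot(a, 2j * b) == 2j * np.vdot(a, b)
```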
make(num_sites, config=None, num_gpus_to_use=DEVICE_COUNT)
classmethod
Returns an MPS in the ground state |000..0>.
PARAMETER | DESCRIPTION
---|---
num_sites | the number of qubits
config | the MPSConfig
num_gpus_to_use | distribute the factors over this many GPUs; 0 means all factors on the CPU
Source code in emu_mps/mps.py
norm()
Computes the norm of the MPS.
Source code in emu_mps/mps.py
orthogonalize(desired_orthogonality_center=0)
Orthogonalize the state on the given orthogonality center.
Returns the new orthogonality center index as an integer; this is convenient for type-checking purposes.
Source code in emu_mps/mps.py
overlap(other)
Compute the overlap of this state and other. This is defined as \(|\langle self | other \rangle |^2\)
sample(*, num_shots, one_state=None, p_false_pos=0.0, p_false_neg=0.0)
Samples bitstrings, taking into account the specified error rates.
PARAMETER | DESCRIPTION
---|---
num_shots | how many bitstrings to sample
p_false_pos | the rate at which a 0 is read as a 1
p_false_neg | the rate at which a 1 is read as a 0

RETURNS | DESCRIPTION
---|---
Counter[str] | the measured bitstrings, by count
Source code in emu_mps/mps.py
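The readout-error model described above can be sketched in plain Python (a hypothetical helper, not the emu-mps sampler): each sampled bit is flipped 0 to 1 with probability p_false_pos and 1 to 0 with probability p_false_neg.

```python
import random
from collections import Counter

def noisy_counts(bitstring_probs, num_shots, p_false_pos, p_false_neg, rng):
    """Sample bitstrings from a distribution, then apply readout errors."""
    strings, probs = zip(*bitstring_probs.items())
    counts = Counter()
    for _ in range(num_shots):
        s = rng.choices(strings, weights=probs)[0]
        noisy = "".join(
            ("1" if rng.random() < p_false_pos else "0") if b == "0"
            else ("0" if rng.random() < p_false_neg else "1")
            for b in s
        )
        counts[noisy] += 1
    return counts

rng = random.Random(0)
# A state that always measures "00", read out with a 10% false-positive rate:
counts = noisy_counts({"00": 1.0}, 1000, p_false_pos=0.1, p_false_neg=0.0, rng=rng)
```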
truncate()
SVD-based truncation of the state. Puts the orthogonality center at the first qubit: calls orthogonalize on the last qubit, then sweeps a series of SVDs right to left. Uses self.config to determine the accuracy. An in-place operation.
Source code in emu_mps/mps.py
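The core step of such a sweep is truncating one bond with an SVD. A dense NumPy sketch (one bond only; emu-mps applies this across the whole train): split a matrix, drop singular values below a relative precision cutoff, and keep a smaller bond dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
# A matrix that is rank 2 up to tiny numerical noise.
low_rank = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 8))
M = low_rank + 1e-12 * rng.normal(size=(8, 8))

U, S, Vh = np.linalg.svd(M, full_matrices=False)
keep = S > 1e-8 * S[0]  # relative precision cutoff, as in a config setting
M_trunc = U[:, keep] @ np.diag(S[keep]) @ Vh[keep, :]

# Only the two significant singular directions survive: the bond shrinks
# from 8 to 2 while the matrix is reproduced to the requested precision.
new_bond_dim = int(keep.sum())
```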
inner
Wrapper around MPS.inner.
PARAMETER | DESCRIPTION
---|---
left | the anti-linear argument
right | the linear argument

RETURNS | DESCRIPTION
---|---
Tensor | the inner product
MPO
Bases: Operator[complex, Tensor, MPS]
Matrix Product Operator.
Each tensor has 4 dimensions ordered as such: (left bond, output, input, right bond).
PARAMETER | DESCRIPTION
---|---
factors | the tensors making up the MPO
Source code in emu_mps/mpo.py
__add__(other)
Returns the sum of two MPOs, computed with a direct algorithm. The result is currently not truncated.
PARAMETER | DESCRIPTION
---|---
other | the other operator

RETURNS | DESCRIPTION
---|---
MPO | the summed operator
Source code in emu_mps/mpo.py
__matmul__(other)
Compose two operators. The ordering is that self is applied after other.
PARAMETER | DESCRIPTION
---|---
other | the operator to compose with self

RETURNS | DESCRIPTION
---|---
MPO | the composed operator
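The stated ordering, self applied after other, is the usual matrix-product convention; a dense NumPy check with single-qubit matrices standing in for MPOs:

```python
import numpy as np

A = np.array([[0, 1], [1, 0]], dtype=complex)   # X
B = np.array([[1, 0], [0, -1]], dtype=complex)  # Z
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Composing then applying equals applying B first, then A.
assert np.allclose((A @ B) @ psi, A @ (B @ psi))
```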
Source code in emu_mps/mpo.py
__rmul__(scalar)
Multiply an MPO by a scalar. Assumes the orthogonality center is on the first factor.
PARAMETER | DESCRIPTION
---|---
scalar | the scale factor to multiply with

RETURNS | DESCRIPTION
---|---
MPO | the scaled MPO
Source code in emu_mps/mpo.py
apply_to(other)
Applies this MPO to the given MPS. The returned MPS is:
- orthogonal on the first site
- truncated up to `other.precision`
- distributed on the same devices as `other`
PARAMETER | DESCRIPTION
---|---
other | the state to apply this operator to

RETURNS | DESCRIPTION
---|---
MPS | the resulting state
Source code in emu_mps/mpo.py
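One local step of such an application can be sketched with NumPy (illustrative, not the emu-mps contraction code): an MPO factor of shape (left bond, output, input, right bond) is contracted with an MPS factor of shape (left bond, site, right bond), and the bond indices are merged pairwise.

```python
import numpy as np

# An MPS factor for |0> and a bond-dimension-1 MPO factor holding X.
mps_f = np.zeros((1, 2, 1), dtype=complex)
mps_f[0, 0, 0] = 1.0
x = np.array([[0, 1], [1, 0]], dtype=complex)
mpo_f = x.reshape(1, 2, 2, 1)  # (left, out, in, right)

# Contract the MPO "in" leg with the MPS "site" leg.
out = np.einsum("aijb,cjd->acibd", mpo_f, mps_f)
# Merge (mpo left, mps left) and (mpo right, mps right) into new bonds.
out = out.reshape(1 * 1, 2, 1 * 1)

# The resulting factor represents X|0> = |1>.
```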
expect(state)
Compute the expectation value of self on the given state.
PARAMETER | DESCRIPTION
---|---
state | the state with which to compute

RETURNS | DESCRIPTION
---|---
Tensor | the expectation value