
Readout error mitigation

Readout errors are introduced during measurement in the computational basis and can be modelled as probabilistic bit-flip operations characterized by the readout matrix (also known as the confusion matrix), of dimension \(2^n\times2^n\) for a system of \(n\) qubits. The complete implementation of the mitigation technique consists of characterizing the readout matrix of the system of qubits \((T)\) and classically applying its inverse \((T^{-1})\) to the measured probability distributions (a naive sketch of this correction follows the list below). However, this approach has several limitations:

  • The complete implementation requires \(2^n\) characterization experiments (probability measurements), which is not scalable.
  • The classical overhead of inverting the full matrix is expensive for large systems of qubits.
  • The matrix \(T\) may become singular for large \(n\), preventing direct inversion.
  • The inverse \(T^{-1}\) might not be a stochastic matrix, meaning it can produce negative corrected probabilities.
  • The correction is not rigorously justified, so we cannot be sure that we are only removing SPAM errors and not otherwise corrupting the estimated probability distribution.
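
As an illustration of the naive approach (not part of Qadence), here is a minimal NumPy sketch of the full-matrix correction for two qubits; the flip probability and the raw distribution are made up for the example:

import numpy as np

# Naive correction: build the full 2^n x 2^n readout matrix T and apply its
# inverse to the raw distribution (all numbers below are made up).
p_flip = 0.2  # assumed symmetric bit-flip probability per qubit
T1 = np.array([[1 - p_flip, p_flip], [p_flip, 1 - p_flip]])  # single-qubit readout matrix
T = np.kron(T1, T1)  # full readout matrix for n = 2 qubits

p_raw = np.array([0.40, 0.10, 0.10, 0.40])  # hypothetical measured distribution
p_corr = np.linalg.inv(T) @ p_raw
# Here p_corr = [2/3, -1/6, -1/6, 2/3]: it sums to one but contains negative
# entries, illustrating the fourth limitation above.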

Qadence relies on the assumption of uncorrelated readout errors, which gives:

\[ T=T_1\otimes T_2\otimes \dots \otimes T_n \]

for which the inversion is straightforward:

\[ T^{-1}=T_1^{-1}\otimes T_2^{-1}\otimes \dots \otimes T_n^{-1} \]
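
To make the factorized inversion concrete, here is a minimal NumPy sketch that applies the per-qubit inverses without ever building the \(2^n\times2^n\) matrix; the helper name apply_factorized_inverse is ours, while qadence_protocols exposes an analogous routine, tensor_rank_mult, used further below:

import numpy as np

def apply_factorized_inverse(noise_matrices, p_raw):
    # Apply T^{-1} = T_1^{-1} ⊗ ... ⊗ T_n^{-1} to p_raw by contracting each
    # single-qubit inverse with the corresponding axis of the reshaped vector.
    n = len(noise_matrices)
    p = np.asarray(p_raw, dtype=float).reshape((2,) * n)
    for q, T_q in enumerate(noise_matrices):
        p = np.tensordot(np.linalg.inv(T_q), p, axes=([1], [q]))
        p = np.moveaxis(p, 0, q)  # tensordot moves the contracted axis to the front
    return p.reshape(-1)

The example below first generates noiseless and noisy samples for a simple two-qubit circuit; the protocols in the following sections then reconstruct the noiseless distribution from the noisy one.
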
from qadence import QuantumModel, QuantumCircuit, kron, H, I
from qadence.noise import Noise


# Simple circuit and observable construction.
block = kron(H(0), I(1))
circuit = QuantumCircuit(2, block)
n_shots = 10000

# Construct a quantum model and noise
model = QuantumModel(circuit=circuit)
error_probability = 0.2
noise = Noise(protocol=Noise.READOUT, options={"error_probability": error_probability})

noiseless_samples = model.sample(n_shots=n_shots)
noisy_samples = model.sample(noise=noise, n_shots=n_shots)
print("noiseless samples:", noiseless_samples)
print("noisy samples:", noisy_samples)
noiseless samples: [Counter({'00': 5025, '10': 4975})]
noisy samples: [Counter({'10': 4003, '00': 3938, '11': 1034, '01': 1025})]

Note that the noisy samples include outcomes with the second qubit flipped. The protocols below describe ways to reconstruct the noiseless distribution (untargeted mitigation). Alternatively, one might only be interested in mitigating an expectation value (targeted mitigation).

Constrained optimization

However, even for a reduced \(n\), the fourth limitation holds. It can be avoided by reformulating the correction as a minimization problem [1]:

\[ \min_{p_{\textrm{corr}}} \lVert Tp_{\textrm{corr}}-p_{\textrm{raw}}\rVert_{2}^{2} \]

subject to the physicality constraints \(0 \leq p_{\textrm{corr}}(x) \leq 1\) and \(\lVert p_{\textrm{corr}} \rVert_1 = 1\). Two methods are implemented to solve this problem. The first solves the constrained optimization problem directly (see the sketch below), which can be computationally expensive.
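
As a sketch of this formulation (not the Qadence implementation itself), the problem can be handed to a generic constrained solver such as scipy.optimize.minimize; the readout matrix T and the raw distribution p_raw are assumed to be given:

import numpy as np
from scipy.optimize import minimize

def correct_constrained(T, p_raw):
    # Minimize ||T p - p_raw||_2^2 subject to 0 <= p <= 1 and sum(p) = 1.
    dim = len(p_raw)
    objective = lambda p: np.linalg.norm(T @ p - p_raw) ** 2
    constraints = {"type": "eq", "fun": lambda p: np.sum(p) - 1.0}
    bounds = [(0.0, 1.0)] * dim
    res = minimize(objective, np.ones(dim) / dim, method="SLSQP",
                   bounds=bounds, constraints=constraints)
    return res.x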

from qadence_protocols.mitigations.protocols import Mitigations
from qadence_protocols.types import ReadOutOptimization


# Define the mitigation method solving the minimization problem:
options={"optimization_type": ReadOutOptimization.CONSTRAINED, "n_shots": n_shots}
mitigation = Mitigations(protocol=Mitigations.READOUT, options=options)

# Run the mitigated simulation on the noisy model.
mitigated_samples_opt = mitigation(model=model, noise=noise)
print("Optimization based mitigation:", mitigated_samples_opt)
Optimization based mitigation: [Counter({'00': 4365, '10': 4338, '11': 654, '01': 643})]

Maximum likelihood estimation (MLE)

This method replaces the constraints with additional post-processing that corrects probability distributions with negative entries. Its runtime is linear in the size of the distribution, which makes it very efficient, although optimality of the solution is not always guaranteed. The method redistributes any negative probabilities produced by the inverse operation equally over the remaining entries and can be shown to maximize the likelihood with minimal effort [2].
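
A minimal sketch of this post-processing step (the exact redistribution strategy inside qadence_protocols' mle_solve may differ) could look as follows:

import numpy as np

def mle_redistribute(p):
    # Zero out negative entries and spread their (negative) mass equally over
    # the remaining positive entries, repeating until the vector is physical.
    p = np.array(p, dtype=float)
    while (p < 0).any():
        negative_mass = p[p < 0].sum()
        p[p < 0] = 0.0
        positive = p > 0
        p[positive] += negative_mass / positive.sum()
    return p / p.sum()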

# Define the mitigation method using MLE post-processing:
options={"optimization_type": ReadOutOptimization.MLE, "n_shots": n_shots}
mitigation = Mitigations(protocol=Mitigations.READOUT, options=options)
mitigated_samples_mle = mitigation(model=model, noise=noise)
print("MLE based mitigation", mitigated_samples_mle)
MLE based mitigation [Counter({'00': 4400, '10': 4308, '01': 658, '11': 634})]

Matrix-free measurement mitigation (MTHREE)

This method relies on inverting the probability distribution within a restricted subspace of measured bitstrings [3]. It is better suited for computations beyond roughly 20 qubits, where the corrected probability distribution would otherwise live in an unreasonably high-dimensional Hilbert space. The idea is therefore to restrict the correction to the basis states that actually appear in the measurement; additionally, one might want to include states within Hamming distance \(k\) of those.
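
The core object of the method is the readout matrix restricted to the retained bitstrings. Below is a sketch of how such a subspace matrix can be assembled from single-qubit readout matrices; qadence_protocols provides normalized_subspace_kron for this purpose, used in the example that follows:

import numpy as np

def subspace_confusion(noise_matrices, bitstrings):
    # Element (i, j) is the probability of reading bitstring i when the true
    # state is bitstring j, factorized over the single-qubit readout matrices.
    dim = len(bitstrings)
    A = np.zeros((dim, dim))
    for i, measured in enumerate(bitstrings):
        for j, true in enumerate(bitstrings):
            A[i, j] = np.prod([T_q[int(measured[q]), int(true[q])]
                               for q, T_q in enumerate(noise_matrices)])
    # Renormalize the columns so that each stays a probability distribution
    # within the restricted subspace.
    return A / A.sum(axis=0, keepdims=True)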

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import gmres

n_qubits = 10

# Generate an "exact" distribution with significant weight on only 2^(n/2)
# basis states, mix in a small uniform background, normalize, and shuffle so
# the support lands on random bitstrings.
exact_prob = np.random.rand(2**n_qubits)
exact_prob[2 ** (n_qubits // 2):] = 0
exact_prob = 0.90 * exact_prob + 0.1 * np.ones(2**n_qubits) / 2**n_qubits
exact_prob = exact_prob / sum(exact_prob)
np.random.shuffle(exact_prob)

# Mimic the observed distribution by dropping entries below 1/2^n and renormalizing.
observed_prob = np.array(exact_prob, copy=True)
observed_prob[exact_prob < 1 / 2**n_qubits] = 0
observed_prob = observed_prob / sum(observed_prob)

# Store the observed distribution as a sparse column vector.
input_csr = csr_matrix(observed_prob, shape=(1, 2**n_qubits)).T

print({bin(k)[2:].zfill(n_qubits): round(p, 3) for k, p in enumerate(observed_prob) if p > 0})
print("Filling percentage", np.count_nonzero(observed_prob) / 2**n_qubits, "%")
{'0000111001': 0.017, '0001001001': 0.007, '0001011100': 0.009, '0010100000': 0.035, '0011000100': 0.05, '0011111111': 0.041, '0100100100': 0.032, '0101000011': 0.047, '0101010110': 0.032, '0101100110': 0.023, '0110100101': 0.049, '0110101110': 0.045, '0111001111': 0.049, '0111111010': 0.031, '1001000000': 0.012, '1001000110': 0.007, '1001001000': 0.032, '1001011011': 0.034, '1001110100': 0.035, '1011011100': 0.038, '1011100111': 0.058, '1100001011': 0.012, '1101010001': 0.043, '1101100110': 0.059, '1101101011': 0.013, '1101110111': 0.001, '1101111001': 0.057, '1110101000': 0.055, '1110110100': 0.003, '1111011111': 0.059, '1111110010': 0.015}
Filling percentage 0.0302734375 %

We have generated a probability distribution supported on a small subspace of bitstrings. We use csr_matrix for an efficient sparse representation and computation. Now we use MTHREE to mitigate this probability distribution.

from scipy.stats import wasserstein_distance
from qadence_protocols.mitigations.readout import (
    normalized_subspace_kron,
    mle_solve,
    matrix_inv,
    tensor_rank_mult,
)

# Single-qubit readout (confusion) matrices with random flip rates below 1/8.
noise_matrices = []
for t in range(n_qubits):
    t_a, t_b = np.random.rand(2) / 8
    K = np.array([[1 - t_a, t_a], [t_b, 1 - t_b]]).transpose()  # columns sum to 1
    noise_matrices.append(K)

# Confusion matrix restricted to the observed bitstrings (MTHREE subspace).
confusion_matrix_subspace = normalized_subspace_kron(noise_matrices, observed_prob.nonzero()[0])

# Solve the linear system in the subspace with GMRES, then clean up negative entries with MLE.
p_corr_mthree_gmres = gmres(confusion_matrix_subspace, input_csr.toarray())[0]
p_corr_mthree_gmres_mle = mle_solve(p_corr_mthree_gmres)

# Reference solution: full factorized inversion followed by MLE.
noise_matrices_inv = list(map(matrix_inv, noise_matrices))
p_corr_inv_mle = mle_solve(tensor_rank_mult(noise_matrices_inv, observed_prob))

distance = wasserstein_distance(p_corr_mthree_gmres_mle, p_corr_inv_mle)
print("Wasserstein distance between the 2 distributions:", distance)
Wasserstein distance between the 2 distributions: 0.0003002850138064578

We have used wasserstein_distance instead of a KL divergence because many of the bitstrings have zero probability. If the expected solution lies outside the space of observed bitstrings, MTHREE will fail. We next look at majority voting to circumvent this problem when the expected output is a single bitstring.

Majority voting

This mitigation protocol should be used only when the circuit output has a single expected bitstring as the solution [4]. The method votes on the likelihood of each qubit being 0 or 1, assuming a tensor product structure for the output, and is valid only when the readout errors are uncorrelated.
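
One simple way to realize such a vote (a sketch only; qadence_protocols ships its own majority_vote, used below) is to compare, qubit by qubit, the observed marginal with the marginals predicted by the noise model for a true 0 and a true 1:

import numpy as np

def majority_vote_sketch(noise_matrices, observed_prob):
    # For each qubit, pick the true bit whose predicted probability of being
    # read as 1 is closest to the observed marginal of that qubit.
    n_qubits = len(noise_matrices)
    bitstrings = [bin(k)[2:].zfill(n_qubits) for k in range(2**n_qubits)]
    bits = []
    for q in range(n_qubits):
        p1_observed = sum(p for bs, p in zip(bitstrings, observed_prob) if bs[q] == "1")
        p1_if_true0 = noise_matrices[q][1, 0]  # P(read 1 | true 0)
        p1_if_true1 = noise_matrices[q][1, 1]  # P(read 1 | true 1)
        bits.append("1" if abs(p1_observed - p1_if_true1) < abs(p1_observed - p1_if_true0) else "0")
    return "".join(bits)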

from qadence import QuantumModel, QuantumCircuit, kron, I
from qadence.noise import Noise
from qadence_protocols.mitigations.readout import majority_vote
import numpy as np

# Simple circuit and observable construction.
n_qubits = 4
block = kron(*[I(i) for i in range(n_qubits)])
circuit = QuantumCircuit(n_qubits, block)
n_shots = 1000

# Construct a quantum model.
model = QuantumModel(circuit=circuit)

# Sampling the noisy solution
error_p = 0.2
noise = Noise(protocol=Noise.READOUT, options={"error_probability": error_p})
noisy_samples = model.sample(noise=noise, n_shots=n_shots)[0]

# Remove the samples corresponding to the actual solution '0000'.
noisy_samples['0000'] = 0

# Construct the probability vector over all bitstrings.
ordered_bitstrings = [bin(k)[2:].zfill(n_qubits) for k in range(2**n_qubits)]
observed_prob = np.array([noisy_samples[bs] for bs in ordered_bitstrings]) / n_shots
print("noisy samples:", noisy_samples)
print("observed probability:", observed_prob)
noisy samples: Counter({'1000': 105, '0010': 95, '0001': 93, '0100': 92, '0011': 33, '1100': 33, '1001': 31, '1010': 27, '0101': 23, '0110': 23, '0111': 10, '1011': 9, '1110': 6, '1101': 5, '0000': 0})
observed probability: [0.    0.093 0.095 0.033 0.092 0.023 0.023 0.01  0.105 0.031 0.027 0.009
 0.033 0.005 0.006 0.   ]

We have removed the actual solution from the observed distribution and will use this as the observed probability.

noise_matrices = [np.array([[1 - error_p, error_p], [error_p, 1 - error_p]])] * n_qubits
result_index = majority_vote(noise_matrices, observed_prob).argmax()
print("mitigated solution index:", result_index)
mitigated solution index: 0

Model-free mitigation

This protocol makes use of all possible so-called twirl operations to average the effect of readout errors into an effective scaling. A twirl consists of applying bit-flip operators before the measurement and flipping the corresponding bits of the recorded outcome afterwards [5]. The number of twirl operations can be reduced through random sampling. The method is exact in the sense that it requires no calibration, which could itself be prone to modelling errors.
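
To see why twirling turns readout errors into a scaling, consider a single qubit with an asymmetric readout matrix. Averaging over the two bit-flip twirls symmetrizes it, so that \(\langle Z\rangle\) is only rescaled by a constant factor; the numbers in this NumPy sketch are made up:

import numpy as np

t_a, t_b = 0.10, 0.05  # assumed asymmetric flip rates P(read 1 | true 0), P(read 0 | true 1)
T = np.array([[1 - t_a, t_b], [t_a, 1 - t_b]])  # column-stochastic readout matrix
X = np.array([[0, 1], [1, 0]])

# Average over the two twirls: do nothing, or flip before measuring and flip
# the recorded bit back afterwards. The result is a symmetric bit-flip channel.
T_twirled = 0.5 * (T + X @ T @ X)

p_true = np.array([0.8, 0.2])  # hypothetical true distribution
z_true = p_true[0] - p_true[1]
p_twirled = T_twirled @ p_true
z_twirled = p_twirled[0] - p_twirled[1]
assert np.isclose(z_twirled, (1 - t_a - t_b) * z_true)  # pure scaling of <Z>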

from qadence.measurements import Measurements
from qadence.noise.protocols import Noise
from qadence.operations import CNOT, RX, Z
from qadence_protocols import Mitigations

import torch
from qadence import (
    QuantumCircuit,
    QuantumModel,
    chain,
    kron,
)

error_probability = 0.15
n_shots = 10000
block = chain(kron(RX(0, torch.pi / 3), RX(1, torch.pi / 6)), CNOT(0, 1))
observable = [3 * kron(Z(0), Z(1)) + 2 * Z(0)]

circuit = QuantumCircuit(block.n_qubits, block)
noise = Noise(protocol=Noise.READOUT, options={"error_probability": error_probability})
tomo_measurement = Measurements(
    protocol=Measurements.TOMOGRAPHY,
    options={"n_shots": n_shots},
)

model = QuantumModel(
    circuit=circuit, observable=observable,
)

noisy_model = QuantumModel(
    circuit=circuit,
    observable=observable,
    measurement=tomo_measurement,
    noise=noise,
)

mitigate = Mitigations(protocol=Mitigations.TWIRL)
expectation_mitigated = mitigate(noisy_model)

print("noiseless expectation value", model.expectation())
print("noisy expectation value", noisy_model.expectation())
print("expected mitigation value", expectation_mitigated)
noiseless expectation value tensor([[3.5806]], grad_fn=<TransposeBackward0>)
noisy expectation value tensor([[1.9648]], grad_fn=<TransposeBackward0>)
expected mitigation value tensor([3.5708])

References