
Variational quantum algorithms

Variational algorithms on noisy devices, and quantum machine learning (QML) [^1] in particular, are the target applications for Qadence. For this purpose, the library offers both flexible symbolic expressions for quantum circuit parameters via sympy (see here for more details) and native automatic differentiation via integration with the PyTorch deep learning framework.

Qadence's symbolic parameter interface allows creating arbitrary feature maps that encode classical data into quantum circuits, with an arbitrary non-linear function embedding the input values:

import qadence as qd
from qadence.operations import *
import torch
from sympy import acos

n_qubits = 4

fp = qd.FeatureParameter("phi")
feature_map = qd.kron(RX(i, 2 * acos(fp)) for i in range(n_qubits))

# the key in the dictionary must correspond to
# the name assigned to the feature parameter
inputs = {"phi": torch.rand(3)}
samples = qd.sample(feature_map, values=inputs)
print(samples)
[Counter({'1111': 75, '1011': 9, '0111': 6, '1101': 4, '1110': 4, '0110': 2}), Counter({'1111': 24, '1110': 16, '1101': 9, '0111': 8, '1001': 7, '0011': 6, '0110': 6, '0100': 5, '1010': 5, '1011': 5, '0101': 3, '0010': 2, '1100': 2, '0000': 1, '0001': 1}), Counter({'0000': 28, '1000': 14, '0001': 12, '0010': 6, '0100': 6, '0101': 6, '1100': 6, '0011': 5, '0110': 4, '1001': 4, '1011': 2, '1101': 2, '1110': 2, '0111': 1, '1010': 1, '1111': 1})]

The constructors.feature_map module provides convenience functions for building commonly used feature maps where the input parameter is encoded in the rotation angles of single-qubit gates.

Furthermore, Qadence is natively integrated with the PyTorch automatic differentiation engine, so Qadence quantum models can be used seamlessly in any PyTorch workflow.

Let's create a quantum neural network model using the feature map just defined, a digital-analog variational ansatz, and a simple observable \(X(0) \otimes X(1)\). We use the convenience QNN quantum model abstraction.

ansatz = qd.hea(n_qubits, strategy="sDAQC")
circuit = qd.QuantumCircuit(n_qubits, feature_map, ansatz)
observable = qd.kron(X(0), X(1))

model = qd.QNN(circuit, observable)

# NOTE: the `QNN` is a torch.nn.Module
assert isinstance(model, torch.nn.Module)

Differentiation works the same way as any other PyTorch module:

values = {"phi": torch.rand(10, requires_grad=True)}

# the forward pass of the quantum model returns the expectation
# value of the input observable
out = model(values)
print(f"Quantum model output: {out}")

# you can compute the gradient with respect to inputs using
# PyTorch autograd differentiation engine
dout = torch.autograd.grad(out, values["phi"], torch.ones_like(out), create_graph=True)[0]
print(f"First-order derivative w.r.t. the feature parameter: {dout}")

# you can also call directly a backward pass to compute derivatives with respect
# to the variational parameters and use it for implementing variational
# optimization
out.sum().backward()
Quantum model output: tensor([[0.1904], [0.3953], [0.2193], [0.0431], [0.2558], [0.3160], [0.0408], [0.4406], [0.4627], [0.4138]], grad_fn=<...>)
First-order derivative w.r.t. the feature parameter: tensor([ 0.5730, 0.6237, 0.6140, -1.0375, 0.6560, 0.6914, -0.9687, -0.5948, -0.1979, -0.8826], grad_fn=<...>)

To run QML on real devices, Qadence offers generalized parameter shift rules (GPSR) for arbitrary quantum operations, which can be selected when constructing the QNN model:

model = qd.QNN(circuit, observable, diff_mode="gpsr")
out = model(values)

dout = torch.autograd.grad(out, values["phi"], torch.ones_like(out), create_graph=True)[0]
print(f"First-order derivative w.r.t. the feature parameter: {dout}")
First-order derivative w.r.t. the feature parameter: tensor([ 0.5730, 0.6237, 0.6140, -1.0375, 0.6560, 0.6914, -0.9687, -0.5948, -0.1979, -0.8826], grad_fn=<...>)

See here for more details on how the parameter shift rules implementation works in Qadence.

References

[^1]: Schuld, Petruccione, Machine Learning with Quantum Computers, Springer Nature (2021)