
Backends

Backends allow execution of Qadence abstract quantum circuits. They can be chosen from a variety of simulators, emulators and hardware, and can enable circuit differentiability. The primary way to interact with and configure a backend is via the high-level API QuantumModel.

Not all backends are equivalent

Not all backends support the same set of operations, especially while executing analog blocks. Qadence will throw descriptive errors in such cases.

Execution backends

PyQTorch: An efficient, large-scale simulator designed for quantum machine learning, seamlessly integrated with the popular PyTorch deep learning framework for automatic differentiability. It also offers analog computing for time-(in)dependent pulses. See PyQTorchBackend.

Pulser: A Python library for pulse-level/analog control of neutral atom devices. Execution via QuTiP. See PulserBackend.

More: Proprietary Qadence extensions provide additional high-performance backends based on tensor networks or differentiation engines. For enquiries, please contact: info@pasqal.com.

Differentiation backend

The DifferentiableBackend class enables different differentiation modes for a given backend. Two modes are available:

  • Automatic differentiation (AD): available for PyTorch based backends (PyQTorch).
  • Parameter Shift Rules (PSR): available for all backends. See this section for more information on differentiability and PSR.
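For gates generated by Pauli operators, the shift rule gives the exact derivative from two shifted circuit evaluations. A minimal pure-Python sketch (not Qadence code), using the fact that ⟨Z⟩ after RX(x) applied to |0⟩ equals cos(x):

```python
import math

def expectation(x: float) -> float:
    # <Z> after RX(x) applied to |0> is cos(x).
    return math.cos(x)

def psr_grad(f, x: float) -> float:
    # Parameter-shift rule for gates with Pauli generators:
    # the exact derivative from two shifted evaluations, no autodiff needed.
    shift = math.pi / 2
    return (f(x + shift) - f(x - shift)) / 2

g = psr_grad(expectation, 0.3)  # equals -sin(0.3), the analytic derivative
```

Because the rule is built from circuit evaluations only, it works on any backend, including non-differentiable hardware.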

In practice, it is enough to provide a diff_mode when constructing the QuantumModel. Please note that diff_mode defaults to None:

import sympy
import torch
from qadence import Parameter, RX, RZ, Z, CNOT, QuantumCircuit, QuantumModel, chain, BackendName, DiffMode

x = Parameter("x", trainable=False)
y = Parameter("y", trainable=False)
fm = chain(
    RX(0, 3 * x),
    RX(0, x),
    RZ(1, sympy.exp(y)),
    RX(0, 3.14),
    RZ(1, "theta")
)

ansatz = CNOT(0, 1)
block = chain(fm, ansatz)

circuit = QuantumCircuit(2, block)

observable = Z(0)

# DiffMode.GPSR is available for any backend.
# DiffMode.AD is only available for natively differentiable backends.
model = QuantumModel(circuit, observable, backend=BackendName.PYQTORCH, diff_mode=DiffMode.GPSR)

# Get some values for the feature parameters.
values = {"x": (x := torch.tensor([0.5], requires_grad=True)), "y": torch.tensor([0.1])}

# Compute expectation.
exp = model.expectation(values)

# Differentiate the expectation wrt x.
dexp_dx = torch.autograd.grad(exp, x, torch.ones_like(exp))
dexp_dx = (tensor([3.6398]),)

Low-level backend_factory interface

Every backend in Qadence inherits from the abstract Backend class and must implement the following methods:

  • run: propagate the initial state according to the quantum circuit and return the final wavefunction object.
  • sample: sample from a circuit.
  • expectation: compute the expectation value of an observable for a given circuit.
  • convert: convert the abstract QuantumCircuit object to its backend-native representation including a backend specific parameter embedding function.

Backends are purely functional objects which take as input the values for the circuit parameters and return the desired output from a call to a method. In order to use a backend directly, embedded parameters must be supplied as they are returned by the backend specific embedding function.
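This separation can be sketched with a toy stand-in (the names and structure here are illustrative only, not the Qadence API): convert yields a native representation plus an embedding function, and run consumes only the embedded, native-keyed parameters:

```python
import math

# Toy sketch of the convert -> embed -> run pipeline; illustrative only,
# not the actual Qadence API.
def convert(abstract_circuit):
    # "Native" circuit: opaque parameter ids mapped to expressions.
    native = {"p0": lambda inputs: 3 * inputs["x"],  # expression '3*x'
              "p1": lambda inputs: inputs["y"]}      # expression 'y'

    def embedding_fn(params, inputs):
        # Evaluate every native expression on the user-supplied values.
        return {pid: expr(inputs) for pid, expr in native.items()}

    return native, embedding_fn

def run(native, embedded):
    # A backend method only ever sees embedded, native-keyed values.
    return math.cos(embedded["p0"]) * math.cos(embedded["p1"])

native, embedding_fn = convert(None)
embedded = embedding_fn({}, {"x": 0.5, "y": 0.1})
output = run(native, embedded)
```

The backend stays purely functional: all state (parameter values) flows in through the embedded dictionary.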

Here is a simple demonstration of the use of the PyQTorch backend to execute a circuit in non-differentiable mode:

from qadence import QuantumCircuit, FeatureParameter, RX, RZ, CNOT, hea, chain

# Construct a feature map.
x = FeatureParameter("x")
y = FeatureParameter("y")
fm = chain(RX(0, 3 * x), RZ(1, y), CNOT(0, 1))

# Construct a circuit with a hardware-efficient ansatz.
circuit = QuantumCircuit(3, fm, hea(3, 1))

The abstract QuantumCircuit can now be converted to its native representation via the PyQTorch backend.

from qadence import backend_factory

# Use PyQTorch in non-differentiable mode:
backend = backend_factory("pyqtorch")

# The `Converted` object
# (contains a `ConvertedCircuit` with the original and native representation)
conv = backend.convert(circuit)
conv.circuit.original = ChainBlock(0,1,2)
├── ChainBlock(0,1)
│   ├── RX(0) [params: ['3*x']]
│   ├── RZ(1) [params: ['y']]
│   └── CNOT(0, 1)
└── ChainBlock(0,1,2) [tag: HEA]
    ├── ChainBlock(0,1,2)
    │   ├── KronBlock(0,1,2)
    │   │   ├── RX(0) [params: ['theta_0']]
    │   │   ├── RX(1) [params: ['theta_1']]
    │   │   └── RX(2) [params: ['theta_2']]
    │   ├── KronBlock(0,1,2)
    │   │   ├── RY(0) [params: ['theta_3']]
    │   │   ├── RY(1) [params: ['theta_4']]
    │   │   └── RY(2) [params: ['theta_5']]
    │   └── KronBlock(0,1,2)
    │       ├── RX(0) [params: ['theta_6']]
    │       ├── RX(1) [params: ['theta_7']]
    │       └── RX(2) [params: ['theta_8']]
    └── ChainBlock(0,1,2)
        ├── KronBlock(0,1)
        │   └── CNOT(0, 1)
        └── KronBlock(1,2)
            └── CNOT(1, 2)
conv.circuit.native = QuantumCircuit(
  (operations): ModuleList(
    (0): Sequence(
      (operations): ModuleList(
        (0): Sequence(
          (operations): ModuleList(
            (0): RX(target: (0,), param: c71ae995-1761-43d7-882b-f66d63db78dd)
            (1): RZ(target: (1,), param: 95c60058-8e2e-4c35-996c-596be26ad63d)
            (2): CNOT(control: (0,), target: (1,))
          )
        )
        (1): Sequence(
          (operations): ModuleList(
            (0): Sequence(
              (operations): ModuleList(
                (0): Merge(
                  (operations): ModuleList(
                    (0): RX(target: (0,), param: 3ac4fc11-c297-443d-9395-6a5f6221ca8f)
                    (1): RY(target: (0,), param: 3479ca8f-8931-43a5-9c3d-d4b657b0abd4)
                    (2): RX(target: (0,), param: 641f7a15-ba03-4f1c-8c73-86a47f0f78c1)
                  )
                )
                (1): Merge(
                  (operations): ModuleList(
                    (0): RX(target: (1,), param: 8ee4f763-67d8-4603-a0ef-fa9ea4be097e)
                    (1): RY(target: (1,), param: d328c437-9f6a-4bf6-a504-9bb659401819)
                    (2): RX(target: (1,), param: e9c3a962-7234-43ff-ab80-4a256b52bac2)
                  )
                )
                (2): Merge(
                  (operations): ModuleList(
                    (0): RX(target: (2,), param: bf63768e-28e3-4ecb-af10-8854a4b911a1)
                    (1): RY(target: (2,), param: 751cec43-9046-4482-93b8-8fc9a1e8b244)
                    (2): RX(target: (2,), param: c90e9e3c-00a1-42f9-a3ac-bdb07fac277b)
                  )
                )
              )
            )
            (1): Sequence(
              (operations): ModuleList(
                (0): Sequence(
                  (operations): ModuleList(
                    (0): CNOT(control: (0,), target: (1,))
                  )
                )
                (1): Sequence(
                  (operations): ModuleList(
                    (0): CNOT(control: (1,), target: (2,))
                  )
                )
              )
            )
          )
        )
      )
    )
  )
)

Additionally, Converted contains all fixed and variational parameters, as well as an embedding function which accepts feature parameters to construct a dictionary of circuit-native parameters. These are needed because each backend uses a different representation of the circuit parameters:

import torch

# Contains fixed parameters and variational (from the HEA)
conv.params

inputs = {"x": torch.tensor([1., 1.]), "y":torch.tensor([2., 2.])}

# get all circuit parameters (including feature params)
embedded = conv.embedding_fn(conv.params, inputs)
conv.params = {
  theta_3: tensor([0.0027], requires_grad=True)
  theta_7: tensor([0.0292], requires_grad=True)
  theta_8: tensor([0.0008], requires_grad=True)
  theta_0: tensor([0.6596], requires_grad=True)
  theta_2: tensor([0.0555], requires_grad=True)
  theta_5: tensor([0.8874], requires_grad=True)
  theta_1: tensor([0.2985], requires_grad=True)
  theta_4: tensor([0.7404], requires_grad=True)
  theta_6: tensor([0.4904], requires_grad=True)
}
embedded = {
  c71ae995-1761-43d7-882b-f66d63db78dd: tensor([3., 3.], grad_fn=<ViewBackward0>)
  95c60058-8e2e-4c35-996c-596be26ad63d: tensor([2., 2.])
  3ac4fc11-c297-443d-9395-6a5f6221ca8f: tensor([0.6596], grad_fn=<ViewBackward0>)
  3479ca8f-8931-43a5-9c3d-d4b657b0abd4: tensor([0.0027], grad_fn=<ViewBackward0>)
  641f7a15-ba03-4f1c-8c73-86a47f0f78c1: tensor([0.4904], grad_fn=<ViewBackward0>)
  8ee4f763-67d8-4603-a0ef-fa9ea4be097e: tensor([0.2985], grad_fn=<ViewBackward0>)
  d328c437-9f6a-4bf6-a504-9bb659401819: tensor([0.7404], grad_fn=<ViewBackward0>)
  e9c3a962-7234-43ff-ab80-4a256b52bac2: tensor([0.0292], grad_fn=<ViewBackward0>)
  bf63768e-28e3-4ecb-af10-8854a4b911a1: tensor([0.0555], grad_fn=<ViewBackward0>)
  751cec43-9046-4482-93b8-8fc9a1e8b244: tensor([0.8874], grad_fn=<ViewBackward0>)
  c90e9e3c-00a1-42f9-a3ac-bdb07fac277b: tensor([0.0008], grad_fn=<ViewBackward0>)
}
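The tensor([3., 3.]) above is simply the expression 3*x evaluated elementwise on the batched input x = [1., 1.], while variational parameters are passed through under their native ids. A stdlib sketch of this mapping (illustrative only, not the Qadence implementation; the native ids are made up):

```python
# Illustrative sketch of an embedding function: map user-facing parameters
# (variational weights + feature inputs) to backend-native ids.
variational = {"theta_0": 0.6596}  # a trained weight, as in conv.params
expressions = {
    "native-id-a": lambda p, i: [3 * v for v in i["x"]],  # expression '3*x'
    "native-id-b": lambda p, i: list(i["y"]),             # expression 'y'
    "native-id-c": lambda p, i: [p["theta_0"]],           # variational param
}

def embedding_fn(params, inputs):
    # Evaluate every expression on the merged parameter/input values.
    return {pid: expr(params, inputs) for pid, expr in expressions.items()}

embedded = embedding_fn(variational, {"x": [1.0, 1.0], "y": [2.0, 2.0]})
# embedded["native-id-a"] == [3.0, 3.0], matching the batched '3*x' above
```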

With the embedded parameters, the backend methods can be called directly:

output = backend.run(conv.circuit, embedded)
print(f"{output = }")
output = tensor([[ 0.1880-0.1455j,  0.0843-0.0756j, -0.0936+0.1829j, -0.2237+0.3690j,
         -0.6136-0.3550j, -0.3036-0.1475j,  0.0729+0.1027j,  0.1371+0.2262j],
        [ 0.1880-0.1455j,  0.0843-0.0756j, -0.0936+0.1829j, -0.2237+0.3690j,
         -0.6136-0.3550j, -0.3036-0.1475j,  0.0729+0.1027j,  0.1371+0.2262j]],
       grad_fn=<TBackward0>)

Lower-level: the Backend representation

If there is a requirement to work with a specific backend, it is possible to access the native circuit directly. For example, one may wish to use PyQTorch noise features directly instead of the NoiseHandler interface from Qadence:

from pyqtorch.noise import Depolarizing

inputs = {"x": torch.rand(1), "y":torch.rand(1)}
embedded = conv.embedding_fn(conv.params, inputs)

# Define a noise channel on qubit 0
noise = Depolarizing(0, error_probability=0.1)

# Add noise to circuit
conv.circuit.native.operations.append(noise)

When running with noise, one can see that the output is a density matrix:

density_result = backend.run(conv.circuit, embedded)
print(density_result.shape)
torch.Size([1, 8, 8])
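Density matrices are needed here because a depolarizing channel produces a statistical mixture that no pure statevector can represent. A single-qubit stdlib sketch of the channel in Kraus form, ρ → (1−p)ρ + (p/3)(XρX + YρY + ZρZ) (one common convention; check the PyQTorch documentation for its exact definition):

```python
# Illustrative single-qubit depolarizing channel in Kraus form:
# rho -> (1-p) rho + (p/3) (X rho X + Y rho Y + Z rho Z).
# One common convention; not the PyQTorch implementation itself.

def matmul2(a, b):
    # 2x2 complex matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def depolarize(rho, p):
    # Weighted sum of the identity Kraus term and the three Pauli terms.
    out = [[(1 - p) * rho[i][j] for j in range(2)] for i in range(2)]
    for s in (X, Y, Z):
        srs = matmul2(matmul2(s, rho), s)
        for i in range(2):
            for j in range(2):
                out[i][j] += (p / 3) * srs[i][j]
    return out

rho0 = [[1, 0], [0, 0]]          # pure state |0><0|
rho = depolarize(rho0, 0.1)
# rho is now a mixture (off-diagonal-free here), with trace still 1
```

The channel is trace-preserving, but the result is no longer a rank-one projector, which is why the backend must return a [batch, 2^n, 2^n] density matrix rather than a statevector.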