Digital-Analog Emulation

TL;DR: Automatic emulation in the pyqtorch backend

All analog blocks are automatically translated to their emulated version when running them with the pyqtorch backend (by calling add_interaction on them under the hood):

import torch
from qadence import Register, AnalogRX, sample

reg = Register.from_coordinates([(0,0), (0,5)])
print(sample(reg, AnalogRX(torch.pi)))
[Counter({'01': 35, '00': 33, '10': 32})]

Qadence includes primitives to conveniently construct Ising-like Hamiltonians that account for the interaction among qubits. This makes it possible to simulate systems closer to real quantum computing platforms such as neutral atoms. The constructed Hamiltonians are of the form

\[ \mathcal{H} = \sum_{i} \frac{\hbar\Omega}{2} \hat\sigma^x_i - \sum_{i} \hbar\delta \hat n_i + \mathcal{H}_{int}, \]

where \(\hat n = \frac{1-\hat\sigma^z}{2}\), and \(\mathcal{H}_{int}\) is a pair-wise interaction term.
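As a point of reference (a sketch only; the exact prefactors and constants are determined by add_interaction and the register geometry), the two available interaction types follow the usual neutral-atom forms for qubits separated by distances \(r_{ij}\):

\[ \mathcal{H}^{\text{Ising}}_{int} \sim \sum_{i<j} \frac{C_6}{r_{ij}^6}\, \hat n_i \hat n_j, \qquad \mathcal{H}^{XY}_{int} \sim \sum_{i<j} \frac{C_3}{r_{ij}^3}\left(\hat\sigma^x_i\hat\sigma^x_j + \hat\sigma^y_i\hat\sigma^y_j\right). \]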

We currently have two central operations, wait and AnalogRot, that can be used to compose analog programs.

Both are time-independent and can be emulated by calling add_interaction.

To compose analog blocks you can use chain and kron as usual with the following restrictions:

  • AnalogChains can only be constructed from AnalogKron blocks or globally supported, primitive analog blocks.
  • AnalogKrons can only be constructed from non-global, analog blocks with the same duration.
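For instance (a minimal sketch, assuming that compositions violating these restrictions are rejected with an exception), kroning two analog blocks of different duration fails:

from qadence import AnalogRot, wait, kron

# assumption: kron of analog blocks with mismatched durations raises an error
try:
    kron(
        wait(duration=1000, qubit_support=(0, 1)),
        AnalogRot(duration=500, omega=2.0, qubit_support=(2, 3)),
    )
except Exception as e:  # the exact exception type is an assumption
    print(e)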

The wait operation can be emulated with an Ising or an \(XY\)-interaction:

from qadence import Register, wait, add_interaction, run

block = wait(duration=3000)
print(block)

reg = Register.from_coordinates([(0,0), (0,5)])  # we need atomic distances
emulated = add_interaction(reg, block, interaction="XY")  # or: interaction="Ising"
print(emulated.generator)
WaitBlock(t=3000.0, support=(<QubitSupportType.GLOBAL: 'global'>,))

AddBlock(0,1)
└── [mul: 29.600] 
    └── AddBlock(0,1)
        ├── KronBlock(0,1)
        │   ├── X(0)
        │   └── X(1)
        └── KronBlock(0,1)
            ├── Y(0)
            └── Y(1)

The AnalogRot constructor can create any analog rotation that is constant in time.

import torch
from qadence import AnalogRot, AnalogRX

# implement a global RX rotation
block = AnalogRot(
    duration=1000.,  # [ns]
    omega=torch.pi, # [rad/μs]
    delta=0,        # [rad/μs]
    phase=0,        # [rad]
)
print(block)

# or use the short hand
block = AnalogRX(torch.pi)
print(block)
ConstantAnalogRotation(α=3.14159265358979, t=1000.00000000000, 
support=(<QubitSupportType.GLOBAL: 'global'>,), Ω=3.14159265358979, δ=0, φ=0)
ConstantAnalogRotation(α=3.14159265358979, t=1000.00000000000, 
support=(<QubitSupportType.GLOBAL: 'global'>,), Ω=3.14159265358979, δ=0, φ=0)

Analog blocks can also be chained and kroned like all other blocks, but with two small caveats:

import torch
from qadence import AnalogRot, kron, chain, wait

# only blocks with the same `duration` can be `kron`ed
kron(
    wait(duration=1000, qubit_support=(0,1)),
    AnalogRot(duration=1000, omega=2.0, qubit_support=(2,3))
)

# only blocks with `"global"` or the same qubit support can be `chain`ed
chain(wait(duration=200), AnalogRot(duration=300, omega=2.0))

Composing digital & analog blocks

You can also compose digital and analog blocks; the additional restrictions on chain/kron only apply to composite blocks that contain exclusively analog blocks. For more details and examples, see AnalogChain and AnalogKron.
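As a minimal sketch of such a mixed composition (using the digital RX gate together with the global AnalogRX shown above):

import torch
from qadence import RX, AnalogRX, chain

# digital rotation on qubit 0 followed by a global analog rotation;
# the analog-only restrictions above do not apply to this mixed chain
mixed = chain(RX(0, torch.pi / 2), AnalogRX(torch.pi))
print(mixed)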

Fitting a simple function

Like most other blocks, analog blocks can be parametrized, so we can build a small ansatz to fit a sine wave. When using the pyqtorch backend, the add_interaction function is called automatically. As usual, we can choose which differentiation backend to use: autodiff or the parameter shift rule (PSR).

First we define an ansatz block and an observable:

import torch
from qadence import Register, FeatureParameter, VariationalParameter
from qadence import AnalogRX, AnalogRZ, Z
from qadence import wait, chain, add

pi = torch.pi

# two qubit register
reg = Register.from_coordinates([(0, 0), (0, 12)])

# analog ansatz with input parameter
t = FeatureParameter("t")
block = chain(
    AnalogRX(pi / 2),
    AnalogRZ(t),
    wait(1000 * VariationalParameter("theta", value=0.5)),
    AnalogRX(pi / 2),
)

# observable
obs = add(Z(i) for i in range(reg.n_qubits))

Then we define the dataset we want to train on and plot the initial prediction.

import matplotlib.pyplot as plt
from qadence import QuantumCircuit, QuantumModel

# define quantum model; including digital-analog emulation
circ = QuantumCircuit(reg, block)
model = QuantumModel(circ, obs, diff_mode="gpsr")

x_train = torch.linspace(0, 6, steps=30)
y_train = -0.64 * torch.sin(x_train + 0.33) + 0.1
y_pred_initial = model.expectation({"t": x_train})

fig, ax = plt.subplots()
ax.scatter(x_train, y_train, label="Training points", marker="o", color="green")
ax.plot(x_train, y_pred_initial.detach(), label="Initial prediction")
plt.legend()
[Figure: training points and the initial prediction]

The rest is the usual PyTorch training routine.

mse_loss = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-2)


def loss_fn(x_train, y_train):
    return mse_loss(model.expectation({"t": x_train}).squeeze(), y_train)


# train
n_epochs = 200

for i in range(n_epochs):
    optimizer.zero_grad()

    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()

    # if (i + 1) % 10 == 0:
    #     print(f"Epoch {i+1:0>3} - Loss: {loss.item()}\n")

# visualize
y_pred = model.expectation({"t": x_train})

fig, ax = plt.subplots()
ax.scatter(x_train, y_train, label="Training points", marker="o", color="green")
ax.plot(x_train, y_pred_initial.detach(), label="Initial prediction")
ax.plot(x_train, y_pred.detach(), label="Final prediction")
plt.legend()
[Figure: training points with the initial and final predictions]