# Digital-Analog Emulation

**TL;DR: Automatic emulation in the `pyqtorch` backend.** All analog blocks are automatically translated to their emulated version when running them with the `pyqtorch` backend (by calling `add_interaction` on them under the hood):
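A minimal sketch of this (assuming the convenience `run` function accepts a register and a block directly, as elsewhere in qadence):

```python
import torch
from qadence import Register, AnalogRX, run

# two atoms 8 units apart; the distance sets the interaction strength
reg = Register.from_coordinates([(0, 0), (0, 8)])

# a global analog rotation; the pyqtorch backend emulates it automatically,
# i.e. the interaction term is added without an explicit call to add_interaction
wf = run(reg, AnalogRX(torch.pi))
print(wf)
```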
Qadence includes primitives for the simple construction of Ising-like Hamiltonians to account for the interaction among qubits. This makes it possible to simulate systems closer to real quantum computing platforms such as neutral atoms. The constructed Hamiltonians are of the form

\[
\mathcal{H} = \sum_i \left[ \frac{\Omega}{2} \left( \cos(\phi)\,\hat\sigma^x_i - \sin(\phi)\,\hat\sigma^y_i \right) - \delta\, \hat n_i \right] + \mathcal{H}_{int},
\]

where \(\hat n = \frac{1-\hat\sigma_z}{2}\), and \(\mathcal{H}_{int}\) is a pair-wise interaction term.
We currently have two central operations that can be used to compose analog programs:

- `WaitBlock` for interactions
- `ConstantAnalogRotation`

Both are time-independent and can be emulated by calling `add_interaction`.

To compose analog blocks you can use `chain` and `kron` as usual, with the following restrictions:

- `AnalogChain`s can only be constructed from `AnalogKron` blocks or globally supported, primitive analog blocks.
- `AnalogKron`s can only be constructed from non-global analog blocks with the same duration.
The `wait` operation can be emulated with an Ising or an \(XY\) interaction:
```python
from qadence import Register, wait, add_interaction, run

block = wait(duration=3000)
print(block)

reg = Register.from_coordinates([(0, 0), (0, 5)])  # we need atomic distances
emulated = add_interaction(reg, block, interaction="XY")  # or: interaction="Ising"
print(emulated.generator)
```
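The printed generator is the pair-wise interaction term itself. For reference, under the conventions commonly used for neutral-atom devices (an assumption about the exact prefactors, not something stated in this section), the two options correspond to

\[
\mathcal{H}^{\text{Ising}}_{int} = \sum_{i<j} \frac{C_6}{r_{ij}^6}\, \hat n_i \hat n_j,
\qquad
\mathcal{H}^{XY}_{int} = \sum_{i<j} \frac{C_3}{r_{ij}^3} \left( \hat\sigma^+_i \hat\sigma^-_j + \hat\sigma^-_i \hat\sigma^+_j \right),
\]

where \(r_{ij}\) is the distance between atoms \(i\) and \(j\) — which is why the register needs atomic coordinates.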
The `AnalogRot` constructor can create any constant (in time) analog rotation.
```python
import torch
from qadence import AnalogRot, AnalogRX

# implement a global RX rotation
block = AnalogRot(
    duration=1000.,  # [ns]
    omega=torch.pi,  # [rad/μs]
    delta=0,         # [rad/μs]
    phase=0,         # [rad]
)
print(block)

# or use the short hand
block = AnalogRX(torch.pi)
print(block)
```
Analog blocks can also be `chain`ed and `kron`ed like all other blocks, but with two small caveats:
```python
import torch
from qadence import AnalogRot, kron, chain, wait

# only blocks with the same `duration` can be `kron`ed
kron(
    wait(duration=1000, qubit_support=(0, 1)),
    AnalogRot(duration=1000, omega=2.0, qubit_support=(2, 3))
)

# only blocks with `"global"` or the same qubit support can be `chain`ed
chain(wait(duration=200), AnalogRot(duration=300, omega=2.0))
```
## Composing digital & analog blocks
You can also compose digital and analog blocks. The additional restrictions on `chain`/`kron` only apply to composite blocks that contain exclusively analog blocks. For more details and examples, see `AnalogChain` and `AnalogKron`.
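As a minimal sketch of such a mixed composition (using the digital `RX` gate from the qadence API):

```python
import torch
from qadence import RX, AnalogRX, chain

# a digital rotation on qubit 0 followed by a global analog rotation;
# since this chain is not purely analog, the duration/support restrictions
# listed above do not apply to it
mixed = chain(RX(0, torch.pi / 2), AnalogRX(torch.pi))
print(mixed)
```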
## Fitting a simple function
Like most other blocks, analog blocks can be parametrized, so we can build a small ansatz that fits a sine wave. When using the `pyqtorch` backend, the `add_interaction` function is called automatically. As usual, we can choose which differentiation method to use: automatic differentiation or the parameter shift rule (PSR).

First, we define an ansatz block and an observable:
```python
import torch
from qadence import Register, FeatureParameter, VariationalParameter
from qadence import AnalogRX, AnalogRZ, Z
from qadence import wait, chain, add

pi = torch.pi

# two qubit register
reg = Register.from_coordinates([(0, 0), (0, 12)])

# analog ansatz with input parameter
t = FeatureParameter("t")

block = chain(
    AnalogRX(pi / 2),
    AnalogRZ(t),
    wait(1000 * VariationalParameter("theta", value=0.5)),
    AnalogRX(pi / 2),
)

# observable
obs = add(Z(i) for i in range(reg.n_qubits))
```
Then we define the dataset we want to train on and plot the initial prediction.
```python
import matplotlib.pyplot as plt
from qadence import QuantumCircuit, QuantumModel

# define quantum model; including digital-analog emulation
circ = QuantumCircuit(reg, block)
model = QuantumModel(circ, obs, diff_mode="gpsr")

x_train = torch.linspace(0, 6, steps=30)
y_train = -0.64 * torch.sin(x_train + 0.33) + 0.1

y_pred_initial = model.expectation({"t": x_train})

fig, ax = plt.subplots()
ax.scatter(x_train, y_train, label="Training points", marker="o", color="green")
ax.plot(x_train, y_pred_initial.detach().squeeze(), label="Initial prediction")
plt.legend()
```
The rest is the usual PyTorch training routine.
```python
mse_loss = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-2)

def loss_fn(x_train, y_train):
    return mse_loss(model.expectation({"t": x_train}).squeeze(), y_train)

# train
n_epochs = 200
for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
    # if (i + 1) % 10 == 0:
    #     print(f"Epoch {i+1:0>3} - Loss: {loss.item()}\n")

# visualize
y_pred = model.expectation({"t": x_train})

fig, ax = plt.subplots()
ax.scatter(x_train, y_train, label="Training points", marker="o", color="green")
ax.plot(x_train, y_pred_initial.detach().squeeze(), label="Initial prediction")
ax.plot(x_train, y_pred.detach().squeeze(), label="Final prediction")
plt.legend()
```