Restricted local addressability
Physics behind semi-local addressing patterns
Recall that in Qadence the general neutral-atom Hamiltonian for a set of \(n\) interacting qubits is given by

$$\mathcal{H} = \sum_{i=0}^{n-1}\left(\mathcal{H}^{\rm d}_{i}(t) + \sum_{j<i}\mathcal{H}^{\rm int}_{ij}\right),$$

as described in detail in the analog interface basics documentation.
The driving Hamiltonian term can, in principle, model any local single-qubit rotation by addressing each qubit individually. However, some neutral-atom devices offer only restricted local addressability through devices called spatial light modulators (SLMs).
We refer to this regime as semi-local addressability. In this regime, individual qubit addressing is restricted to a pattern of targeted qubits which is kept fixed during the execution of the quantum circuit. More formally, the addressing pattern appears as an additional term in the neutral-atom Hamiltonian:

$$\mathcal{H}_{\rm total} = \mathcal{H} + \mathcal{H}_{\rm pattern},$$

where \(\mathcal{H}_{\rm pattern}\) is given by

$$\mathcal{H}_{\rm pattern} = \sum_{i=0}^{n-1}\left(-\Delta\, w_i^{\rm det}\, N_i + \Gamma\, w_i^{\rm drive}\, \sigma^x_i\right),$$

with \(N_i = \frac{1}{2}\left(\mathbb{1} - \sigma^z_i\right)\) the number operator of qubit \(i\).
Here \(\Delta\) specifies the maximal negative detuning that each qubit in the register can be exposed to. The weight \(w_i^{\rm det}\in [0, 1]\) determines the actual detuning that the \(i\)-th qubit experiences, thus emulating the detuning pattern. Similarly, for the amplitude pattern, \(\Gamma\) determines the maximal additional positive drive acting on the qubits, and the corresponding weights \(w_i^{\rm drive}\) can likewise vary in the interval \([0, 1]\).
Using the detuning and amplitude patterns described above one can modify the behavior of a selected set of qubits, thus achieving semi-local addressing.
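As a minimal illustration of how the weights act (a plain-Python sketch with assumed, illustrative values for \(\Delta\) and the weights, not device data), the effective detuning each qubit experiences is simply the product \(\Delta\, w_i^{\rm det}\):

```python
# assumed maximal negative detuning (illustrative value, not device data)
delta_max = 9.0

# hypothetical per-qubit detuning weights, each in [0, 1]
w_det = {0: 0.9, 1: 0.5, 2: 1.0}

# effective detuning experienced by qubit i is delta_max * w_i^det
effective_det = {i: delta_max * w for i, w in w_det.items()}

for i, d in effective_det.items():
    print(f"qubit {i}: effective detuning {d:.2f}")
```

The amplitude pattern acts analogously, scaling the maximal additional drive \(\Gamma\) by the per-qubit weights \(w_i^{\rm drive}\).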
Qadence implements semi-local addressing in two different flavors of increasing complexity: either as a circuit constructor or directly as a pattern added to the general evolution Hamiltonian described by the circuit.
Using circuit constructors
The rydberg_hea
constructor routine builds a circuit instance implementing a basic version of the Hamiltonian
evolution described above, where both the \(\Delta\) and \(\tilde{\Omega}\) coefficients
are treated as constants. Furthermore, no global drive or detuning terms are explicitly added
to the Hamiltonian. The final Hamiltonian generator of the circuit therefore reads:

$$\mathcal{H} = \sum_{i=0}^{n-1}\left(\frac{\tilde{\Omega}}{2}\, w_i^{\rm drive}\left(\cos(\phi)\,\sigma^x_i - \sin(\phi)\,\sigma^y_i\right) - \Delta\, w_i^{\rm det}\, N_i + \sum_{j<i}\mathcal{H}^{\rm int}_{ij}\right).$$
This implementation does not perform any check on weight normalization and is therefore not fully realistic: since the weights are unconstrained, a global drive and detuning can be recovered by choosing the weights appropriately.
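To make the last point concrete, here is a small plain-Python sketch (with an assumed value for \(\Delta\)): setting every detuning weight to 1 makes all qubits experience the same detuning, which is indistinguishable from a global detuning term.

```python
# assumed maximal detuning (illustrative value)
delta_max = 9.0
n_qubits = 4

# unnormalized weights: setting every w_i^det to 1.0 ...
w_det = {i: 1.0 for i in range(n_qubits)}

# ... yields an identical detuning on every qubit, i.e. an effectively global detuning
effective_det = {i: delta_max * w for i, w in w_det.items()}
assert len(set(effective_det.values())) == 1
```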
You can easily create a Rydberg hardware efficient ansatz implementing multiple layers of the evolution generated by the local addressing Hamiltonian:
Note that in a real-device implementation, usually only a single layer is achievable.
```python
import qadence as qd
from qadence import rydberg_hea, rydberg_hea_layer

n_qubits = 4
n_layers = 2

register = qd.Register.line(n_qubits)

# ansatz constructor
# the evolution time is parametrized for each layer of the evolution
ansatz = rydberg_hea(
    register,
    n_layers=n_layers,  # number of subsequent layers of Hamiltonian evolution
    addressable_detuning=True,  # make the local detuning weights w_i^{det} variational parameters
    addressable_drive=True,  # make the local drive weights w_i^{drv} variational parameters
    tunable_phase=True,  # make the phase \phi a variational parameter
)

# alternatively, a single ansatz layer can be created for better flexibility

# these can be variational parameters
tevo_drive = 1.0  # evolution time for the locally addressed drive term
tevo_det = 1.0  # evolution time for the locally addressed detuning term
tevo_int = 1.0  # evolution time for the interaction term

# these can be lists of variational parameters
weights_drive = [0.0, 0.25, 0.5, 0.25]
weights_det = [0.0, 0.0, 0.5, 0.5]

ansatz_layer = rydberg_hea_layer(
    register,
    tevo_det,
    tevo_drive,
    tevo_int,
    detunings=weights_det,
    drives=weights_drive,
)
```
This circuit constructor is meant to be used with fully differentiable backends such as
pyqtorch, mainly for quick experimentation with neutral-atom compatible ansätze.
Using addressing patterns
In Qadence, semi-local addressing patterns can be created either by specifying fixed values for the weights of the addressed qubits or by defining them as trainable parameters to be optimized later in a training loop. Semi-local addressing patterns are defined with the AddressingPattern
dataclass.
Fixed weights
With fixed weights, detuning/amplitude addressing patterns can be defined in the following way:
```python
import torch
from qadence.analog import AddressingPattern

n_qubits = 3

w_det = {0: 0.9, 1: 0.5, 2: 1.0}
w_amp = {0: 0.1, 1: 0.4, 2: 0.8}
det = 9.0
amp = 6.5

pattern = AddressingPattern(
    n_qubits=n_qubits,
    det=det,
    amp=amp,
    weights_det=w_det,
    weights_amp=w_amp,
)
```
If only a detuning or an amplitude pattern is needed, the corresponding weights of the other pattern can be set to 0 for all qubits.
The created addressing pattern can now be passed as an argument to any Qadence device class, such as the
IdealDevice
or RealisticDevice,
to make use of the pre-defined options in those devices:
```python
import torch
from qadence import (
    AnalogRX,
    AnalogRY,
    BackendName,
    DiffMode,
    Parameter,
    QuantumCircuit,
    QuantumModel,
    Register,
    chain,
    total_magnetization,
    IdealDevice,
    PI,
)

# define register and circuit
spacing = 8.0
x = Parameter("x")
block = chain(AnalogRX(3 * x), AnalogRY(0.5 * x))

device_specs = IdealDevice(pattern=pattern)

reg = Register.line(
    n_qubits,
    spacing=spacing,
    device_specs=device_specs,
)
circ = QuantumCircuit(reg, block)

obs = total_magnetization(n_qubits)
model_pyq = QuantumModel(
    circuit=circ, observable=obs, backend=BackendName.PYQTORCH, diff_mode=DiffMode.AD
)

# calculate expectation value of the circuit for a random input value
value = {"x": 1.0 + torch.rand(1)}
expval_pyq = model_pyq.expectation(values=value)
```
The same configuration can also be seamlessly used to create a model with the Pulser backend.
```python
model_pulser = QuantumModel(
    circuit=circ,
    observable=obs,
    backend=BackendName.PULSER,
    diff_mode=DiffMode.GPSR,
)

# calculate expectation value of the circuit for the same random input value
expval_pulser = model_pulser.expectation(values=value)
```
Note that by default the addressing pattern terms are added to every analog operation in the circuit. However, it is
possible to turn the addressing pattern off for specific operations by passing add_pattern=False
to the operation.
For example, AnalogRX(PI)
will get the extra addressing pattern term, but AnalogRX(PI, add_pattern=False)
will not.
This is currently only implemented for the PyQTorch backend. If an addressing pattern is specified for the Pulser backend,
it will be added to all the blocks.
Trainable weights
Note
Trainable parameters are currently supported only by the pyqtorch
backend.
Since both the maximal detuning/amplitude values of the addressing pattern and the corresponding weights can be
user-specified, they can be used variationally in a QML setting. This is achieved by defining the pattern weights as trainable Parameter
instances or as strings specifying the weight names.
```python
n_qubits = 3

# some random target function value
f_value = torch.rand(1)

# define trainable addressing pattern
w_amp = {i: f"w_amp{i}" for i in range(n_qubits)}
w_det = {i: f"w_det{i}" for i in range(n_qubits)}
amp = "max_amp"
det = "max_det"

pattern = AddressingPattern(
    n_qubits=n_qubits,
    det=det,
    amp=amp,
    weights_det=w_det,
    weights_amp=w_amp,
)

# some fixed analog operation
block = AnalogRX(PI)

device_specs = IdealDevice(pattern=pattern)

reg = Register.line(
    n_qubits,
    spacing=spacing,
    device_specs=device_specs,
)
circ = QuantumCircuit(reg, block)

# define quantum model
obs = total_magnetization(n_qubits)
model = QuantumModel(circuit=circ, observable=obs, backend=BackendName.PYQTORCH)

# prepare for training
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
loss_criterion = torch.nn.MSELoss()
n_epochs = 200
loss_save = []

# train model
for _ in range(n_epochs):
    optimizer.zero_grad()
    out = model.expectation()
    loss = loss_criterion(f_value, out)
    loss.backward()
    optimizer.step()
    loss_save.append(loss.item())

# get final results
f_value_model = model.expectation().detach()
assert torch.isclose(f_value, f_value_model, atol=0.01)
```
Here, the expectation value of the circuit is fitted by varying the parameters of the addressing pattern.