Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. Analog blocks are a direct abstraction of device execution with global addressing. However, we may want to program a Hamiltonian-level ansatz directly, for finer control over the model. In Qadence this can easily be done through digital-analog programs. In this tutorial we solve a simple QCL problem with this approach.
Setting up the problem
The example problem is to fit a function of interest over a specified range. Below we define and plot the function \(f(x)=x^5\).
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x**5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
Digital-Analog Ansatz
We start by defining the register of qubits. The register topology defines the interactions in the entangling Hamiltonian. As an example, we define a rectangular lattice with 6 qubits.
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
Inspired by the Ising interaction mode of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=(1/2)(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
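To make a single term concrete, here is a minimal NumPy sketch (independent of Qadence) that builds \(N_iN_j/r_{ij}^6\) for one pair of qubits and confirms it acts only on the \(|11\rangle\) component:

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
N = 0.5 * (I2 - Z)  # number operator: diag(0, 1)

r = 1.0  # example distance between the two qubits
h_term = np.kron(N, N) / r**6  # N_i N_j / r_ij^6 on the 2-qubit space

print(np.diag(h_term))  # [0. 0. 0. 1.]
```

Only pairs of excited qubits contribute energy, and the contribution decays rapidly with distance due to the \(1/r^6\) factor.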
To build the digital-analog ansatz we can make use of the standard hea function by specifying the Strategy.SDAQC strategy and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation is replaced by the evolution of this Hamiltonian, HamEvo(h_int, t), where the time parameter t is treated as a variational parameter at each layer.
from qadence import hea, Strategy, RX, RY
from qadence.draw import html_string

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)

print(html_string(da_ansatz))
[Circuit diagram: 6 qubits, depth-2 HEA with RX, RY, RX rotation layers (theta₀ through theta₃₅) interleaved with HamEvo entangling blocks with variational times t = theta_t₀ and t = theta_t₁.]
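Each HamEvo block implements the unitary \(e^{-i\mathcal{H}t}\). Since the interaction Hamiltonian built above is diagonal in the computational basis, the evolution reduces to a phase on each basis state, which the following two-qubit toy sketch (NumPy only, independent of Qadence) illustrates:

```python
import numpy as np

# Two-qubit toy entangler: H = N ⊗ N is diagonal in the Z basis
N = np.diag([0.0, 1.0])
H = np.kron(N, N)

t = 0.7  # stand-in for the variational time parameter
U = np.diag(np.exp(-1j * t * np.diag(H)))  # exp(-iHt) for a diagonal H

print(np.allclose(U.conj().T @ U, np.eye(4)))  # True: U is unitary
print(U[3, 3])  # only |11> acquires a phase: exp(-0.7j)
```

Training then amounts to tuning the evolution times t alongside the rotation angles, so the entangling strength of each layer is learned rather than fixed.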
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z, I

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)

# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
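The total magnetization has eigenvalues ranging from \(-n\) to \(n\) for \(n\) qubits, which sets the output range of the model. A small NumPy sketch (independent of Qadence), shown here for three qubits to keep the matrices small:

```python
import numpy as np
from functools import reduce

n_qubits = 3
I2, Z = np.eye(2), np.diag([1.0, -1.0])

def z_on(i):
    # Z acting on qubit i, identity elsewhere
    return reduce(np.kron, [Z if k == i else I2 for k in range(n_qubits)])

obs = sum(z_on(i) for i in range(n_qubits))

# Diagonal entries are (#zeros - #ones) per bitstring, spanning [-n, n]
print(obs.diagonal().min(), obs.diagonal().max())  # -3.0 3.0
```

Since the target \(f(x)=x^5\) lies in \([-1, 1]\), the trained expectation value only needs to occupy a small slice of this range.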
And we have all the ingredients to initialize the QuantumModel:
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
# Chebyshev FM does not accept x = -1, 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
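The endpoints are excluded because the Chebyshev feature map encodes \(x\) through an \(\arccos(x)\) rotation angle (an assumption based on the Chebyshev basis set), and the derivative of \(\arccos\) diverges at \(x = \pm 1\). A quick check of the gradient blow-up using only the standard library:

```python
import math

# d/dx arccos(x) = -1 / sqrt(1 - x^2): finite inside (-1, 1), divergent at the edges
for x in (0.0, 0.9, 0.99):
    dtheta = -1.0 / math.sqrt(1.0 - x * x)
    print(f"x = {x:5.2f}  d(arccos)/dx = {dtheta:8.3f}")
```

Clipping the training range to \([-0.99, 0.99]\) keeps the encoding and its gradients well defined.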
And we use a simple custom training loop.
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
Results
Finally we can plot the resulting trained model.
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))