Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. Analog blocks are a direct abstraction of device execution with global addressing. However, we may want to program a Hamiltonian-level ansatz directly, for finer control over the model. In Qadence this can easily be done through digital-analog programs. In this tutorial we solve a simple QCL problem with this approach.
Setting up the problem
The example problem considered is to fit a function of interest in a specified range. Below we define and plot the function \(f(x)=x^5\).
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x**5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot: f(x) = x^5 on the interval (-1, 1).]
Digital-Analog Ansatz
We start by defining the register of qubits. The chosen topology defines the interactions in the entangling Hamiltonian. As an example, we define a rectangular lattice with 6 qubits.
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
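To get a feeling for the \(1/r_{ij}^6\) weights this topology induces, here is a minimal NumPy sketch using hypothetical unit-spaced coordinates for the 3x2 lattice (the actual coordinates and spacing are stored by the Register object and may differ):

```python
import itertools
import numpy as np

# Hypothetical coordinates for a 3x2 rectangular lattice with unit spacing
# (illustrative only -- Qadence's Register stores its own coordinates).
coords = [(col, row) for row in range(2) for col in range(3)]

# Pairwise distances r_ij, which weight the interaction terms by 1 / r_ij^6
distances = {
    (i, j): float(np.hypot(*(np.array(coords[i]) - np.array(coords[j]))))
    for i, j in itertools.combinations(range(len(coords)), 2)
}

# Nearest neighbours (r = 1) dominate: their weight is 64x that of r = 2
assert distances[(0, 1)] == 1.0
assert distances[(0, 2)] == 2.0
assert (1 / distances[(0, 1)] ** 6) / (1 / distances[(0, 2)] ** 6) == 64.0
```

The rapid \(1/r^6\) decay means the entangling Hamiltonian is dominated by nearest-neighbour terms, mimicking the locality of Rydberg interactions.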
Inspired by the Ising interaction mode of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=(1/2)(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
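To see what a single term \(N_iN_j\) looks like numerically, here is a toy two-qubit sketch in plain NumPy (independent of Qadence), building the number operator from its definition \(N=(1/2)(I-Z)\):

```python
import numpy as np

# Single-qubit operators
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
N = 0.5 * (I2 - Z)  # number operator: |0> -> 0, |1> -> 1

# Toy 2-qubit interaction term N_i N_j / r^6, with a hypothetical r = 1.0
r = 1.0
h_int_toy = np.kron(N, N) / r**6

# The only non-zero diagonal entry is on |11>, where both qubits are excited
print(np.diag(h_int_toy))  # [0. 0. 0. 1.]
```

The term only penalizes (or phases, under evolution) states where both qubits are excited, which is exactly the Rydberg-blockade-style interaction the text describes.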
To build the digital-analog ansatz we can make use of the standard hea function by specifying the Strategy.SDAQC strategy and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation will be replaced by the evolution of this Hamiltonian, HamEvo(h_int, t), where the time parameter t is a variational parameter at each layer.
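Since the toy interaction Hamiltonian above is diagonal, the unitary implemented by HamEvo can be sketched directly with NumPy: the matrix exponential \(U(t)=e^{-i\mathcal{H}t}\) acts entry-wise on the diagonal, so only the doubly-excited state picks up a phase:

```python
import numpy as np

# Toy diagonal interaction Hamiltonian: N x N on two qubits
N = np.diag([0.0, 1.0])
h_int_toy = np.kron(N, N)

# HamEvo(h_int, t) implements U(t) = exp(-i * h_int * t); for this diagonal
# toy Hamiltonian the exponential acts entry-wise on the diagonal.
t = 0.5  # in the ansatz, t is a trainable parameter; fixed here for illustration
U = np.diag(np.exp(-1j * np.diag(h_int_toy) * t))

# U is unitary, and only the |11> component acquires the phase exp(-i t)
assert np.allclose(U.conj().T @ U, np.eye(4))
assert np.isclose(U[3, 3], np.exp(-1j * t))
```

Making t variational means the optimizer tunes how much entangling phase each layer applies.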
from qadence import hea, Strategy, RX, RY
from qadence.draw import html_string

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)

print(html_string(da_ansatz))
[Circuit diagram: for each of the 2 layers, every one of the 6 qubits undergoes RX, RY, RX rotations (parameters theta₀ through theta₃₅) followed by a global HamEvo entangling block with variational time t = theta_t₀ and theta_t₁.]
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z, I

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)
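The Chebyshev feature map encodes the input through \(\arccos(x)\) rotation angles, and the TOWER reupload scaling multiplies the angle on each successive qubit by an increasing integer. A minimal sketch of the angle pattern (the exact convention in Qadence may differ slightly; this is illustrative):

```python
import numpy as np

def chebyshev_tower_angles(x: float, n_qubits: int) -> list:
    """Sketch of rotation angles for a Chebyshev feature map with
    TOWER reupload scaling: qubit i gets the angle (i + 1) * acos(x).
    Illustrative only -- Qadence's exact convention may differ."""
    base = np.arccos(x)
    return [(i + 1) * base for i in range(n_qubits)]

angles = chebyshev_tower_angles(0.5, n_qubits=3)
# acos(0.5) = pi/3, so the tower gives pi/3, 2*pi/3, pi
assert np.allclose(angles, [np.pi / 3, 2 * np.pi / 3, np.pi])
```

Note that the derivative of \(\arccos(x)\) diverges at \(x=\pm 1\), which is why the training points later avoid the interval endpoints.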
# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
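On a computational basis state, the total magnetization simply counts qubits in \(|0\rangle\) minus qubits in \(|1\rangle\); a quick pure-Python sketch of the decoding range:

```python
# Total magnetization <sum_i Z_i> on a computational basis state:
# each qubit in |0> contributes +1, each qubit in |1> contributes -1.
def total_magnetization(bits: str) -> float:
    return sum(1.0 if b == "0" else -1.0 for b in bits)

# For 6 qubits the expectation value ranges from -6 to +6, which
# comfortably covers the target range of f(x) = x^5 on (-1, 1).
assert total_magnetization("000000") == 6.0
assert total_magnetization("111111") == -6.0
assert total_magnetization("010101") == 0.0
```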
And we have all the ingredients to initialize the QuantumModel:
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
# Chebyshev FM does not accept x = -1, 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
And we use a simple custom training loop.
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
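The loop above is a standard PyTorch training pattern. As a self-contained sanity check of the same structure, here is a NumPy stand-in that fits \(f(x)=x^5\) with a linear model over monomial features using plain gradient descent (a classical surrogate, not the quantum model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Classical stand-in for the quantum model: a linear model over
# monomial features x^0 .. x^5, trained on the same grid of points.
x_train = np.linspace(-0.99, 0.99, 20)
y_train = x_train**5
features = np.stack([x_train**k for k in range(6)], axis=1)  # shape (20, 6)
weights = rng.normal(scale=0.1, size=6)

initial_loss = np.mean((features @ weights - y_train) ** 2)

# Same loop structure: forward pass, MSE loss, gradient step.
lr = 0.1
for _ in range(200):
    y_pred = features @ weights
    grad = 2 * features.T @ (y_pred - y_train) / len(x_train)
    weights -= lr * grad

final_loss = np.mean((features @ weights - y_train) ** 2)
assert final_loss < initial_loss  # the MSE decreases over training
```

The quantum model follows the same forward/loss/step cycle; only the forward pass (a circuit expectation value) and the optimizer (Adam with autograd) differ.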
Results
Finally we can plot the resulting trained model.
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot: initial and final model predictions against the training points for f(x) = x^5.]