Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. Analog blocks are a direct abstraction of device execution with global addressing. However, we may want to program a Hamiltonian-level ansatz directly, to have finer control over our model. In Qadence this can easily be done through digital-analog programs. In this tutorial we will solve a simple QCL problem with this approach.
Setting up the problem
The example problem considered is to fit a function of interest in a specified range. Below we define and plot the function \(f(x)=x^5\).
```python
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x**5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
```
Digital-Analog Ansatz
We start by defining the register of qubits. The chosen topology will define the interactions in the entangling Hamiltonian. As an example, we can define a rectangular lattice with 6 qubits.
```python
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
```
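To make the distance-based weighting used below concrete, here is a plain-Python sketch (our own illustration, not the Qadence API) of pairwise distances on a 3×2 grid with unit spacing, and the corresponding \(1/r^6\) weights, which decay quickly beyond nearest neighbours:

```python
import itertools
import math

# 3 x 2 grid with unit spacing; qubit i sits at (col, row)
coords = [(c, r) for r in range(2) for c in range(3)]

def dist(i: int, j: int) -> float:
    """Euclidean distance between qubits i and j."""
    (x1, y1), (x2, y2) = coords[i], coords[j]
    return math.hypot(x1 - x2, y1 - y2)

# Interaction weight 1/r^6: next-nearest pairs couple much more weakly
for i, j in itertools.combinations(range(6), 2):
    print(i, j, dist(i, j), 1 / dist(i, j) ** 6)
```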
Inspired by the Ising interaction mode of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=\frac{1}{2}(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
```python
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
```
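To see what a single term of this Hamiltonian looks like, here is a minimal NumPy sketch (independent of Qadence) building \((1/r^6)\,N\otimes N\) for one pair of qubits; since \(N=\mathrm{diag}(0,1)\), each term projects onto the \(|11\rangle\) state of that pair:

```python
import numpy as np

# Single-qubit operators
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
N_op = 0.5 * (I2 - Z)  # number operator: diag(0, 1)

def interaction_term(r: float) -> np.ndarray:
    """Two-qubit term (1/r^6) * N ⊗ N for qubits at distance r."""
    return np.kron(N_op, N_op) / r**6

H = interaction_term(1.0)
# N ⊗ N projects onto |11>, so H is diagonal with one non-zero entry
print(np.diag(H))  # [0. 0. 0. 1.]
```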
To build the digital-analog ansatz we can make use of the standard `hea` function by specifying `Strategy.SDAQC` and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation will be replaced by the evolution of this Hamiltonian, `HamEvo(h_int, t)`, where the time parameter `t` is treated as a variational parameter at each layer.
```python
from qadence import hea, Strategy, RX, RY

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)
```
```python
from qadence.draw import html_string  # circuit visualization

print(html_string(da_ansatz))
```
[Circuit diagram: each of the two layers applies RX, RY, RX rotations on all 6 qubits, followed by a HamEvo entangling block with its own variational time (theta_t₀, theta_t₁).]
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
```python
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z, I

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)

# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
```
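As a reminder of what the total magnetization measures, here is a small NumPy sketch (independent of Qadence, shown for two qubits): the observable is diagonal in the computational basis and its expectation value ranges from \(+n\) on \(|0\dots0\rangle\) to \(-n\) on \(|1\dots1\rangle\):

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

# Total magnetization for 2 qubits: Z ⊗ I + I ⊗ Z
magn = np.kron(Z, I2) + np.kron(I2, Z)

# Expectation value on basis states
ket00 = np.array([1.0, 0.0, 0.0, 0.0])
ket11 = np.array([0.0, 0.0, 0.0, 1.0])
print(ket00 @ magn @ ket00)  # 2.0
print(ket11 @ magn @ ket11)  # -2.0
```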
And we have all the ingredients to initialize the `QuantumModel`:
```python
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
```
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
```python
# Chebyshev FM does not accept x = -1, 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
```
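The endpoints are excluded because the Chebyshev feature map encodes the input through \(\arccos(x)\), whose derivative \(-1/\sqrt{1-x^2}\) diverges at \(x=\pm 1\). A quick PyTorch check (our own illustration, not part of the tutorial) shows the gradient blowing up near the boundary:

```python
import torch

x = torch.tensor([0.0, 0.9, 0.99], requires_grad=True)
y = torch.acos(x).sum()
y.backward()

# d/dx acos(x) = -1 / sqrt(1 - x^2): finite inside (-1, 1),
# but grows without bound as x approaches the endpoints
print(x.grad)
```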
And we use a simple custom training loop.
```python
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
```
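Note that this is a completely standard PyTorch training loop. To see the pattern in isolation, here is the same loop run on a tiny classical stand-in model (an MLP of our own choosing, not part of the tutorial) fitting the same target, so it can be executed without a quantum backend:

```python
import torch

torch.manual_seed(0)

# Classical stand-in for the quantum model
mlp = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)
mlp_criterion = torch.nn.MSELoss()
mlp_optimizer = torch.optim.Adam(mlp.parameters(), lr=0.1)

xs = torch.linspace(-0.99, 0.99, steps=20).unsqueeze(1)
ys = (xs**5).squeeze()

initial_loss = mlp_criterion(mlp(xs).squeeze(), ys).item()
for _ in range(200):
    mlp_optimizer.zero_grad()
    loss = mlp_criterion(mlp(xs).squeeze(), ys)
    loss.backward()
    mlp_optimizer.step()

# The loss should decrease substantially from its initial value
print(initial_loss, loss.item())
```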
Results
Finally we can plot the resulting trained model.
```python
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
```
[Figure: initial and final model predictions over the test range, together with the training points.]