Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. The analog blocks are a direct abstraction of device execution with global addressing. However, we may want to directly program a Hamiltonian-level ansatz to have finer control over our model. In Qadence this can easily be done through digital-analog programs. In this tutorial we will solve a simple QCL problem with this approach.
Setting up the problem
The example problem considered is to fit a function of interest in a specified range. Below we define and plot the function \(f(x)=x^5\) .
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x**5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
Digital-Analog Ansatz
We start by defining the register of qubits. The topology we use now will define the interactions in the entangling Hamiltonian. As an example, we can define a rectangular lattice with 6 qubits.
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
Inspired by the Ising interaction mode of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=(1/2)(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
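As a quick sanity check, independent of Qadence, a plain NumPy computation confirms that the operator \(N=(1/2)(I-Z)\) used above acts as a number operator: it annihilates \(|0\rangle\) and leaves \(|1\rangle\) unchanged, so its expectation value counts the excited-state population.

```python
import numpy as np

# Number operator built from Pauli matrices: N = (I - Z) / 2
I2 = np.eye(2)
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
N = 0.5 * (I2 - Z)

ket0 = np.array([1.0, 0.0])  # |0>
ket1 = np.array([0.0, 1.0])  # |1>

# N|0> = 0 and N|1> = |1>, so <psi|N|psi> is the excited-state population.
print(N @ ket0)  # -> [0. 0.]
print(N @ ket1)  # -> [0. 1.]
```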
To build the digital-analog ansatz we can make use of the standard hea function by specifying we want to use the Strategy.SDAQC and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation will be replaced by the evolution of this Hamiltonian, HamEvo(h_int, t), where the time parameter t is treated as a variational parameter at each layer.
from qadence import hea, Strategy, RX, RY
from qadence.draw import html_string

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)

print(html_string(da_ansatz))
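Before looking at the diagram, it is worth checking how many variational parameters this ansatz has. A back-of-the-envelope count, using only the settings above (6 qubits, depth 2, three rotations per qubit per layer, one variational time per HamEvo block):

```python
# Parameter count for the ansatz above: each layer applies three
# rotations (RX, RY, RX) per qubit, and each HamEvo entangler
# contributes one variational time parameter.
n_qubits, depth, rots_per_layer = 6, 2, 3

n_rotation_params = n_qubits * rots_per_layer * depth  # 36 angles
n_time_params = depth                                  # 2 evolution times

print(n_rotation_params + n_time_params)  # -> 38
```

This matches the diagram below, which shows angles theta₀ through theta₃₅ plus the two times theta_t₀ and theta_t₁.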
[Circuit diagram: each of the 6 qubits undergoes two layers of RX/RY/RX rotations (angles theta₀ through theta₃₅), with each layer followed by a HamEvo entangling block with variational time parameters t = theta_t₀ and t = theta_t₁.]
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z, I

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)

# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
And we have all the ingredients to initialize the QuantumModel:
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
# Chebyshev FM does not accept x = -1, 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
And we use a simple custom training loop.
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
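To monitor convergence, a common pattern is to log the loss every few epochs inside the loop. A minimal sketch of the same loop structure, using a stand-in torch.nn.Linear model fitting the same target so it runs without a quantum backend (the QuantumModel above slots in by replacing the model and loss computation):

```python
import torch

# Stand-in differentiable model so this sketch runs without a quantum backend.
model = torch.nn.Linear(1, 1)
x_train = torch.linspace(-0.99, 0.99, steps=20).unsqueeze(1)
y_train = x_train**5

criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

for epoch in range(200):
    optimizer.zero_grad()
    loss = criterion(model(x_train).squeeze(), y_train.squeeze())
    loss.backward()
    optimizer.step()
    # Log progress periodically to check the loss is decreasing.
    if epoch % 50 == 0:
        print(f"Epoch {epoch}: loss = {loss.item():.4f}")
```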
Results
Finally we can plot the resulting trained model.
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))