Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. Analog blocks are a direct abstraction of device execution with global addressing. However, we may want to program a Hamiltonian-level ansatz directly, for finer control over the model. In Qadence this can easily be done through digital-analog programs. In this tutorial we solve a simple QCL problem with this approach.
Setting up the problem
The example problem considered is to fit a function of interest in a specified range. Below we define and plot the function \(f(x) = x^5\).
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x ** 5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot: the target function f(x) = x^5 on the range [-1, 1].]
Digital-Analog Ansatz
We start by defining the register of qubits. The topology we use now will define the interactions in the entangling Hamiltonian. As an example, we can define a rectangular lattice with 6 qubits.
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
Inspired by the Ising interaction model of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=(1/2)(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
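To build intuition for these terms: in the computational basis the number operator is just a projector onto \(|1\rangle\), so each \(N_iN_j\) term projects onto \(|11\rangle\), and the \(1/r_{ij}^6\) factor strongly suppresses longer-range edges. A minimal numpy sketch (assuming unit lattice spacing, which may differ from the actual Register defaults):

```python
import numpy as np

# Number operator N = (1/2)(I - Z): projects onto |1> in the computational basis.
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
N = 0.5 * (I2 - Z)                   # diag(0, 1)

# Two-qubit interaction N_i N_j: projects onto |11>.
NN = np.kron(N, N)                   # diag(0, 0, 0, 1)

# 1/r^6 weighting on a 3x2 grid with (assumed) unit spacing: a diagonal
# neighbour at r = sqrt(2) is already much weaker than a nearest neighbour.
w_nn = 1.0 / 1.0 ** 6
w_diag = 1.0 / np.sqrt(2.0) ** 6
print(w_nn / w_diag)                 # ratio ≈ 8
```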
To build the digital-analog ansatz we can make use of the standard hea function by specifying the Strategy.SDAQC strategy and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation will be replaced by the evolution of this Hamiltonian, HamEvo(h_int, t), where the time parameter t is a variational parameter at each layer.
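Conceptually, each entangling block applies the unitary \(e^{-i\,h_{\text{int}}\,t}\). Since the interaction Hamiltonian is diagonal in the computational basis, its evolution only imprints phases. A minimal numpy sketch for a single two-qubit term (the evolution time 0.7 is an arbitrary illustrative value):

```python
import numpy as np

# Diagonal two-qubit interaction term N_i N_j.
N = np.diag([0.0, 1.0])
H = np.kron(N, N)

# For a diagonal H, exp(-i H t) is obtained by exponentiating the diagonal.
t = 0.7
U = np.diag(np.exp(-1j * t * np.diag(H)))

# U is unitary and only phases the |11> component:
# diag(U) = [1, 1, 1, exp(-0.7j)] ≈ [1, 1, 1, 0.765 - 0.644j]
print(np.diag(U))
```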
from qadence import hea, Strategy, RX, RY
from qadence.draw import html_string

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)

print(html_string(da_ansatz))
[Circuit diagram: 6 qubits; each of the 2 layers applies RX, RY, RX rotations (parameters theta₀–theta₃₅) on every qubit, followed by a HamEvo entangling block with a variational time (t = theta_t₀, theta_t₁).]
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z, I

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)
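The exact gate-level construction of feature_map is best checked in the Qadence documentation, but the idea behind the Chebyshev/tower combination can be seen classically: qubit \(i\) receives the scaled angle \((i+1)\arccos(x)\), and \(\cos((i+1)\arccos(x))\) is exactly the Chebyshev polynomial \(T_{i+1}(x)\). A small numpy sketch of this identity:

```python
import numpy as np

x = 0.3
# Tower scaling: qubit i gets the angle (i + 1) * arccos(x).
angles = [(i + 1) * np.arccos(x) for i in range(6)]

# cos(n * arccos(x)) = T_n(x), the degree-n Chebyshev polynomial,
# e.g. T_2(x) = 2x^2 - 1.
t2 = np.cos(angles[1])
print(np.isclose(t2, 2 * x**2 - 1))   # True
```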
# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
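As a sanity check on the decoding side, the total magnetization has eigenvalues in \([-n_{\text{qubits}}, n_{\text{qubits}}]\), and on \(|00\ldots0\rangle\) its expectation is exactly \(n_{\text{qubits}}\). A minimal numpy sketch of this observable, built explicitly for illustration only:

```python
import numpy as np

n = 6
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def z_on(i: int) -> np.ndarray:
    """Z acting on qubit i of an n-qubit register (kron with identities)."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, Z if k == i else I2)
    return out

magnetization = sum(z_on(i) for i in range(n))

psi0 = np.zeros(2 ** n)
psi0[0] = 1.0                        # |000000>: all qubits in |0>
print(psi0 @ magnetization @ psi0)   # 6.0
```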
And we have all the ingredients to initialize the QuantumModel:
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
# Chebyshev FM does not accept x = -1, 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
And we use a simple custom training loop.
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
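The loop above follows the standard full-batch gradient-descent anatomy (zero gradients, compute loss, backpropagate, step). As an illustration only, not the quantum model itself, the same structure fits f(x) = x^5 with a classical degree-5 polynomial surrogate in plain numpy:

```python
import numpy as np

# Classical surrogate with the same loop anatomy: fit f(x) = x^5 with a
# degree-5 polynomial model trained by full-batch gradient descent on MSE.
x = np.linspace(-0.99, 0.99, 20)
y = x ** 5
X = np.vander(x, 6)                  # columns x^5, x^4, ..., 1

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=6)    # random initial parameters
lr = 0.1

mse0 = np.mean((X @ w - y) ** 2)     # initial loss
for _ in range(2000):
    err = X @ w - y                  # forward pass + residual
    grad = 2.0 * X.T @ err / len(x)  # MSE gradient (analogue of backward)
    w -= lr * grad                   # optimizer step
mse = np.mean((X @ w - y) ** 2)
print(mse < mse0)                    # True: loss decreased
```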
Results
Finally we can plot the resulting trained model.
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot: initial and final model predictions against the training points.]