Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. Analog blocks are a direct abstraction of device execution with global addressing. However, we may want to directly program a Hamiltonian-level ansatz to have finer control over our model. In Qadence this can easily be done through digital-analog programs. In this tutorial we will solve a simple QCL problem with this approach.
Setting up the problem
The example problem considered is to fit a function of interest in a specified range. Below we define and plot the function \(f(x) = x^5\).
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x ** 5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot of the target function f(x) = x^5 over the test range.]
Digital-Analog Ansatz
We start by defining the register of qubits. The chosen topology defines the interactions in the entangling Hamiltonian. As an example, we can define a rectangular lattice with 6 qubits.
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
Inspired by the Ising interaction mode of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=(1/2)(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
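As a sanity check, the structure of these operators can be reproduced with plain matrices. The sketch below is pure torch, independent of Qadence: it verifies that the number operator \(N = (1/2)(I - Z)\) is a projector onto \(|1\rangle\), and that each two-qubit interaction term is diagonal with a single non-zero entry.

```python
import torch

# Plain-matrix check of the operators above (pure torch, no qadence needed):
# N = (1/2)(I - Z) projects onto |1>, so its eigenvalues are 0 and 1.
I2 = torch.eye(2)
Z = torch.tensor([[1.0, 0.0], [0.0, -1.0]])
N = 0.5 * (I2 - Z)
print(torch.diag(N))   # tensor([0., 1.])

# The two-qubit term N ⊗ N only acts on the |11> state:
NN = torch.kron(N, N)
print(torch.diag(NN))  # tensor([0., 0., 0., 1.])
```

This makes the role of the \(1/r_{ij}^6\) prefactor intuitive: each pair of qubits contributes an energy penalty on the \(|11\rangle\) configuration, weighted by their distance.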
To build the digital-analog ansatz we can make use of the standard hea function by specifying the Strategy.SDAQC and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation will be replaced by the evolution of this Hamiltonian, HamEvo(h_int, t), where the time parameter t is treated as a variational parameter at each layer.
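What HamEvo computes can be sketched with a matrix exponential. The toy example below is plain torch, not Qadence's implementation, and uses a single number-number interaction with an assumed distance r = 1: it builds \(\exp(-iHt)\) and checks that the result is unitary.

```python
import torch

# Toy version of HamEvo(h_int, t): the unitary exp(-i * H * t).
# H here is a single number-number interaction N ⊗ N (assumed r = 1).
N = torch.tensor([[0.0, 0.0], [0.0, 1.0]], dtype=torch.complex128)
H = torch.kron(N, N)
t = 0.5
U = torch.linalg.matrix_exp(-1j * H * t)

# The evolution is unitary: U U† = I
print(torch.allclose(U @ U.conj().T, torch.eye(4, dtype=torch.complex128)))  # True
```

In the ansatz, one such evolution acts on all qubit pairs at once, with its duration t optimized alongside the rotation angles.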
from qadence import hea, Strategy, RX, RY

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)
from qadence.draw import html_string

print(html_string(da_ansatz))
[Circuit diagram of the ansatz: per layer, single-qubit rotation blocks RX, RY, RX with variational parameters theta₀–theta₃₅, interleaved with HamEvo entangling blocks whose evolution times t = theta_t₀, theta_t₁ are also variational.]
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z, I

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)

# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
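To see what this observable decodes, the following sketch (plain torch, independent of Qadence, shown for 3 qubits to keep the matrix small) builds the total magnetization explicitly and confirms its eigenvalues, and hence the model output, lie in \([-n, n]\):

```python
import torch

# Total magnetization sum_i Z_i as an explicit matrix on n qubits.
# It is diagonal in the computational basis, with eigenvalue
# (#zeros - #ones) for each basis state, so outputs lie in [-n, n].
n_qubits = 3
I2 = torch.eye(2)
Z = torch.tensor([[1.0, 0.0], [0.0, -1.0]])

def z_on(i: int, n: int) -> torch.Tensor:
    # Z acting on qubit i, identity elsewhere
    ops = [Z if k == i else I2 for k in range(n)]
    out = ops[0]
    for op in ops[1:]:
        out = torch.kron(out, op)
    return out

magnetization = sum(z_on(i, n_qubits) for i in range(n_qubits))
eigvals = torch.linalg.eigvalsh(magnetization)
print(eigvals.min().item(), eigvals.max().item())  # -3.0 3.0
```

Since the target \(f(x) = x^5\) takes values in \([-1, 1]\), this output range comfortably covers it.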
And we have all the ingredients to initialize the QuantumModel:
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
# Chebyshev FM does not accept x = -1, 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
And we use a simple custom training loop.
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
Results
Finally we can plot the resulting trained model.
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot comparing the initial and final model predictions against the training points.]