Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. Analog blocks are a direct abstraction of device execution with global addressing. However, we may want to program a Hamiltonian-level ansatz directly, for finer control over the model. In Qadence this can easily be done through digital-analog programs. In this tutorial we solve a simple QCL problem with this approach.
Setting up the problem
The example problem is to fit a function of interest over a specified range. Below we define and plot the function \(f(x)=x^5\).
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x**5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
Digital-Analog Ansatz
We start by defining the register of qubits. The chosen topology defines the interactions in the entangling Hamiltonian. As an example, we define a rectangular lattice with 6 qubits.
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
Inspired by the Ising mode of interaction of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=\frac{1}{2}(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
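To see concretely what this `add` builds, here is a minimal NumPy sketch of the same interaction Hamiltonian as a dense matrix. It uses a hypothetical 3-qubit line register with hand-written edge distances (not the 6-qubit lattice above), purely for illustration:

```python
import numpy as np

# Single-qubit operators
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
N_op = 0.5 * (I2 - Z)  # number operator, i.e. the projector |1><1|

def embed(op: np.ndarray, target: int, n_qubits: int) -> np.ndarray:
    """Kronecker-embed a single-qubit operator at position `target`
    (qubit 0 is the most significant bit)."""
    out = np.eye(1)
    for q in range(n_qubits):
        out = np.kron(out, op if q == target else I2)
    return out

# Toy register: 3 qubits on a line with unit spacing
n_qubits = 3
edges = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 2.0}  # edge -> distance r_ij

h_int = sum(
    embed(N_op, i, n_qubits) @ embed(N_op, j, n_qubits) / r**6
    for (i, j), r in edges.items()
)

# Each N_i N_j term is diagonal, so the full Hamiltonian is too.
# For |111> (index 7) all three edges contribute: 1 + 1 + 1/2**6.
assert np.isclose(h_int[7, 7], 2 + 1 / 2**6)
```

Note how the \(1/r^6\) factor strongly suppresses the contribution of the longer edge, mirroring the distance-dependence of the Rydberg interaction.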
To build the digital-analog ansatz we can make use of the standard hea function by specifying we want to use the Strategy.SDAQC and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation will be replaced by the evolution of this Hamiltonian, HamEvo(h_int, t), where the time parameter t is a variational parameter at each layer.
from qadence import hea, Strategy, RX, RY
from qadence.draw import html_string

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)

print(html_string(da_ansatz))
[Circuit diagram: in each of the 2 layers, every qubit receives RX, RY, RX rotations with individual variational parameters (theta₀ … theta₃₅), followed by a global HamEvo block evolving h_int for a variational time (t = theta_t₀ and t = theta_t₁).]
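Each HamEvo block implements the unitary \(U(t)=e^{-i t \mathcal{H}}\). Since the interaction Hamiltonian built from number operators is diagonal, this exponential can be checked by hand. A two-qubit sketch (single edge, unit distance; `t` here is just a fixed stand-in for one of the variational times theta_t):

```python
import numpy as np

# Two-qubit number operators (qubit 0 is the most significant bit)
N0 = np.kron(np.diag([0.0, 1.0]), np.eye(2))
N1 = np.kron(np.eye(2), np.diag([0.0, 1.0]))
h_int = N0 @ N1  # single edge, r = 1

t = 0.7  # stand-in for a variational time parameter
# h_int is diagonal, so the matrix exponential acts elementwise
U = np.diag(np.exp(-1j * t * np.diag(h_int)))

# U is unitary, and only the |11> amplitude picks up a phase exp(-i t)
assert np.allclose(U.conj().T @ U, np.eye(4))
assert np.isclose(U[3, 3], np.exp(-1j * t))
```

In the ansatz, each layer's time parameter is trained jointly with the rotation angles.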
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z, I

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)
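As a quick intuition for this encoding: the Chebyshev feature map encodes the input through \(\arccos(x)\), and, to my understanding, the TOWER reupload scaling multiplies that angle by \(i+1\) on qubit \(i\), so each qubit encodes a different Chebyshev frequency. A plain-Python sketch of the resulting angles (the helper name is hypothetical):

```python
import math

def tower_chebyshev_angles(x: float, n_qubits: int) -> list[float]:
    """Rotation angle fed to each qubit: qubit i encodes (i + 1) * acos(x)."""
    phi = math.acos(x)
    return [(i + 1) * phi for i in range(n_qubits)]

angles = tower_chebyshev_angles(0.5, 6)
assert math.isclose(angles[0], math.pi / 3)   # acos(0.5)
assert math.isclose(angles[5], 2 * math.pi)   # 6 * acos(0.5)
```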
# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
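The total magnetization \(\sum_i Z_i\) has eigenvalues in \([-n, n]\) for \(n\) qubits, so the model output comfortably covers the target range \([-1, 1]\). A small NumPy sketch on 2 qubits illustrating the expectation values this observable produces:

```python
import numpy as np

# Total magnetization sum_i Z_i as a dense matrix, for 2 qubits
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
obs = np.kron(Z, I2) + np.kron(I2, Z)

# Expectation for |00> is +2; for the uniform superposition it is 0
ket00 = np.array([1.0, 0.0, 0.0, 0.0])
plus = np.full(4, 0.5)
assert np.isclose(ket00 @ obs @ ket00, 2.0)
assert np.isclose(plus @ obs @ plus, 0.0)
```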
And we have all the ingredients to initialize the QuantumModel:
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
# The Chebyshev feature map uses acos(x), so avoid the endpoints x = -1, 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
And we use a simple custom training loop.
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
Results
Finally, we can plot the predictions of the trained model.
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))