Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. Analog blocks are a direct abstraction of device execution with global addressing. However, we may want to program a Hamiltonian-level ansatz directly to gain finer control over the model. In Qadence this can easily be done through digital-analog programs. In this tutorial we solve a simple QCL problem with this approach.
Setting up the problem
The example problem is to fit a function of interest over a specified range. Below we define and plot the function \(f(x)=x^5\).
```python
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x**5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
```
[Plot: f(x) = x⁵ over the interval [-1, 1]]
Digital-Analog Ansatz
We start by defining the register of qubits. The chosen topology defines the interactions in the entangling Hamiltonian. As an example, we define a rectangular lattice with 6 qubits.
```python
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
```
Inspired by the Ising interaction mode of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=\frac{1}{2}(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
```python
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
```
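To see what each interaction term looks like numerically, here is a small sketch (pure PyTorch, independent of Qadence) that builds \(N_0N_1\) for two qubits from \(N=\frac{1}{2}(I-Z)\) via a Kronecker product; the resulting 4×4 matrix is diagonal and only the \(|11\rangle\) state picks up energy.

```python
import torch

# Single-qubit operators
I2 = torch.eye(2)
Z = torch.tensor([[1.0, 0.0], [0.0, -1.0]])
N = 0.5 * (I2 - Z)  # number operator: diag(0, 1)

# Two-qubit interaction term N_0 N_1 via the Kronecker product
NN = torch.kron(N, N)

print(torch.diag(NN))  # tensor([0., 0., 0., 1.]) -- only |11> is penalized
```

This is why the interaction only shifts the energy of states where both qubits are excited, mirroring the Rydberg blockade picture.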
To build the digital-analog ansatz we can make use of the standard hea function, specifying Strategy.SDAQC and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation is replaced by the evolution of this Hamiltonian, HamEvo(h_int, t), where the time parameter t is treated as a variational parameter at each layer.
```python
from qadence import hea, Strategy, RX, RY
from qadence.draw import html_string

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)

print(html_string(da_ansatz))
```
[Circuit diagram: for each of the 6 qubits, two layers of RX, RY, RX rotations (theta₀ … theta₃₅), each layer followed by a HamEvo entangling block with variational times t = theta_t₀ and t = theta_t₁]
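Under the hood, each HamEvo(h_int, t) block applies the unitary \(U(t)=e^{-i\,h_{\text{int}}\,t}\). As a minimal numerical sketch (pure PyTorch, using a hypothetical two-qubit h_int at unit distance rather than the full 6-qubit Hamiltonian), we can verify that this evolution is unitary for any value of the variational time:

```python
import torch

# Hypothetical two-qubit Hamiltonian: N (x) N with r = 1
N = torch.tensor([[0.0, 0.0], [0.0, 1.0]])
h_int = torch.kron(N, N).to(torch.complex128)

t = 0.7  # evolution time (the variational parameter in the ansatz)
U = torch.matrix_exp(-1j * h_int * t)

# U is unitary: U^dagger U = I
print(torch.allclose(U.conj().T @ U, torch.eye(4, dtype=torch.complex128)))
```

Since the block is unitary for every t, training the evolution times never takes the circuit outside the space of valid quantum operations.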
Creating the QuantumModel
The rest of the procedure is the same as in any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
```python
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)

# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
```
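As a quick sanity check on the decoding range: the total magnetization of \(n\) qubits has eigenvalues in \([-n, n]\), so the model output is naturally bounded. A small sketch (pure PyTorch, independent of Qadence) computing the diagonal of \(\sum_i Z_i\) in the computational basis:

```python
import torch

n_qubits = 6

def total_magnetization_diag(n):
    # Diagonal of sum_i Z_i in the computational basis
    d = torch.tensor([1.0, -1.0])  # diagonal of a single Z
    diag = torch.zeros(2**n)
    for i in range(n):
        left = torch.ones(2**i)       # identity factors before qubit i
        right = torch.ones(2**(n - i - 1))  # identity factors after qubit i
        diag += torch.kron(torch.kron(left, d), right)
    return diag

diag = total_magnetization_diag(n_qubits)
print(diag.max().item(), diag.min().item())  # 6.0 -6.0
```

With 6 qubits the expectation value lies in \([-6, 6]\), which comfortably covers the target range of \(f(x)=x^5\) on \([-1, 1]\).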
And we have all the ingredients to initialize the QuantumModel:
```python
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
```
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
```python
# Chebyshev FM does not accept x = -1, 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
```
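The endpoint restriction in the code above is plausibly explained by the arccos used in the Chebyshev encoding: the derivative of \(\arccos(x)\) is \(-1/\sqrt{1-x^2}\), which diverges as \(|x|\to 1\), making gradients unstable at the boundary. A small sketch with torch autograd illustrates this:

```python
import torch

x = torch.tensor([0.0, 0.9, 0.99], requires_grad=True)
y = torch.acos(x).sum()
y.backward()

# d/dx arccos(x) = -1 / sqrt(1 - x^2): grows without bound as |x| -> 1
print(x.grad)
```

At x = 0 the gradient is -1, while at x = 0.99 it already exceeds -7 in magnitude; exactly at x = ±1 it is infinite.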
And we use a simple custom training loop.
```python
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
```
Results
Finally, we can plot the predictions of the trained model.
```python
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
```
[Plot: initial and final model predictions against the training points]