Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. Analog blocks are a direct abstraction of device execution with global addressing. However, we may want to program a Hamiltonian-level ansatz directly, for finer control over the model. In Qadence this can easily be done through digital-analog programs. In this tutorial we solve a simple QCL problem with this approach.
Setting up the problem
The example problem is to fit a function of interest over a specified range. Below we define and plot the function f(x) = x^5.
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x**5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot: the target function f(x) = x^5 over the range [-1, 1].]
Digital-Analog Ansatz
We start by defining the register of qubits. The chosen topology defines the interactions in the entangling Hamiltonian. As an example, we define a rectangular lattice with 6 qubits.
from qadence import Register
reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
Inspired by the Ising interaction model of Rydberg atoms, we can now define an interaction Hamiltonian as H_int = sum over pairs (i, j) of (1 / r_ij^6) N_i N_j, where N = (1/2)(I - Z) is the number operator and r_ij is the distance between qubits i and j. We can easily instantiate this interaction Hamiltonian from the register information:
from qadence import N , add
def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
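To make the construction concrete, here is a small NumPy sketch (independent of Qadence) of a single interaction term N_i N_j / r^6 for a pair of qubits, using the matrix form N = (I - Z)/2; the distance value is hypothetical:

```python
import numpy as np

# Number operator N = (I - Z) / 2, which projects onto |1>
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
N = (I2 - Z) / 2  # [[0, 0], [0, 1]]

# Interaction term between two qubits at a hypothetical distance r
r = 1.5
h_term = np.kron(N, N) / r**6

# The term acts only on the |11> component of the two-qubit state
print(np.diag(h_term))
</imports>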
To build the digital-analog ansatz we can use the standard hea function, specifying Strategy.SDAQC and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation is replaced by the evolution of this Hamiltonian, HamEvo(h_int, t), where the time parameter t is treated as a variational parameter at each layer.
from qadence import hea, Strategy, RX, RY

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)
from qadence.draw import html_string

print(html_string(da_ansatz))
[Circuit diagram: for each of the 6 qubits, every layer applies RX, RY, RX rotations with variational angles, followed by a HamEvo block evolving h_int with a variational time t = theta_t, repeated for depth = 2.]
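The HamEvo(h_int, t) entangler applies the unitary U(t) = exp(-i h_int t). As a sanity check outside Qadence, a minimal SciPy sketch with a toy two-qubit number-number Hamiltonian (a stand-in for h_int) confirms that the evolution is a valid unitary for any variational time t:

```python
import numpy as np
from scipy.linalg import expm

# Toy two-qubit number-number Hamiltonian (stand-in for h_int)
N = np.diag([0.0, 1.0])
H = np.kron(N, N)

t = 0.37  # arbitrary variational time
U = expm(-1j * H * t)

# U is unitary (U† U = I), so it is a valid gate for any t
print(np.allclose(U.conj().T @ U, np.eye(4)))  # True
```

Since h_int is Hermitian, this holds for the full 6-qubit entangler as well, which is what allows t to be trained freely.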
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z, I

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)

# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
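The total magnetization observable bounds the model output to [-n_qubits, n_qubits], which comfortably covers the target range [-1, 1]. As a quick NumPy sketch (independent of Qadence), the expectation of sum_i Z_i for the all-zeros and all-ones product states of 6 qubits hits the two extremes:

```python
import numpy as np
from functools import reduce

n_qubits = 6
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def total_magnetization(n):
    """Dense matrix for sum_i Z_i on n qubits."""
    terms = []
    for i in range(n):
        ops = [Z if j == i else I2 for j in range(n)]
        terms.append(reduce(np.kron, ops))
    return sum(terms)

M = total_magnetization(n_qubits)
psi0 = np.zeros(2**n_qubits); psi0[0] = 1.0   # |00...0>
psi1 = np.zeros(2**n_qubits); psi1[-1] = 1.0  # |11...1>
print(psi0 @ M @ psi0, psi1 @ M @ psi1)  # 6.0 -6.0
```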
And we have all the ingredients to initialize the QuantumModel:
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
# Chebyshev FM does not accept x = -1, 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
And we use a simple custom training loop.
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
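The loop above is standard PyTorch and works for any differentiable model, quantum or classical. As a self-contained illustration of the same Adam + MSELoss structure (no quantum circuit, just fitting two hypothetical polynomial coefficients), with the same lr = 0.1 and 200 epochs:

```python
import torch

# Toy differentiable model: learn coefficients of a * x^3 + b * x
a = torch.tensor(0.0, requires_grad=True)
b = torch.tensor(0.0, requires_grad=True)

x = torch.linspace(-1, 1, 20)
y = 2.0 * x**3 - 0.5 * x  # target coefficients: a = 2, b = -0.5

criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam([a, b], lr=0.1)

for _ in range(200):
    optimizer.zero_grad()
    loss = criterion(a * x**3 + b * x, y)
    loss.backward()
    optimizer.step()

print(a.item(), b.item())  # approach 2.0 and -0.5
```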
Results
Finally we can plot the resulting trained model.
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot: initial and final model predictions over x_test, together with the 20 training points.]