Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. Analog blocks are a direct abstraction of device execution with global addressing. However, we may want to program a Hamiltonian-level ansatz directly, to gain finer control over our model. In Qadence this can easily be done with digital-analog programs. In this tutorial we solve a simple QCL problem with this approach.
Setting up the problem
The example problem considered is to fit a function of interest in a specified range. Below we define and plot the function \(f(x)=x^5\).
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x ** 5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot of the target function f(x) = x⁵ over the test range]
Digital-Analog Ansatz
We start by defining the register of qubits. The register topology defines the interactions in the entangling Hamiltonian. As an example, we define a rectangular lattice with 6 qubits.
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
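To build intuition for the interaction weights the register topology induces, here is a minimal, qadence-free sketch that computes \(1/r^6\) weights for a hypothetical unit-spaced 3x2 lattice (the coordinates below are illustrative assumptions, not the Register's actual spacing convention):

```python
import itertools
import math

# Hypothetical unit-spaced 3x2 lattice coordinates (illustrative only;
# the actual Register may use a different spacing convention).
coords = {i: (i % 3, i // 3) for i in range(6)}

def distance(p, q):
    return math.dist(coords[p], coords[q])

# 1/r^6 weights decay quickly with distance, so nearest neighbours dominate.
weights = {
    (p, q): 1 / distance(p, q) ** 6
    for p, q in itertools.combinations(range(6), 2)
}

print(weights[(0, 1)])  # nearest neighbour: r = 1, weight = 1.0
print(weights[(0, 5)])  # opposite corner: r = sqrt(5), weight = 1/125
```

The sharp \(1/r^6\) decay is why such Hamiltonians behave effectively like nearest-neighbour interactions on regular lattices.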
Inspired by the Ising interaction of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=(1/2)(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
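To see what a single term \(N_iN_j\) looks like, here is a small matrix-level sketch using plain torch (independent of Qadence's block representation):

```python
import torch

# Single-qubit number operator N = (I - Z)/2, written out as a matrix
# to make the structure of the interaction term explicit.
I2 = torch.eye(2)
Z = torch.tensor([[1.0, 0.0], [0.0, -1.0]])
N = (I2 - Z) / 2

# Two-qubit term N_i ⊗ N_j: a projector onto |11>, so the interaction
# only contributes energy when both qubits are excited.
NN = torch.kron(N, N)
print(NN)  # NN is diag(0, 0, 0, 1): nonzero only on the |11> entry
```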
To build the digital-analog ansatz we can make use of the standard hea function by specifying Strategy.SDAQC and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation will be replaced by the evolution of this Hamiltonian, HamEvo(h_int, t), where the time parameter t is a variational parameter at each layer.
from qadence import hea, Strategy, RX, RY
from qadence.draw import html_string

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)

print(html_string(da_ansatz))
[Circuit diagram: per-qubit RX/RY/RX rotation layers (theta₀–theta₃₅) interleaved with HamEvo entangling blocks with variational times t = theta_t₀ and t = theta_t₁]
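Since each entangling block applies the evolution exp(-i h_int t), it is worth checking on a toy two-qubit term that such an evolution is always unitary, whatever value the variational time t takes. This is a standalone torch sketch, not Qadence's internal implementation:

```python
import torch

# Toy two-qubit check that Hamiltonian evolution exp(-iHt) is unitary,
# mirroring what HamEvo(h_int, t) applies for each variational t.
Z = torch.tensor([[1.0, 0.0], [0.0, -1.0]], dtype=torch.complex128)
I2 = torch.eye(2, dtype=torch.complex128)
N = (I2 - Z) / 2
H = torch.kron(N, N)  # single interaction term N_0 N_1

t = 0.7  # stand-in value for a variational time parameter
U = torch.linalg.matrix_exp(-1j * t * H)

# U must be unitary: U^dagger U = identity
print(torch.allclose(U.conj().T @ U, torch.eye(4, dtype=torch.complex128)))  # True
```

Unitarity is what guarantees the ansatz remains a valid quantum circuit for any trained value of t.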
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)

# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
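As a rough sketch of what the Chebyshev feature map with tower scaling does, the input x is mapped to per-qubit rotation angles proportional to acos(x), with qubit i scaled by (i + 1). The scaling convention below is an assumption for illustration; consult the Qadence docs for the exact definition:

```python
import torch

# Sketch of Chebyshev encoding with tower reupload scaling:
# qubit i receives an angle (i + 1) * acos(x).
# (Assumed convention, for illustration only.)
def tower_chebyshev_angles(x: torch.Tensor, n_qubits: int) -> torch.Tensor:
    base = torch.acos(x)  # requires -1 < x < 1, hence the clipped training range below
    scales = torch.arange(1, n_qubits + 1, dtype=x.dtype)
    return scales * base

angles = tower_chebyshev_angles(torch.tensor(0.5), n_qubits=6)
print(angles)  # multiples of acos(0.5) = pi/3
```

The acos in the encoding is also why the training range below must exclude x = -1 and x = 1.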
And we have all the ingredients to initialize the QuantumModel:
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
# Chebyshev FM does not accept x = -1, 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
And we use a simple custom training loop.
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
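The same loop pattern can be exercised without a quantum backend by swapping the QuantumModel for a tiny classical stand-in; this sketch (with an assumed linear surrogate, purely for illustration) also shows how one might log the loss during training:

```python
import torch

torch.manual_seed(0)

# Classical stand-in for the QuantumModel, to demonstrate the loop
# pattern and loss logging; the quantum model slots into the same place.
surrogate = torch.nn.Linear(1, 1)
opt = torch.optim.Adam(surrogate.parameters(), lr=0.1)
crit = torch.nn.MSELoss()

xs = torch.linspace(-0.99, 0.99, 20).unsqueeze(1)
ys = xs ** 5

for epoch in range(100):
    opt.zero_grad()
    loss = crit(surrogate(xs), ys)
    loss.backward()
    opt.step()
    if epoch % 25 == 0:
        print(f"epoch {epoch:3d}  loss {loss.item():.4f}")
```

A linear model cannot represent x⁵ exactly, so the loss plateaus at the best linear approximation; the quantum model above is far more expressive.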
Results
Finally, we can plot the predictions of the trained model.
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot: initial and final model predictions against the training points]