Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. The analog blocks are a direct abstraction of device execution with global addressing. However, we may want to program a Hamiltonian-level ansatz directly, for finer control over our model. In Qadence this can easily be done through digital-analog programs. In this tutorial we solve a simple QCL problem with this approach.
Setting up the problem
The example problem considered is to fit a function of interest in a specified range. Below we define and plot the function \(f(x)=x^5\).
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x**5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
Digital-Analog Ansatz
We start by defining the register of qubits. The chosen topology defines the interactions in the entangling Hamiltonian. As an example, we define a rectangular lattice with 6 qubits.
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
Inspired by the Ising interaction of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=(1/2)(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
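To build intuition for what each \(N_iN_j\) term does, here is a minimal sketch in plain PyTorch (not the Qadence API) that constructs the number operator from \(I\) and \(Z\) and forms a two-qubit interaction term with a Kronecker product:

```python
import torch

# Single-qubit identity and Pauli-Z
I = torch.eye(2, dtype=torch.complex64)
Z = torch.tensor([[1, 0], [0, -1]], dtype=torch.complex64)

# Number operator N = (1/2)(I - Z): a projector onto |1>
N = 0.5 * (I - Z)

# For two qubits, the interaction term N_0 N_1 is a Kronecker product
NN = torch.kron(N, N)

# N_0 N_1 is diagonal and only the |11> component is non-zero, so each
# term in h_int contributes energy only when both qubits are excited
print(torch.diag(NN).real)  # tensor([0., 0., 0., 1.])
```

The \(1/r_{ij}^6\) prefactor in the full Hamiltonian then simply weights each such term by the qubit-pair distance.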
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
To build the digital-analog ansatz we can make use of the standard hea function by specifying the Strategy.SDAQC strategy and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation will be replaced by the evolution of this Hamiltonian, HamEvo(h_int, t), where the time parameter t is a variational parameter at each layer.
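What HamEvo computes can be sketched directly with a matrix exponential. The following plain-PyTorch example (again, not the Qadence API) evolves a tiny two-qubit interaction Hamiltonian for a time t, which is the operation the entangler applies at each layer:

```python
import torch

# Two-qubit example Hamiltonian: H = N_0 N_1 with N = (I - Z)/2
I = torch.eye(2, dtype=torch.complex64)
Z = torch.tensor([[1, 0], [0, -1]], dtype=torch.complex64)
N = 0.5 * (I - Z)
H = torch.kron(N, N)

# HamEvo(H, t) applies the unitary U(t) = exp(-i H t)
t = 0.7
U = torch.matrix_exp(-1j * H * t)

# Since H is diagonal with eigenvalues (0, 0, 0, 1), U is diagonal and
# only the |11> state picks up a phase exp(-i t)
print(torch.diag(U))
```

In the ansatz, t is promoted to a trainable parameter, so the optimizer learns how long to let the qubits interact in each layer.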
from qadence import hea, Strategy, RX, RY
from qadence.draw import html_string  # for circuit visualization

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)

print(html_string(da_ansatz))
[Circuit diagram: two HEA layers. In each layer, every one of the 6 qubits receives RX, RY and RX rotations with individual parameters (theta₀ … theta₃₅), followed by the entangler HamEvo acting on all qubits with a variational evolution time, t = theta_t₀ in the first layer and t = theta_t₁ in the second.]
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z, I

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)

# Total magnetization observable
observable = add(Z(i) for i in range(reg.n_qubits))
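A brief aside on why this feature map is a good fit for polynomial targets: BasisSet.CHEBYSHEV encodes x through the angle \(\arccos(x)\), and ReuploadScaling.TOWER scales that angle with the qubit index, so qubit \(k\) effectively sees \(k\arccos(x)\). Terms of the form \(\cos(k\arccos(x))\) are exactly the Chebyshev polynomials \(T_k(x)\). The gate-level details are handled by Qadence; the sketch below only checks the underlying trigonometric identity with NumPy:

```python
import numpy as np

# The Chebyshev feature map cannot encode x = -1 or x = 1 exactly,
# hence the slightly shrunk interval
x = np.linspace(-0.99, 0.99, 50)

# cos(k * arccos(x)) reproduces the Chebyshev polynomial T_k(x)
T2 = np.cos(2 * np.arccos(x))
assert np.allclose(T2, 2 * x**2 - 1)   # T_2(x) = 2x^2 - 1

T3 = np.cos(3 * np.arccos(x))
assert np.allclose(T3, 4 * x**3 - 3 * x)  # T_3(x) = 4x^3 - 3x
```

The tower scaling thus gives the model access to a ladder of Chebyshev basis functions, one per qubit, which the variational ansatz can combine to approximate \(x^5\).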
And we have all the ingredients to initialize the QuantumModel:
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
# The Chebyshev feature map does not accept x = -1 or x = 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
And we use a simple custom training loop.
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
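Note that this loop is ordinary PyTorch: nothing in it is specific to quantum models, since QuantumModel exposes differentiable expectation values like any torch module. As a self-contained sanity check of the loop pattern itself, here is the identical loop fitting \(f(x)=x^5\) with a small classical surrogate network standing in for the quantum model:

```python
import torch

# A classical stand-in for the quantum model (illustrative only)
surrogate = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)

x_train = torch.linspace(-0.99, 0.99, steps=20).reshape(-1, 1)
y_train = x_train**5

criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(surrogate.parameters(), lr=0.1)

initial_loss = criterion(surrogate(x_train), y_train).item()

# Same structure as the quantum training loop above
for _ in range(200):
    optimizer.zero_grad()
    loss = criterion(surrogate(x_train), y_train)
    loss.backward()
    optimizer.step()

final_loss = criterion(surrogate(x_train), y_train).item()
print(final_loss < initial_loss)  # the loop reduces the training loss
```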
Results
Finally we can plot the resulting trained model.
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))