Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. Analog blocks are a direct abstraction of device execution with global addressing. However, we may want to program a Hamiltonian-level ansatz directly to have finer control over our model. In Qadence this can easily be done through digital-analog programs. In this tutorial we solve a simple QCL problem with this approach.
Setting up the problem
The example problem is to fit a function of interest over a specified range. Below we define and plot the function \(f(x)=x^5\).
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x**5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot of the target function f(x) = x⁵ over the test range.]
Digital-Analog Ansatz
We start by defining the register of qubits. The chosen topology will define the interactions in the entangling Hamiltonian. As an example, we define a rectangular lattice with 6 qubits.
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
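If you want to inspect the register before building the Hamiltonian, you can print its size and the pairwise edge distances that will enter the interaction terms (an optional check using the same attributes we rely on below):

# Optional check: number of qubits and pairwise distances in the lattice.
print(reg.n_qubits)
print(reg.edge_distances)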
Inspired by the Ising interaction model of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=\frac{1}{2}(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
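To make the definition of the number operator concrete, one can check numerically that \(N=\frac{1}{2}(I-Z)\). Below is a small sketch of such a check, assuming block_to_tensor is available from the qadence top-level namespace:

from qadence import I, Z, block_to_tensor

# The number operator N = (1/2)(I - Z) projects onto the |1> state.
assert torch.allclose(
    block_to_tensor(N(0)),
    0.5 * (block_to_tensor(I(0)) - block_to_tensor(Z(0))),
)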
To build the digital-analog ansatz we can make use of the standard hea function by specifying the Strategy.SDAQC strategy and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation will be replaced by the evolution of this Hamiltonian, HamEvo(h_int, t), where the time parameter t is treated as a variational parameter at each layer.
from qadence import hea, Strategy, RX, RY
from qadence.draw import html_string

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)

print(html_string(da_ansatz))
[Circuit diagram of the digital-analog ansatz: each of the two layers applies RX, RY, RX rotations on every qubit, followed by a HamEvo block on the interaction Hamiltonian with a variational evolution time (t = theta_t₀ and theta_t₁).]
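For intuition, a single layer of this kind of ansatz can also be written out by hand: a layer of parametrized rotations followed by an evolution of the interaction Hamiltonian with a trainable time. Below is a minimal, simplified sketch of one such layer (only RX rotations, with made-up parameter names phi_i and t_layer0; this is not how hea builds the ansatz internally):

from qadence import HamEvo, VariationalParameter, chain, kron

# One digital-analog layer: single-qubit RX rotations followed by an
# analog evolution of h_int with a variational evolution time.
phis = [VariationalParameter(f"phi_{i}") for i in range(reg.n_qubits)]
single_layer = chain(
    kron(RX(i, phis[i]) for i in range(reg.n_qubits)),
    HamEvo(h_int, VariationalParameter("t_layer0")),
)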
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z, I

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)

# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
And we have all the ingredients to initialize the QuantumModel:
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
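Before training, it can be useful to run a quick sanity check that the model evaluates as expected (optional, not part of the original workflow):

# Optional smoke test: expectation value of the untrained model at x = 0.5.
print(model.expectation({"x": torch.tensor([0.5])}))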
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
# The Chebyshev feature map uses acos(x), so it does not accept x = -1, 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
And we use a simple custom training loop.
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
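If you want a quick indication of how well the optimization went, you can evaluate the loss once more after the loop (optional add-on):

# Optional: final training loss after 200 epochs.
print(f"Final training loss: {loss_fn(x_train, y_train).item():.6f}")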
Results
Finally, we can plot the predictions of the trained model together with the initial predictions and the training points.
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot of the initial prediction, the final prediction, and the training points.]
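As a complementary check (not part of the original tutorial), the mean-squared error on the test grid quantifies the quality of the final fit:

# Optional: MSE of the trained model on the test grid.
final_mse = criterion(y_pred_final.squeeze(), y_test)
print(f"Test MSE: {final_mse.item():.6f}")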