Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. Analog blocks are a direct abstraction of device execution with global addressing. However, we may want to program a Hamiltonian-level ansatz directly, for finer control over our model. In Qadence this can easily be done through digital-analog programs. In this tutorial we solve a simple QCL problem with this approach.
Setting up the problem
The example problem is to fit a function of interest over a specified range. Below we define and plot the function \(f(x) = x^5\).
```python
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x**5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
```
[Plot: the target function f(x) = x⁵ over the test range]
Digital-Analog Ansatz
We start by defining the register of qubits. The topology we use now will define the interactions in the entangling Hamiltonian. As an example, we can define a rectangular lattice with 6 qubits.
```python
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
```
Inspired by the Ising interaction of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=(1/2)(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
```python
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
```
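To make the interaction term concrete, here is a small torch-only sketch (independent of the Qadence objects above) that builds the dense matrix of a single pairwise term \(N_i N_j / r_{ij}^6\) for two qubits at an assumed distance \(r = 1\):

```python
import torch

# Single-qubit number operator N = (I - Z)/2 as a dense matrix.
I2 = torch.eye(2)
Z = torch.diag(torch.tensor([1.0, -1.0]))
N = 0.5 * (I2 - Z)  # projects onto |1>: diag(0, 1)

# Two-qubit interaction term N ⊗ N scaled by 1/r^6, here with r = 1.0.
r = 1.0
h_ij_dense = torch.kron(N, N) / r**6

# Only the |11> state picks up interaction energy.
print(torch.diag(h_ij_dense))  # tensor([0., 0., 0., 1.])
```

Since each term is diagonal in the computational basis, the full `h_int` is as well: it simply assigns an energy penalty to every pair of simultaneously excited qubits, weighted by their distance.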
To build the digital-analog ansatz we can use the standard `hea` function by specifying `Strategy.SDAQC` and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation will be replaced by the evolution of this Hamiltonian, `HamEvo(h_int, t)`, where the time parameter `t` is considered a variational parameter at each layer.
```python
from qadence import hea, Strategy, RX, RY
from qadence.draw import html_string  # needed to render the circuit

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)

print(html_string(da_ansatz))
```
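As a sanity check on the ansatz structure, the number of variational parameters can be counted by hand (plain Python arithmetic, not a Qadence API): each layer applies the three listed rotations on every qubit, plus one evolution time per entangling layer.

```python
# Hypothetical bookkeeping for the ansatz defined above.
n_qubits = 6             # 3 x 2 rectangular lattice
depth = 2
rotations_per_layer = 3  # operations = [RX, RY, RX]

n_angles = n_qubits * rotations_per_layer * depth  # rotation angles
n_times = depth                                    # one HamEvo time per layer
print(n_angles, n_times, n_angles + n_times)       # 36 2 38
```

These counts match the rendered circuit: 36 rotation angles and 2 evolution times.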
[Circuit diagram: two layers, each applying RX, RY, RX rotations (theta₀–theta₃₅) on all 6 qubits followed by a global HamEvo entangler with variational times t = theta_t₀ and t = theta_t₁]
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
```python
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z, I

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)

# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
```
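The total magnetization has eigenvalues between \(-n\) and \(+n\) for \(n\) qubits, so its expectation value comfortably covers the target range \([-1, 1]\). A torch-only sketch (not the Qadence operator itself) confirms the spectrum:

```python
import torch

# Dense total magnetization sum_i Z_i for a small register,
# just to illustrate its spectrum (Qadence handles this internally).
n_qubits = 6
Z_mat = torch.diag(torch.tensor([1.0, -1.0]))
I2 = torch.eye(2)

def z_on(i: int, n: int) -> torch.Tensor:
    """Embed Z on qubit i of an n-qubit register via Kronecker products."""
    out = torch.tensor([[1.0]])
    for k in range(n):
        out = torch.kron(out, Z_mat if k == i else I2)
    return out

magn = sum(z_on(i, n_qubits) for i in range(n_qubits))
# The operator is diagonal; its extremes are -n_qubits and +n_qubits.
print(torch.diag(magn).min().item(), torch.diag(magn).max().item())  # -6.0 6.0
```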
And we have all the ingredients to initialize the `QuantumModel`:
```python
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
```
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
```python
# Chebyshev FM does not accept x = -1, 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
```
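The endpoint restriction in the comment above comes from the Chebyshev encoding, which maps the input through \(\arccos(x)\): the derivative \(-1/\sqrt{1-x^2}\) diverges at \(x = \pm 1\), which would break gradient-based training. A quick torch check of the growing gradient magnitude:

```python
import torch

# Gradient of acos(x) blows up as |x| -> 1, hence the shrunken training range.
x = torch.tensor([0.0, 0.99], requires_grad=True)
torch.acos(x).sum().backward()
print(x.grad)  # derivative is -1/sqrt(1 - x^2): -1.0 at 0, about -7.09 at 0.99
```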
And we use a simple custom training loop.
```python
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
```
Results
Finally we can plot the resulting trained model.
```python
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
```
[Plot: initial and final model predictions against the training points]