Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. Analog blocks are a direct abstraction of device execution with global addressing. However, we may want to program a Hamiltonian-level ansatz directly, for finer control over the model. In Qadence this can easily be done through digital-analog programs. In this tutorial we solve a simple QCL problem with this approach.
Setting up the problem
The example problem considered is to fit a function of interest in a specified range. Below we define and plot the function \(f(x)=x^5\).
```python
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x**5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
```
[Plot: the target function \(f(x)=x^5\) over the range \([-1, 1]\)]
Digital-Analog Ansatz
We start by defining the register of qubits. The register topology defines the interactions in the entangling Hamiltonian. As an example, we define a rectangular lattice with 6 qubits.
```python
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
```
Inspired by the Ising interaction model of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=\frac{1}{2}(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
```python
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
```
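As a quick sanity check on the formula above (plain NumPy, not part of the Qadence API), we can verify that the number operator \(N=\frac{1}{2}(I-Z)\) is the projector onto \(|1\rangle\), so each term \(N_iN_j/r_{ij}^6\) contributes energy only when both qubits \(i\) and \(j\) are excited:

```python
import numpy as np

I = np.eye(2)
Z = np.diag([1.0, -1.0])

# Number operator: projector onto |1>
N = 0.5 * (I - Z)
print(N)  # [[0, 0], [0, 1]]

# Two-qubit interaction term N ⊗ N: non-zero only on the |11> state
NN = np.kron(N, N)
print(np.diag(NN))  # [0, 0, 0, 1]
```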
To build the digital-analog ansatz we can use the standard hea function by specifying the Strategy.SDAQC strategy and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation will be replaced by the evolution of this Hamiltonian, HamEvo(h_int, t), where the time parameter t is treated as a variational parameter at each layer.
```python
from qadence import hea, Strategy, RX, RY
from qadence.draw import html_string  # for circuit visualization

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)

print(html_string(da_ansatz))
```
[Circuit diagram: 6 qubits, each of the 2 layers applying RX, RY, RX rotations (theta₀ through theta₃₅) followed by a global HamEvo entangler with variational times t = theta_t₀ and t = theta_t₁]
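The variational parameters shown in the circuit can be counted with simple arithmetic (independent of Qadence): each of the depth = 2 layers applies 3 rotations per qubit on 6 qubits, plus one evolution time per layer:

```python
n_qubits = 6
depth = 2
rotations_per_layer = 3  # operations=[RX, RY, RX]

n_rotation_params = n_qubits * rotations_per_layer * depth
n_time_params = depth  # one HamEvo time per layer

print(n_rotation_params)  # 36 rotation angles: theta_0 ... theta_35
print(n_time_params)      # 2 evolution times: theta_t0, theta_t1
```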
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
```python
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z, I

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)

# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
```
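The Chebyshev feature map encodes the input through rotation angles proportional to \(\arccos(x)\); with tower scaling the angle on each qubit is scaled linearly with the qubit index, so the circuit effectively builds Chebyshev polynomials \(T_n(x)=\cos(n\arccos x)\) of increasing order. A quick NumPy check of that identity (illustrative only, not Qadence code):

```python
import numpy as np

x = np.linspace(-0.99, 0.99, 50)

# T_n(x) = cos(n * arccos(x)) matches the polynomial forms of T_2 and T_3
T2 = np.cos(2 * np.arccos(x))
T3 = np.cos(3 * np.arccos(x))

print(np.allclose(T2, 2 * x**2 - 1))  # True
print(np.allclose(T3, 4 * x**3 - 3 * x))  # True
```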
And we have all the ingredients to initialize the QuantumModel:
```python
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
```
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
```python
# Chebyshev FM does not accept x = -1, 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
And we use a simple custom training loop.
```python
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
```
Results
Finally, we can plot the predictions of the trained model.
```python
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
```
[Plot: initial and final model predictions against the training points]