Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. Analog blocks are a direct abstraction of device execution with global addressing. However, we may want to program a Hamiltonian-level ansatz directly, to have finer control over our model. In Qadence this can easily be done through digital-analog programs. In this tutorial we will solve a simple QCL problem with this approach.
Setting up the problem
The example problem considered is to fit a function of interest in a specified range. Below we define and plot the function \(f(x)=x^5\) .
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x**5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot: the target function f(x) = x⁵ over the range (-1, 1).]
Digital-Analog Ansatz
We start by defining the register of qubits. The topology we use now will define the interactions in the entangling Hamiltonian. As an example, we can define a rectangular lattice with 6 qubits.
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row = 3,
    qubits_col = 2,
)
Inspired by the Ising interaction of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=(1/2)(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
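As a sanity check on the number-operator definition above, here is a minimal torch sketch, independent of Qadence, that builds \(N=(1/2)(I-Z)\) explicitly and confirms that the two-qubit term \(N_i N_j\) acts only on the \(|11\rangle\) state:

```python
import torch

# Number operator N = (1/2)(I - Z): eigenvalue 0 on |0>, 1 on |1>
I2 = torch.eye(2)
Z = torch.tensor([[1.0, 0.0], [0.0, -1.0]])
N = 0.5 * (I2 - Z)

# Two-qubit interaction term N ⊗ N: diagonal, nonzero only on |11>
NN = torch.kron(N, N)

print(torch.diag(NN))  # tensor([0., 0., 0., 1.])
```

This is why the interaction only penalizes pairs of qubits that are both excited, mirroring the Rydberg blockade mechanism.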
To build the digital-analog ansatz we can make use of the standard hea function by specifying we want to use the Strategy.SDAQC and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation will be replaced by the evolution of this Hamiltonian, HamEvo(h_int, t), where the time parameter t is considered to be a variational parameter at each layer.
from qadence import hea, Strategy, RX, RY
from qadence.draw import html_string

depth = 2

da_ansatz = hea(
    n_qubits = reg.n_qubits,
    depth = depth,
    operations = [RX, RY, RX],
    entangler = h_int,
    strategy = Strategy.SDAQC,
)

print(html_string(da_ansatz))
[Circuit diagram: 6 qubits; each of the 2 layers applies RX, RY, RX rotations on every qubit, followed by a global HamEvo entangler with variational times t = theta_t₀ and t = theta_t₁.]
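To see what the HamEvo entangler computes, here is a hypothetical two-qubit sketch in plain torch (with \(r_{ij} = 1\) for simplicity): the evolution of the interaction Hamiltonian for time \(t\) is the unitary \(U(t) = e^{-i\, h_{\mathrm{int}}\, t}\), which is what HamEvo(h_int, t) applies at each layer:

```python
import torch

# Hypothetical two-qubit instance of the interaction Hamiltonian, r = 1
I2 = torch.eye(2, dtype=torch.complex64)
Z = torch.tensor([[1, 0], [0, -1]], dtype=torch.complex64)
N = 0.5 * (I2 - Z)
h_int = torch.kron(N, N)

# HamEvo(h_int, t) corresponds to U(t) = exp(-i * h_int * t)
t = 0.7
U = torch.matrix_exp(-1j * h_int * t)

# U is unitary: U† U = I
print(torch.allclose(U.conj().T @ U, torch.eye(4, dtype=torch.complex64), atol=1e-5))
```

Since h_int is diagonal here, U only imprints a phase \(e^{-it}\) on the \(|11\rangle\) component; in the full 6-qubit ansatz the sum over all register edges makes this a genuinely entangling operation.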
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z, I

fm = feature_map(
    n_qubits = reg.n_qubits,
    param = "x",
    fm_type = BasisSet.CHEBYSHEV,
    reupload_scaling = ReuploadScaling.TOWER,
)

# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
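The Chebyshev feature map encodes the input through the angle \(\arccos(x)\), and tower reupload scaling multiplies that angle by a qubit-dependent factor, so successive qubits encode higher Chebyshev frequencies. A minimal sketch of the resulting rotation angles, assuming qubit \(i\) gets scale \(i+1\) (a simplified reading of the constructors, not Qadence's exact implementation):

```python
import torch

x = torch.tensor(0.5)
n_qubits = 6

# Chebyshev encoding: the base feature angle is acos(x)
phi = torch.acos(x)

# Tower scaling: qubit i rotates by (i + 1) * acos(x)
angles = torch.arange(1, n_qubits + 1) * phi
print(angles)
```

Note that cos(k · acos(x)) is exactly the Chebyshev polynomial T_k(x), which is why this encoding gives the model access to a Chebyshev basis of the input.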
And we have all the ingredients to initialize the QuantumModel:
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable = observable)
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
# Chebyshev FM does not accept x = -1, 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
And we use a simple custom training loop.
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
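For reference, the criterion used above is plain mean-squared error, i.e. the mean of the squared residuals between predictions and targets. A small standalone check with made-up numbers:

```python
import torch

criterion = torch.nn.MSELoss()

# Toy predictions and targets (illustrative values only)
out = torch.tensor([0.1, 0.4, 0.9])
target = torch.tensor([0.0, 0.5, 1.0])

# MSELoss(out, target) == mean((out - target)^2)
loss = criterion(out, target)
manual = ((out - target) ** 2).mean()
print(torch.allclose(loss, manual))  # True
```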
Results
Finally we can plot the resulting trained model.
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot: initial and final model predictions against the training points.]