Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. The analog blocks are a direct abstraction of device execution with global addressing. However, we may want to program a Hamiltonian-level ansatz directly, to have finer control over our model. In Qadence this can easily be done through digital-analog programs. In this tutorial we will solve a simple QCL problem with this approach.
Setting up the problem
The example problem considered is to fit a function of interest in a specified range. Below we define and plot the function \(f(x)=x^5\) .
```python
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x**5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
```
(Plot of the target function \(f(x)=x^5\) over the test range.)
Digital-Analog Ansatz
We start by defining the register of qubits. The topology we use now will define the interactions in the entangling Hamiltonian. As an example, we can define a rectangular lattice with 6 qubits.
```python
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
```
Inspired by the Ising interaction mode of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=\frac{1}{2}(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
```python
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
```
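To see what a single interaction term does, here is a minimal numerical sketch (plain NumPy, not Qadence code) of \(\mathcal{H}_{ij}\) for a hypothetical pair of qubits at unit distance:

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
N_op = 0.5 * (I2 - Z)  # number operator: |0><0| -> 0, |1><1| -> 1

r_ij = 1.0  # hypothetical unit spacing
H_ij = np.kron(N_op, N_op) / r_ij**6

# The term is diagonal and only penalizes the |11> state:
print(np.diag(H_ij))  # [0. 0. 0. 1.]
```

Since every term is diagonal in the computational basis, the full `h_int` is diagonal as well, with each edge contributing a weight that decays as \(1/r_{ij}^6\) with qubit separation.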
To build the digital-analog ansatz we can make use of the standard `hea` function by specifying `Strategy.SDAQC` and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation will be replaced by the evolution of this Hamiltonian, `HamEvo(h_int, t)`, where the time parameter `t` is treated as a variational parameter at each layer.
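Conceptually, `HamEvo(h_int, t)` implements the unitary \(e^{-i t \mathcal{H}}\). A small sketch with SciPy, using the two-qubit \(N_iN_j\) term from above (the value of `t` here is just a stand-in for the trainable time parameter):

```python
import numpy as np
from scipy.linalg import expm

N_op = np.diag([0.0, 1.0])
H = np.kron(N_op, N_op)

t = 0.7  # stand-in for a variational theta_t
U = expm(-1j * t * H)

# Evolution is unitary, and only the |11> amplitude acquires a phase:
print(np.allclose(U @ U.conj().T, np.eye(4)))  # True
print(U[3, 3])                                 # exp(-0.7j)
```

Because the interaction Hamiltonian is diagonal, the evolution only imprints relative phases on basis states where interacting qubits are simultaneously excited; the digital single-qubit rotations around it turn those phases into expressive entanglement.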
```python
from qadence import hea, Strategy, RX, RY
from qadence.draw import html_string  # helper to render the circuit

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)

print(html_string(da_ansatz))
```
(Circuit diagram: each of the 2 layers applies RX, RY and RX rotations on all 6 qubits, followed by a HamEvo entangling block with variational times theta_t₀ and theta_t₁.)
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
```python
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)
```
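As a rough sketch of what this encoding does (assuming, as in the usual Chebyshev construction, that the feature map encodes `x` through an `acos(x)` rotation angle, and that `TOWER` reupload scaling multiplies the angle on qubit `i` by `i + 1` — treat both as assumptions about the constructors' conventions):

```python
import math

def tower_chebyshev_angles(x: float, n_qubits: int) -> list[float]:
    # Hypothetical illustration: qubit i rotates by (i + 1) * acos(x).
    phi = math.acos(x)
    return [(i + 1) * phi for i in range(n_qubits)]

print(tower_chebyshev_angles(0.0, 3))  # [pi/2, pi, 3*pi/2]
```

The `acos` transform is also why `x = -1` and `x = 1` must be excluded from the training range below: they sit on the boundary of the domain of `acos`, where its derivative diverges.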
```python
# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
```
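On a computational basis state, the total magnetization \(\sum_i Z_i\) simply counts zeros minus ones in the bitstring, which bounds the model output to \([-6, 6]\) for this 6-qubit register. A quick sketch:

```python
def total_magnetization(bitstring: str) -> int:
    # <Z_i> is +1 for |0> and -1 for |1> on a basis state.
    return sum(1 if b == "0" else -1 for b in bitstring)

print(total_magnetization("000000"))  # 6  (maximum output)
print(total_magnetization("111111"))  # -6 (minimum output)
```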
And we have all the ingredients to initialize the `QuantumModel`:
```python
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
```
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
```python
# The Chebyshev feature map does not accept x = -1 or x = 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
```
And we use a simple custom training loop.
```python
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    return criterion(out.squeeze(), y_train)

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
```
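Nothing in this loop is specific to quantum models: the same recipe trains any `torch` module. As a self-contained sanity check, here is the identical loop structure fitting a small classical MLP (a hypothetical stand-in for the quantum model) to the same target function:

```python
import torch

torch.manual_seed(0)

# Hypothetical classical surrogate in place of the QuantumModel.
surrogate = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)
optimizer = torch.optim.Adam(surrogate.parameters(), lr=0.01)
criterion = torch.nn.MSELoss()

xs = torch.linspace(-0.99, 0.99, 20).unsqueeze(1)
ys = xs**5

for _ in range(500):
    optimizer.zero_grad()
    loss = criterion(surrogate(xs), ys)
    loss.backward()
    optimizer.step()

print(loss.item())  # small: x^5 is easy to fit on this range
```

The only difference in the quantum case is that the forward pass is `model.expectation({"x": x_train})` instead of a call to a neural network.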
Results
Finally we can plot the resulting trained model.
```python
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
```
(Plot comparing the initial and final model predictions against the training points.)