Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. The analog blocks are a direct abstraction of device execution with global addressing. However, we may want to directly program a Hamiltonian-level ansatz for finer control over our model. In Qadence this can easily be done through digital-analog programs. In this tutorial we will solve a simple QCL problem with this approach.
Setting up the problem
The example problem considered is to fit a function of interest in a specified range. Below we define and plot the function \(f(x)=x^5\).
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x**5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot: the target function f(x) = x⁵ over the range [-1, 1]]
Digital-Analog Ansatz
We start by defining the register of qubits. The topology we use now will define the interactions in the entangling Hamiltonian. As an example, we can define a rectangular lattice with 6 qubits.
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
Inspired by the Ising interaction model of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=\frac{1}{2}(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
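As a quick sanity check, independent of Qadence, the number operator \(N=\frac{1}{2}(I-Z)\) is simply the projector onto the \(|1\rangle\) state, which is why products \(N_iN_j\) penalize pairs of simultaneously excited qubits. A minimal numpy sketch:

```python
import numpy as np

# Number operator N = (1/2)(I - Z): the projector onto |1>.
I = np.eye(2)
Z = np.diag([1.0, -1.0])
N = 0.5 * (I - Z)

print(N)  # diag(0, 1)
```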
To build the digital-analog ansatz we can make use of the standard hea function by specifying we want to use the Strategy.SDAQC and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation will be replaced by the evolution of this Hamiltonian, HamEvo(h_int, t), where the time parameter t is treated as a variational parameter at each layer.
from qadence import hea, Strategy, RX, RY
from qadence.draw import html_string  # for circuit visualization

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)

print(html_string(da_ansatz))
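As a rough check of the circuit size, we can count the variational parameters we expect (a back-of-the-envelope sketch, assuming each rotation gate carries one angle and each HamEvo entangler one variational time, as in the hea construction above):

```python
# Expected number of variational parameters in the ansatz
# (assumption: one angle per rotation, one time t per HamEvo layer).
n_qubits = 6
depth = 2

n_rotation_params = depth * n_qubits * 3  # RX, RY, RX per qubit per layer
n_time_params = depth                     # one HamEvo time per layer

print(n_rotation_params + n_time_params)  # 38
```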
[Circuit diagram: two HEA layers; each layer applies RX, RY, RX rotations (theta₀–theta₃₅) on all six qubits, followed by a HamEvo entangling block with variational time t = theta_t₀ and t = theta_t₁]
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z, I

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)

# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
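To get an intuition for this encoding: the Chebyshev feature map encodes the input through an arccosine, and tower reupload scaling multiplies the encoding angle by an increasing factor per qubit. The sketch below illustrates this convention under the assumption that qubit \(i\) (zero-indexed) receives the angle \((i+1)\arccos(x)\); `tower_chebyshev_angles` is a hypothetical helper for illustration, not a Qadence function, so check the Qadence documentation for the exact convention used internally:

```python
import math

# Illustrative sketch (assumption): with a Chebyshev basis and tower
# scaling, qubit i encodes the angle (i + 1) * acos(x).
def tower_chebyshev_angles(x: float, n_qubits: int) -> list:
    return [(i + 1) * math.acos(x) for i in range(n_qubits)]

angles = tower_chebyshev_angles(0.5, 6)
print(angles[0])  # acos(0.5) = pi/3
```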
And we have all the ingredients to initialize the QuantumModel:
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
# Chebyshev FM does not accept x = -1, 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
And we use a simple custom training loop.
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
Results
Finally, we can plot the predictions of the trained model.
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot: initial and final model predictions against the training points]