Fitting a function with a Hamiltonian ansatz
In the analog QCL tutorial we used analog blocks to learn a function of interest. Analog blocks are a direct abstraction of device execution with global addressing. However, we may want to program a Hamiltonian-level ansatz directly to have finer control over our model. In Qadence this can easily be done through digital-analog programs. In this tutorial we solve a simple QCL problem with this approach.
Setting up the problem
The example problem considered is to fit a function of interest in a specified range. Below we define and plot the function \(f(x)=x^5\).
import torch
import matplotlib.pyplot as plt

# Function to fit:
def f(x):
    return x**5

xmin = -1.0
xmax = 1.0
n_test = 100

x_test = torch.linspace(xmin, xmax, steps=n_test)
y_test = f(x_test)

plt.plot(x_test, y_test)
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot: f(x) = x^5 over the test range]
Digital-Analog Ansatz
We start by defining the register of qubits. The topology we use now will define the interactions in the entangling Hamiltonian. As an example, we can define a rectangular lattice with 6 qubits.
from qadence import Register

reg = Register.rectangular_lattice(
    qubits_row=3,
    qubits_col=2,
)
Inspired by the Ising interaction of Rydberg atoms, we can now define an interaction Hamiltonian as \(\mathcal{H}_{ij}=\frac{1}{r_{ij}^6}N_iN_j\), where \(N_i=(1/2)(I_i-Z_i)\) is the number operator and \(r_{ij}\) is the distance between qubits \(i\) and \(j\). We can easily instantiate this interaction Hamiltonian from the register information:
from qadence import N, add

def h_ij(i: int, j: int):
    return N(i) @ N(j)

h_int = add(h_ij(*edge) / r**6 for edge, r in reg.edge_distances.items())
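As a quick sanity check on the definition above, we can verify numerically that \(N=(1/2)(I-Z)\) is the projector onto \(|1\rangle\). This is a small standalone sketch using plain NumPy, independent of Qadence:

```python
import numpy as np

# Single-qubit identity and Pauli-Z
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

# Number operator N = (1/2)(I - Z)
N_op = 0.5 * (I2 - Z)
print(N_op)  # diag(0, 1): eigenvalue 0 on |0>, 1 on |1>
```

Because \(N_i\) counts whether qubit \(i\) is excited, each term \(N_iN_j/r_{ij}^6\) only contributes when both qubits of an edge are in \(|1\rangle\), with a weight decaying as the sixth power of their distance.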
To build the digital-analog ansatz we can make use of the standard hea function, specifying Strategy.SDAQC as the strategy and passing the Hamiltonian we created as the entangler, as seen in the QML constructors tutorial. The entangling operation will be replaced by the evolution of this Hamiltonian, HamEvo(h_int, t), where the time parameter t is treated as a variational parameter at each layer.
from qadence import hea, Strategy, RX, RY
from qadence.draw import html_string  # for circuit visualization

depth = 2

da_ansatz = hea(
    n_qubits=reg.n_qubits,
    depth=depth,
    operations=[RX, RY, RX],
    entangler=h_int,
    strategy=Strategy.SDAQC,
)

print(html_string(da_ansatz))
[Circuit diagram: on each of the 6 qubits, each of the 2 layers applies RX, RY, RX rotations (variational angles theta₀–theta₃₅), followed by the entangling block HamEvo with one variational evolution time per layer (t = theta_t₀, t = theta_t₁).]
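As a quick bookkeeping check (plain Python, no Qadence needed): with 6 qubits, depth 2, and three rotations per layer, the ansatz carries 36 rotation angles plus one variational evolution time per layer, matching the theta₀–theta₃₅ and theta_t₀, theta_t₁ parameters in the circuit above.

```python
# Count the variational parameters of the digital-analog HEA
n_qubits = 6
depth = 2
rotations_per_layer = 3  # operations=[RX, RY, RX]

n_rotation_angles = n_qubits * rotations_per_layer * depth
n_evolution_times = depth  # one variational time t per entangling layer

print(n_rotation_angles, n_evolution_times)  # 36 2
```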
Creating the QuantumModel
The rest of the procedure is the same as any other Qadence workflow. We start by defining a feature map for input encoding and an observable for output decoding.
from qadence import feature_map, BasisSet, ReuploadScaling
from qadence import Z, I

fm = feature_map(
    n_qubits=reg.n_qubits,
    param="x",
    fm_type=BasisSet.CHEBYSHEV,
    reupload_scaling=ReuploadScaling.TOWER,
)
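To get a feeling for what the Chebyshev feature map encodes, here is a minimal NumPy sketch: the input is passed through \(\arccos(x)\), whose derivative diverges at \(x=\pm 1\) (which is why the training range is kept strictly inside \((-1, 1)\) below); under tower scaling, the angle on each qubit is additionally multiplied by a qubit-dependent factor (assumed here to be \(i+1\) for qubit \(i\)).

```python
import numpy as np

# Chebyshev encoding: the feature angle is acos(x)
x = np.array([-0.99, 0.0, 0.99])
phi = np.arccos(x)
print(phi)  # finite for |x| <= 1, but d/dx acos(x) diverges at x = +-1

# Tower reupload scaling (assumed factor i+1 on qubit i)
n_qubits = 6
angles = np.outer(np.arange(1, n_qubits + 1), phi)  # one row of angles per qubit
print(angles.shape)  # (6, 3)
```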
# Total magnetization
observable = add(Z(i) for i in range(reg.n_qubits))
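Since the observable is the total magnetization \(\sum_i Z_i\), the model output for 6 qubits is bounded in \([-6, 6]\), comfortably covering the target range of \(x^5\). A small standalone NumPy sketch confirms the bound by building the operator explicitly:

```python
import numpy as np
from functools import reduce

n = 6
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def z_on(i):
    # Z acting on qubit i, identity on all other qubits
    return reduce(np.kron, [Z if k == i else I2 for k in range(n)])

total_Z = sum(z_on(i) for i in range(n))
evals = np.diagonal(total_Z)  # total_Z is diagonal in the computational basis
print(evals.min(), evals.max())  # -6.0 6.0
```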
And we have all the ingredients to initialize the QuantumModel:
from qadence import QuantumCircuit, QuantumModel

circuit = QuantumCircuit(reg, fm, da_ansatz)
model = QuantumModel(circuit, observable=observable)
Training the model
We can now train the model. We use a set of 20 equally spaced training points.
# Chebyshev FM does not accept x = -1, 1
xmin = -0.99
xmax = 0.99
n_train = 20

x_train = torch.linspace(xmin, xmax, steps=n_train)
y_train = f(x_train)

# Initial model prediction
y_pred_initial = model.expectation({"x": x_test}).detach()
And we use a simple custom training loop.
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

n_epochs = 200

def loss_fn(x_train, y_train):
    out = model.expectation({"x": x_train})
    loss = criterion(out.squeeze(), y_train)
    return loss

for i in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(x_train, y_train)
    loss.backward()
    optimizer.step()
Results
Finally we can plot the resulting trained model.
y_pred_final = model.expectation({"x": x_test}).detach()

plt.plot(x_test, y_pred_initial, label="Initial prediction")
plt.plot(x_test, y_pred_final, label="Final prediction")
plt.scatter(x_train, y_train, label="Training points")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.xlim((-1.1, 1.1))
plt.ylim((-1.1, 1.1))
[Plot: initial and final model predictions together with the training points]