Solving a 1D ODE

In this tutorial we will show how to use Qadence-Model to solve a basic 1D Ordinary Differential Equation (ODE) with a QNN using Differentiable Quantum Circuits (DQC) [1].

Consider the following first-order ODE with a non-linear source term, together with a boundary condition:

\[ \frac{df}{dx}= 5\times(4x^3+x^2-2x-\frac12), \qquad f(0)=0 \]

It admits an exact solution:

\[ f(x)=5\times(x^4+\frac13x^3-x^2-\frac12x) \]

Our goal will be to find this solution for \(x\in[-1, 1]\).
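Before solving the equation numerically, we can verify symbolically that this solution satisfies both the equation and the boundary condition. A quick check, assuming sympy is available:

import sympy

x = sympy.symbols("x")
f = 5 * (x**4 + sympy.Rational(1, 3) * x**3 - x**2 - sympy.Rational(1, 2) * x)

# df/dx should match the right-hand side of the ODE
rhs = 5 * (4 * x**3 + x**2 - 2 * x - sympy.Rational(1, 2))
assert sympy.simplify(sympy.diff(f, x) - rhs) == 0

# Boundary condition f(0) = 0
assert f.subs(x, 0) == 0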

import torch

def dfdx_equation(x: torch.Tensor) -> torch.Tensor:
    """Derivative as per the equation."""
    return 5*(4*x**3 + x**2 - 2*x - 0.5)

For the purposes of this tutorial, we will compute the derivative of the circuit using torch.autograd. The point of the DQC algorithm, however, is to use differentiable circuits with parameter shift rules (PSR). In Qadence, PSR is implemented directly as custom overrides of the derivative function in the autograd engine, so we can later change the derivative method for the model itself if we wish.
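As a minimal sketch of that flexibility, and assuming the QNN constructor accepts a diff_mode argument as in Qadence's QuantumModel, switching to the generalized parameter shift rule (GPSR) would be a one-argument change. The single-qubit circuit below is a hypothetical toy, not the circuit built later in this tutorial:

from qadence import FeatureParameter, QuantumCircuit, RX, Z
from qadence.types import DiffMode
from qadence_model.models import QNN

# Toy model, only to illustrate switching the differentiation mode.
x_param = FeatureParameter("x")
toy_model = QNN(
    circuit=QuantumCircuit(1, RX(0, x_param)),
    observable=Z(0),
    inputs=["x"],
    diff_mode=DiffMode.GPSR,  # parameter shift rules instead of plain autograd
)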

def calc_deriv(outputs: torch.Tensor, inputs: torch.Tensor) -> torch.Tensor:
    """Compute df/dx of a model that learns f(x), using torch.autograd."""
    grad = torch.autograd.grad(
        outputs=outputs,
        inputs=inputs,
        grad_outputs=torch.ones_like(outputs),
        create_graph=True,
        retain_graph=True,
    )[0]
    return grad
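
To sanity-check this helper, we can apply it to a function with a known derivative, e.g. \(g(x) = x^2\) with \(dg/dx = 2x\):

x = torch.linspace(-1.0, 1.0, 5).unsqueeze(1).requires_grad_(True)
y = x**2
print(calc_deriv(y, x).flatten())  # ≈ 2x: tensor([-2., -1., 0., 1., 2.])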

Defining the loss function

The essential part of solving this problem is defining the right loss function to represent our goal. In this case, we want a model with the capacity to learn the target solution, and we want to minimize:

- the mismatch between the derivative of the model and the exact derivative given by the equation;
- the mismatch between the model output at the boundary and the value imposed by the boundary condition.

We can write it like so:

# Mean-squared error as the comparison criterion
criterion = torch.nn.MSELoss()

def loss_fn(model: torch.nn.Module, inputs: torch.Tensor) -> torch.Tensor:
    """Loss function encoding the problem to solve."""
    # Equation loss
    model_output = model(inputs)
    deriv_model = calc_deriv(model_output, inputs)
    deriv_exact = dfdx_equation(inputs)
    ode_loss = criterion(deriv_model, deriv_exact)

    # Boundary loss, f(0) = 0
    boundary_model = model(torch.tensor([[0.0]]))
    boundary_exact = torch.tensor([[0.0]])
    boundary_loss = criterion(boundary_model, boundary_exact)

    return ode_loss + boundary_loss

Different loss criteria could be considered, and we could also adjust the balance between the two loss terms. For now, let's proceed with the definition above.
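For instance, a hypothetical weighted variant would expose the balance explicitly (boundary_weight is an illustrative knob, not part of the tutorial above):

def weighted_loss_fn(
    model: torch.nn.Module, inputs: torch.Tensor, boundary_weight: float = 1.0
) -> torch.Tensor:
    """Same loss as above, with a tunable weight on the boundary term."""
    model_output = model(inputs)
    ode_loss = criterion(calc_deriv(model_output, inputs), dfdx_equation(inputs))
    boundary_loss = criterion(model(torch.tensor([[0.0]])), torch.tensor([[0.0]]))
    return ode_loss + boundary_weight * boundary_loss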

Note that so far we have not used any quantum-specific assumptions; in principle, the same loss function could be used with a classical neural network.
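For example, a small classical MLP could be trained with the exact same loss_fn and training loop; a sketch for illustration only:

classical_model = torch.nn.Sequential(
    torch.nn.Linear(1, 32),
    torch.nn.Tanh(),
    torch.nn.Linear(32, 32),
    torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
# loss_fn(model=classical_model, inputs=x_train) works unchanged.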

Defining a QNN with Qadence-Model

Now, we can finally use Qadence-Model to write a QNN. We will use a feature map to encode the input values, a trainable ansatz circuit, and an observable to measure as the output.

from qadence import feature_map, hea, chain, QuantumCircuit, Z
from qadence_model.models import QNN
from qadence.types import BasisSet, ReuploadScaling

n_qubits = 3
depth = 3

# Feature map
fm = feature_map(
    n_qubits = n_qubits,
    param = "x",
    fm_type = BasisSet.CHEBYSHEV,
    reupload_scaling = ReuploadScaling.TOWER,
)

# Ansatz
ansatz = hea(n_qubits = n_qubits, depth = depth)

# Observable
observable = Z(0)

circuit = QuantumCircuit(n_qubits, chain(fm, ansatz))
model = QNN(circuit = circuit, observable = observable, inputs = ["x"])

We used a Chebyshev feature map with a tower-like scaling of the input reupload, and a standard hardware-efficient ansatz. In the observable, for now we consider the simple case of measuring the magnetization of the first qubit.
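As a quick check that the model maps batches of inputs to expectation values, we can run a forward pass; for a single observable we expect one output per input:

x_check = torch.rand(4, 1)
print(model(x_check).shape)  # expected: torch.Size([4, 1])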

from qadence.draw import display

# display(circuit)
[Circuit diagram: the Tower Chebyshev feature map applies RX(1.0*acos(x)), RX(2.0*acos(x)) and RX(3.0*acos(x)) to qubits 0, 1 and 2, followed by the depth-3 HEA ansatz with parameters theta₀ through theta₂₆.]

Training the model

Now that the model is defined, we can proceed with the training. The QNN class can be used like any other torch.nn.Module.
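For instance, the variational parameters are exposed through the standard torch API:

n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {n_params}")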

To train the model, we will select a random set of collocation points uniformly distributed within \(-1.0< x <1.0\) and compute the loss function for those points.

n_epochs = 200
n_points = 10

xmin = -0.99
xmax = 0.99

optimizer = torch.optim.Adam(model.parameters(), lr = 0.1)

for epoch in range(n_epochs):
    optimizer.zero_grad()

    # Training data, shaped (n_points, 1): a batch of collocation points with one feature each.
    x_train = (xmin + (xmax-xmin)*torch.rand(n_points, requires_grad = True)).unsqueeze(1)

    loss = loss_fn(inputs = x_train, model = model)
    loss.backward()
    optimizer.step()

Note that the values of \(x\) are picked only from \(x\in[-0.99, 0.99]\): since we are using a Chebyshev feature map, the derivative of \(\text{acos}(x)\) diverges at \(-1\) and \(1\).

Plotting the results

import matplotlib.pyplot as plt

def f_exact(x: torch.Tensor) -> torch.Tensor:
    return 5*(x**4 + (1/3)*x**3 - x**2 - 0.5*x)

x_test = torch.arange(xmin, xmax, step = 0.01).unsqueeze(1)

result_exact = f_exact(x_test).flatten()

result_model = model(x_test).flatten().detach()

plt.plot(x_test, result_exact, label = "Exact solution")
plt.plot(x_test, result_model, label = "Trained model")
plt.legend()

[Plot: exact solution vs. trained model output.]

Clearly, the result is not optimal.

Improving the solution

One point to consider when defining the QNN is the possible output range, which is bounded by the spectrum of the chosen observable. For the magnetization of a single qubit, this means that the output is bounded between -1 and 1, which we can clearly see in the plot.

One option would be to define the observable as the total magnetization over all qubits, which would allow a range of -3 to 3.

from qadence import add

observable = add(Z(i) for i in range(n_qubits))

model = QNN(circuit = circuit, observable = observable, inputs = ["x"])

optimizer = torch.optim.Adam(model.parameters(), lr = 0.1)

for epoch in range(n_epochs):
    optimizer.zero_grad()

    # Training data
    x_train = (xmin + (xmax-xmin)*torch.rand(n_points, requires_grad = True)).unsqueeze(1)

    loss = loss_fn(inputs = x_train, model = model)
    loss.backward()
    optimizer.step()

And we plot the result again:

x_test = torch.arange(xmin, xmax, step = 0.01).unsqueeze(1)

result_exact = f_exact(x_test).flatten()

result_model = model(x_test).flatten().detach()

plt.plot(x_test, result_exact, label = "Exact solution")
plt.plot(x_test, result_model, label = "Trained model")
plt.legend()

[Plot: exact solution vs. trained model output.]

References

[1] Kyriienko, O., Paine, A. E., & Elfving, V. E. (2021). Solving nonlinear differential equations with differentiable quantum circuits. Physical Review A, 103, 052416.