What is Perceptrain?

Perceptrain is a lightweight and flexible training framework built to simplify model training — from local CPU to multi-GPU distributed environments. It is especially suited for research and prototyping, offering modularity and plug-and-play components such as optimizers, loggers, and callbacks.

What does Perceptrain offer?

Key functionalities:

• Seamless multi-GPU / multi-node training via the Accelerator abstraction
• Built-in support for both gradient-based and gradient-free optimization
• Easy experiment tracking with TensorBoard and MLflow
• YAML- or Python-based configuration via TrainConfig
• Customizable training loop via Trainer and callback hooks

Whether you’re developing a deep learning model or experimenting with new training techniques, Perceptrain helps you iterate faster and more reliably.

How can we use it?

The detailed documentation can be [found here](https://pasqal-io.github.io/perceptrain/latest/). Below, we show an example of using the Trainer in Perceptrain for a classification task.

Quantum Classification with Perceptrain

In this tutorial we will show how to use Qadence-Model and Perceptrain to solve a basic classification task using a hybrid quantum-classical model composed of a QNN and classical layers.

Dataset

We will use the Iris dataset, split into a training set and a test set. The task is to classify iris plants, described by a multivariate dataset of 4 features, into 3 classes (Iris Setosa, Iris Versicolour, or Iris Virginica). When applying machine learning models, and particularly neural networks, it is recommended to normalize the data. We therefore use a common StandardScaler: the data \(x\) is transformed to \(z = (x - u) / s\), where \(u\) and \(s\) are the mean and standard deviation of the training samples, respectively.
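
StandardScaler's transform is exactly this \((x - u) / s\) rescaling, with \(u\) and \(s\) computed on the training data only. A minimal sketch verifying the equivalence (the random data here is purely illustrative):

import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_demo = rng.normal(loc=5.0, scale=2.0, size=(100, 4))  # illustrative data only

scaler = StandardScaler().fit(X_demo)
z = scaler.transform(X_demo)
z_manual = (X_demo - X_demo.mean(axis=0)) / X_demo.std(axis=0)  # z = (x - u) / s

assert np.allclose(z, z_manual)

We now set up the imports and the dataset: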

import torch
import torch.nn as nn
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from torch import Tensor
from torch.utils.data import DataLoader, Dataset

from qadence import RX, FeatureParameter, QuantumCircuit, Z, chain, hea, kron
from qadence_model.models import QNN
from perceptrain import TrainConfig, Trainer

class IrisDataset(Dataset):
    """The Iris dataset split into a training set and a test set.

    A StandardScaler is applied prior to applying models.
    """

    def __init__(self):
        X, y = load_iris(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

        self.scaler = StandardScaler()
        self.scaler.fit(X_train)
        self.X = torch.tensor(self.scaler.transform(X_train), requires_grad=False)
        self.y = torch.tensor(y_train, requires_grad=False)

        self.X_test = torch.tensor(self.scaler.transform(X_test), requires_grad=False)
        self.y_test = torch.tensor(y_test, requires_grad=False)

    def __getitem__(self, index) -> tuple[Tensor, Tensor]:
        return self.X[index], self.y[index]

    def __len__(self) -> int:
        return len(self.y)

n_features = 4  # sepal length, sepal width, petal length, petal width
n_layers = 3
n_neurons_final_linear_layer = 3
n_epochs = 1000
lr = 1e-1
dataset = IrisDataset()

dataloader = DataLoader(dataset, batch_size=20, shuffle=True)
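
As a quick sanity check (a sketch, not part of the original tutorial), we can inspect one batch from the dataloader:

x_batch, y_batch = next(iter(dataloader))
print(x_batch.shape, y_batch.shape)  # expected: torch.Size([20, 4]) torch.Size([20])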

Hybrid QNN

We set up the QNN part, composed of multiple feature-map layers, each followed by a variational layer. The type of variational layer we use is the hardware-efficient ansatz (HEA). The output is the expectation value of a \(Z\) observable on qubit \(0\). We then add a simple linear layer serving as a classification head. This is equivalent to applying a weight matrix \(W\) and bias vector \(b\) to the QNN output \(o\): \(l = W o + b\). To obtain probabilities, we can apply the softmax function, defined as \(p_i = \exp(l_i) / \sum_{j=1}^{3} \exp(l_j)\). Note that softmax is not applied during training, since the cross-entropy loss applies it internally (see the sketch below).
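
Because the model outputs raw logits during training, the loss takes care of the log-softmax itself. A minimal sketch of this standard PyTorch behavior:

import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(5, 3)            # a batch of 5 samples, 3 classes
targets = torch.randint(0, 3, (5,))   # integer class labels

# nn.CrossEntropyLoss == log-softmax followed by negative log-likelihood
ce = nn.CrossEntropyLoss()(logits, targets)
nll = F.nll_loss(F.log_softmax(logits, dim=1), targets)
assert torch.allclose(ce, nll)

We now build the feature map, the ansatz, and the full hybrid model: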

# Feature map: one RX rotation per feature, applied in parallel (kron) across qubits
feature_parameters = [FeatureParameter(f"x_{i}") for i in range(n_features)]
fm_layer = RX(0, feature_parameters[0])
for q in range(1, n_features):
    fm_layer = kron(fm_layer, RX(q, feature_parameters[q]))

# Variational layers: one depth-1 hardware-efficient ansatz per layer,
# each with its own set of parameters
ansatz_layers = [
    hea(n_qubits=n_features, depth=1, param_prefix=f"theta_{layer}")
    for layer in range(n_layers)
]

# Alternate feature-map and variational layers
blocks = chain(fm_layer, ansatz_layers[0])
for layer in range(1, n_layers):
    blocks = chain(blocks, fm_layer, ansatz_layers[layer])

qc = QuantumCircuit(n_features, blocks)
qnn = QNN(circuit=qc, observable=Z(0), inputs=[f"x_{i}" for i in range(n_features)])
# Classification head: map the single QNN expectation value to 3 class logits
model = nn.Sequential(qnn, nn.Linear(1, n_neurons_final_linear_layer))
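
As a quick shape check (a sketch, assuming the model accepts the standardized training features directly), the hybrid model maps a batch of 4 features to 3 class logits:

with torch.no_grad():
    out = model(dataset.X[:5])
print(out.shape)  # expected: torch.Size([5, 3])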

Below is a visualization of the QNN:


[Circuit diagram: on each of the 4 qubits, three repetitions of an RX(xᵢ) feature-map rotation followed by a depth-1 HEA block (RX, RY, RX rotations with CNOT entanglers); a Z observable is measured on qubit 0.]

Training

Then we can set up the training part:

opt = torch.optim.Adam(model.parameters(), lr=lr)
criterion = nn.CrossEntropyLoss()

def cross_entropy(model: nn.Module, data: tuple[Tensor, Tensor]) -> tuple[Tensor, dict]:
    """Loss function passed to the Trainer: returns the loss and a dict of extra metrics."""
    x, y = data
    out = model(x)
    loss = criterion(out, y)
    return loss, {}

train_config = TrainConfig(max_iter=n_epochs, print_every=10, create_subfolder_per_run=True)
Trainer.set_use_grad(True)  # enable gradient-based training
trainer = Trainer(model=model, optimizer=opt, config=train_config, loss_fn=cross_entropy)


res_train = trainer.fit(dataloader)

Inference

Finally, we can apply our model to the test set and check its accuracy.

X_test, y_test = dataset.X_test, dataset.y_test
preds_test = torch.argmax(torch.softmax(model(X_test), dim=1), dim=1)
accuracy_test = (preds_test == y_test).type(torch.float32).mean()
print(f"Test Accuracy: {accuracy_test.item()}")  # should reach higher than 0.9

Test Accuracy: 0.9399999976158142
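
For per-class detail beyond the overall accuracy (a sketch using scikit-learn, not part of the original tutorial), a confusion matrix can be computed from the same predictions:

from sklearn.metrics import confusion_matrix

# rows: true class, columns: predicted class (Setosa, Versicolour, Virginica)
print(confusion_matrix(y_test.numpy(), preds_test.numpy()))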