Perceptrain is a lightweight and flexible training framework built to simplify model training — from local CPU to multi-GPU distributed environments. It is especially suited for research and prototyping, offering modularity and plug-and-play components such as optimizers, loggers, and callbacks.
What does Perceptrain offer?
Key Functionalities:
• Seamless multi-GPU / multi-node training via Accelerator abstraction
• Built-in support for both gradient-based and gradient-free optimization
• Easy experiment tracking with TensorBoard and MLflow
• YAML or Python-based configuration via TrainConfig
• Customizable training loop via Trainer and callback hooks
Whether you’re developing a deep learning model or experimenting with new training techniques, Perceptrain helps you iterate faster and more reliably.
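As a minimal illustration of how these pieces fit together, the sketch below trains a toy linear regressor with Trainer and TrainConfig. The field and keyword names used here (max_iter, the string loss "mse") are assumptions based on the documentation linked below and may differ between versions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

from perceptrain import TrainConfig, Trainer

# Toy data and model: 4 input features, 1 regression target.
model = nn.Linear(4, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
data = TensorDataset(torch.randn(64, 4), torch.randn(64, 1))
loader = DataLoader(data, batch_size=16)

# `max_iter` and the string loss "mse" are assumed API details;
# check them against the installed version of Perceptrain.
config = TrainConfig(max_iter=100)
trainer = Trainer(model=model, optimizer=optimizer, config=config, loss_fn="mse")
trainer.fit(loader)
```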
How can we use it?
The detailed documentation can be [found here](https://pasqal-io.github.io/perceptrain/latest/). Below, we show a classification example of using Trainer in Perceptrain.
Quantum Classification with Perceptrain
In this tutorial, we show how to use Qadence-Model and Perceptrain to solve a basic classification task with a hybrid quantum-classical model composed of a QNN and classical layers.
Dataset
We will use the Iris dataset separated into training and testing sets.
The task is to classify iris plants presented as a multivariate dataset of 4 features into 3 labels (Iris Setosa, Iris Versicolour, or Iris Virginica).
When applying machine learning models, and particularly neural networks, it is recommended to normalize the data. We therefore use scikit-learn's StandardScaler: each input \(x\) is transformed to \(z = (x - u) / s\), where \(u\) and \(s\) are respectively the mean and standard deviation of the training samples.
```python
import random

import torch
import torch.nn as nn
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from torch import Tensor
from torch.utils.data import DataLoader, Dataset

from qadence import RX, FeatureParameter, QuantumCircuit, Z, chain, hea, kron
from qadence_model.models import QNN
from perceptrain import TrainConfig, Trainer


class IrisDataset(Dataset):
    """The Iris dataset split into a training set and a test set.

    A StandardScaler is applied prior to applying models.
    """

    def __init__(self):
        X, y = load_iris(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

        self.scaler = StandardScaler()
        self.scaler.fit(X_train)
        self.X = torch.tensor(self.scaler.transform(X_train), requires_grad=False)
        self.y = torch.tensor(y_train, requires_grad=False)

        self.X_test = torch.tensor(self.scaler.transform(X_test), requires_grad=False)
        self.y_test = torch.tensor(y_test, requires_grad=False)

    def __getitem__(self, index) -> tuple[Tensor, Tensor]:
        return self.X[index], self.y[index]

    def __len__(self) -> int:
        return len(self.y)


n_features = 4  # sepal length, sepal width, petal length, petal width
n_layers = 3
n_neurons_final_linear_layer = 3
n_epochs = 1000
lr = 1e-1
dataset = IrisDataset()
dataloader = DataLoader(dataset, batch_size=20, shuffle=True)
```
Hybrid QNN
We set up the QNN part composed of multiple feature map layers, each followed by a variational layer.
The type of variational layer we use is the hardware-efficient ansatz (HEA).
The output will be the expectation value with respect to a \(Z\) observable on qubit \(0\).
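The circuit-construction code is not shown above, so here is a sketch of how such a QNN can be assembled from the qadence building blocks already imported (FeatureParameter, RX, kron, chain, hea, QuantumCircuit, QNN). The specific alternation of feature maps and HEA layers is an assumption, not necessarily the tutorial's exact circuit.

```python
# One feature parameter per input feature, encoded via RX rotations.
feature_parameters = [FeatureParameter(f"x{i}") for i in range(n_features)]
fm_layer = kron(RX(i, feature_parameters[i]) for i in range(n_features))

# Alternate feature-map layers with hardware-efficient ansatz layers (layout assumed).
blocks = chain(*(chain(fm_layer, hea(n_qubits=n_features, depth=1)) for _ in range(n_layers)))
circuit = QuantumCircuit(n_features, blocks)

# The QNN's single output is the expectation value of Z on qubit 0.
qnn = QNN(circuit=circuit, observable=Z(0), inputs=[f"x{i}" for i in range(n_features)])
```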
Then we add a simple linear layer serving as a classification head. This is equivalent to applying a weight matrix \(W\) and bias vector \(b\) to the output of the QNN denoted \(o\), \(l = W o + b\). To obtain probabilities, we can apply the softmax function defined as: \(p_i = \exp(l_i) / \sum_{j=1}^{3} \exp(l_j)\).
Note that softmax is not applied explicitly during training: PyTorch's cross-entropy loss (nn.CrossEntropyLoss) expects raw logits and applies log-softmax internally.
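We then assemble the full model and train it for n_epochs iterations with Trainer, minimizing the cross-entropy between the logits and the labels. The loss-function contract used below (a callable taking the model and a batch and returning a loss plus a metrics dict) and the TrainConfig fields are assumptions modeled on the documentation; check them against your installed version.

```python
# Full hybrid model: QNN expectation value -> 3 class logits.
model = nn.Sequential(qnn, nn.Linear(1, n_neurons_final_linear_layer))

opt = torch.optim.Adam(model.parameters(), lr=lr)
criterion = nn.CrossEntropyLoss()

# Assumed contract: loss_fn(model, batch) -> (loss, dict of metrics).
def cross_entropy_loss(model: nn.Module, data: tuple[Tensor, Tensor]) -> tuple[Tensor, dict]:
    x, y = data
    loss = criterion(model(x), y)
    return loss, {}

train_config = TrainConfig(max_iter=n_epochs)
trainer = Trainer(model=model, optimizer=opt, config=train_config, loss_fn=cross_entropy_loss)
trainer.fit(dataloader)
```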
Finally, we can apply our model to the test set and check the accuracy.
```python
X_test, y_test = dataset.X_test, dataset.y_test
preds_test = torch.argmax(torch.softmax(model(X_test), dim=1), dim=1)
accuracy_test = (preds_test == y_test).type(torch.float32).mean()
## Should reach higher than 0.9
```