QML tools
ML Tools
This module implements gradient-free and gradient-based training loops for Torch `nn.Module`s and Qadence `QuantumModel`s.
TrainConfig
dataclass
Default config for the train function. The default value of each field can be customized with the constructor:

TrainConfig(max_iter=10000, print_every=1000, write_every=50, checkpoint_every=5000, folder=PosixPath('/tmp/train'), create_subfolder_per_run=False, checkpoint_best_only=False, validation_criterion=<lambda>, trainstop_criterion=<lambda>, batch_size=1)
batch_size: int = 1
The batch size to use when passing a list/tuple of torch.Tensors.

checkpoint_best_only: bool = False
Write model/optimizer checkpoints only if the tracked metric has improved.

checkpoint_every: int = 5000
Write model/optimizer checkpoints every `checkpoint_every` iterations.

create_subfolder_per_run: bool = False
Store checkpoints/tensorboard logs in a subfolder named `<timestamp>_<PID>`. Prevents continuing from a previous checkpoint; useful for fast prototyping.

folder: Optional[Path] = None
Folder for checkpoints/tensorboard logs.

max_iter: int = 10000
Number of training iterations.

print_every: int = 1000
Print loss/metrics every `print_every` iterations.

trainstop_criterion: Optional[Callable] = None
A boolean function which evaluates whether a given training-stopping criterion is satisfied.

validation_criterion: Optional[Callable] = None
A boolean function which evaluates whether a given validation metric is satisfied.

write_every: int = 50
Write tensorboard logs every `write_every` iterations.
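For instance, a config that checkpoints into a custom folder and only keeps improving checkpoints might look as follows (a minimal sketch; all field values are illustrative):

```python
from pathlib import Path
from qadence.ml_tools import TrainConfig

config = TrainConfig(
    max_iter=5000,              # total number of training iterations
    print_every=500,            # print loss/metrics every 500 iterations
    write_every=100,            # write tensorboard logs every 100 iterations
    checkpoint_every=1000,      # write model/optimizer checkpoints every 1000 iterations
    checkpoint_best_only=True,  # only write checkpoints when the tracked metric improves
    folder=Path("/tmp/train"),  # where checkpoints and tensorboard logs are stored
)
```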
get_parameters(model)
Retrieve all trainable model parameters in a single vector.

| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The input PyTorch model. TYPE: `Module` |

| RETURNS | DESCRIPTION |
|---|---|
| `Tensor` | A 1-dimensional tensor with the parameters. |
Source code in qadence/ml_tools/parameters.py
num_parameters(model)
Return the number of trainable parameters of a model (the length of the vector returned by get_parameters).
set_parameters(model, theta)
Set all trainable parameters of a model from a single vector.

Notice that this function assumes prior knowledge of the right number of parameters in the model.

| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The input PyTorch model. TYPE: `Module` |
| `theta` | The parameters to assign. TYPE: `Tensor` |
Source code in qadence/ml_tools/parameters.py
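The three helpers compose naturally. A minimal sketch of a round trip on a plain Torch model, assuming `num_parameters` counts the same trainable parameters that `get_parameters` flattens:

```python
import torch
from qadence.ml_tools.parameters import get_parameters, num_parameters, set_parameters

# any torch.nn.Module works; a small linear layer keeps the sketch self-contained
model = torch.nn.Linear(4, 2)

theta = get_parameters(model)           # flatten all trainable parameters into one vector
assert theta.numel() == num_parameters(model)

# perturb the flat vector and write it back into the model in place
set_parameters(model, theta + 0.1 * torch.randn_like(theta))
```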
data_to_model(xs, device='cpu')
Default behavior for the single-dispatched function.

Just returns the given data, independently of its type.

| PARAMETER | DESCRIPTION |
|---|---|
| `xs` | The input data. TYPE: `Any` |
| `device` | The torch device. Not used in this implementation. TYPE: `str` |

| RETURNS | DESCRIPTION |
|---|---|
| `Any` | The unchanged input data. |
Source code in qadence/ml_tools/optimize_step.py
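As a single-dispatched function, `data_to_model` can be extended with type-specific overloads. Below is a minimal sketch of the pattern, assuming `functools.singledispatch`; the tensor overload is purely illustrative and not part of qadence:

```python
from functools import singledispatch
from typing import Any

import torch


@singledispatch
def data_to_model(xs: Any, device: str = "cpu") -> Any:
    # default behavior: return the data unchanged, whatever its type
    return xs


@data_to_model.register
def _(xs: torch.Tensor, device: str = "cpu") -> torch.Tensor:
    # illustrative overload: move tensors to the requested device
    return xs.to(device)
```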
optimize_step(model, optimizer, loss_fn, xs, device='cpu')
Default Torch optimize step with closure.

This is the default optimization step, which should work for most standard use cases of optimizing Torch models.

| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The input model. TYPE: `Module` |
| `optimizer` | The chosen Torch optimizer. TYPE: `Optimizer` |
| `loss_fn` | A custom loss function. TYPE: `Callable` |
| `xs` | The input data. If `None`, the given model does not require any input data. |
| `device` | The device where computations are executed. Defaults to "cpu". TYPE: `str` |

| RETURNS | DESCRIPTION |
|---|---|
| `tuple` | Tuple containing the model, the optimizer, a dictionary with the collected metrics, and the computed loss value. |
Source code in qadence/ml_tools/optimize_step.py
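The "closure" pattern mentioned above wraps the forward and backward pass in a callable which the optimizer may re-evaluate (as e.g. LBFGS does). A minimal sketch of such a step, following the `(loss, metrics)` convention used throughout this module; `my_optimize_step` is a hypothetical name, not the library's implementation:

```python
from qadence.ml_tools.optimize_step import data_to_model


def my_optimize_step(model, optimizer, loss_fn, xs, device="cpu"):
    """Sketch of a closure-based step returning (model, optimizer, metrics, loss)."""
    loss, metrics = None, {}

    def closure() -> float:
        nonlocal loss, metrics
        optimizer.zero_grad()
        # loss_fn returns a tuple of (loss, metrics)
        loss, metrics = loss_fn(model, data_to_model(xs, device=device))
        loss.backward()
        return loss.item()

    optimizer.step(closure)  # some optimizers (e.g. LBFGS) call the closure several times
    return model, optimizer, metrics, loss.item()
```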
train(model, dataloader, optimizer, config, loss_fn, device='cpu', optimize_step=optimize_step, write_tensorboard=write_tensorboard)
Runs the training loop with a gradient-based optimizer.

Assumes that `loss_fn` returns a tuple of `(loss, metrics: dict)`, where `metrics` is a dict of scalars. Loss and metrics are written to tensorboard. Checkpoints are written every `config.checkpoint_every` steps (and after the last training step). If a checkpoint is found at `config.folder`, we resume training from there. The tensorboard logs can be viewed via `tensorboard --logdir /path/to/folder`.
| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The model to train. TYPE: `Module` |
| `dataloader` | Dataloader of different types. If `None`, no data is required by the model. |
| `optimizer` | The optimizer to use. TYPE: `Optimizer` |
| `config` | The training configuration (see `TrainConfig` above). TYPE: `TrainConfig` |
| `loss_fn` | Loss function returning `(loss: float, metrics: dict[str, float])`. TYPE: `Callable` |
| `device` | String defining the device to train on; pass `'cuda'` for GPU. TYPE: `str` |
| `optimize_step` | Customizable optimization callback which is called at every iteration. The function must have the same signature as the default `optimize_step` documented above. TYPE: `Callable` |
| `write_tensorboard` | Customizable tensorboard logging callback, called every `config.write_every` iterations. TYPE: `Callable` |
Example:
```python
from pathlib import Path
import torch
from itertools import count
from qadence.constructors import hamiltonian_factory, hea, feature_map
from qadence import chain, Parameter, QuantumCircuit, Z
from qadence.models import QNN
from qadence.ml_tools import train_with_grad, TrainConfig

n_qubits = 2
fm = feature_map(n_qubits)
ansatz = hea(n_qubits=n_qubits, depth=3)
observable = hamiltonian_factory(n_qubits, detuning=Z)
circuit = QuantumCircuit(n_qubits, fm, ansatz)

model = QNN(circuit, observable, backend="pyqtorch", diff_mode="ad")
batch_size = 1
input_values = {"phi": torch.rand(batch_size, requires_grad=True)}
pred = model(input_values)

# let's prepare the train routine
cnt = count()
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

def loss_fn(model: torch.nn.Module, data: torch.Tensor) -> tuple[torch.Tensor, dict]:
    next(cnt)
    x, y = data[0], data[1]
    out = model(x)
    loss = criterion(out, y)
    return loss, {}

tmp_path = Path("/tmp")
n_epochs = 5
config = TrainConfig(
    folder=tmp_path,
    max_iter=n_epochs,
    checkpoint_every=100,
    write_every=100,
    batch_size=batch_size,
)

# create a simple sinusoidal dataset and train on it
batch_size = 25
x = torch.linspace(0, 1, batch_size).reshape(-1, 1)
y = torch.sin(x)
train_with_grad(model, (x, y), optimizer, config, loss_fn=loss_fn)
```
Source code in qadence/ml_tools/train_grad.py
train(model, dataloader, optimizer, config, loss_fn)
Runs the training loop with a gradient-free optimizer.

Assumes that `loss_fn` returns a tuple of `(loss, metrics: dict)`, where `metrics` is a dict of scalars. Loss and metrics are written to tensorboard. Checkpoints are written every `config.checkpoint_every` steps (and after the last training step). If a checkpoint is found at `config.folder`, we resume training from there. The tensorboard logs can be viewed via `tensorboard --logdir /path/to/folder`.
| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The model to train. TYPE: `Module` |
| `dataloader` | Dataloader with the training data. |
| `optimizer` | The optimizer to use, taken from the Nevergrad library. If it is not a Nevergrad optimizer, the function raises an `AssertionError`. |
| `config` | The training configuration (see `TrainConfig` above). TYPE: `TrainConfig` |
| `loss_fn` | Loss function returning `(loss: float, metrics: dict[str, float])`. TYPE: `Callable` |
Source code in qadence/ml_tools/train_no_grad.py
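A minimal sketch of driving this loop with a Nevergrad optimizer, reusing `model`, `x`, `y`, `config` and `loss_fn` from the gradient-based example above. The import name `train_gradient_free` is an assumption and may differ across qadence versions:

```python
import nevergrad as ng

from qadence.ml_tools import train_gradient_free  # assumed name for the gradient-free loop
from qadence.ml_tools.parameters import num_parameters

# parametrize the search space with one dimension per trainable model parameter
optimizer = ng.optimizers.NGOpt(
    budget=config.max_iter,
    parametrization=ng.p.Array(shape=(num_parameters(model),)),
)
train_gradient_free(model, (x, y), optimizer, config, loss_fn)
```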