QML tools
ML Tools
This module implements gradient-free and gradient-based training loops for torch Modules and QuantumModel. It also implements the QNN class.
AnsatzConfig(depth=1, ansatz_type=AnsatzType.HEA, ansatz_strategy=Strategy.DIGITAL, strategy_args=dict(), param_prefix='theta', tag=None)
dataclass
ansatz_strategy: Strategy = Strategy.DIGITAL
class-attribute
instance-attribute
Ansatz strategy.
- Strategy.DIGITAL for a fully digital ansatz. Required if ansatz_type is AnsatzType.IIA.
- Strategy.SDAQC for an analog entangling block.
- Strategy.RYDBERG for a fully Rydberg HEA ansatz.
ansatz_type: AnsatzType = AnsatzType.HEA
class-attribute
instance-attribute
The type of ansatz.
- AnsatzType.HEA for Hardware-Efficient Ansatz.
- AnsatzType.IIA for Identity-Initialized Ansatz.
depth: int = 1
class-attribute
instance-attribute
Number of layers of the ansatz.
param_prefix: str = 'theta'
class-attribute
instance-attribute
The base name of the variational parameters.
strategy_args: dict = field(default_factory=dict)
class-attribute
instance-attribute
A dictionary containing keyword arguments to the function creating the ansatz.
Details about each below.
For the Strategy.DIGITAL strategy, accepts the following:
- periodic (bool): whether the qubits should be linked periodically. periodic=False is not supported in emu-c.
- operations (list): list of operations to cycle through in the digital single-qubit rotations of each layer. Defaults to [RX, RY, RX] for HEA and [RX, RY] for IIA.
- entangler (AbstractBlock): 2-qubit entangling operation. Supports CNOT, CZ, CRX, CRY, CRZ, CPHASE. Controlled rotations will have variational parameters on the rotation angles. Defaults to CNOT.
For the Strategy.SDAQC strategy, accepts the following:
- operations (list): list of operations to cycle through in the digital single-qubit rotations of each layer. Defaults to [RX, RY, RX] for HEA and [RX, RY] for IIA.
- entangler (AbstractBlock): Hamiltonian generator for the analog entangling layer. The time parameter is considered variational. Defaults to an NN interaction.
For the Strategy.RYDBERG strategy, accepts the following:
- addressable_detuning (bool): whether to turn on the trainable semi-local addressing pattern on the detuning (n_i terms in the Hamiltonian). Defaults to True.
- addressable_drive (bool): whether to turn on the trainable semi-local addressing pattern on the drive (sigma_i^x terms in the Hamiltonian). Defaults to False.
- tunable_phase (bool): whether to have a tunable phase to get both sigma^x and sigma^y rotations in the drive term. If False, only a sigma^x term is included in the drive part of the Hamiltonian generator. Defaults to False.
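A minimal sketch of passing strategy_args for the digital strategy follows; the import paths mirror the examples later on this page, and the keyword set is the one listed above:

```python
from qadence.ml_tools.config import AnsatzConfig
from qadence.operations import RX, RY, CRZ
from qadence.types import AnsatzType, Strategy

# digital HEA of depth 2 with custom rotations and a variational controlled-RZ entangler
ansatz_config = AnsatzConfig(
    depth=2,
    ansatz_type=AnsatzType.HEA,
    ansatz_strategy=Strategy.DIGITAL,
    strategy_args={"periodic": True, "operations": [RX, RY], "entangler": CRZ},
    param_prefix="theta",
)
```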
tag: str | None = None
class-attribute
instance-attribute
String to indicate the name tag of the ansatz.
Defaults to None, in which case no tag will be applied.
Callback(callback, callback_condition=None, modify_optimize_result=None, called_every=1, call_before_opt=False, call_end_epoch=True, call_after_opt=False, call_during_eval=False)
Callback functions are called within the train functions.
Each callback function should take at least as first input an OptimizeResult instance.
Note: when setting call_after_opt to True, we skip verifying iteration % called_every == 0.
ATTRIBUTE | DESCRIPTION
---|---
callback | Callback function accepting an OptimizeResult as first argument.
callback_condition | Function that conditions the call to callback. Defaults to None.
modify_optimize_result | Function that modifies the OptimizeResult before callback, for instance to add entries to its extra dict.
called_every | The callback is called every called_every iterations (i.e. when iteration % called_every == 0). Defaults to 1.
call_before_opt | If True, callback is applied before training. Defaults to False.
call_end_epoch | If True, callback is applied during training, after an epoch is performed. Defaults to True.
call_after_opt | If True, callback is applied after training. Defaults to False.
call_during_eval | If True, callback is applied during evaluation. Defaults to False.
Initialize the Callback.
PARAMETER | DESCRIPTION
---|---
callback | Callback function accepting an OptimizeResult as first argument.
callback_condition | Function that conditions the call to callback. Defaults to None.
modify_optimize_result | Function that modifies the OptimizeResult before callback. If a dict is provided, it is used to update the extra dict of the OptimizeResult instead.
called_every | The callback is called every called_every iterations (i.e. when iteration % called_every == 0). Defaults to 1.
call_before_opt | If True, callback is applied before training. Defaults to False.
call_end_epoch | If True, callback is applied during training, after an epoch is performed. Defaults to True.
call_after_opt | If True, callback is applied after training. Defaults to False.
call_during_eval | If True, callback is applied during evaluation. Defaults to False.
Source code in qadence/ml_tools/config.py
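As a sketch of how these pieces fit together (assuming Callback is importable from qadence.ml_tools.config, which matches the source path above, and TrainConfig from qadence.ml_tools as in the training example below), a simple logging callback can be registered like this:

```python
from qadence.ml_tools import TrainConfig
from qadence.ml_tools.config import Callback

# print the loss every 100 iterations, and once more after training has finished
def print_loss(opt_result):
    print(f"iteration {opt_result.iteration}: loss = {opt_result.loss}")

config = TrainConfig(
    max_iter=1000,
    callbacks=[
        Callback(print_loss, called_every=100, call_after_opt=True),
    ],
)
```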
__call__(opt_result, is_last_iteration=False)
Apply callback if conditions are met.
Note that the current result may be modified by specifying a modify_optimize_result function, for instance to add inputs to the extra argument of the current OptimizeResult.
PARAMETER | DESCRIPTION
---|---
opt_result | Current result.
is_last_iteration | When True, avoid verifying modulo. Defaults to False. Useful when call_after_opt is True.
RETURNS | DESCRIPTION
---|---
Any | The result of the callback.
Source code in qadence/ml_tools/config.py
FeatureMapConfig(num_features=0, basis_set=BasisSet.FOURIER, reupload_scaling=ReuploadScaling.CONSTANT, feature_range=None, target_range=None, multivariate_strategy=MultivariateStrategy.PARALLEL, feature_map_strategy=Strategy.DIGITAL, param_prefix=None, num_repeats=0, operation=None, inputs=None, tag=None)
dataclass
basis_set: BasisSet | dict[str, BasisSet] = BasisSet.FOURIER
class-attribute
instance-attribute
Basis set for feature encoding.
Takes qadence.BasisSet. Give a single BasisSet to use the same for all features, or a dict of (str, BasisSet) where the key is the name of the variable and the value is the BasisSet to use for encoding that feature.
- BasisSet.FOURIER for Fourier encoding.
- BasisSet.CHEBYSHEV for Chebyshev encoding.
feature_map_strategy: Strategy = Strategy.DIGITAL
class-attribute
instance-attribute
Strategy for feature map.
Accepts DIGITAL, ANALOG or RYDBERG. Defaults to DIGITAL.
If the strategy is incompatible with the chosen operation, then operation gets preference and the given strategy is ignored.
feature_range: tuple[float, float] | dict[str, tuple[float, float]] | None = None
class-attribute
instance-attribute
Range of data that the input data is assumed to come from.
Give a single tuple to use the same range for all features. Give a dict of (str, tuple) where the key is the name of the variable and the value is the feature range to use for that feature.
inputs: list[Basic | str] | None = None
class-attribute
instance-attribute
List that indicates the order of variables of the tensors that are passed.
Optional if a single feature is being encoded, required otherwise. Given input tensors xs = torch.rand(batch_size, input_size:=2), a QNN with inputs=["t", "x"] will assign t, x = xs[:,0], xs[:,1].
multivariate_strategy: MultivariateStrategy = MultivariateStrategy.PARALLEL
class-attribute
instance-attribute
The encoding strategy in case of multi-variate function.
Takes qadence.MultivariateStrategy.
If PARALLEL, the features are encoded in one block of rotation gates
with the register being split in sub-registers for each feature.
If SERIES, the features are encoded sequentially using the full register for each feature, with an ansatz block between them. PARALLEL is allowed only for the DIGITAL feature_map_strategy.
num_features: int = 0
class-attribute
instance-attribute
Number of feature parameters to be encoded.
Defaults to 0, in which case no feature parameters are encoded.
num_repeats: int | dict[str, int] = 0
class-attribute
instance-attribute
Number of feature map layers repeated in the data reuploading step.
If all features are to be repeated the same number of times, give a single int. For a different number of repetitions per feature, provide a dict of (str, int) where the key is the name of the variable and the value is the number of repetitions for that feature.
This amounts to the number of additional reuploads: if num_repeats is N, the data gets uploaded N+1 times. Defaults to no repetition.
operation: Callable[[Parameter | Basic], AnalogBlock] | Type[RX] | None = None
class-attribute
instance-attribute
Type of operation.
Choose among the analog or digital rotations, or a custom callable function returning an AnalogBlock instance. If the type of operation is incompatible with the chosen strategy, then operation gets preference and the given strategy is ignored.
param_prefix: str | None = None
class-attribute
instance-attribute
String prefix to create trainable parameters in Feature Map.
A string prefix to create trainable parameters multiplying the feature parameter
inside the feature-encoding function. Note that currently this does not take into
account the domain of the feature-encoding function.
Defaults to None, in which case the feature map is not trainable.
Note that this is separate from the name of the parameter. The user can provide a single prefix for all features, and the appropriate feature name is appended to it automatically.
reupload_scaling: ReuploadScaling | dict[str, ReuploadScaling] = ReuploadScaling.CONSTANT
class-attribute
instance-attribute
Scaling for encoding the same feature on different qubits.
Scaling used to encode the same feature on different qubits in the same layer of the feature maps. Takes qadence.ReuploadScaling. Give a single ReuploadScaling to use the same for all features, or a dict of (str, ReuploadScaling) where the key is the name of the variable and the value is the ReuploadScaling to use for encoding that feature.
- ReuploadScaling.CONSTANT for constant scaling.
- ReuploadScaling.TOWER for linearly increasing scaling.
- ReuploadScaling.EXP for exponentially increasing scaling.
tag: str | None = None
class-attribute
instance-attribute
String to indicate the name tag of the feature map.
Defaults to None, in which case no tag will be applied.
target_range: tuple[float, float] | dict[str, tuple[float, float]] | None = None
class-attribute
instance-attribute
Range of data the data encoder assumes as natural range.
Give a single tuple to use the same range for all features. Give a dict of (str, tuple) where the key is the name of the variable and the value is the target range to use for that feature.
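As a sketch of the per-feature dictionary options described above (import paths as in the QNN.from_configs example later on this page):

```python
from qadence.ml_tools.config import FeatureMapConfig
from qadence.types import BasisSet, ReuploadScaling, Strategy

# two features with different encodings, reupload scalings and assumed data ranges
fm_config = FeatureMapConfig(
    num_features=2,
    inputs=["x", "y"],
    basis_set={"x": BasisSet.FOURIER, "y": BasisSet.CHEBYSHEV},
    reupload_scaling={"x": ReuploadScaling.CONSTANT, "y": ReuploadScaling.TOWER},
    feature_range={"x": (-1.0, 1.0), "y": (0.0, 1.0)},
    feature_map_strategy=Strategy.DIGITAL,
)
```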
MLFlowConfig()
Configuration for mlflow tracking.
Example:
export MLFLOW_TRACKING_URI=tracking_uri
export MLFLOW_EXPERIMENT=experiment_name
export MLFLOW_RUN_NAME=run_name
Source code in qadence/ml_tools/config.py
experiment_name: str = os.getenv('MLFLOW_EXPERIMENT', str(uuid4()))
instance-attribute
The name of the experiment.
If None or empty, a new experiment is created with a random UUID.
run_name: str = os.getenv('MLFLOW_RUN_NAME', str(uuid4()))
instance-attribute
The name of the run.
tracking_uri: str = os.getenv('MLFLOW_TRACKING_URI', '')
instance-attribute
The URI of the mlflow tracking server.
An empty string, or a local file path, prefixed with file:/. Data is stored locally at the provided file (or ./mlruns if empty).
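The same settings can also be provided from Python before training starts. A sketch follows: the environment variable names are the ones from the example above, and ExperimentTrackingTool.MLFLOW is assumed to be the mlflow option of the tracking-tool enum used by TrainConfig.

```python
import os
from qadence.ml_tools import TrainConfig
from qadence.types import ExperimentTrackingTool

os.environ["MLFLOW_TRACKING_URI"] = "file:./mlruns"   # store runs locally
os.environ["MLFLOW_EXPERIMENT"] = "my_experiment"
os.environ["MLFLOW_RUN_NAME"] = "first_run"

config = TrainConfig(max_iter=1000, tracking_tool=ExperimentTrackingTool.MLFLOW)
```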
TrainConfig(max_iter=10000, print_every=1000, write_every=50, checkpoint_every=5000, plot_every=5000, callbacks=list(), log_model=False, folder=None, create_subfolder_per_run=False, checkpoint_best_only=False, val_every=None, val_epsilon=1e-05, validation_criterion=None, trainstop_criterion=None, batch_size=1, verbose=True, tracking_tool=ExperimentTrackingTool.TENSORBOARD, hyperparams=dict(), plotting_functions=tuple())
dataclass
Default config for the train function.
The default value of each field can be customized with the constructor:
TrainConfig(max_iter=10000, print_every=1000, write_every=50, checkpoint_every=5000, plot_every=5000, callbacks=[], log_model=False, folder=PosixPath('/tmp/train'), create_subfolder_per_run=False, checkpoint_best_only=False, val_every=None, val_epsilon=1e-05, validation_criterion=<default>, trainstop_criterion=<default>, batch_size=1, verbose=True, tracking_tool=ExperimentTrackingTool.TENSORBOARD, hyperparams={}, plotting_functions=())
batch_size: int = 1
class-attribute
instance-attribute
The batch_size to use when passing a list/tuple of torch.Tensors.
callbacks: list[Callback] = field(default_factory=lambda: list())
class-attribute
instance-attribute
List of callbacks.
checkpoint_best_only: bool = False
class-attribute
instance-attribute
Write model/optimizer checkpoint only if a metric has improved.
checkpoint_every: int = 5000
class-attribute
instance-attribute
Write model/optimizer checkpoint.
Set to 0 to disable
create_subfolder_per_run: bool = False
class-attribute
instance-attribute
Checkpoint/tensorboard logs stored in a subfolder named <timestamp>_<PID>.
Prevents continuing from a previous checkpoint; useful for fast prototyping.
folder: Path | None = None
class-attribute
instance-attribute
Checkpoint/tensorboard logs folder.
hyperparams: dict = field(default_factory=dict)
class-attribute
instance-attribute
Hyperparameters to track.
log_model: bool = False
class-attribute
instance-attribute
Logs a serialised version of the model.
max_iter: int = 10000
class-attribute
instance-attribute
Number of training iterations.
plot_every: int = 5000
class-attribute
instance-attribute
Write figures.
Set to 0 to disable
plotting_functions: tuple[LoggablePlotFunction, ...] = field(default_factory=tuple)
class-attribute
instance-attribute
Functions for in-train plotting.
print_every: int = 1000
class-attribute
instance-attribute
Print loss/metrics.
Set to 0 to disable
tracking_tool: ExperimentTrackingTool = ExperimentTrackingTool.TENSORBOARD
class-attribute
instance-attribute
The tracking tool of choice.
trainstop_criterion: Callable | None = None
class-attribute
instance-attribute
A boolean function which evaluates whether a given training-stopping criterion is satisfied.
val_epsilon: float = 1e-05
class-attribute
instance-attribute
Safety margin to check if the validation loss is smaller than the lowest validation loss across previous iterations.
val_every: int | None = None
class-attribute
instance-attribute
Calculate validation metric.
If None, validation check is not performed.
validation_criterion: Callable | None = None
class-attribute
instance-attribute
A boolean function which evaluates whether a given validation criterion is satisfied.
verbose: bool = True
class-attribute
instance-attribute
Whether or not to print out metrics values during training.
write_every: int = 50
class-attribute
instance-attribute
Write loss and metrics with the tracking tool.
Set to 0 to disable
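A sketch tying the validation-related fields above together. The three-argument form of validation_criterion is an assumption: it is meant to compare the current validation loss against the best one within the val_epsilon margin.

```python
from qadence.ml_tools import TrainConfig

config = TrainConfig(
    max_iter=5000,
    val_every=100,               # run the validation check every 100 iterations
    val_epsilon=1e-5,            # safety margin when comparing validation losses
    checkpoint_best_only=True,   # only write checkpoints when validation improves
    validation_criterion=lambda current, best, eps: current < best - eps,
)
```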
run_callbacks(callback_iterable, opt_res, is_last_iteration=False)
Run a list of Callback given the current OptimizeResult.
Used in train functions.
PARAMETER | DESCRIPTION
---|---
callback_iterable | Iterable of Callbacks.
opt_res | Current optimization result.
is_last_iteration | Whether we reached the last iteration or not. Defaults to False.
Source code in qadence/ml_tools/config.py
get_parameters(model)
Retrieve all trainable model parameters in a single vector.
PARAMETER | DESCRIPTION
---|---
model | The input PyTorch model.

RETURNS | DESCRIPTION
---|---
Tensor | A 1-dimensional tensor with the parameters.
Source code in qadence/ml_tools/parameters.py
num_parameters(model)
Return the number of trainable parameters of the model.
set_parameters(model, theta)
Set all trainable parameters of a model from a single vector.
Note that this function assumes prior knowledge of the right number of parameters in the model.
PARAMETER | DESCRIPTION
---|---
model | The input PyTorch model.
theta | The parameters to assign.
Source code in qadence/ml_tools/parameters.py
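A small sketch combining the three helpers above, assuming they are exported from qadence.ml_tools (which matches the source path shown) and that num_parameters counts the entries of the flat vector returned by get_parameters:

```python
import torch
from qadence.ml_tools import get_parameters, num_parameters, set_parameters

model = torch.nn.Linear(4, 1)

theta = get_parameters(model)                   # 1D tensor with all trainable parameters
assert theta.numel() == num_parameters(model)   # the two helpers should agree on the count
set_parameters(model, torch.zeros_like(theta))  # write a flat vector back into the model
```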
optimize_step(model, optimizer, loss_fn, xs, device=None, dtype=None)
Default Torch optimize step with closure.
This is the default optimization step, which should work for most standard use cases of optimizing Torch models.
PARAMETER | DESCRIPTION
---|---
model | The input model.
optimizer | The chosen Torch optimizer.
loss_fn | A custom loss function.
xs | The input data. If None, the given model does not require any input data.
device | A target device to run computation on.
dtype | Data type for xs conversion.

RETURNS | DESCRIPTION
---|---
tuple | Tuple containing the computed loss value and a dictionary with the collected metrics.
Source code in qadence/ml_tools/optimize_step.py
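A minimal sketch of calling it directly on a plain torch model; the loss_fn(model, data) -> (loss, metrics) convention follows the train function documented below, and the import path follows the source file shown above:

```python
import torch
from qadence.ml_tools.optimize_step import optimize_step

model = torch.nn.Linear(1, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

def loss_fn(model, data):
    x, y = data[0], data[1]
    return torch.nn.functional.mse_loss(model(x), y), {}

data = (torch.rand(10, 1), torch.rand(10, 1))
loss, metrics = optimize_step(model, optimizer, loss_fn, data)
```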
train(model, dataloader, optimizer, config, loss_fn, device=None, optimize_step=optimize_step, dtype=None)
Runs the training loop with gradient-based optimizer.
Assumes that loss_fn returns a tuple of (loss, metrics: dict), where metrics is a dict of scalars. Loss and metrics are written to tensorboard. Checkpoints are written every config.checkpoint_every steps (and after the last training step). If a checkpoint is found at config.folder we resume training from there. The tensorboard logs can be viewed via tensorboard --logdir /path/to/folder.
PARAMETER | DESCRIPTION
---|---
model | The model to train.
dataloader | Dataloader of different types. If None, no data is required by the model.
optimizer | The optimizer to use.
config | Training configuration (TrainConfig).
loss_fn | Loss function returning (loss: float, metrics: dict[str, float], ...).
device | String defining the device to train on; pass 'cuda' for GPU.
optimize_step | Customizable optimization callback which is called at every iteration. It must have the same signature as the default optimize_step.
dtype | The dtype to use for the data.
Example:
from pathlib import Path
import torch
from itertools import count
from qadence import Parameter, QuantumCircuit, Z
from qadence import hamiltonian_factory, hea, feature_map, chain
from qadence import QNN
from qadence.ml_tools import TrainConfig, train_with_grad, to_dataloader
n_qubits = 2
fm = feature_map(n_qubits)
ansatz = hea(n_qubits=n_qubits, depth=3)
observable = hamiltonian_factory(n_qubits, detuning = Z)
circuit = QuantumCircuit(n_qubits, fm, ansatz)
model = QNN(circuit, observable, backend="pyqtorch", diff_mode="ad")
batch_size = 1
input_values = {"phi": torch.rand(batch_size, requires_grad=True)}
pred = model(input_values)
# let's prepare the train routine
cnt = count()
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
def loss_fn(model: torch.nn.Module, data: torch.Tensor) -> tuple[torch.Tensor, dict]:
next(cnt)
x, y = data[0], data[1]
out = model(x)
loss = criterion(out, y)
return loss, {}
tmp_path = Path("/tmp")
n_epochs = 5
batch_size = 25
config = TrainConfig(
folder=tmp_path,
max_iter=n_epochs,
checkpoint_every=100,
write_every=100,
)
x = torch.linspace(0, 1, batch_size).reshape(-1, 1)
y = torch.sin(x)
data = to_dataloader(x, y, batch_size=batch_size, infinite=True)
train_with_grad(model, data, optimizer, config, loss_fn=loss_fn)
Source code in qadence/ml_tools/train_grad.py
train(model, dataloader, optimizer, config, loss_fn)
Runs the training loop with a gradient-free optimizer.
Assumes that loss_fn returns a tuple of (loss, metrics: dict), where metrics is a dict of scalars. Loss and metrics are written to tensorboard. Checkpoints are written every config.checkpoint_every steps (and after the last training step). If a checkpoint is found at config.folder we resume training from there. The tensorboard logs can be viewed via tensorboard --logdir /path/to/folder.
PARAMETER | DESCRIPTION
---|---
model | The model to train.
dataloader | Dataloader with the training data, e.g. constructed via to_dataloader.
optimizer | The optimizer to use, taken from the Nevergrad library. If this is not the case, the function will raise an AssertionError.
config | Training configuration (TrainConfig).
loss_fn | Loss function returning (loss: float, metrics: dict[str, float]).
Source code in qadence/ml_tools/train_no_grad.py
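A hedged sketch of driving this loop with Nevergrad. The exported name train_gradient_free and the use of num_parameters for the optimizer parametrization are assumptions; the rest mirrors the gradient-based example above.

```python
import nevergrad as ng
import torch
from qadence.ml_tools import TrainConfig, num_parameters, to_dataloader, train_gradient_free

model = torch.nn.Linear(1, 1)

def loss_fn(model, data):
    x, y = data[0], data[1]
    return torch.nn.functional.mse_loss(model(x), y), {}

x = torch.linspace(0, 1, 25).reshape(-1, 1)
data = to_dataloader(x, torch.sin(x), batch_size=25, infinite=True)

config = TrainConfig(max_iter=500)
# Nevergrad optimizers work on a flat parameter vector of the right size
optimizer = ng.optimizers.NGOpt(budget=config.max_iter, parametrization=num_parameters(model))

train_gradient_free(model, data, optimizer, config, loss_fn)
```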
DictDataLoader(dataloaders)
dataclass
This class only holds a dictionary of DataLoaders and samples from them.
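A short sketch: construction takes a dict of DataLoaders keyed by name; the last line assumes that iterating the DictDataLoader yields one combined batch per key, as the description suggests.

```python
import torch
from qadence.ml_tools import to_dataloader
from qadence.ml_tools.data import DictDataLoader

dl = DictDataLoader(
    {
        "task_a": to_dataloader(torch.rand(20, 1), torch.rand(20, 1), batch_size=5, infinite=True),
        "task_b": to_dataloader(torch.rand(20, 1), torch.rand(20, 1), batch_size=5, infinite=True),
    }
)
batch = next(iter(dl))  # expected: a dict with one batch of tensors per key
```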
InfiniteTensorDataset(*tensors)
Randomly sample points from the first dimension of the given tensors.
Behaves like a normal torch Dataset, except that we can sample from it as many times as we want.
Examples:
import torch
from qadence.ml_tools.data import InfiniteTensorDataset
x_data, y_data = torch.rand(5,2), torch.ones(5,1)
# The dataset accepts any number of tensors with the same batch dimension
ds = InfiniteTensorDataset(x_data, y_data)
# call `next` to get one sample from each tensor:
xs = next(iter(ds))
Source code in qadence/ml_tools/data.py
OptimizeResult(iteration, model, optimizer, loss=None, metrics=dict(), extra=dict())
dataclass
OptimizeResult stores many optimization intermediate values.
At a given iteration, we store the model, optimizer, loss value and metrics. An extra dict can be used for saving other information to be used by callbacks.
extra: dict = field(default_factory=lambda: dict())
class-attribute
instance-attribute
Extra dict for saving anything else to be used in callbacks.
iteration: int
instance-attribute
Current iteration number.
loss: Tensor | float | None = None
class-attribute
instance-attribute
Loss value.
metrics: dict = field(default_factory=lambda: dict())
class-attribute
instance-attribute
Metrics that can be saved during training.
model: Module
instance-attribute
Model at iteration.
optimizer: Optimizer | NGOptimizer
instance-attribute
Optimizer at iteration.
data_to_device(xs, *args, **kwargs)
Utility method to move arbitrary data to 'device'.
to_dataloader(*tensors, batch_size=1, infinite=False)
Convert torch tensors into a (possibly infinite) DataLoader.
PARAMETER | DESCRIPTION
---|---
*tensors | Torch tensors to use in the dataloader.
batch_size | Batch size of sampled tensors.
infinite | If True, the dataloader samples from the data indefinitely.
Examples:
import torch
from qadence.ml_tools import to_dataloader
(x, y, z) = [torch.rand(10) for _ in range(3)]
loader = iter(to_dataloader(x, y, z, batch_size=5, infinite=True))
print(next(loader))
print(next(loader))
print(next(loader))
[tensor([0.0383, 0.6499, 0.2174, 0.8365, 0.7595]), tensor([0.9930, 0.6499, 0.5704, 0.3446, 0.4449]), tensor([0.9129, 0.6979, 0.9241, 0.9303, 0.6982])]
[tensor([0.1396, 0.8575, 0.2936, 0.4060, 0.4587]), tensor([0.6133, 0.4985, 0.2761, 0.3457, 0.1848]), tensor([0.7635, 0.7290, 0.6696, 0.6267, 0.4534])]
[tensor([0.0383, 0.6499, 0.2174, 0.8365, 0.7595]), tensor([0.9930, 0.6499, 0.5704, 0.3446, 0.4449]), tensor([0.9129, 0.6979, 0.9241, 0.9303, 0.6982])]
Source code in qadence/ml_tools/data.py
QNN(circuit, observable, backend=BackendName.PYQTORCH, diff_mode=DiffMode.AD, measurement=None, noise=None, configuration=None, inputs=None, input_diff_mode=InputDiffMode.AD)
Quantum neural network model for n-dimensional inputs.
Examples:
import torch
from qadence import QuantumCircuit, QNN, Z
from qadence import hea, feature_map, hamiltonian_factory, kron
# create the circuit
n_qubits, depth = 2, 4
fm = kron(
feature_map(1, support=(0,), param="x"),
feature_map(1, support=(1,), param="y")
)
ansatz = hea(n_qubits=n_qubits, depth=depth)
circuit = QuantumCircuit(n_qubits, fm, ansatz)
obs_base = hamiltonian_factory(n_qubits, detuning=Z)
# the QNN will yield two outputs
obs = [2.0 * obs_base, 4.0 * obs_base]
# initialize and use the model
qnn = QNN(circuit, obs, inputs=["x", "y"])
y = qnn(torch.rand(3, 2))
Initialize the QNN.
The number of inputs is determined by the feature parameters in the input quantum circuit, while the number of outputs is determined by how many observables are provided as input.
PARAMETER | DESCRIPTION
---|---
circuit | The quantum circuit to use for the QNN.
observable | The observable.
backend | The chosen quantum backend.
diff_mode | The differentiation engine to use. Choices are 'gpsr' or 'ad'.
measurement | Optional measurement protocol. If None, use exact expectation value with a statevector simulator.
noise | A noise model to use.
configuration | Optional configuration for the backend.
inputs | List that indicates the order of variables of the tensors that are passed to the model. Given input tensors xs = torch.rand(batch_size, input_size:=2), a QNN with inputs=["t", "x"] will assign t, x = xs[:,0], xs[:,1].
input_diff_mode | The differentiation mode for the input tensor.
Source code in qadence/ml_tools/models.py
forward(values=None, state=None, measurement=None, noise=None, endianness=Endianness.BIG)
Forward pass of the model.
This returns the (differentiable) expectation value of the observable operator defined in the constructor. Differently from the base QuantumModel class, the QNN also accepts a tensor as input for the forward pass. The tensor is expected to have shape n_batches x in_features, where n_batches is the number of data points and in_features is the dimensionality of the problem.
The output of the forward pass is the expectation value of the input observable(s). If a single observable is given, the output shape is n_batches, while if multiple observables are given the output shape is instead n_batches x n_observables.
PARAMETER | DESCRIPTION
---|---
values | The values of the feature parameters.
state | Initial state.
measurement | Optional measurement protocol. If None, use exact expectation value with a statevector simulator.
noise | A noise model to use.
endianness | Endianness of the resulting bit strings.

RETURNS | DESCRIPTION
---|---
Tensor | A tensor with the expectation value of the observables passed in the constructor of the model.
Source code in qadence/ml_tools/models.py
from_configs(register, obs_config, fm_config=FeatureMapConfig(), ansatz_config=AnsatzConfig(), backend=BackendName.PYQTORCH, diff_mode=DiffMode.AD, measurement=None, noise=None, configuration=None, input_diff_mode=InputDiffMode.AD)
classmethod
Create a QNN from a set of configurations.
PARAMETER | DESCRIPTION
---|---
register | The number of qubits or a register object.
obs_config | The configuration(s) for the observable(s).
fm_config | The configuration for the feature map. Defaults to no feature encoding block.
ansatz_config | The configuration for the ansatz. Defaults to a single layer of hardware efficient ansatz.
backend | The chosen quantum backend.
diff_mode | The differentiation engine to use. Choices are 'gpsr' or 'ad'.
measurement | Optional measurement protocol. If None, use exact expectation value with a statevector simulator.
noise | A noise model to use.
configuration | Optional backend configuration.
input_diff_mode | The differentiation mode for the input tensor.

RETURNS | DESCRIPTION
---|---
QNN | A QNN object.

RAISES | DESCRIPTION
---|---
 | If the observable configuration is not provided.
Example:
import torch
from qadence.ml_tools.config import AnsatzConfig, FeatureMapConfig
from qadence.ml_tools import QNN
from qadence.constructors import ObservableConfig
from qadence.operations import Z
from qadence.types import (
AnsatzType, BackendName, BasisSet, ObservableTransform, ReuploadScaling, Strategy
)
register = 4
obs_config = ObservableConfig(
detuning=Z,
scale=5.0,
shift=0.0,
transformation_type=ObservableTransform.SCALE,
trainable_transform=None,
)
fm_config = FeatureMapConfig(
num_features=2,
inputs=["x", "y"],
basis_set=BasisSet.FOURIER,
reupload_scaling=ReuploadScaling.CONSTANT,
feature_range={
"x": (-1.0, 1.0),
"y": (0.0, 1.0),
},
)
ansatz_config = AnsatzConfig(
depth=2,
ansatz_type=AnsatzType.HEA,
ansatz_strategy=Strategy.DIGITAL,
)
qnn = QNN.from_configs(
register, obs_config, fm_config, ansatz_config, backend=BackendName.PYQTORCH
)
x = torch.rand(2, 2)
y = qnn(x)
Source code in qadence/ml_tools/models.py
derivative(ufa, x, derivative_indices)
Compute derivatives w.r.t. inputs of a UFA with a single output. The derivative_indices specify which derivative(s) are computed. E.g. derivative_indices=(1,2) would compute a second-order derivative w.r.t. the indices 1 and 2 of the input tensor.
PARAMETER | DESCRIPTION
---|---
ufa | The model for which we want to compute the derivative.
x | (batch_size, input_size) input tensor.
derivative_indices | Define which derivatives to compute.
Examples:
If we create a UFA with three inputs and denote the first, second, and third input with x, y, and z, we can compute the following derivatives w.r.t. those inputs:
import torch
from qadence.ml_tools.models import derivative, QNN
from qadence.ml_tools.config import FeatureMapConfig, AnsatzConfig
from qadence.constructors.hamiltonians import ObservableConfig
from qadence.operations import Z
fm_config = FeatureMapConfig(num_features=3, inputs=["x", "y", "z"])
ansatz_config = AnsatzConfig()
obs_config = ObservableConfig(detuning=Z)
f = QNN.from_configs(
register=3, obs_config=obs_config, fm_config=fm_config, ansatz_config=ansatz_config,
)
inputs = torch.rand(5,3,requires_grad=True)
# df_dx
derivative(f, inputs, (0,))
# d2f_dydz
derivative(f, inputs, (1,2))
# d3fdy2dx
derivative(f, inputs, (1,1,0))
Source code in qadence/ml_tools/models.py
format_to_dict_fn(inputs=[])
Format an input tensor into the format required by the forward pass.
The tensor is assumed to have dimensions n_batches x in_features, where in_features corresponds to the number of input features of the QNN.
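A sketch of the intended usage; the assumption that format_to_dict_fn returns a callable mapping a batched tensor to a name-keyed dict follows from its name and the description above, and the import path mirrors the other models.py helpers:

```python
import torch
from qadence.ml_tools.models import format_to_dict_fn

to_dict = format_to_dict_fn(inputs=["x", "y"])
xs = torch.rand(5, 2)   # n_batches x in_features
values = to_dict(xs)    # expected: {"x": xs[:, 0], "y": xs[:, 1]}
```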