QML tools

ML Tools

This module implements gradient-free and gradient-based training loops for torch Modules and QuantumModel. It also implements the QNN class.

AnsatzConfig(depth=1, ansatz_type=AnsatzType.HEA, ansatz_strategy=Strategy.DIGITAL, strategy_args=dict(), param_prefix='theta') dataclass

ansatz_strategy: Strategy = Strategy.DIGITAL class-attribute instance-attribute

Ansatz strategy.

DIGITAL for a fully digital ansatz; required if ansatz_type is IIA. SDAQC for a digital ansatz with an analog entangling block. RYDBERG for a fully Rydberg HEA ansatz.

ansatz_type: AnsatzType = AnsatzType.HEA class-attribute instance-attribute

What type of ansatz.

HEA for Hardware Efficient Ansatz. IIA for Identity-Initialized Ansatz.

depth: int = 1 class-attribute instance-attribute

Number of layers of the ansatz.

param_prefix: str = 'theta' class-attribute instance-attribute

The base name of the variational parameter.

strategy_args: dict = field(default_factory=dict) class-attribute instance-attribute

A dictionary containing keyword arguments to the function creating the ansatz.

Details about each below.

For DIGITAL strategy, accepts the following: periodic (bool): if the qubits should be linked periodically. periodic=False is not supported in emu-c. operations (list): list of operations to cycle through in the digital single-qubit rotations of each layer. Defaults to [RX, RY, RX] for HEA and [RX, RY] for IIA. entangler (AbstractBlock): 2-qubit entangling operation. Supports CNOT, CZ, CRX, CRY, CRZ, CPHASE. Controlled rotations will have variational parameters on the rotation angles. Defaults to CNOT.

For SDAQC strategy, accepts the following: operations (list): list of operations to cycle through in the digital single-qubit rotations of each layer. Defaults to [RX, RY, RX] for hea and [RX, RY] for iia. entangler (AbstractBlock): Hamiltonian generator for the analog entangling layer. Time parameter is considered variational. Defaults to NN interaction.

For RYDBERG strategy, accepts the following: addressable_detuning: whether to turn on the trainable semi-local addressing pattern on the detuning (n_i terms in the Hamiltonian). Defaults to True. addressable_drive: whether to turn on the trainable semi-local addressing pattern on the drive (sigma_i^x terms in the Hamiltonian). Defaults to False. tunable_phase: whether to have a tunable phase to get both sigma^x and sigma^y rotations in the drive term. If False, only a sigma^x term will be included in the drive part of the Hamiltonian generator. Defaults to False.
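
A minimal sketch combining the options above. It assumes that RX, RY and CZ can be imported from qadence.operations (only Z appears on this page) and uses the strategy_args keys documented for the DIGITAL strategy:

from qadence.ml_tools.config import AnsatzConfig
from qadence.operations import RX, RY, CZ
from qadence.types import AnsatzType, Strategy

# Depth-2 hardware efficient ansatz, digital strategy, periodic connectivity
# and CZ entanglers instead of the default CNOT
ansatz_config = AnsatzConfig(
    depth=2,
    ansatz_type=AnsatzType.HEA,
    ansatz_strategy=Strategy.DIGITAL,
    strategy_args={"periodic": True, "operations": [RX, RY], "entangler": CZ},
    param_prefix="theta",
)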

FeatureMapConfig(num_features=0, basis_set=BasisSet.FOURIER, reupload_scaling=ReuploadScaling.CONSTANT, feature_range=None, target_range=None, multivariate_strategy=MultivariateStrategy.PARALLEL, feature_map_strategy=Strategy.DIGITAL, param_prefix=None, num_repeats=0, operation=None, inputs=None) dataclass

basis_set: BasisSet | dict[str, BasisSet] = BasisSet.FOURIER class-attribute instance-attribute

Basis set for feature encoding.

Takes qadence.BasisSet. Give a single BasisSet to use the same for all features. Give a dict of (str, BasisSet) where the key is the name of the variable and the value is the BasisSet to use for encoding that feature. BasisSet.FOURIER for Fourier encoding. BasisSet.CHEBYSHEV for Chebyshev encoding.

feature_map_strategy: Strategy = Strategy.DIGITAL class-attribute instance-attribute

Strategy for feature map.

Accepts DIGITAL, ANALOG or RYDBERG. Defaults to DIGITAL. If the strategy is incompatible with the operation chosen, then operation gets preference and the given strategy is ignored.

feature_range: tuple[float, float] | dict[str, tuple[float, float]] | None = None class-attribute instance-attribute

Range of data that the input data is assumed to come from.

Give a single tuple to use the same range for all features. Give a dict of (str, tuple) where the key is the name of the variable and the value is the feature range to use for that feature.

inputs: list[Basic | str] | None = None class-attribute instance-attribute

List that indicates the order of variables of the tensors that are passed.

Optional if a single feature is being encoded, required otherwise. Given input tensors xs = torch.rand(batch_size, input_size:=2) a QNN with inputs=["t", "x"] will assign t, x = xs[:,0], xs[:,1].

multivariate_strategy: MultivariateStrategy = MultivariateStrategy.PARALLEL class-attribute instance-attribute

The encoding strategy in case of multi-variate function.

Takes qadence.MultivariateStrategy. If PARALLEL, the features are encoded in one block of rotation gates with each feature given an equal number of qubits. If SERIES, the features are encoded sequentially, with an ansatz block between. PARALLEL is allowed only for DIGITAL feature_map_strategy.

num_features: int = 0 class-attribute instance-attribute

Number of feature parameters to be encoded.

Defaults to 0, in which case no feature parameters are encoded.

num_repeats: int | dict[str, int] = 0 class-attribute instance-attribute

Number of feature map layers repeated in the data reuploading step.

If all features are to be repeated the same number of times, a single int can be given. For a different number of repetitions per feature, provide a dict of (str, int) where the key is the name of the variable and the value is the number of repetitions for that feature. This amounts to the number of additional reuploads, so if num_repeats is N, the data gets uploaded N+1 times. Defaults to no repetition.

operation: Callable[[Parameter | Basic], AnalogBlock] | Type[RX] | None = None class-attribute instance-attribute

Type of operation.

Choose among the analog or digital rotations or a custom callable function returning an AnalogBlock instance. If the type of operation is incompatible with the strategy chosen, then operation gets preference and the given strategy is ignored.

param_prefix: str | None = None class-attribute instance-attribute

String prefix to create trainable parameters in Feature Map.

A string prefix used to create trainable parameters that multiply the feature parameter inside the feature-encoding function. Note that this does not currently take into account the domain of the feature-encoding function. Defaults to None, in which case the feature map is not trainable. Note that this is separate from the name of the feature parameter itself. A single prefix can be given for all features; the appropriate feature name is appended to it automatically.

reupload_scaling: ReuploadScaling | dict[str, ReuploadScaling] = ReuploadScaling.CONSTANT class-attribute instance-attribute

Scaling for encoding the same feature on different qubits.

Scaling used to encode the same feature on different qubits in the same layer of the feature maps. Takes qadence.ReuploadScaling. Give a single ReuploadScaling to use the same for all features. Give a dict of (str, ReuploadScaling) where the key is the name of the variable and the value is the ReuploadScaling to use for encoding that feature. ReuploadScaling.CONSTANT for constant scaling. ReuploadScaling.TOWER for linearly increasing scaling. ReuploadScaling.EXP for exponentially increasing scaling.

target_range: tuple[float, float] | dict[str, tuple[float, float]] | None = None class-attribute instance-attribute

Range of data the data encoder assumes as natural range.

Give a single tuple to use the same range for all features. Give a dict of (str, tuple) where the key is the name of the variable and the value is the target range to use for that feature.
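
A minimal sketch of a two-feature configuration. It assumes that MultivariateStrategy lives in qadence.types alongside the other enums used on this page:

from qadence.ml_tools.config import FeatureMapConfig
from qadence.types import BasisSet, MultivariateStrategy, ReuploadScaling, Strategy

# Encode "t" and "x" in parallel with per-feature basis sets and ranges,
# and one extra data reupload (the data is uploaded twice in total)
fm_config = FeatureMapConfig(
    num_features=2,
    inputs=["t", "x"],
    basis_set={"t": BasisSet.FOURIER, "x": BasisSet.CHEBYSHEV},
    reupload_scaling=ReuploadScaling.TOWER,
    feature_range={"t": (0.0, 1.0), "x": (-1.0, 1.0)},
    multivariate_strategy=MultivariateStrategy.PARALLEL,
    feature_map_strategy=Strategy.DIGITAL,
    num_repeats=1,
)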

MLFlowConfig()

Configuration for mlflow tracking.

Example:

export MLFLOW_TRACKING_URI=tracking_uri
export MLFLOW_EXPERIMENT=experiment_name
export MLFLOW_RUN_NAME=run_name
Source code in qadence/ml_tools/config.py
def __init__(self) -> None:
    import mlflow

    self.tracking_uri: str = os.getenv("MLFLOW_TRACKING_URI", "")
    """The URI of the mlflow tracking server.

    An empty string, or a local file path, prefixed with file:/.
    Data is stored locally at the provided file (or ./mlruns if empty).
    """

    self.experiment_name: str = os.getenv("MLFLOW_EXPERIMENT", str(uuid4()))
    """The name of the experiment.

    If None or empty, a new experiment is created with a random UUID.
    """

    self.run_name: str = os.getenv("MLFLOW_RUN_NAME", str(uuid4()))
    """The name of the run."""

    mlflow.set_tracking_uri(self.tracking_uri)

    # activate existing or create experiment
    exp_filter_string = f"name = '{self.experiment_name}'"
    if not mlflow.search_experiments(filter_string=exp_filter_string):
        mlflow.create_experiment(name=self.experiment_name)

    self.experiment = mlflow.set_experiment(self.experiment_name)
    self.run = mlflow.start_run(run_name=self.run_name, nested=False)

experiment_name: str = os.getenv('MLFLOW_EXPERIMENT', str(uuid4())) instance-attribute

The name of the experiment.

If None or empty, a new experiment is created with a random UUID.

run_name: str = os.getenv('MLFLOW_RUN_NAME', str(uuid4())) instance-attribute

The name of the run.

tracking_uri: str = os.getenv('MLFLOW_TRACKING_URI', '') instance-attribute

The URI of the mlflow tracking server.

An empty string, or a local file path, prefixed with file:/. Data is stored locally at the provided file (or ./mlruns if empty).
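
Since MLFlowConfig reads its settings from environment variables at construction time, they can also be set from Python before the config is instantiated. The values below are placeholders:

import os

# Point mlflow at a local file store and name the experiment/run explicitly
os.environ["MLFLOW_TRACKING_URI"] = "file:./mlruns"
os.environ["MLFLOW_EXPERIMENT"] = "qnn_experiments"
os.environ["MLFLOW_RUN_NAME"] = "run_0"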

TrainConfig(max_iter=10000, print_every=1000, write_every=50, checkpoint_every=5000, plot_every=5000, log_model=False, folder=None, create_subfolder_per_run=False, checkpoint_best_only=False, val_every=None, val_epsilon=1e-05, validation_criterion=None, trainstop_criterion=None, batch_size=1, verbose=True, tracking_tool=ExperimentTrackingTool.TENSORBOARD, hyperparams=dict(), plotting_functions=tuple()) dataclass

Default config for the train function.

The default value of each field can be customized with the constructor:

from qadence.ml_tools import TrainConfig
c = TrainConfig(folder="/tmp/train")
TrainConfig(max_iter=10000, print_every=1000, write_every=50, checkpoint_every=5000, plot_every=5000, log_model=False, folder=PosixPath('/tmp/train'), create_subfolder_per_run=False, checkpoint_best_only=False, val_every=None, val_epsilon=1e-05, validation_criterion=<function TrainConfig.__post_init__.<locals>.<lambda> at 0x7ff89f0b5240>, trainstop_criterion=<function TrainConfig.__post_init__.<locals>.<lambda> at 0x7ff89f0b5090>, batch_size=1, verbose=True, tracking_tool=<ExperimentTrackingTool.TENSORBOARD: 'tensorboard'>, hyperparams={}, plotting_functions=())

batch_size: int = 1 class-attribute instance-attribute

The batch_size to use when passing a list/tuple of torch.Tensors.

checkpoint_best_only: bool = False class-attribute instance-attribute

Write model/optimizer checkpoint only if a metric has improved.

checkpoint_every: int = 5000 class-attribute instance-attribute

Write model/optimizer checkpoint.

create_subfolder_per_run: bool = False class-attribute instance-attribute

Checkpoint/tensorboard logs stored in subfolder with name <timestamp>_<PID>.

Prevents continuing from previous checkpoint, useful for fast prototyping.

folder: Path | None = None class-attribute instance-attribute

Checkpoint/tensorboard logs folder.

hyperparams: dict = field(default_factory=dict) class-attribute instance-attribute

Hyperparameters to track.

log_model: bool = False class-attribute instance-attribute

Logs a serialised version of the model.

max_iter: int = 10000 class-attribute instance-attribute

Number of training iterations.

plot_every: int = 5000 class-attribute instance-attribute

Write figures.

plotting_functions: tuple[LoggablePlotFunction, ...] = field(default_factory=tuple) class-attribute instance-attribute

Functions for in-train plotting.

print_every: int = 1000 class-attribute instance-attribute

Print loss/metrics.

tracking_tool: ExperimentTrackingTool = ExperimentTrackingTool.TENSORBOARD class-attribute instance-attribute

The tracking tool of choice.

trainstop_criterion: Callable | None = None class-attribute instance-attribute

A boolean function which evaluates whether a given training-stopping criterion is satisfied.

val_epsilon: float = 1e-05 class-attribute instance-attribute

Safety margin to check if the validation loss is smaller than the lowest validation loss across previous iterations.

val_every: int | None = None class-attribute instance-attribute

Calculate validation metric.

If None, validation check is not performed.

validation_criterion: Callable | None = None class-attribute instance-attribute

A boolean function which evaluates whether a given validation criterion is satisfied.

verbose: bool = True class-attribute instance-attribute

Whether or not to print metric values during training.

write_every: int = 50 class-attribute instance-attribute

Write loss and metrics with the tracking tool.
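
A sketch of a validation-aware configuration. The validation_criterion signature (val_loss, best_val_loss, val_epsilon) matches how the training loop calls it; importing ExperimentTrackingTool from qadence.types is an assumption:

from qadence.ml_tools import TrainConfig
from qadence.types import ExperimentTrackingTool

# Checkpoint only when the validation loss improves, validating every 100 steps
config = TrainConfig(
    folder="/tmp/train",
    max_iter=2000,
    print_every=100,
    write_every=100,
    val_every=100,
    val_epsilon=1e-5,
    checkpoint_best_only=True,
    validation_criterion=lambda val_loss, best_val_loss, eps: val_loss < best_val_loss - eps,
    tracking_tool=ExperimentTrackingTool.TENSORBOARD,
)

Note that when val_every is set, train expects a DictDataLoader with "train" and "val" keys (see DictDataLoader below).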

get_parameters(model)

Retrieve all trainable model parameters in a single vector.

PARAMETER DESCRIPTION
model

the input PyTorch model

TYPE: Module

RETURNS DESCRIPTION
Tensor

a 1-dimensional tensor with the parameters

TYPE: Tensor

Source code in qadence/ml_tools/parameters.py
def get_parameters(model: Module) -> Tensor:
    """Retrieve all trainable model parameters in a single vector.

    Args:
        model (Module): the input PyTorch model

    Returns:
        Tensor: a 1-dimensional tensor with the parameters
    """
    ps = [p.reshape(-1) for p in model.parameters() if p.requires_grad]
    return torch.concat(ps)

num_parameters(model)

Return the total number of parameters of the given model.

Source code in qadence/ml_tools/parameters.py
def num_parameters(model: Module) -> int:
    """Return the total number of parameters of the given model."""
    return len(get_parameters(model))

set_parameters(model, theta)

Set all trainable parameters of a model from a single vector.

Note that this function assumes prior knowledge of the correct number of parameters in the model.

PARAMETER DESCRIPTION
model

the input PyTorch model

TYPE: Module

theta

the parameters to assign

TYPE: Tensor

Source code in qadence/ml_tools/parameters.py
def set_parameters(model: Module, theta: Tensor) -> None:
    """Set all trainable parameters of a model from a single vector.

    Notice that this function assumes prior knowledge of right number
    of parameters in the model

    Args:
        model (Module): the input PyTorch model
        theta (Tensor): the parameters to assign
    """

    with torch.no_grad():
        idx = 0
        for ps in model.parameters():
            if ps.requires_grad:
                n = torch.numel(ps)
                if ps.ndim == 0:
                    ps[()] = theta[idx : idx + n]
                else:
                    ps[:] = theta[idx : idx + n].reshape(ps.size())
                idx += n
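
A short sketch tying the three parameter helpers together on a plain torch module; the import path follows the source location shown above:

import torch
from torch.nn import Linear
from qadence.ml_tools.parameters import get_parameters, num_parameters, set_parameters

model = Linear(3, 2)
theta = get_parameters(model)                   # 1D tensor of all trainable parameters
assert num_parameters(model) == theta.numel()   # total count matches the flattened vector
set_parameters(model, torch.zeros_like(theta))  # write a new parameter vector back in place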

optimize_step(model, optimizer, loss_fn, xs, device=None, dtype=None)

Default Torch optimize step with closure.

This is the default optimization step, which should work for most standard use cases of optimizing Torch models.

PARAMETER DESCRIPTION
model

The input model

TYPE: Module

optimizer

The chosen Torch optimizer

TYPE: Optimizer

loss_fn

A custom loss function

TYPE: Callable

xs

the input data. If None it means that the given model does not require any input data

TYPE: dict | list | Tensor | None

device

A target device to run computation on.

TYPE: device DEFAULT: None

RETURNS DESCRIPTION
tuple

tuple containing the computed loss value and a dictionary with the collected metrics

TYPE: tuple[Tensor | float, dict | None]

Source code in qadence/ml_tools/optimize_step.py
def optimize_step(
    model: Module,
    optimizer: Optimizer,
    loss_fn: Callable,
    xs: dict | list | torch.Tensor | None,
    device: torch.device = None,
    dtype: torch.dtype = None,
) -> tuple[torch.Tensor | float, dict | None]:
    """Default Torch optimize step with closure.

    This is the default optimization step which should work for most
    of the standard use cases of optimization of Torch models

    Args:
        model (Module): The input model
        optimizer (Optimizer): The chosen Torch optimizer
        loss_fn (Callable): A custom loss function
        xs (dict | list | torch.Tensor | None): the input data. If None it means
            that the given model does not require any input data
        device (torch.device): A target device to run computation on.

    Returns:
        tuple: tuple containing the computed loss value and a dictionary with
            the collected metrics
    """

    loss, metrics = None, {}
    xs_to_device = data_to_device(xs, device=device, dtype=dtype)

    def closure() -> Any:
        # NOTE: We need the nonlocal as we can't return a metric dict and
        # because e.g. LBFGS calls this closure multiple times but for some
        # reason the returned loss is always the first one...
        nonlocal metrics, loss
        optimizer.zero_grad()
        loss, metrics = loss_fn(model, xs_to_device)
        loss.backward(retain_graph=True)
        return loss.item()

    optimizer.step(closure)
    # return the loss/metrics that are being mutated inside the closure...
    return loss, metrics
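
A minimal sketch of calling optimize_step directly on a plain torch model. The toy model and data are placeholders; the loss function follows the documented (model, xs) -> (loss, metrics) convention:

import torch
from qadence.ml_tools.optimize_step import optimize_step

model = torch.nn.Linear(1, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

def loss_fn(model, xs):
    x, y = xs[0], xs[1]
    loss = torch.nn.functional.mse_loss(model(x), y)
    return loss, {"mse": loss.detach()}

xs = [torch.rand(10, 1), torch.rand(10, 1)]
loss, metrics = optimize_step(model, optimizer, loss_fn, xs)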

train(model, dataloader, optimizer, config, loss_fn, device=None, optimize_step=optimize_step, dtype=None)

Runs the training loop with gradient-based optimizer.

Assumes that loss_fn returns a tuple of (loss, metrics: dict), where metrics is a dict of scalars. Loss and metrics are written to tensorboard. Checkpoints are written every config.checkpoint_every steps (and after the last training step). If a checkpoint is found at config.folder we resume training from there. The tensorboard logs can be viewed via tensorboard --logdir /path/to/folder.

PARAMETER DESCRIPTION
model

The model to train.

TYPE: Module

dataloader

dataloader of different types. If None, no data is required by the model

TYPE: Union[None, DataLoader, DictDataLoader]

optimizer

The optimizer to use.

TYPE: Optimizer

config

TrainConfig with additional training options.

TYPE: TrainConfig

loss_fn

Loss function returning (loss: float, metrics: dict[str, float], ...)

TYPE: Callable

device

String defining device to train on, pass 'cuda' for GPU.

TYPE: device DEFAULT: None

optimize_step

Customizable optimization callback which is called at every iteration. The function must have the signature optimize_step(model, optimizer, loss_fn, xs, device="cpu").

TYPE: Callable DEFAULT: optimize_step

dtype

The dtype to use for the data.

TYPE: dtype DEFAULT: None

Example:

from pathlib import Path
import torch
from itertools import count
from qadence import Parameter, QuantumCircuit, Z
from qadence import hamiltonian_factory, hea, feature_map, chain
from qadence import QNN
from qadence.ml_tools import TrainConfig, train_with_grad, to_dataloader

n_qubits = 2
fm = feature_map(n_qubits)
ansatz = hea(n_qubits=n_qubits, depth=3)
observable = hamiltonian_factory(n_qubits, detuning = Z)
circuit = QuantumCircuit(n_qubits, fm, ansatz)

model = QNN(circuit, observable, backend="pyqtorch", diff_mode="ad")
batch_size = 1
input_values = {"phi": torch.rand(batch_size, requires_grad=True)}
pred = model(input_values)

## lets prepare the train routine

cnt = count()
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

def loss_fn(model: torch.nn.Module, data: torch.Tensor) -> tuple[torch.Tensor, dict]:
    next(cnt)
    x, y = data[0], data[1]
    out = model(x)
    loss = criterion(out, y)
    return loss, {}

tmp_path = Path("/tmp")
n_epochs = 5
batch_size = 25
config = TrainConfig(
    folder=tmp_path,
    max_iter=n_epochs,
    checkpoint_every=100,
    write_every=100,
)
x = torch.linspace(0, 1, batch_size).reshape(-1, 1)
y = torch.sin(x)
data = to_dataloader(x, y, batch_size=batch_size, infinite=True)
train_with_grad(model, data, optimizer, config, loss_fn=loss_fn)

Source code in qadence/ml_tools/train_grad.py
def train(
    model: Module,
    dataloader: Union[None, DataLoader, DictDataLoader],
    optimizer: Optimizer,
    config: TrainConfig,
    loss_fn: Callable,
    device: torch_device = None,
    optimize_step: Callable = optimize_step,
    dtype: torch_dtype = None,
) -> tuple[Module, Optimizer]:
    """Runs the training loop with gradient-based optimizer.

    Assumes that `loss_fn` returns a tuple of (loss,
    metrics: dict), where `metrics` is a dict of scalars. Loss and metrics are
    written to tensorboard. Checkpoints are written every
    `config.checkpoint_every` steps (and after the last training step).  If a
    checkpoint is found at `config.folder` we resume training from there.  The
    tensorboard logs can be viewed via `tensorboard --logdir /path/to/folder`.

    Args:
        model: The model to train.
        dataloader: dataloader of different types. If None, no data is required by
            the model
        optimizer: The optimizer to use.
        config: `TrainConfig` with additional training options.
        loss_fn: Loss function returning (loss: float, metrics: dict[str, float], ...)
        device: String defining device to train on, pass 'cuda' for GPU.
        optimize_step: Customizable optimization callback which is called at every iteration.
            The function must have the signature `optimize_step(model,
            optimizer, loss_fn, xs, device="cpu")`.
        dtype: The dtype to use for the data.

    Example:
    ```python exec="on" source="material-block"
    from pathlib import Path
    import torch
    from itertools import count
    from qadence import Parameter, QuantumCircuit, Z
    from qadence import hamiltonian_factory, hea, feature_map, chain
    from qadence import QNN
    from qadence.ml_tools import TrainConfig, train_with_grad, to_dataloader

    n_qubits = 2
    fm = feature_map(n_qubits)
    ansatz = hea(n_qubits=n_qubits, depth=3)
    observable = hamiltonian_factory(n_qubits, detuning = Z)
    circuit = QuantumCircuit(n_qubits, fm, ansatz)

    model = QNN(circuit, observable, backend="pyqtorch", diff_mode="ad")
    batch_size = 1
    input_values = {"phi": torch.rand(batch_size, requires_grad=True)}
    pred = model(input_values)

    ## lets prepare the train routine

    cnt = count()
    criterion = torch.nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

    def loss_fn(model: torch.nn.Module, data: torch.Tensor) -> tuple[torch.Tensor, dict]:
        next(cnt)
        x, y = data[0], data[1]
        out = model(x)
        loss = criterion(out, y)
        return loss, {}

    tmp_path = Path("/tmp")
    n_epochs = 5
    batch_size = 25
    config = TrainConfig(
        folder=tmp_path,
        max_iter=n_epochs,
        checkpoint_every=100,
        write_every=100,
    )
    x = torch.linspace(0, 1, batch_size).reshape(-1, 1)
    y = torch.sin(x)
    data = to_dataloader(x, y, batch_size=batch_size, infinite=True)
    train_with_grad(model, data, optimizer, config, loss_fn=loss_fn)
    ```
    """
    # load available checkpoint
    init_iter = 0
    log_device = "cpu" if device is None else device
    if config.folder:
        model, optimizer, init_iter = load_checkpoint(
            config.folder, model, optimizer, device=log_device
        )
        logger.debug(f"Loaded model and optimizer from {config.folder}")

    # Move model to device before optimizer is loaded
    if isinstance(model, DataParallel):
        model = model.module.to(device=device, dtype=dtype)
    else:
        model = model.to(device=device, dtype=dtype)
    # initialize tracking tool
    if config.tracking_tool == ExperimentTrackingTool.TENSORBOARD:
        writer = SummaryWriter(config.folder, purge_step=init_iter)
    else:
        writer = importlib.import_module("mlflow")

    perform_val = isinstance(config.val_every, int)
    if perform_val:
        if not isinstance(dataloader, DictDataLoader):
            raise ValueError(
                "If `config.val_every` is provided as an integer, dataloader must"
                "be an instance of `DictDataLoader`."
            )
        iter_keys = dataloader.dataloaders.keys()
        if "train" not in iter_keys or "val" not in iter_keys:
            raise ValueError(
                "If `config.val_every` is provided as an integer, the dictdataloader"
                "must have `train` and `val` keys to access the respective dataloaders."
            )
        val_dataloader = dataloader.dataloaders["val"]
        dataloader = dataloader.dataloaders["train"]

    ## Training
    progress = Progress(
        TextColumn("[progress.description]{task.description}"),
        BarColumn(),
        TaskProgressColumn(),
        TimeRemainingColumn(elapsed_when_finished=True),
    )
    data_dtype = None
    if dtype:
        data_dtype = float64 if dtype == complex128 else float32

    best_val_loss = math.inf

    with progress:
        dl_iter = iter(dataloader) if dataloader is not None else None

        # Initial validation evaluation
        try:
            if perform_val:
                dl_iter_val = iter(val_dataloader) if val_dataloader is not None else None
                xs = next(dl_iter_val)
                xs_to_device = data_to_device(xs, device=device, dtype=data_dtype)
                best_val_loss, metrics = loss_fn(model, xs_to_device)

                metrics["val_loss"] = best_val_loss
                write_tracker(writer, None, metrics, init_iter, tracking_tool=config.tracking_tool)

            if config.folder:
                if config.checkpoint_best_only:
                    write_checkpoint(config.folder, model, optimizer, iteration="best")
                else:
                    write_checkpoint(config.folder, model, optimizer, init_iter)

            plot_tracker(
                writer,
                model,
                init_iter,
                config.plotting_functions,
                tracking_tool=config.tracking_tool,
            )

        except KeyboardInterrupt:
            logger.info("Terminating training gracefully after the current iteration.")

        # outer epoch loop
        init_iter += 1
        for iteration in progress.track(range(init_iter, init_iter + config.max_iter)):
            try:
                # in case there is not data needed by the model
                # this is the case, for example, of quantum models
                # which do not have classical input data (e.g. chemistry)
                if dataloader is None:
                    loss, metrics = optimize_step(
                        model=model,
                        optimizer=optimizer,
                        loss_fn=loss_fn,
                        xs=None,
                        device=device,
                        dtype=data_dtype,
                    )
                    loss = loss.item()

                elif isinstance(dataloader, (DictDataLoader, DataLoader)):
                    loss, metrics = optimize_step(
                        model=model,
                        optimizer=optimizer,
                        loss_fn=loss_fn,
                        xs=next(dl_iter),  # type: ignore[arg-type]
                        device=device,
                        dtype=data_dtype,
                    )

                else:
                    raise NotImplementedError(
                        f"Unsupported dataloader type: {type(dataloader)}. "
                        "You can use e.g. `qadence.ml_tools.to_dataloader` to build a dataloader."
                    )

                if iteration % config.print_every == 0 and config.verbose:
                    # Note that the loss returned by optimize_step
                    # is the value before doing the training step
                    # which is printed accordingly by the previous iteration number
                    print_metrics(loss, metrics, iteration - 1)

                if iteration % config.write_every == 0:
                    write_tracker(
                        writer, loss, metrics, iteration, tracking_tool=config.tracking_tool
                    )

                if iteration % config.plot_every == 0:
                    plot_tracker(
                        writer,
                        model,
                        iteration,
                        config.plotting_functions,
                        tracking_tool=config.tracking_tool,
                    )
                if perform_val:
                    if iteration % config.val_every == 0:
                        xs = next(dl_iter_val)
                        xs_to_device = data_to_device(xs, device=device, dtype=data_dtype)
                        val_loss, *_ = loss_fn(model, xs_to_device)
                        if config.validation_criterion(val_loss, best_val_loss, config.val_epsilon):  # type: ignore[misc]
                            best_val_loss = val_loss
                            if config.folder and config.checkpoint_best_only:
                                write_checkpoint(config.folder, model, optimizer, iteration="best")
                            metrics["val_loss"] = val_loss
                            write_tracker(
                                writer, loss, metrics, iteration, tracking_tool=config.tracking_tool
                            )

                if config.folder:
                    if iteration % config.checkpoint_every == 0 and not config.checkpoint_best_only:
                        write_checkpoint(config.folder, model, optimizer, iteration)

            except KeyboardInterrupt:
                logger.info("Terminating training gracefully after the current iteration.")
                break

        # Handling printing the last training loss
        # as optimize_step does not give the loss value at the last iteration
        try:
            xs = next(dl_iter) if dataloader is not None else None  # type: ignore[arg-type]
            xs_to_device = data_to_device(xs, device=device, dtype=data_dtype)
            loss, metrics, *_ = loss_fn(model, xs_to_device)
            if dataloader is None:
                loss = loss.item()
            if iteration % config.print_every == 0 and config.verbose:
                print_metrics(loss, metrics, iteration)

        except KeyboardInterrupt:
            logger.info("Terminating training gracefully after the current iteration.")

    # Final checkpointing and writing
    if config.folder and not config.checkpoint_best_only:
        write_checkpoint(config.folder, model, optimizer, iteration)
    write_tracker(writer, loss, metrics, iteration, tracking_tool=config.tracking_tool)

    # writing hyperparameters
    if config.hyperparams:
        log_tracker(writer, config.hyperparams, metrics, tracking_tool=config.tracking_tool)

    # logging the model
    if config.log_model:
        log_model_tracker(writer, model, dataloader, tracking_tool=config.tracking_tool)

    # close tracker
    if config.tracking_tool == ExperimentTrackingTool.TENSORBOARD:
        writer.close()
    elif config.tracking_tool == ExperimentTrackingTool.MLFLOW:
        writer.end_run()

    return model, optimizer

train(model, dataloader, optimizer, config, loss_fn)

Runs the training loop with a gradient-free optimizer.

Assumes that loss_fn returns a tuple of (loss, metrics: dict), where metrics is a dict of scalars. Loss and metrics are written to tensorboard. Checkpoints are written every config.checkpoint_every steps (and after the last training step). If a checkpoint is found at config.folder we resume training from there. The tensorboard logs can be viewed via tensorboard --logdir /path/to/folder.

PARAMETER DESCRIPTION
model

The model to train

TYPE: Module

dataloader

Dataloader constructed via dictdataloader

TYPE: DictDataLoader | DataLoader | None

optimizer

The optimizer to use, taken from the Nevergrad library. If this is not the case, the function will raise an AssertionError.

TYPE: Optimizer

config

TrainConfig with additional training options.

TYPE: TrainConfig

loss_fn

Loss function returning (loss: float, metrics: dict[str, float])

TYPE: Callable

Source code in qadence/ml_tools/train_no_grad.py
def train(
    model: Module,
    dataloader: DictDataLoader | DataLoader | None,
    optimizer: NGOptimizer,
    config: TrainConfig,
    loss_fn: Callable,
) -> tuple[Module, NGOptimizer]:
    """Runs the training loop with a gradient-free optimizer.

    Assumes that `loss_fn` returns a tuple of (loss, metrics: dict), where
    `metrics` is a dict of scalars. Loss and metrics are written to
    tensorboard. Checkpoints are written every `config.checkpoint_every` steps
    (and after the last training step).  If a checkpoint is found at `config.folder`
    we resume training from there.  The tensorboard logs can be viewed via
    `tensorboard --logdir /path/to/folder`.

    Args:
        model: The model to train
        dataloader: Dataloader constructed via `dictdataloader`
        optimizer: The optimizer to use taken from the Nevergrad library. If this is not
            the case the function will raise an AssertionError
        config: `TrainConfig` with additional training options.
        loss_fn: Loss function returning (loss: float, metrics: dict[str, float])
    """
    init_iter = 0
    if config.folder:
        model, optimizer, init_iter = load_checkpoint(config.folder, model, optimizer)
        logger.debug(f"Loaded model and optimizer from {config.folder}")

    def _update_parameters(
        data: Tensor | None, ng_params: ng.p.Array
    ) -> tuple[float, dict, ng.p.Array]:
        loss, metrics = loss_fn(model, data)  # type: ignore[misc]
        optimizer.tell(ng_params, float(loss))
        ng_params = optimizer.ask()  # type: ignore [assignment]
        params = promote_to_tensor(ng_params.value, requires_grad=False)
        set_parameters(model, params)
        return loss, metrics, ng_params

    assert loss_fn is not None, "Provide a valid loss function"
    # TODO: support also Scipy optimizers
    assert isinstance(optimizer, NGOptimizer), "Use only optimizers from the Nevergrad library"

    # initialize tracking tool
    if config.tracking_tool == ExperimentTrackingTool.TENSORBOARD:
        writer = SummaryWriter(config.folder, purge_step=init_iter)
    else:
        writer = importlib.import_module("mlflow")

    # set optimizer configuration and initial parameters
    optimizer.budget = config.max_iter
    optimizer.enable_pickling()

    # TODO: Make it GPU compatible if possible
    params = get_parameters(model).detach().numpy()
    ng_params = ng.p.Array(init=params)

    # serial training
    # TODO: Add a parallelization using the num_workers argument in Nevergrad
    progress = Progress(
        TextColumn("[progress.description]{task.description}"),
        BarColumn(),
        TaskProgressColumn(),
        TimeRemainingColumn(elapsed_when_finished=True),
    )
    with progress:
        dl_iter = iter(dataloader) if dataloader is not None else None

        for iteration in progress.track(range(init_iter, init_iter + config.max_iter)):
            if dataloader is None:
                loss, metrics, ng_params = _update_parameters(None, ng_params)

            elif isinstance(dataloader, (DictDataLoader, DataLoader)):
                data = next(dl_iter)  # type: ignore[arg-type]
                loss, metrics, ng_params = _update_parameters(data, ng_params)

            else:
                raise NotImplementedError("Unsupported dataloader type!")

            if iteration % config.print_every == 0 and config.verbose:
                print_metrics(loss, metrics, iteration)

            if iteration % config.write_every == 0:
                write_tracker(writer, loss, metrics, iteration, tracking_tool=config.tracking_tool)

            if iteration % config.plot_every == 0:
                plot_tracker(
                    writer,
                    model,
                    iteration,
                    config.plotting_functions,
                    tracking_tool=config.tracking_tool,
                )

            if config.folder:
                if iteration % config.checkpoint_every == 0:
                    write_checkpoint(config.folder, model, optimizer, iteration)

            if iteration >= init_iter + config.max_iter:
                break

    # writing hyperparameters
    if config.hyperparams:
        log_tracker(writer, config.hyperparams, metrics, tracking_tool=config.tracking_tool)

    if config.log_model:
        log_model_tracker(writer, model, dataloader, tracking_tool=config.tracking_tool)

    # Final writing and checkpointing
    if config.folder:
        write_checkpoint(config.folder, model, optimizer, iteration)
    write_tracker(writer, loss, metrics, iteration, tracking_tool=config.tracking_tool)

    # close tracker
    if config.tracking_tool == ExperimentTrackingTool.TENSORBOARD:
        writer.close()
    elif config.tracking_tool == ExperimentTrackingTool.MLFLOW:
        writer.end_run()

    return model, optimizer
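
A gradient-free counterpart to the gradient-based example above, sketched under the assumption that the loop is imported from its source module (here aliased as train_gradient_free) and that a Nevergrad optimizer such as ng.optimizers.NGOpt is used, with parametrization set to the number of trainable model parameters:

import torch
import nevergrad as ng
from qadence import QNN, QuantumCircuit, Z, hamiltonian_factory, hea, feature_map
from qadence.ml_tools import TrainConfig, to_dataloader
from qadence.ml_tools.parameters import num_parameters
from qadence.ml_tools.train_no_grad import train as train_gradient_free

n_qubits = 2
circuit = QuantumCircuit(n_qubits, feature_map(n_qubits), hea(n_qubits=n_qubits, depth=2))
model = QNN(circuit, hamiltonian_factory(n_qubits, detuning=Z))

def loss_fn(model: torch.nn.Module, data: torch.Tensor) -> tuple[torch.Tensor, dict]:
    x, y = data[0], data[1]
    return torch.nn.functional.mse_loss(model(x), y), {}

x = torch.linspace(0, 1, 25).reshape(-1, 1)
data = to_dataloader(x, torch.sin(x), batch_size=25, infinite=True)

optimizer = ng.optimizers.NGOpt(budget=100, parametrization=num_parameters(model))
config = TrainConfig(max_iter=100, print_every=10)
train_gradient_free(model, data, optimizer, config, loss_fn=loss_fn)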

DictDataLoader(dataloaders) dataclass

This class only holds a dictionary of DataLoaders and samples from them.
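
A sketch of the dictionary layout expected when TrainConfig.val_every is set, using the "train" and "val" keys checked by the training loop; the import path follows the source location qadence/ml_tools/data.py:

import torch
from qadence.ml_tools import to_dataloader
from qadence.ml_tools.data import DictDataLoader

x = torch.linspace(0, 1, 100).reshape(-1, 1)
y = torch.sin(x)

# 80/20 split wrapped in the "train"/"val" keys expected by the training loop
dataloader = DictDataLoader(
    {
        "train": to_dataloader(x[:80], y[:80], batch_size=20, infinite=True),
        "val": to_dataloader(x[80:], y[80:], batch_size=20, infinite=True),
    }
)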

InfiniteTensorDataset(*tensors)

Bases: IterableDataset

Randomly sample points from the first dimension of the given tensors.

Behaves like a normal torch Dataset, except that it can be sampled from as many times as needed.

Examples:

import torch
from qadence.ml_tools.data import InfiniteTensorDataset

x_data, y_data = torch.rand(5,2), torch.ones(5,1)
# The dataset accepts any number of tensors with the same batch dimension
ds = InfiniteTensorDataset(x_data, y_data)

# call `next` to get one sample from each tensor:
xs = next(iter(ds))
(tensor([0.6561, 0.0635]), tensor([1.]))

Source code in qadence/ml_tools/data.py
def __init__(self, *tensors: Tensor):
    """Randomly sample points from the first dimension of the given tensors.

    Behaves like a normal torch `Dataset` just that we can sample from it as
    many times as we want.

    Examples:
    ```python exec="on" source="above" result="json"
    import torch
    from qadence.ml_tools.data import InfiniteTensorDataset

    x_data, y_data = torch.rand(5,2), torch.ones(5,1)
    # The dataset accepts any number of tensors with the same batch dimension
    ds = InfiniteTensorDataset(x_data, y_data)

    # call `next` to get one sample from each tensor:
    xs = next(iter(ds))
    print(str(xs)) # markdown-exec: hide
    ```
    """
    self.tensors = tensors

data_to_device(xs, *args, **kwargs)

Utility method to move arbitrary data to 'device'.

Source code in qadence/ml_tools/data.py
@singledispatch
def data_to_device(xs: Any, *args: Any, **kwargs: Any) -> Any:
    """Utility method to move arbitrary data to 'device'."""
    raise ValueError(f"Unable to move {type(xs)} with input args: {args} and kwargs: {kwargs}.")

to_dataloader(*tensors, batch_size=1, infinite=False)

Convert torch tensors to an (infinite) DataLoader.

PARAMETER DESCRIPTION
*tensors

Torch tensors to use in the dataloader.

TYPE: Tensor DEFAULT: ()

batch_size

batch size of sampled tensors

TYPE: int DEFAULT: 1

infinite

if True, the dataloader will keep sampling indefinitely even after the whole dataset was sampled once

TYPE: bool DEFAULT: False

Examples:

import torch
from qadence.ml_tools import to_dataloader

(x, y, z) = [torch.rand(10) for _ in range(3)]
loader = iter(to_dataloader(x, y, z, batch_size=5, infinite=True))
print(next(loader))
print(next(loader))
print(next(loader))
[tensor([0.2418, 0.0795, 0.1999, 0.7200, 0.1064]), tensor([0.0602, 0.0151, 0.0662, 0.0123, 0.5005]), tensor([0.1959, 0.9833, 0.7598, 0.3911, 0.1083])]
[tensor([0.0648, 0.6455, 0.7995, 0.9121, 0.7288]), tensor([0.2677, 0.5181, 0.8973, 0.3108, 0.8163]), tensor([0.0913, 0.7591, 0.7432, 0.0403, 0.6998])]
[tensor([0.2418, 0.0795, 0.1999, 0.7200, 0.1064]), tensor([0.0602, 0.0151, 0.0662, 0.0123, 0.5005]), tensor([0.1959, 0.9833, 0.7598, 0.3911, 0.1083])]
Source code in qadence/ml_tools/data.py
def to_dataloader(*tensors: Tensor, batch_size: int = 1, infinite: bool = False) -> DataLoader:
    """Convert torch tensors an (infinite) Dataloader.

    Arguments:
        *tensors: Torch tensors to use in the dataloader.
        batch_size: batch size of sampled tensors
        infinite: if `True`, the dataloader will keep sampling indefinitely even after the whole
            dataset was sampled once

    Examples:

    ```python exec="on" source="above" result="json"
    import torch
    from qadence.ml_tools import to_dataloader

    (x, y, z) = [torch.rand(10) for _ in range(3)]
    loader = iter(to_dataloader(x, y, z, batch_size=5, infinite=True))
    print(next(loader))
    print(next(loader))
    print(next(loader))
    ```
    """
    ds = InfiniteTensorDataset(*tensors) if infinite else TensorDataset(*tensors)
    return DataLoader(ds, batch_size=batch_size)

QNN(circuit, observable, backend=BackendName.PYQTORCH, diff_mode=DiffMode.AD, measurement=None, noise=None, configuration=None, inputs=None, input_diff_mode=InputDiffMode.AD)

Bases: QuantumModel

Quantum neural network model for n-dimensional inputs.

Examples:

import torch
from qadence import QuantumCircuit, QNN, Z
from qadence import hea, feature_map, hamiltonian_factory, kron

# create the circuit
n_qubits, depth = 2, 4
fm = kron(
    feature_map(1, support=(0,), param="x"),
    feature_map(1, support=(1,), param="y")
)
ansatz = hea(n_qubits=n_qubits, depth=depth)
circuit = QuantumCircuit(n_qubits, fm, ansatz)
obs_base = hamiltonian_factory(n_qubits, detuning=Z)

# the QNN will yield two outputs
obs = [2.0 * obs_base, 4.0 * obs_base]

# initialize and use the model
qnn = QNN(circuit, obs, inputs=["x", "y"])
y = qnn(torch.rand(3, 2))
tensor([[1.5858, 3.1717],
        [1.4628, 2.9257],
        [1.3796, 2.7592]], grad_fn=<CatBackward0>)

Initialize the QNN.

The number of inputs is determined by the feature parameters in the input quantum circuit, while the number of outputs is determined by how many observables are provided as input.

PARAMETER DESCRIPTION
circuit

The quantum circuit to use for the QNN.

TYPE: QuantumCircuit

observable

The observable.

TYPE: list[AbstractBlock] | AbstractBlock

backend

The chosen quantum backend.

TYPE: BackendName DEFAULT: PYQTORCH

diff_mode

The differentiation engine to use. Choices are 'gpsr' or 'ad'.

TYPE: DiffMode DEFAULT: AD

measurement

optional measurement protocol. If None, use exact expectation value with a statevector simulator

TYPE: Measurements | None DEFAULT: None

noise

A noise model to use.

TYPE: Noise | None DEFAULT: None

configuration

optional configuration for the backend

TYPE: BackendConfiguration | dict | None DEFAULT: None

inputs

List that indicates the order of variables of the tensors that are passed to the model. Given input tensors xs = torch.rand(batch_size, input_size:=2) a QNN with inputs=["t", "x"] will assign t, x = xs[:,0], xs[:,1].

TYPE: list[Basic | str] | None DEFAULT: None

input_diff_mode

The differentiation mode for the input tensor.

TYPE: InputDiffMode | str DEFAULT: AD

Source code in qadence/ml_tools/models.py
def __init__(
    self,
    circuit: QuantumCircuit,
    observable: list[AbstractBlock] | AbstractBlock,
    backend: BackendName = BackendName.PYQTORCH,
    diff_mode: DiffMode = DiffMode.AD,
    measurement: Measurements | None = None,
    noise: Noise | None = None,
    configuration: BackendConfiguration | dict | None = None,
    inputs: list[sympy.Basic | str] | None = None,
    input_diff_mode: InputDiffMode | str = InputDiffMode.AD,
):
    """Initialize the QNN.

    The number of inputs is determined by the feature parameters in the input
    quantum circuit while the number of outputs is determined by how many
    observables are provided as input

    Args:
        circuit: The quantum circuit to use for the QNN.
        observable: The observable.
        backend: The chosen quantum backend.
        diff_mode: The differentiation engine to use. Choices 'gpsr' or 'ad'.
        measurement: optional measurement protocol. If None,
            use exact expectation value with a statevector simulator
        noise: A noise model to use.
        configuration: optional configuration for the backend
        inputs: List that indicates the order of variables of the tensors that are passed
            to the model. Given input tensors `xs = torch.rand(batch_size, input_size:=2)` a QNN
            with `inputs=["t", "x"]` will assign `t, x = xs[:,0], xs[:,1]`.
        input_diff_mode: The differentiation mode for the input tensor.
    """
    super().__init__(
        circuit,
        observable=observable,
        backend=backend,
        diff_mode=diff_mode,
        measurement=measurement,
        configuration=configuration,
        noise=noise,
    )
    if self._observable is None:
        raise ValueError("You need to provide at least one observable in the QNN constructor")
    if (inputs is not None) and (len(self.inputs) == len(inputs)):
        self.inputs = [sympy.symbols(x) if isinstance(x, str) else x for x in inputs]  # type: ignore[union-attr]
    elif (inputs is None) and len(self.inputs) <= 1:
        self.inputs = [sympy.symbols(x) if isinstance(x, str) else x for x in self.inputs]  # type: ignore[union-attr]
    else:
        raise ValueError(
            """
            Your QNN has more than one input. Please provide a list of inputs in the order of
            your tensor domain. For example, if you want to pass
            `xs = torch.rand(batch_size, input_size:=3)` to you QNN, where
            ```
            t = x[:,0]
            x = x[:,1]
            y = x[:,2]
            ```
            you have to specify
            ```
            QNN(circuit, observable, inputs=["t", "x", "y"])
            ```
            You can also pass a list of sympy symbols.
        """
        )
    self.format_to_dict = format_to_dict_fn(self.inputs)  # type: ignore[arg-type]
    self.input_diff_mode = InputDiffMode(input_diff_mode)
    if self.input_diff_mode == InputDiffMode.FD:
        from qadence.backends.utils import finitediff

        self.__derivative = finitediff
    elif self.input_diff_mode == InputDiffMode.AD:
        self.__derivative = _torch_derivative  # type: ignore[assignment]
    else:
        raise ValueError(f"Unkown forward diff mode: {self.input_diff_mode}")

forward(values=None, state=None, measurement=None, noise=None, endianness=Endianness.BIG)

Forward pass of the model.

This returns the (differentiable) expectation value of the given observable operator defined in the constructor. Differently from the base QuantumModel class, the QNN also accepts a tensor as input for the forward pass. The tensor is expected to have shape n_batches x in_features, where n_batches is the number of data points and in_features is the dimensionality of the problem.

The output of the forward pass is the expectation value of the input observable(s). If a single observable is given, the output shape is n_batches, while if multiple observables are given the output shape is instead n_batches x n_observables.

PARAMETER DESCRIPTION
values

the values of the feature parameters

TYPE: dict[str, Tensor] | Tensor DEFAULT: None

state

Initial state.

TYPE: Tensor | None DEFAULT: None

measurement

optional measurement protocol. If None, use exact expectation value with a statevector simulator

TYPE: Measurements | None DEFAULT: None

noise

A noise model to use.

TYPE: Noise | None DEFAULT: None

endianness

Endianness of the resulting bit strings.

TYPE: Endianness DEFAULT: BIG

RETURNS DESCRIPTION
Tensor

a tensor with the expectation value of the observables passed in the constructor of the model

TYPE: Tensor

Source code in qadence/ml_tools/models.py
def forward(
    self,
    values: dict[str, Tensor] | Tensor = None,
    state: Tensor | None = None,
    measurement: Measurements | None = None,
    noise: Noise | None = None,
    endianness: Endianness = Endianness.BIG,
) -> Tensor:
    """Forward pass of the model.

    This returns the (differentiable) expectation value of the given observable
    operator defined in the constructor. Differently from the base QuantumModel
    class, the QNN accepts also a tensor as input for the forward pass. The
    tensor is expected to have shape: `n_batches x in_features` where `n_batches`
    is the number of data points and `in_features` is the dimensionality of the problem

    The output of the forward pass is the expectation value of the input
    observable(s). If a single observable is given, the output shape is
    `n_batches` while if multiple observables are given the output shape
    is instead `n_batches x n_observables`

    Args:
        values: the values of the feature parameters
        state: Initial state.
        measurement: optional measurement protocol. If None,
            use exact expectation value with a statevector simulator
        noise: A noise model to use.
        endianness: Endianness of the resulting bit strings.

    Returns:
        Tensor: a tensor with the expectation value of the observables passed
            in the constructor of the model
    """
    return self.expectation(
        values, state=state, measurement=measurement, noise=noise, endianness=endianness
    )
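
For example, reusing the qnn built in the QNN example above (with inputs=["x", "y"]), a tensor input and the equivalent dictionary of feature values should yield the same expectation values; this is a sketch rather than output from this page:

import torch

xs = torch.rand(3, 2)
y_from_tensor = qnn(xs)                            # shape (3, 2): two observables
y_from_dict = qnn({"x": xs[:, 0], "y": xs[:, 1]})  # same values, dict input
assert torch.allclose(y_from_tensor, y_from_dict)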

from_configs(register, obs_config, fm_config=FeatureMapConfig(), ansatz_config=AnsatzConfig(), backend=BackendName.PYQTORCH, diff_mode=DiffMode.AD, measurement=None, noise=None, configuration=None, input_diff_mode=InputDiffMode.AD) classmethod

Create a QNN from a set of configurations.

PARAMETER DESCRIPTION
register

The number of qubits or a register object.

TYPE: int | Register

obs_config

The configuration(s) for the observable(s).

TYPE: list[ObservableConfig] | ObservableConfig

fm_config

The configuration for the feature map. Defaults to no feature encoding block.

TYPE: FeatureMapConfig DEFAULT: FeatureMapConfig()

ansatz_config

The configuration for the ansatz. Defaults to a single layer of hardware efficient ansatz.

TYPE: AnsatzConfig DEFAULT: AnsatzConfig()

backend

The chosen quantum backend.

TYPE: BackendName DEFAULT: PYQTORCH

diff_mode

The differentiation engine to use. Choices are 'gpsr' or 'ad'.

TYPE: DiffMode DEFAULT: AD

measurement

Optional measurement protocol. If None, use exact expectation value with a statevector simulator.

TYPE: Measurements DEFAULT: None

noise

A noise model to use.

TYPE: Noise DEFAULT: None

configuration

Optional backend configuration.

TYPE: BackendConfiguration | dict DEFAULT: None

input_diff_mode

The differentiation mode for the input tensor.

TYPE: InputDiffMode DEFAULT: AD

RETURNS DESCRIPTION
QNN

A QNN object.

RAISES DESCRIPTION
ValueError

If the observable configuration is not provided.

Example:

import torch
from qadence.ml_tools.config import AnsatzConfig, FeatureMapConfig
from qadence.ml_tools import QNN
from qadence.constructors import ObservableConfig
from qadence.operations import Z
from qadence.types import (
    AnsatzType, BackendName, BasisSet, ObservableTransform, ReuploadScaling, Strategy
)

register = 4
obs_config = ObservableConfig(
    detuning=Z,
    scale=5.0,
    shift=0.0,
    transformation_type=ObservableTransform.SCALE,
    trainable_transform=None,
)
fm_config = FeatureMapConfig(
    num_features=2,
    inputs=["x", "y"],
    basis_set=BasisSet.FOURIER,
    reupload_scaling=ReuploadScaling.CONSTANT,
    feature_range={
        "x": (-1.0, 1.0),
        "y": (0.0, 1.0),
    },
)
ansatz_config = AnsatzConfig(
    depth=2,
    ansatz_type=AnsatzType.HEA,
    ansatz_strategy=Strategy.DIGITAL,
)

qnn = QNN.from_configs(
    register, obs_config, fm_config, ansatz_config, backend=BackendName.PYQTORCH
)

x = torch.rand(2, 2)
y = qnn(x)
tensor([[0.0318],
        [0.5561]], grad_fn=<CatBackward0>)

Source code in qadence/ml_tools/models.py
@classmethod
def from_configs(
    cls,
    register: int | Register,
    obs_config: Any,
    fm_config: Any = FeatureMapConfig(),
    ansatz_config: Any = AnsatzConfig(),
    backend: BackendName = BackendName.PYQTORCH,
    diff_mode: DiffMode = DiffMode.AD,
    measurement: Measurements | None = None,
    noise: Noise | None = None,
    configuration: BackendConfiguration | dict | None = None,
    input_diff_mode: InputDiffMode | str = InputDiffMode.AD,
) -> QNN:
    """Create a QNN from a set of configurations.

    Args:
        register (int | Register): The number of qubits or a register object.
        obs_config (list[ObservableConfig] | ObservableConfig): The configuration(s)
            for the observable(s).
        fm_config (FeatureMapConfig): The configuration for the feature map.
            Defaults to no feature encoding block.
        ansatz_config (AnsatzConfig): The configuration for the ansatz.
            Defaults to a single layer of hardware efficient ansatz.
        backend (BackendName): The chosen quantum backend.
        diff_mode (DiffMode): The differentiation engine to use. Choices are
            'gpsr' or 'ad'.
        measurement (Measurements): Optional measurement protocol. If None,
            use exact expectation value with a statevector simulator.
        noise (Noise): A noise model to use.
        configuration (BackendConfiguration | dict): Optional backend configuration.
        input_diff_mode (InputDiffMode): The differentiation mode for the input tensor.

    Returns:
        A QNN object.

    Raises:
        ValueError: If the observable configuration is not provided.

    Example:
    ```python exec="on" source="material-block" result="json"
    import torch
    from qadence.ml_tools.config import AnsatzConfig, FeatureMapConfig
    from qadence.ml_tools import QNN
    from qadence.constructors import ObservableConfig
    from qadence.operations import Z
    from qadence.types import (
        AnsatzType, BackendName, BasisSet, ObservableTransform, ReuploadScaling, Strategy
    )

    register = 4
    obs_config = ObservableConfig(
        detuning=Z,
        scale=5.0,
        shift=0.0,
        transformation_type=ObservableTransform.SCALE,
        trainable_transform=None,
    )
    fm_config = FeatureMapConfig(
        num_features=2,
        inputs=["x", "y"],
        basis_set=BasisSet.FOURIER,
        reupload_scaling=ReuploadScaling.CONSTANT,
        feature_range={
            "x": (-1.0, 1.0),
            "y": (0.0, 1.0),
        },
    )
    ansatz_config = AnsatzConfig(
        depth=2,
        ansatz_type=AnsatzType.HEA,
        ansatz_strategy=Strategy.DIGITAL,
    )

    qnn = QNN.from_configs(
        register, obs_config, fm_config, ansatz_config, backend=BackendName.PYQTORCH
    )

    x = torch.rand(2, 2)
    y = qnn(x)
    print(str(y)) # markdown-exec: hide
    ```
    """
    from .constructors import build_qnn_from_configs

    return build_qnn_from_configs(
        register=register,
        observable_config=obs_config,
        fm_config=fm_config,
        ansatz_config=ansatz_config,
        backend=backend,
        diff_mode=diff_mode,
        measurement=measurement,
        noise=noise,
        configuration=configuration,
        input_diff_mode=input_diff_mode,
    )

derivative(ufa, x, derivative_indices)

Compute derivatives w.r.t. inputs of a UFA with a single output.

The derivative_indices specify which derivative(s) are computed. E.g. derivative_indices=(1,2) would compute a second-order derivative w.r.t. the indices 1 and 2 of the input tensor.

PARAMETER DESCRIPTION
ufa

The model for which we want to compute the derivative.

TYPE: Module

x

(batch_size, input_size) input tensor.

TYPE: Tensor

derivative_indices

Define which derivatives to compute.

TYPE: tuple

Examples: If we create a UFA with three inputs and denote the first, second, and third input with x, y, and z, we can compute the following derivatives w.r.t. those inputs:

import torch
from qadence.ml_tools.models import derivative, QNN
from qadence.ml_tools.config import FeatureMapConfig, AnsatzConfig
from qadence.constructors.hamiltonians import ObservableConfig
from qadence.operations import Z

fm_config = FeatureMapConfig(num_features=3, inputs=["x", "y", "z"])
ansatz_config = AnsatzConfig()
obs_config = ObservableConfig(detuning=Z)

f = QNN.from_configs(
    register=3, obs_config=obs_config, fm_config=fm_config, ansatz_config=ansatz_config,
)
inputs = torch.rand(5,3,requires_grad=True)

# df_dx
derivative(f, inputs, (0,))

# d2f_dydz
derivative(f, inputs, (1,2))

# d3fdy2dx
derivative(f, inputs, (1,1,0))

Source code in qadence/ml_tools/models.py
def derivative(ufa: torch.nn.Module, x: Tensor, derivative_indices: tuple[int, ...]) -> Tensor:
    """Compute derivatives w.r.t.

    inputs of a UFA with a single output. The
    `derivative_indices` specify which derivative(s) are computed.  E.g.
    `derivative_indices=(1,2)` would compute a second-order derivative w.r.t.
    the indices `1` and `2` of the input tensor.

    Arguments:
        ufa: The model for which we want to compute the derivative.
        x (Tensor): (batch_size, input_size) input tensor.
        derivative_indices (tuple): Define which derivatives to compute.

    Examples:
    If we create a UFA with three inputs and denote the first, second, and third
    input with `x`, `y`, and `z` we can compute the following derivatives w.r.t
    to those inputs:
    ```py exec="on" source="material-block"
    import torch
    from qadence.ml_tools.models import derivative, QNN
    from qadence.ml_tools.config import FeatureMapConfig, AnsatzConfig
    from qadence.constructors.hamiltonians import ObservableConfig
    from qadence.operations import Z

    fm_config = FeatureMapConfig(num_features=3, inputs=["x", "y", "z"])
    ansatz_config = AnsatzConfig()
    obs_config = ObservableConfig(detuning=Z)

    f = QNN.from_configs(
        register=3, obs_config=obs_config, fm_config=fm_config, ansatz_config=ansatz_config,
    )
    inputs = torch.rand(5,3,requires_grad=True)

    # df_dx
    derivative(f, inputs, (0,))

    # d2f_dydz
    derivative(f, inputs, (1,2))

    # d3fdy2dx
    derivative(f, inputs, (1,1,0))
    ```
    """
    assert ufa.out_features == 1, "Can only call `derivative` on models with 1D output."
    return ufa._derivative(x, derivative_indices)

format_to_dict_fn(inputs=[])

Format an input tensor into the format required by the forward pass.

The tensor is assumed to have dimensions n_batches x in_features, where in_features corresponds to the number of input features of the QNN.

Source code in qadence/ml_tools/models.py
def format_to_dict_fn(
    inputs: list[sympy.Symbol | str] = [],
) -> Callable[[Tensor | ParamDictType], ParamDictType]:
    """Format an input tensor into the format required by the forward pass.

    The tensor is assumed to have dimensions: n_batches x in_features where in_features
    corresponds to the number of input features of the QNN
    """
    in_features = len(inputs)

    def tensor_to_dict(values: Tensor | ParamDictType) -> ParamDictType:
        if isinstance(values, Tensor):
            values = values.reshape(-1, 1) if len(values.size()) == 1 else values
            if not values.shape[1] == in_features:
                raise ValueError(
                    f"Model expects in_features={in_features} but got {values.shape[1]}."
                )
            values = {fparam.name: values[:, inputs.index(fparam)] for fparam in inputs}  # type: ignore[union-attr]
        return values

    return tensor_to_dict
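
A small sketch of the returned converter, assuming the inputs are passed as sympy symbols (as the QNN constructor does internally before calling this helper):

import sympy
import torch
from qadence.ml_tools.models import format_to_dict_fn

t, x = sympy.symbols("t x")
to_dict = format_to_dict_fn(inputs=[t, x])

values = to_dict(torch.rand(5, 2))
# values == {"t": column 0 of the input, "x": column 1 of the input}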