# Callbacks

## Callback

`Callback(on='idle', called_every=1, callback=None, callback_condition=None, modify_optimize_result=None)`

Base class for defining various training callbacks.
| Attribute | Description |
|---|---|
| `on` | The event on which to trigger the callback. Must be one of: `"train_start"`, `"train_end"`, `"train_epoch_start"`, `"train_epoch_end"`, `"train_batch_start"`, `"train_batch_end"`, `"val_epoch_start"`, `"val_epoch_end"`, `"val_batch_start"`, `"val_batch_end"`, `"test_batch_start"`, `"test_batch_end"`. |
| `called_every` | Frequency of callback calls, in iterations. |
| `callback` | The function to call if the condition is met. |
| `callback_condition` | Condition to check before calling. |
| `modify_optimize_result` | Function to modify the result of the optimize step. |
A callback can be defined in two ways:

- By providing a callback function directly in the base class. This is useful for simple callbacks that don't require subclassing:

```python
from perceptrain.callbacks import Callback

def custom_callback_function(trainer, config, writer):
    print("Custom callback executed.")

custom_callback = Callback(
    on="train_end",
    called_every=5,
    callback=custom_callback_function
)
```

- By inheriting and implementing the `run_callback` method. This is suitable for more complex callbacks that require customization:

```python
from perceptrain.callbacks import Callback

class CustomCallback(Callback):
    def run_callback(self, trainer, config, writer):
        print("Custom behavior in the inherited run_callback method.")

custom_callback = CustomCallback(on="train_end", called_every=10)
```
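The remaining constructor arguments can make a plain callback conditional. A minimal sketch, assuming `callback_condition` receives the same `(trainer, config, writer)` arguments as the callback itself and returns a bool (this signature, and the `trainer.current_epoch` attribute used below, are assumptions; check `perceptrain/callbacks/callback.py` for the exact contract):

```python
from perceptrain.callbacks import Callback

# Hypothetical condition: only fire once training has progressed
# past epoch 1000 (`trainer.current_epoch` is an assumed attribute).
def past_warmup(trainer, config, writer):
    return trainer.current_epoch > 1000

conditional_callback = Callback(
    on="train_epoch_end",
    called_every=10,
    callback=lambda trainer, config, writer: print("Past warmup."),
    callback_condition=past_warmup,
)
```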
### `on` `property` `writable`

Returns the `TrainingStage`.

| Returns | Description |
|---|---|
| `TrainingStage` | The `TrainingStage` for the callback. |
### `__call__(when, trainer, config, writer)`

Executes the callback if conditions are met.

| Parameter | Description |
|---|---|
| `when` | The event when the callback is triggered. |
| `trainer` | The training object. |
| `config` | The configuration object. |
| `writer` | The writer object for logging. |

| Returns | Description |
|---|---|
| `Any` | Result of the callback function if executed. |
### `run_callback(trainer, config, writer)`

Executes the defined callback.

| Parameter | Description |
|---|---|
| `trainer` | The training object. |
| `config` | The configuration object. |
| `writer` | The writer object for logging. |

| Returns | Description |
|---|---|
| `Any` | Result of the callback execution. |

| Raises | Description |
|---|---|
| `NotImplementedError` | If not implemented in subclasses. |
## EarlyStopping

`EarlyStopping(on, called_every, monitor, patience=5, mode='min')`

Bases: `Callback`

Stops training when a monitored metric has not improved for a specified number of epochs.

This callback monitors a specified metric (e.g., validation loss or accuracy). If the metric does not improve for a given patience period, training is stopped.
Example usage in `TrainConfig`: to use `EarlyStopping`, include it in the `callbacks` list when setting up your `TrainConfig`:

```python
from perceptrain import TrainConfig
from perceptrain.callbacks import EarlyStopping

# Create an instance of the EarlyStopping callback
early_stopping = EarlyStopping(
    on="val_epoch_end",
    called_every=1,
    monitor="val_loss",
    patience=5,
    mode="min",
)

config = TrainConfig(
    max_iter=10000,
    print_every=1000,
    callbacks=[early_stopping]
)
```
Initializes the EarlyStopping callback.

| Parameter | Description |
|---|---|
| `on` | The event to trigger the callback (e.g., `"val_epoch_end"`). |
| `called_every` | Frequency of callback calls, in iterations. |
| `monitor` | The metric to monitor (e.g., `"val_loss"` or `"train_loss"`). All metrics returned by the optimize step are available to monitor; prefix the metric name with `"val_"` or `"train_"`. |
| `patience` | Number of iterations to wait for improvement. Default is 5. |
| `mode` | Whether to minimize (`"min"`) or maximize (`"max"`) the metric. Default is `"min"`. |
### `run_callback(trainer, config, writer)`

Monitors the metric and stops training if no improvement is observed.

| Parameter | Description |
|---|---|
| `trainer` | The training object. |
| `config` | The configuration object. |
| `writer` | The writer object for logging. |
## GradientMonitoring

`GradientMonitoring(on, called_every=1)`

Bases: `Callback`

Logs gradient statistics (e.g., mean, standard deviation, max) during training.

This callback monitors and logs statistics about the gradients of the model parameters to help debug or optimize the training process.
Example usage in `TrainConfig`: to use `GradientMonitoring`, include it in the `callbacks` list when setting up your `TrainConfig`:

```python
from perceptrain import TrainConfig
from perceptrain.callbacks import GradientMonitoring

# Create an instance of the GradientMonitoring callback
gradient_monitoring = GradientMonitoring(on="train_batch_end", called_every=10)

config = TrainConfig(
    max_iter=10000,
    print_every=1000,
    callbacks=[gradient_monitoring]
)
```
Initializes the GradientMonitoring callback.

| Parameter | Description |
|---|---|
| `on` | The event to trigger the callback (e.g., `"train_batch_end"`). |
| `called_every` | Frequency of callback calls, in iterations. |
### `run_callback(trainer, config, writer)`

Logs gradient statistics.

| Parameter | Description |
|---|---|
| `trainer` | The training object. |
| `config` | The configuration object. |
| `writer` | The writer object for logging. |
## LRSchedulerCosineAnnealing

`LRSchedulerCosineAnnealing(on, called_every, t_max, min_lr=0.0)`

Bases: `Callback`

Applies cosine annealing to the learning rate during training.

This callback decreases the learning rate following a cosine curve, starting from the initial learning rate and annealing to a minimum (`min_lr`).
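Assuming the standard cosine annealing rule (the exact implementation may differ in details; this is only the schedule the description suggests), the learning rate at iteration $t$ is

$$
\mathrm{lr}(t) = \mathrm{min\_lr} + \frac{1}{2}\left(\mathrm{lr}_0 - \mathrm{min\_lr}\right)\left(1 + \cos\frac{\pi t}{t_{\mathrm{max}}}\right),
$$

where $\mathrm{lr}_0$ is the initial learning rate and $t_{\mathrm{max}}$ is the `t_max` cycle length.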
Example usage in `TrainConfig`: to use `LRSchedulerCosineAnnealing`, include it in the `callbacks` list when setting up your `TrainConfig`:

```python
from perceptrain import TrainConfig
from perceptrain.callbacks import LRSchedulerCosineAnnealing

# Create an instance of the LRSchedulerCosineAnnealing callback
lr_cosine = LRSchedulerCosineAnnealing(
    on="train_batch_end",
    called_every=1,
    t_max=5000,
    min_lr=1e-6,
)

config = TrainConfig(
    max_iter=10000,
    # Print metrics every 1000 training epochs
    print_every=1000,
    # Add the custom callback
    callbacks=[lr_cosine]
)
```
Initializes the LRSchedulerCosineAnnealing callback.

| Parameter | Description |
|---|---|
| `on` | The event to trigger the callback. |
| `called_every` | Frequency of callback calls, in iterations. |
| `t_max` | The total number of iterations for one annealing cycle. |
| `min_lr` | The minimum learning rate. Default is 0.0. |
### `run_callback(trainer, config, writer)`

Adjusts the learning rate using cosine annealing.

| Parameter | Description |
|---|---|
| `trainer` | The training object. |
| `config` | The configuration object. |
| `writer` | The writer object for logging. |
## LRSchedulerCyclic

`LRSchedulerCyclic(on, called_every, base_lr, max_lr, step_size)`

Bases: `Callback`

Applies a cyclic learning rate schedule during training.

This callback oscillates the learning rate between a minimum (`base_lr`) and a maximum (`max_lr`) over a defined cycle length (`step_size`). The learning rate follows a triangular wave pattern.
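For reference, the standard triangular policy (Smith, 2017) matching this description is, assuming the implementation follows it exactly,

$$
x = \left|\frac{t}{\mathrm{step\_size}} - 2\left\lfloor 1 + \frac{t}{2\,\mathrm{step\_size}}\right\rfloor + 1\right|,
\qquad
\mathrm{lr}(t) = \mathrm{base\_lr} + \left(\mathrm{max\_lr} - \mathrm{base\_lr}\right)\max(0,\, 1 - x),
$$

which rises linearly from `base_lr` to `max_lr` over `step_size` iterations, then falls back symmetrically.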
Example usage in `TrainConfig`: to use `LRSchedulerCyclic`, include it in the `callbacks` list when setting up your `TrainConfig`:

```python
from perceptrain import TrainConfig
from perceptrain.callbacks import LRSchedulerCyclic

# Create an instance of the LRSchedulerCyclic callback
lr_cyclic = LRSchedulerCyclic(
    on="train_batch_end",
    called_every=1,
    base_lr=0.001,
    max_lr=0.01,
    step_size=2000,
)

config = TrainConfig(
    max_iter=10000,
    # Print metrics every 1000 training epochs
    print_every=1000,
    # Add the custom callback
    callbacks=[lr_cyclic]
)
```
Initializes the LRSchedulerCyclic callback.

| Parameter | Description |
|---|---|
| `on` | The event to trigger the callback. |
| `called_every` | Frequency of callback calls, in iterations. |
| `base_lr` | The minimum learning rate. |
| `max_lr` | The maximum learning rate. |
| `step_size` | Number of iterations for half a cycle. |
### `run_callback(trainer, config, writer)`

Adjusts the learning rate cyclically.

| Parameter | Description |
|---|---|
| `trainer` | The training object. |
| `config` | The configuration object. |
| `writer` | The writer object for logging. |
## LRSchedulerReduceOnPlateau

`LRSchedulerReduceOnPlateau(on, called_every=1, monitor='train_loss', patience=20, mode='min', gamma=0.5, threshold=0.0001, min_lr=1e-06, verbose=True)`

Bases: `Callback`

Reduces the learning rate when a given metric reaches a plateau.

This callback decreases the learning rate by a factor `gamma` when a given metric does not improve by more than a given threshold for a given number of epochs, until a minimum learning rate is reached.
Example usage in `TrainConfig`: to use `LRSchedulerReduceOnPlateau`, include it in the `callbacks` list when setting up your `TrainConfig`:

```python
from perceptrain import TrainConfig
from perceptrain.callbacks import LRSchedulerReduceOnPlateau

# Create an instance of the LRSchedulerReduceOnPlateau callback
lr_plateau = LRSchedulerReduceOnPlateau(
    on="train_epoch_end",
    called_every=1,
    monitor="train_loss",
    patience=20,
    mode="min",
    gamma=0.5,
    threshold=1e-4,
    min_lr=1e-5,
)

config = TrainConfig(
    max_iter=10000,
    # Print metrics every 1000 training epochs
    print_every=1000,
    # Add the custom callback
    callbacks=[lr_plateau]
)
```
Initializes the LRSchedulerReduceOnPlateau callback.

| Parameter | Description |
|---|---|
| `on` | The event to trigger the callback. |
| `called_every` | Frequency of callback calls, in iterations. Default is 1. |
| `monitor` | The metric to monitor (e.g., `"val_loss"` or `"train_loss"`). All metrics returned by the optimize step are available to monitor; prefix the metric name with `"val_"` or `"train_"`. Default is `"train_loss"`. |
| `mode` | Whether to minimize (`"min"`) or maximize (`"max"`) the metric. Default is `"min"`. |
| `patience` | Number of allowed iterations with no loss improvement before reducing the learning rate. Default is 20. |
| `gamma` | The decay factor applied to the learning rate. A value < 1 reduces the learning rate over time. Default is 0.5. |
| `threshold` | Amount by which the loss must improve to count as an improvement. Default is 1e-4. |
| `min_lr` | Minimum learning rate past which no further reduction is applied. Default is 1e-6. |
| `verbose` | If True, the logger prints when the learning rate decreases or reaches the minimum (INFO level). |
### `run_callback(trainer, config, writer)`

Reduces the learning rate when the loss reaches a plateau.

| Parameter | Description |
|---|---|
| `trainer` | The training object. |
| `config` | The configuration object. |
| `writer` | The writer object for logging. |
## LRSchedulerStepDecay

`LRSchedulerStepDecay(on, called_every, gamma=0.5)`

Bases: `Callback`

Reduces the learning rate by a factor at regular intervals.

This callback adjusts the learning rate by multiplying it with a decay factor after a specified number of iterations. The learning rate is updated as `lr = lr * gamma`.
Example usage in `TrainConfig`: to use `LRSchedulerStepDecay`, include it in the `callbacks` list when setting up your `TrainConfig`:

```python
from perceptrain import TrainConfig
from perceptrain.callbacks import LRSchedulerStepDecay

# Create an instance of the LRSchedulerStepDecay callback
lr_step_decay = LRSchedulerStepDecay(
    on="train_epoch_end",
    called_every=100,
    gamma=0.5,
)

config = TrainConfig(
    max_iter=10000,
    # Print metrics every 1000 training epochs
    print_every=1000,
    # Add the custom callback
    callbacks=[lr_step_decay]
)
```
Initializes the LRSchedulerStepDecay callback.

| Parameter | Description |
|---|---|
| `on` | The event to trigger the callback. |
| `called_every` | Frequency of callback calls, in iterations. |
| `gamma` | The decay factor applied to the learning rate. A value < 1 reduces the learning rate over time. Default is 0.5. |
### `run_callback(trainer, config, writer)`

Runs the callback to apply step decay to the learning rate.

| Parameter | Description |
|---|---|
| `trainer` | The training object. |
| `config` | The configuration object. |
| `writer` | The writer object for logging. |
## LivePlotMetrics

`LivePlotMetrics(on, called_every, arrange=None)`

Bases: `Callback`

Callback to follow metrics on screen during training.

It uses `livelossplot` to update losses and metrics at every call and plot them via matplotlib.
Initializes the callback.

| Parameter | Description |
|---|---|
| `on` | The event to trigger the callback. |
| `called_every` | Frequency of callback calls, in iterations. |
| `arrange` | How metrics are arranged into subplots. Each entry is a different group, which corresponds to a different subplot. If None, all metrics are plotted in a single subplot. Defaults to None. |
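A usage sketch following the pattern of the other callbacks. The `arrange` value shown (a list of metric-name groups) is an assumption about its expected format; check `perceptrain/callbacks/callback.py` for the exact type:

```python
from perceptrain import TrainConfig
from perceptrain.callbacks import LivePlotMetrics

# Plot live metrics every 10 epochs; the grouping below assumes
# `arrange` takes lists of metric names, one list per subplot.
live_plot = LivePlotMetrics(
    on="train_epoch_end",
    called_every=10,
    arrange=[["train_loss", "val_loss"]],
)

config = TrainConfig(
    max_iter=10000,
    callbacks=[live_plot],
)
```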
## LoadCheckpoint

`LoadCheckpoint(on='idle', called_every=1, callback=None, callback_condition=None, modify_optimize_result=None)`

Bases: `Callback`

Callback to load a model checkpoint.
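A usage sketch following the same `TrainConfig` pattern as the other callbacks (triggering on `"train_start"` is an assumption about typical use, not something this page specifies):

```python
from perceptrain import TrainConfig
from perceptrain.callbacks import LoadCheckpoint

# Load the latest checkpoint before training begins (assumed trigger).
load_checkpoint = LoadCheckpoint(on="train_start", called_every=1)

config = TrainConfig(
    max_iter=10000,
    callbacks=[load_checkpoint],
)
```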
### `run_callback(trainer, config, writer)`

Loads a model checkpoint.

| Parameter | Description |
|---|---|
| `trainer` | The training object. |
| `config` | The configuration object. |
| `writer` | The writer object for logging. |

| Returns | Description |
|---|---|
| `Any` | The result of loading the checkpoint. |
## LogHyperparameters

`LogHyperparameters(on='idle', called_every=1, callback=None, callback_condition=None, modify_optimize_result=None)`

Bases: `Callback`

Callback to log hyperparameters using the writer.

The `LogHyperparameters` callback can be added to the `TrainConfig` `callbacks` as a custom user-defined callback.
Example usage in `TrainConfig`: to use `LogHyperparameters`, include it in the `callbacks` list when setting up your `TrainConfig`:

```python
from perceptrain import TrainConfig
from perceptrain.callbacks import LogHyperparameters

# Create an instance of the LogHyperparameters callback
log_hyper_callback = LogHyperparameters(on="val_batch_end", called_every=100)

config = TrainConfig(
    max_iter=10000,
    # Print metrics every 1000 training epochs
    print_every=1000,
    # Add the custom callback that runs every 100 val_batch_end events
    callbacks=[log_hyper_callback]
)
```
### `run_callback(trainer, config, writer)`

Logs hyperparameters using the writer.

| Parameter | Description |
|---|---|
| `trainer` | The training object. |
| `config` | The configuration object. |
| `writer` | The writer object for logging. |
## LogModelTracker

`LogModelTracker(on='idle', called_every=1, callback=None, callback_condition=None, modify_optimize_result=None)`

Bases: `Callback`

Callback to log the model using the writer.
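A usage sketch mirroring the other callbacks' examples (logging once at `"train_end"` is an assumption; any documented event should work):

```python
from perceptrain import TrainConfig
from perceptrain.callbacks import LogModelTracker

# Log the final model once training has finished (assumed trigger).
log_model = LogModelTracker(on="train_end", called_every=1)

config = TrainConfig(
    max_iter=10000,
    callbacks=[log_model],
)
```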
### `run_callback(trainer, config, writer)`

Logs the model using the writer.

| Parameter | Description |
|---|---|
| `trainer` | The training object. |
| `config` | The configuration object. |
| `writer` | The writer object for logging. |
## PrintMetrics

`PrintMetrics(on='idle', called_every=1, callback=None, callback_condition=None, modify_optimize_result=None)`

Bases: `Callback`

Callback to print metrics using the writer.

The `PrintMetrics` callback can be added to the `TrainConfig` callbacks as a custom user-defined callback.
Example usage in `TrainConfig`: to use `PrintMetrics`, include it in the `callbacks` list when setting up your `TrainConfig`:

```python
from perceptrain import TrainConfig
from perceptrain.callbacks import PrintMetrics

# Create an instance of the PrintMetrics callback
print_metrics_callback = PrintMetrics(on="val_batch_end", called_every=100)

config = TrainConfig(
    max_iter=10000,
    # Print metrics every 1000 training epochs
    print_every=1000,
    # Add the custom callback that runs every 100 val_batch_end events
    callbacks=[print_metrics_callback]
)
```
### `run_callback(trainer, config, writer)`

Prints metrics using the writer.

| Parameter | Description |
|---|---|
| `trainer` | The training object. |
| `config` | The configuration object. |
| `writer` | The writer object for logging. |
## R3Sampling

`R3Sampling(initial_dataset, fitness_function, verbose=False, called_every=1)`

Bases: `Callback`

Callback for R3 sampling (https://arxiv.org/abs/2207.02338).
| Parameter | Description |
|---|---|
| `initial_dataset` | The dataset, updated according to the R3 scheme. |
| `fitness_function` | The function to compute fitness scores for samples. Based on the fitness scores, samples are retained or released. |
| `verbose` | Whether to print the callback's summary. Defaults to False. |
| `called_every` | Every how many events the callback is called. Defaults to 1. |
Notes:

- R3 sampling was developed as a technique for efficient sampling of physics-informed neural networks (PINNs). In this case, the fitness function can be any function of the residuals of the equations.

Example: learning a harmonic oscillator with PINNs and R3 sampling. For a well-posed problem, also add the two initial conditions.
```python
import torch

from perceptrain.callbacks import R3Sampling
# R3Dataset and PINN are assumed importable from perceptrain; their exact
# module paths are not shown on this page.

m = 1.0
k = 1.0

def uniform_1d(n: int) -> torch.Tensor:
    return torch.rand(size=(n, 1))

def harmonic_oscillator(x: torch.Tensor, model: torch.nn.Module) -> torch.Tensor:
    u = model(x)
    dudt = torch.autograd.grad(
        outputs=u,
        inputs=x,
        grad_outputs=torch.ones_like(u),
        create_graph=True,
        retain_graph=True,
    )[0]
    d2udt2 = torch.autograd.grad(
        outputs=dudt,
        inputs=x,
        grad_outputs=torch.ones_like(dudt),
        create_graph=True,  # keep the graph so the residual stays differentiable
    )[0]
    # Residual of the harmonic oscillator ODE: m * u'' + k * u = 0
    return m * d2udt2 + k * u

def fitness_function(x: torch.Tensor, model: "PINN") -> torch.Tensor:
    # Per-sample L2 norm of the ODE residual
    return torch.linalg.vector_norm(harmonic_oscillator(x, model.nn), ord=2, dim=1)

dataset = R3Dataset(
    proba_dist=uniform_1d,
    n_samples=20,
    release_threshold=1.0,
)

callback_r3 = R3Sampling(
    initial_dataset=dataset,
    fitness_function=fitness_function,
    called_every=10,
)
```
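The callback is then passed to the trainer through `TrainConfig`, as with the other callbacks on this page:

```python
from perceptrain import TrainConfig

config = TrainConfig(
    max_iter=10000,
    callbacks=[callback_r3],
)
```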
### `run_callback(trainer, config, writer)`

Runs the callback: computes fitness scores for the samples and triggers the dataset update.

| Parameter | Description |
|---|---|
| `trainer` | The trainer instance. |
| `config` | The training configuration. |
| `writer` | The writer instance. |
## SaveBestCheckpoint

`SaveBestCheckpoint(on, called_every)`

Bases: `SaveCheckpoint`

Callback to save the best model checkpoint based on a validation criterion.
Initializes the SaveBestCheckpoint callback.

| Parameter | Description |
|---|---|
| `on` | The event to trigger the callback. |
| `called_every` | Frequency of callback calls, in iterations. |
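For instance (triggering on `"val_epoch_end"` is an assumption, chosen here because the callback tracks a validation criterion):

```python
from perceptrain import TrainConfig
from perceptrain.callbacks import SaveBestCheckpoint

# Keep the checkpoint with the best validation loss seen so far (assumed trigger).
save_best = SaveBestCheckpoint(on="val_epoch_end", called_every=1)

config = TrainConfig(
    max_iter=10000,
    callbacks=[save_best],
)
```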
### `run_callback(trainer, config, writer)`

Saves the checkpoint if the current loss is better than the best loss.

| Parameter | Description |
|---|---|
| `trainer` | The training object. |
| `config` | The configuration object. |
| `writer` | The writer object for logging. |
## SaveCheckpoint

`SaveCheckpoint(on='idle', called_every=1, callback=None, callback_condition=None, modify_optimize_result=None)`

Bases: `Callback`

Callback to save a model checkpoint.

The `SaveCheckpoint` callback can be added to the `TrainConfig` `callbacks` as a custom user-defined callback.
Example usage in `TrainConfig`: to use `SaveCheckpoint`, include it in the `callbacks` list when setting up your `TrainConfig`:

```python
from perceptrain import TrainConfig
from perceptrain.callbacks import SaveCheckpoint

# Create an instance of the SaveCheckpoint callback
save_checkpoint_callback = SaveCheckpoint(on="val_batch_end", called_every=100)

config = TrainConfig(
    max_iter=10000,
    # Print metrics every 1000 training epochs
    print_every=1000,
    # Add the custom callback that runs every 100 val_batch_end events
    callbacks=[save_checkpoint_callback]
)
```
### `run_callback(trainer, config, writer)`

Saves a model checkpoint.

| Parameter | Description |
|---|---|
| `trainer` | The training object. |
| `config` | The configuration object. |
| `writer` | The writer object for logging. |
## WriteMetrics

`WriteMetrics(on='idle', called_every=1, callback=None, callback_condition=None, modify_optimize_result=None)`

Bases: `Callback`

Callback to write metrics using the writer.

The `WriteMetrics` callback can be added to the `TrainConfig` callbacks as a custom user-defined callback.
Example usage in `TrainConfig`: to use `WriteMetrics`, include it in the `callbacks` list when setting up your `TrainConfig`:

```python
from perceptrain import TrainConfig
from perceptrain.callbacks import WriteMetrics

# Create an instance of the WriteMetrics callback
write_metrics_callback = WriteMetrics(on="val_batch_end", called_every=100)

config = TrainConfig(
    max_iter=10000,
    # Print metrics every 1000 training epochs
    print_every=1000,
    # Add the custom callback that runs every 100 val_batch_end events
    callbacks=[write_metrics_callback]
)
```
### `run_callback(trainer, config, writer)`

Writes metrics using the writer.

| Parameter | Description |
|---|---|
| `trainer` | The training object. |
| `config` | The configuration object. |
| `writer` | The writer object for logging. |
## WritePlots

`WritePlots(on='idle', called_every=1, callback=None, callback_condition=None, modify_optimize_result=None)`

Bases: `Callback`

Callback to plot metrics using the writer.

The `WritePlots` callback can be added to the `TrainConfig` callbacks as a custom user-defined callback.
Example usage in `TrainConfig`: to use `WritePlots`, include it in the `callbacks` list when setting up your `TrainConfig`:

```python
from perceptrain import TrainConfig
from perceptrain.callbacks import WritePlots

# Create an instance of the WritePlots callback
plot_metrics_callback = WritePlots(on="val_batch_end", called_every=100)

config = TrainConfig(
    max_iter=10000,
    # Print metrics every 1000 training epochs
    print_every=1000,
    # Add the custom callback that runs every 100 val_batch_end events
    callbacks=[plot_metrics_callback]
)
```
### `run_callback(trainer, config, writer)`

Plots metrics using the writer.

| Parameter | Description |
|---|---|
| `trainer` | The training object. |
| `config` | The configuration object. |
| `writer` | The writer object for logging. |
## BaseWriter

Bases: `ABC`

Abstract base class for experiment tracking writers.

| Method | Description |
|---|---|
| `open` | Opens the writer and sets up the logging environment. |
| `close` | Closes the writer and finalizes any ongoing logging processes. |
| `print_metrics` | Prints metrics and loss in a formatted manner. |
| `write` | Writes the optimization results to the tracking tool. |
| `log_hyperparams` | Logs the hyperparameters to the tracking tool. |
| `plot` | Logs model plots using provided plotting functions. |
| `log_model` | Logs the model and any relevant information. |
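A custom tracking backend can be added by subclassing `BaseWriter` and implementing the abstract methods. A minimal sketch (the no-op bodies, the argument handling, and the `perceptrain.callbacks.writer_registry` import path inferred from the source-file references on this page are illustrative assumptions):

```python
from perceptrain.callbacks.writer_registry import BaseWriter

class StdoutWriter(BaseWriter):
    # Toy writer that sends everything to stdout (illustrative only).

    def open(self, config, iteration=None):
        print(f"Logging opened at iteration {iteration}.")
        return self

    def close(self):
        print("Logging closed.")

    def write(self, iteration, metrics):
        line = " ".join(f"{name}={value:.4g}" for name, value in metrics.items())
        print(f"[{iteration}] {line}")

    def log_hyperparams(self, hyperparams):
        print(f"Hyperparameters: {hyperparams}")

    def plot(self, model, iteration, plotting_functions):
        pass  # no plotting backend for stdout

    def log_model(self, model, train_dataloader=None, val_dataloader=None, test_dataloader=None):
        pass  # nothing to persist in this toy example
```

`print_metrics` already has a default implementation in `BaseWriter`, so only the abstract methods need overriding.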
### `close()` `abstractmethod`

Closes the writer and finalizes any ongoing logging processes.
### `log_hyperparams(hyperparams)` `abstractmethod`

Logs hyperparameters.

| Parameter | Description |
|---|---|
| `hyperparams` | A dictionary of hyperparameters to log. |
### `log_model(model, train_dataloader=None, val_dataloader=None, test_dataloader=None)` `abstractmethod`

Logs the model and associated data.

| Parameter | Description |
|---|---|
| `model` | The model to log. |
| `train_dataloader` | DataLoader for training data. |
| `val_dataloader` | DataLoader for validation data. |
| `test_dataloader` | DataLoader for testing data. |
### `open(config, iteration=None)` `abstractmethod`

Opens the writer and prepares it for logging.

| Parameter | Description |
|---|---|
| `config` | Configuration object containing settings for logging. |
| `iteration` | The iteration step to start logging from. Defaults to None. |
### `plot(model, iteration, plotting_functions)` `abstractmethod`

Logs plots of the model using provided plotting functions.

| Parameter | Description |
|---|---|
| `model` | The model to plot. |
| `iteration` | The current iteration number. |
| `plotting_functions` | Functions used to generate plots. |
### `print_metrics(result)`

Prints the metrics and loss in a readable format.

| Parameter | Description |
|---|---|
| `result` | The optimization results to display. |
### `write(iteration, metrics)` `abstractmethod`

Logs the results of the current iteration.

| Parameter | Description |
|---|---|
| `iteration` | The current training iteration. |
| `metrics` | A dictionary of metrics to log, where keys are metric names and values are the corresponding metric values. |
## MLFlowWriter

`MLFlowWriter()`

Bases: `BaseWriter`

Writer for logging to MLflow.

| Attribute | Description |
|---|---|
| `run` | The active MLflow run. |
| `mlflow` | The MLflow module. |
### `close()`
### `get_signature_from_dataloader(model, dataloader)`

Infers the signature of the model based on the input data from the dataloader.

| Parameter | Description |
|---|---|
| `model` | The model to use for inference. |
| `dataloader` | DataLoader for model inputs. |

| Returns | Description |
|---|---|
| `Optional[Any]` | The inferred signature, if available. |
### `log_hyperparams(hyperparams)`

Logs hyperparameters to MLflow.

| Parameter | Description |
|---|---|
| `hyperparams` | A dictionary of hyperparameters to log. |
### `log_model(model, train_dataloader=None, val_dataloader=None, test_dataloader=None)`

Logs the model and its signature to MLflow using the provided data loaders.

| Parameter | Description |
|---|---|
| `model` | The model to log. |
| `train_dataloader` | DataLoader for training data. |
| `val_dataloader` | DataLoader for validation data. |
| `test_dataloader` | DataLoader for testing data. |
### `open(config, iteration=None)`

Opens the MLflow writer and initializes an MLflow run.

| Parameter | Description |
|---|---|
| `config` | Configuration object containing settings for logging. |
| `iteration` | The iteration step to start logging from. Defaults to None. |

| Returns | Description |
|---|---|
| `mlflow` | The MLflow module instance. |
### `plot(model, iteration, plotting_functions)`

Logs plots of the model using provided plotting functions.

| Parameter | Description |
|---|---|
| `model` | The model to plot. |
| `iteration` | The current iteration number. |
| `plotting_functions` | Functions used to generate plots. |
### `write(iteration, metrics)`

Logs the results of the current iteration to MLflow.

| Parameter | Description |
|---|---|
| `iteration` | The current training iteration. |
| `metrics` | A dictionary of metrics to log, where keys are metric names and values are the corresponding metric values. |
## TensorBoardWriter

`TensorBoardWriter()`

Bases: `BaseWriter`

Writer for logging to TensorBoard.

| Attribute | Description |
|---|---|
| `writer` | The TensorBoard SummaryWriter instance. |

### `close()`
### `log_hyperparams(hyperparams)`

Logs hyperparameters to TensorBoard.

| Parameter | Description |
|---|---|
| `hyperparams` | A dictionary of hyperparameters to log. |
### `log_model(model, train_dataloader=None, val_dataloader=None, test_dataloader=None)`

Logs the model. Currently not supported by TensorBoard.

| Parameter | Description |
|---|---|
| `model` | The model to log. |
| `train_dataloader` | DataLoader for training data. |
| `val_dataloader` | DataLoader for validation data. |
| `test_dataloader` | DataLoader for testing data. |
### `open(config, iteration=None)`

Opens the TensorBoard writer.

| Parameter | Description |
|---|---|
| `config` | Configuration object containing settings for logging. |
| `iteration` | The iteration step to start logging from. Defaults to None. |

| Returns | Description |
|---|---|
| `SummaryWriter` | The initialized TensorBoard writer. |
### `plot(model, iteration, plotting_functions)`

Logs plots of the model using provided plotting functions.

| Parameter | Description |
|---|---|
| `model` | The model to plot. |
| `iteration` | The current iteration number. |
| `plotting_functions` | Functions used to generate plots. |
### `write(iteration, metrics)`

Logs the results of the current iteration to TensorBoard.

| Parameter | Description |
|---|---|
| `iteration` | The current training iteration. |
| `metrics` | A dictionary of metrics to log, where keys are metric names and values are the corresponding metric values. |
## get_writer

`get_writer(tracking_tool)`

Factory method to get the appropriate writer based on the tracking tool.

| Parameter | Description |
|---|---|
| `tracking_tool` | The experiment tracking tool to use. |

| Returns | Description |
|---|---|
| `BaseWriter` | An instance of the appropriate writer. |
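For example (the accepted `tracking_tool` values are not listed on this page; given the two writers above, identifiers naming TensorBoard and MLflow are a reasonable assumption, so check `writer_registry.py` for the actual enum or strings):

```python
from perceptrain import TrainConfig
from perceptrain.callbacks.writer_registry import get_writer

config = TrainConfig(max_iter=10000)

# "tensorboard" is an assumed identifier; verify against writer_registry.py.
writer = get_writer("tensorboard")
writer.open(config)
writer.write(iteration=100, metrics={"train_loss": 0.42})
writer.close()
```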