Models
FFNN(layers, activation_function=nn.GELU())
Bases: Module
Standard feedforward neural network.
| PARAMETER | DESCRIPTION |
|---|---|
| `layers` | List of layer sizes. |
| `activation_function` | Activation function to use between layers. |
Source code in perceptrain/models.py
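A minimal construction sketch based on the signature above; the layer sizes are illustrative, and the import path follows the `perceptrain/models.py` source location noted here:

```python
import torch.nn as nn

from perceptrain.models import FFNN

# Hypothetical architecture: 2 inputs, two hidden layers of width 64, 1 output.
model = FFNN(layers=[2, 64, 64, 1], activation_function=nn.GELU())
```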
forward(x)
Forward pass through the neural network.
| PARAMETER | DESCRIPTION |
|---|---|
| `x` | Input tensor of shape `(batch_size, layers[0])`. |

| RETURNS | DESCRIPTION |
|---|---|
| `Tensor` | Output tensor of shape `(batch_size, layers[-1])`. |
Source code in perceptrain/models.py
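A usage sketch for the forward pass, continuing the construction example above and checking the documented shapes (the batch size of 32 is arbitrary):

```python
import torch

x = torch.rand(32, 2)       # (batch_size, layers[0])
y = model(x)                # calls forward(x)
assert y.shape == (32, 1)   # (batch_size, layers[-1])
```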
PINN(nn, equations)
Bases: Module
Physics-informed neural network.
| PARAMETER | DESCRIPTION |
|---|---|
| `nn` | Neural network module. |
| `equations` | Dictionary of equations. These are assumed in the form LHS(x) = 0, so each entry is a callable that returns the left-hand side (the residual to be driven to zero) evaluated at the input points. |
Notes
Example of equations: heat equation with a Gaussian initial condition.
```python
import torch

alpha = 1.0  # thermal diffusivity

def heat_eqn(x: torch.Tensor, model: torch.nn.Module) -> torch.Tensor:
    # x[:, 0] is time t, x[:, 1] is the spatial coordinate.
    u = model(x)
    grad_u = torch.autograd.grad(
        outputs=u,
        inputs=x,
        grad_outputs=torch.ones_like(u),
        create_graph=True,
        retain_graph=True,
    )[0]
    dudt = grad_u[:, 0]
    dudx = grad_u[:, 1]
    grad2_u = torch.autograd.grad(
        outputs=dudx,
        inputs=x,
        grad_outputs=torch.ones_like(dudx),
        create_graph=True,
        retain_graph=True,
    )[0]
    d2udx2 = grad2_u[:, 1]
    # Residual of u_t - alpha * u_xx = 0.
    return dudt - alpha * d2udx2

def initial_condition(x: torch.Tensor, model: torch.nn.Module) -> torch.Tensor:
    def gaussian(z: torch.Tensor) -> torch.Tensor:
        return torch.exp(-z**2)

    # u(t=0, x) should match the Gaussian profile.
    return model(x) - gaussian(x[:, 1])

def boundary_condition(x: torch.Tensor, model: torch.nn.Module) -> torch.Tensor:
    u = model(x)
    grad_u = torch.autograd.grad(
        outputs=u,
        inputs=x,
        grad_outputs=torch.ones_like(u),
        create_graph=True,
        retain_graph=True,
    )[0]
    # Zero-flux (Neumann) condition: du/dx = 0 at the boundary.
    return grad_u[:, 1] - torch.zeros_like(x[:, 1])

equations = {
    "pde": heat_eqn,
    "initial_condition": initial_condition,
    "boundary_condition_left": boundary_condition,
    "boundary_condition_right": boundary_condition,
}
```
Source code in perceptrain/models.py
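Continuing the example, a sketch of how the `equations` dictionary and a feedforward network could be combined into a `PINN`; the architecture is illustrative, mapping `(t, x)` pairs to a scalar `u`:

```python
import torch.nn as nn

from perceptrain.models import FFNN, PINN

net = FFNN(layers=[2, 64, 64, 1], activation_function=nn.Tanh())
pinn = PINN(net, equations)  # parameters: nn, equations
```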
forward(x)
Forward pass through the physics-informed neural network.

| PARAMETER | DESCRIPTION |
|---|---|
| `x` | Dictionary of input tensors. The keys of the dictionary should match the keys in the `equations` dictionary. |

| RETURNS | DESCRIPTION |
|---|---|
| `dict[str, Tensor]` | Dictionary of output tensors. |
Source code in perceptrain/models.py
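A calling sketch under the assumption that each output tensor is the residual of the corresponding equation evaluated at its batch of points; the sampling below (unit domain, column 0 = t, column 1 = x) is illustrative:

```python
import torch

n_interior, n_boundary = 128, 64

# Interior collocation points (t, x) in [0, 1] x [0, 1].
interior = torch.rand(n_interior, 2, requires_grad=True)
# Initial-condition points: t = 0, x random.
initial = torch.cat([torch.zeros(n_boundary, 1), torch.rand(n_boundary, 1)], dim=1).requires_grad_()
# Boundary points: x = 0 (left) and x = 1 (right), t random.
left = torch.cat([torch.rand(n_boundary, 1), torch.zeros(n_boundary, 1)], dim=1).requires_grad_()
right = torch.cat([torch.rand(n_boundary, 1), torch.ones(n_boundary, 1)], dim=1).requires_grad_()

inputs = {
    "pde": interior,
    "initial_condition": initial,
    "boundary_condition_left": left,
    "boundary_condition_right": right,
}
outputs = pinn(inputs)  # dict[str, Tensor], keyed like `equations`
loss = sum(res.pow(2).mean() for res in outputs.values())
```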
QNN()
Bases: QuantumModel
A specialized quantum neural network that extends QuantumModel.
You can define additional layers, parameters, and logic specific to your quantum model here.
forward(x)
The forward pass for the quantum neural network.
Replace with your actual quantum circuit logic if you have a quantum simulator or hardware integration. This example just passes x through a classical linear layer.
Source code in perceptrain/models.py
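Since the description above says the example forward just passes `x` through a classical linear layer, here is a subclass sketch along those lines; the class name, layer sizes, and linear placeholder are illustrative, not the library's actual implementation:

```python
import torch
import torch.nn as nn

from perceptrain.models import QuantumModel


class MyQNN(QuantumModel):
    """Placeholder QNN: swap the linear layer for a real quantum circuit evaluation."""

    def __init__(self, in_features: int = 4, out_features: int = 1) -> None:
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Replace with calls to a quantum simulator or hardware backend.
        return self.linear(x)
```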
QuantumModel()
Bases: Module
Base class for any quantum-based model.
Inherits from nn.Module. Subclasses should implement a forward method that handles quantum logic.