Example 1: GrIND
In this example we will show how to train the GrIND model on the Burgers’ equation, a fundamental partial differential equation in fluid dynamics that describes the evolution of a fluid flow in one dimension. The equation is given by:

\[
\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2},
\]

where \(u\) is the velocity field, \(t\) is time, \(x\) is the spatial coordinate, and \(\nu\) is the kinematic viscosity.
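To make the formula concrete, here is a minimal sketch of how such data could be produced: integrating the 1D equation with explicit finite differences on a periodic domain. This is an illustration only, not the Dynabench data generator, and all parameter values are arbitrary assumptions.

# A minimal sketch (not the Dynabench data generator) of integrating the 1D
# Burgers' equation with explicit finite differences on a periodic domain.
import numpy as np

nu = 0.01                                   # kinematic viscosity
nx, dx = 128, 1.0 / 128                     # grid size and spacing
nt, dt = 200, 1e-4                          # number of steps and step size

x = np.linspace(0, 1, nx, endpoint=False)
u = np.sin(2 * np.pi * x)                   # smooth initial condition

for _ in range(nt):
    u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)         # first derivative
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2   # second derivative
    u = u + dt * (nu * u_xx - u * u_x)      # explicit Euler step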
We will train the GrINd model on the Burgers’ equation using the following steps:
Load the dataset
Let’s start by loading the dataset. We will use dynabench.dataset.download_equation to download the data and dynabench.dataset.DynabenchIterator to iterate over it. The dataset is generated by solving the Burgers’ equation with a finite difference method and is sampled on a scattered set of points. Additionally, we will use torch.utils.data.DataLoader to create a data loader for the training dataset.
from dynabench.dataset import DynabenchIterator, download_equation
from torch.utils.data import DataLoader
download_equation('burgers', structure='cloud', resolution='low')
burgers_train_iterator = DynabenchIterator(split="train",
                                           equation='burgers',
                                           structure='cloud',
                                           resolution='low',
                                           lookback=1,
                                           rollout=1)
train_loader = DataLoader(burgers_train_iterator, batch_size=32, shuffle=True)
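Before moving on, it can help to inspect a single sample. The comments below show what one would expect with lookback=1 and rollout=1, but the exact shapes are not guaranteed by this tutorial, so printing them is a quick sanity check:

x, y, p = burgers_train_iterator[0]
print(x.shape)  # input states, presumably (lookback, num_points, num_channels)
print(y.shape)  # target states, presumably (rollout, num_points, num_channels)
print(p.shape)  # point coordinates, presumably (num_points, 2)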
Define the neural network architecture
Next, we will define the neural network architecture. The GrIND model handles the interpolation from the scattered observation points onto a high-resolution grid and back. Within that grid space, we will use the NeuralPDE model to solve the Burgers’ equation: a neural network that approximates the solution to the partial differential equation using differentiable ODE solvers and the method of lines.
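To build intuition for what happens inside the grid space, here is a self-contained sketch of the method-of-lines idea: a convolutional network stands in for the unknown right-hand side \(f(u) \approx \partial u / \partial t\), and a plain explicit Euler loop integrates it forward in time. This illustrates the technique only and is not the NeuralPDE implementation; the names ToyRHS and euler_integrate are hypothetical.

import torch
import torch.nn as nn

class ToyRHS(nn.Module):
    """Hypothetical CNN standing in for the learned right-hand side f(u) ~ du/dt."""
    def __init__(self, channels=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, kernel_size=3, padding=1),
        )

    def forward(self, u):
        return self.net(u)

def euler_integrate(rhs, u0, n_steps=10, dt=0.1):
    """Explicit Euler: u_{k+1} = u_k + dt * f(u_k)."""
    u = u0
    for _ in range(n_steps):
        u = u + dt * rhs(u)
    return u

# e.g. a batch of one 64x64 state with two channels
u_final = euler_integrate(ToyRHS(), torch.randn(1, 2, 64, 64))

The real model replaces this hand-written Euler loop with a configurable, differentiable ODE solver (the solver argument below).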
from dynabench.model import NeuralPDE
from dynabench.model.grind import GrIND
prediction_net = NeuralPDE(input_dim=2, hidden_channels=64, hidden_layers=3,
                           solver={'method': 'euler', 'options': {'step_size': 0.1}},
                           use_adjoint=False)
model = GrIND(prediction_net, num_ks=9, grid_resolution=64, spatial_dim=2)
The GrIND model requires the following parameters:

- prediction_net: the neural network used to predict the solution to the PDE in the high-resolution grid space.
- num_ks: the number of frequencies in each spatial dimension to be used in the interpolation.
- grid_resolution: the resolution of the grid in each spatial dimension onto which the data will be interpolated.
- spatial_dim: the number of spatial dimensions of the data.
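To make num_ks and grid_resolution more tangible, here is a hedged one-dimensional sketch of the underlying idea: fit a truncated Fourier series with num_ks frequencies to scattered observations by least squares, then evaluate it on a regular grid. GrIND’s actual interpolation works in two dimensions and may differ in detail; fourier_basis is a hypothetical helper.

import torch

def fourier_basis(x, num_ks):
    """Real Fourier basis (constant, cosines, sines) at 1D positions x in [0, 1]."""
    k = torch.arange(1, num_ks + 1, dtype=x.dtype)
    xk = x.unsqueeze(-1) * k                               # (N, num_ks)
    return torch.cat([torch.ones_like(x).unsqueeze(-1),
                      torch.cos(2 * torch.pi * xk),
                      torch.sin(2 * torch.pi * xk)], dim=-1)

# fit the series to scattered samples of a toy signal by least squares ...
x_scattered = torch.rand(50)
u_scattered = torch.sin(2 * torch.pi * x_scattered)
A = fourier_basis(x_scattered, num_ks=9)                   # (50, 2*9+1)
coeffs = torch.linalg.lstsq(A, u_scattered.unsqueeze(-1)).solution

# ... then evaluate the fitted series on a regular grid
x_grid = torch.linspace(0, 1, 64)
u_grid = fourier_basis(x_grid, num_ks=9) @ coeffs          # (64, 1)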
Train the model
Now we will train the model on the training dataset using PyTorch, with the Adam optimizer and the mean squared error (MSE) loss function, for 10 epochs.
import torch.optim as optim
import torch.nn as nn
optimizer = optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()
for epoch in range(10):
    model.train()
    for i, (x, y, p) in enumerate(train_loader):
        x, y, p = x[:, 0].float(), y.float(), p.float()  # use the single lookback step and convert to float32
        optimizer.zero_grad()
        y_pred = model(x, p)
        loss = criterion(y_pred, y)
        loss.backward()
        optimizer.step()
        print(f"Epoch: {epoch}, Batch: {i}, Loss: {loss.item()}")
Evaluate the model
Finally, we will evaluate the model’s performance on unseen data. To do this we need to load the test dataset and create a corresponding data loader.
We want to evaluate the model over a longer time horizon, so we will set the rollout parameter to 16. This means that the model has to predict the next 16 time steps given the input data. We specify this in the forward pass by passing the t_eval parameter to the model, here range(17): the initial state plus the 16 prediction steps.
burgers_test_iterator = DynabenchIterator(split="test",
                                          equation='burgers',
                                          structure='cloud',
                                          resolution='low',
                                          lookback=1,
                                          rollout=16)
test_loader = DataLoader(burgers_test_iterator, batch_size=32, shuffle=False)
import torch

model.eval()
loss_values = []
with torch.no_grad():  # no gradients are needed during evaluation
    for i, (x, y, p) in enumerate(test_loader):
        x, y, p = x[:, 0].float(), y.float(), p.float()  # use the single lookback step and convert to float32
        y_pred = model(x, p, t_eval=range(17))
        loss = criterion(y_pred, y)
        loss_values.append(loss.item())
print(f"Mean Loss: {sum(loss_values) / len(loss_values)}")
Summary
Overall, the code for training the GrIND model on the Burgers’ equation is as follows:
from dynabench.dataset import DynabenchIterator, download_equation
from torch.utils.data import DataLoader
from dynabench.model import NeuralPDE
from dynabench.model.grind import GrIND
import torch
import torch.optim as optim
import torch.nn as nn
download_equation('burgers', structure='cloud', resolution='low')
burgers_train_iterator = DynabenchIterator(split="train",
                                           equation='burgers',
                                           structure='cloud',
                                           resolution='low',
                                           lookback=1,
                                           rollout=1)
train_loader = DataLoader(burgers_train_iterator, batch_size=32, shuffle=True)
prediction_net = NeuralPDE(input_dim=2, hidden_channels=64, hidden_layers=3,
                           solver={'method': 'euler', 'options': {'step_size': 0.1}},
                           use_adjoint=False)
model = GrIND(prediction_net, num_ks=9, grid_resolution=64, spatial_dim=2)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()
for epoch in range(10):
    model.train()
    for i, (x, y, p) in enumerate(train_loader):
        x, y, p = x[:, 0].float(), y.float(), p.float()  # use the single lookback step and convert to float32
        optimizer.zero_grad()
        y_pred = model(x, p)
        loss = criterion(y_pred, y)
        loss.backward()
        optimizer.step()
        print(f"Epoch: {epoch}, Batch: {i}, Loss: {loss.item()}")
burgers_test_iterator = DynabenchIterator(split="test",
                                          equation='burgers',
                                          structure='cloud',
                                          resolution='low',
                                          lookback=1,
                                          rollout=16)
test_loader = DataLoader(burgers_test_iterator, batch_size=32, shuffle=False)
model.eval()
loss_values = []
with torch.no_grad():  # no gradients are needed during evaluation
    for i, (x, y, p) in enumerate(test_loader):
        x, y, p = x[:, 0].float(), y.float(), p.float()  # use the single lookback step and convert to float32
        y_pred = model(x, p, t_eval=range(17))
        loss = criterion(y_pred, y)
        loss_values.append(loss.item())
print(f"Mean Loss: {sum(loss_values) / len(loss_values)}")