Framework for Experiments
This documents some of the best practices I have learnt for programming NNs.
PyTorch training loop
```python
def train_loop(self, dataloader, epochs, device):
    # put the model in training mode once, before iterating
    self.model.train()
    for epoch in range(epochs):
        for batch in dataloader:
            # 1. Move data to device
            batch = batch.to(device)
            # 2. Zero gradients
            self.optimizer.zero_grad()
            # 3. Forward pass
            output = self.model(batch)
            # 4. Compute loss
            loss = self.loss_function(output, batch)
            # 5. Backward pass
            loss.backward()
            # 6. Update weights
            self.optimizer.step()
```
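As a rough usage sketch, here is one way the loop above could be wired up. The `Trainer` class, model, and synthetic data below are hypothetical stand-ins for illustration, assuming `train_loop` is defined as a method on the trainer:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader

class Trainer:
    """Hypothetical wrapper; train_loop above would be a method of this class."""
    def __init__(self, model, lr=1e-3):
        self.model = model
        self.optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        # autoencoder-style loss, matching loss_function(output, batch) above
        self.loss_function = nn.MSELoss()

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 8)).to(device)
# a DataLoader over a plain tensor yields raw batch tensors,
# which matches batch.to(device) in the loop above
dataloader = DataLoader(torch.randn(256, 8), batch_size=32, shuffle=True)

trainer = Trainer(model)
trainer.train_loop(dataloader, epochs=3, device=device)
```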
omegaconf + dataclass
For configuring NNs and storing (hyper)parameters.
```python
from dataclasses import dataclass
from pathlib import Path

from omegaconf import OmegaConf


@dataclass
class ModelConfig:
    # model parameters
    hidden_dim: int = 128
    # training parameters
    lr: float = 1e-3
    batch_size: int = 32

    def save_config(self, path: Path):
        # OmegaConf.structured converts the dataclass before saving to YAML
        OmegaConf.save(OmegaConf.structured(self), path)

    @staticmethod
    def load_config(path: Path) -> "ModelConfig":
        conf = OmegaConf.load(path)
        return ModelConfig(**conf)
```
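For example, a quick save/load roundtrip (the file name here is arbitrary):

```python
config = ModelConfig(hidden_dim=256, lr=3e-4)
config.save_config(Path("config.yaml"))

restored = ModelConfig.load_config(Path("config.yaml"))
assert restored == config  # dataclass equality makes the roundtrip easy to verify
```

A nice side effect of unpacking into the dataclass on load is that an unknown field in the YAML raises a `TypeError` at construction instead of passing through silently.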
TensorBoard
for logging: see base.py
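I have not reproduced base.py here, but a minimal self-contained sketch of the standard `torch.utils.tensorboard` pattern looks like this (the log directory, tag name, and dummy loss are placeholders):

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/example")  # placeholder run directory

for step in range(100):
    fake_loss = 1.0 / (step + 1)  # stand-in for a real training loss
    writer.add_scalar("loss/train", fake_loss, step)

writer.close()
```

Then `tensorboard --logdir runs` serves the dashboard locally.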
Potential avenues to explore:
- pytorch-lightning
- hydra
- fire