Efficient workflow and reproducibility are crucially important components of every machine learning project, which enable you to:
PyTorch Lightning and Hydra serve as the foundation of this template. This technology stack for deep learning prototyping provides a comprehensive and seamless solution, allowing you to effortlessly explore different tasks across a variety of hardware accelerators such as CPUs, multi-GPUs, and TPUs. Furthermore, the template includes a curated collection of best practices and extensive documentation for greater clarity and comprehension.
This template can be used as is for some basic tasks like Classification, Segmentation, or Metric Learning, or it can be easily extended for any other task thanks to its high-level modularity and scalable structure.
As a baseline, I have used the gorgeous Lightning Hydra Template, reshaped and polished it, and implemented more features that can improve the overall efficiency of workflow and reproducibility.
PyTorch Lightning - a lightweight deep learning framework / PyTorch wrapper for professional AI researchers and machine learning engineers who need maximal flexibility without sacrificing performance at scale.
Hydra - a framework that simplifies configuring complex applications. The key feature is the ability to dynamically create a hierarchical configuration by composition and override it through config files and the command line.
The machine learning project structure may differ depending on the specific requirements and goals of the project, as well as the tools and frameworks being used. However, a typical directory structure of a machine learning project looks like this:
- src/
- data/
- logs/
- tests/
- notebooks/
- docs/
- etc.
In this particular case, the directory structure looks like this:
├── configs <- Hydra configuration files
│ ├── callbacks <- Callbacks configs
│ ├── datamodule <- Datamodule configs
│ ├── debug <- Debugging configs
│ ├── experiment <- Experiment configs
│ ├── extras <- Extra utilities configs
│ ├── hparams_search <- Hyperparameter search configs
│ ├── hydra <- Hydra settings configs
│ ├── local <- Local configs
│ ├── logger <- Logger configs
│ ├── module <- Module configs
│ ├── paths <- Project paths configs
│ ├── trainer <- Trainer configs
│ │
│ ├── eval.yaml <- Main config for evaluation
│ └── train.yaml <- Main config for training
│
├── data <- Project data
├── logs <- Generated logs
├── notebooks <- Jupyter notebooks
├── scripts <- Shell scripts
│
├── src <- Source code
│ ├── callbacks <- Additional callbacks
│ ├── datamodules <- Lightning datamodules
│ ├── modules <- Lightning modules
│ ├── utils <- Utility scripts
│ │
│ ├── eval.py <- Run evaluation
│ └── train.py <- Run training
│
├── tests <- Tests of any kind
│
├── .dockerignore <- List of files ignored by docker
├── .gitattributes <- List of attributes to pathnames
├── .gitignore <- List of files ignored by git
├── .pre-commit-config.yaml <- Configuration of pre-commit hooks
├── Dockerfile <- Dockerfile
├── Makefile <- Makefile
├── pyproject.toml <- Config for testing and linting
├── requirements.txt <- Python dependencies
├── setup.py <- Setup file
└── README.md
Before starting a project, you should consider the following aspects to ensure the reproducibility of results:
This template could be used as is for some basic tasks like Classification, Segmentation, or Metric Learning, but if you need to do something more complex, here is a general workflow:
Write your PyTorch Lightning Module (see examples in src/modules/single_module.py)
Write your PyTorch Lightning DataModule (see examples in src/datamodules/datamodules.py)
Fill up your configs; in particular, create experiment configs
Run experiments:
python src/train.py experiment=experiment_name.yaml
# using Hydra multirun mode
python src/train.py -m hparams_search=mnist_optuna
python src/train.py -m logger=csv module.optimizer.weight_decay=0.0,0.00001,0.0001
Run evaluation with different checkpoints or run prediction on a custom dataset for additional analysis
The template contains an MNIST classification example, which is also used for the tests. If you run `python src/train.py`, you will get something like the terminal output shown in the template documentation.
At the start, you need to create a PyTorch Dataset for your task. It has to implement the `__getitem__` and `__len__` methods. You may be able to use the datasets already implemented in the template as is, or easily modify them. See more details in the PyTorch documentation.
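For illustration, a minimal image classification dataset might look like the sketch below (hypothetical class and field names, assuming a list of image paths and integer labels):

```python
from typing import Callable, List, Optional, Tuple

from PIL import Image
from torch.utils.data import Dataset


class ImageClassificationDataset(Dataset):
    """Hypothetical example: images on disk plus integer labels."""

    def __init__(self, paths: List[str], labels: List[int], transform: Optional[Callable] = None):
        self.paths = paths
        self.labels = labels
        self.transform = transform

    def __len__(self) -> int:
        # number of samples in the dataset
        return len(self.paths)

    def __getitem__(self, index: int) -> Tuple:
        # load one image and return it together with its label
        image = Image.open(self.paths[index]).convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return image, self.labels[index]
```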
Also, it could be useful to see the data section about how data can be stored for training and evaluation.
Then, you need to create a DataModule using the PyTorch Lightning DataModule API. By default, the API has the following methods:
- `prepare_data` (optional): perform data operations on CPU via a single process, like loading and preprocessing data, etc.
- `setup` (optional): perform data operations on every GPU, like train/val/test splits, creating datasets, etc.
- `train_dataloader`: used to generate the training dataloader(s)
- `val_dataloader`: used to generate the validation dataloader(s)
- `test_dataloader`: used to generate the test dataloader(s)
- `predict_dataloader` (optional): used to generate the prediction dataloader(s)
See examples of `datamodule` configs in the configs/datamodule folder.
See the LightningDataModule API in the template documentation.
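For reference, a minimal DataModule following this API might look like the sketch below (hypothetical names and split logic, not the template's actual datamodules):

```python
from typing import Optional

from pytorch_lightning import LightningDataModule
from torch.utils.data import DataLoader, Dataset, random_split


class ExampleDataModule(LightningDataModule):
    """Hypothetical datamodule wrapping a single dataset."""

    def __init__(self, dataset: Dataset, batch_size: int = 32, num_workers: int = 4):
        super().__init__()
        self.dataset = dataset
        self.batch_size = batch_size
        self.num_workers = num_workers
        self.train_set: Optional[Dataset] = None
        self.val_set: Optional[Dataset] = None

    def setup(self, stage: Optional[str] = None) -> None:
        # called on every process: create train/val splits
        if self.train_set is None:
            n_val = int(0.1 * len(self.dataset))
            self.train_set, self.val_set = random_split(
                self.dataset, [len(self.dataset) - n_val, n_val]
            )

    def train_dataloader(self) -> DataLoader:
        return DataLoader(
            self.train_set, batch_size=self.batch_size, num_workers=self.num_workers, shuffle=True
        )

    def val_dataloader(self) -> DataLoader:
        return DataLoader(self.val_set, batch_size=self.batch_size, num_workers=self.num_workers)
```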
By default, the template contains the following DataModules:
- a DataModule in which `train_dataloader`, `val_dataloader` and `test_dataloader` return a single DataLoader, and `predict_dataloader` returns a list of DataLoaders
- a DataModule in which `train_dataloader` returns a dict of DataLoaders, while `val_dataloader`, `test_dataloader` and `predict_dataloader` return lists of DataLoaders
In the template, DataModules have a `_get_dataset_` method to simplify dataset instantiation.
Next, you need to create a LightningModule using the PyTorch Lightning LightningModule API. The minimum API has the following methods:
- `forward`: use for inference only (separate from `training_step`)
- `training_step`: the complete training loop
- `validation_step`: the complete validation loop
- `test_step`: the complete test loop
- `predict_step`: the complete prediction loop
- `configure_optimizers`: define optimizers and LR schedulers
Also, you can override optional methods for each step to perform additional logic:
- `training_step_end`: training step end operations
- `training_epoch_end`: training epoch end operations
- `validation_step_end`: validation step end operations
- `validation_epoch_end`: validation epoch end operations
- `test_step_end`: test step end operations
- `test_epoch_end`: test epoch end operations
See the LightningModule API methods and their appropriate order in the template documentation.
In the template, the LightningModule has a `model_step` method that factors out repeated operations, like the `forward` pass or loss calculation, which are required in `training_step`, `validation_step` and `test_step`.
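As an illustration, the `model_step` pattern could look roughly like this (a simplified sketch with hypothetical names, not the template's exact code):

```python
import torch
from pytorch_lightning import LightningModule


class ExampleLitModule(LightningModule):
    """Hypothetical module showing the shared model_step pattern."""

    def __init__(self, model: torch.nn.Module, loss: torch.nn.Module, lr: float = 1e-3):
        super().__init__()
        self.model = model
        self.loss = loss
        self.lr = lr

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.model(x)

    def model_step(self, batch):
        # shared logic reused by the training/validation/test steps
        x, y = batch
        logits = self(x)
        return self.loss(logits, y), logits, y

    def training_step(self, batch, batch_idx):
        loss, _, _ = self.model_step(batch)
        self.log("loss/train", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        loss, _, _ = self.model_step(batch)
        self.log("loss/valid", loss)

    def test_step(self, batch, batch_idx):
        loss, _, _ = self.model_step(batch)
        self.log("loss/test", loss)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)
```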
The template offers the following Metrics API:
- `main` metric: the main metric, which is also used by callbacks and trackers like `model_checkpoint`, `early_stopping` or `scheduler.monitor`.
- `valid_best` metric: used for tracking the best validation metric. Usually, it can be `MaxMetric` or `MinMetric`.
- `additional` metrics: any additional metrics.
Each metric config should contain a `_target_` key with the metric class name, plus any other parameters required by the metric. The template allows you to use any metrics, for example from torchmetrics or implemented by yourself. See more details about the torchmetrics API, the implemented Metrics API and the `metrics` config as part of the `network` configs in the configs/module/network folder.
Metric config example:
metrics:
  main:
    _target_: "torchmetrics.Accuracy"
    task: "binary"
  valid_best:
    _target_: "torchmetrics.MaxMetric"
  additional:
    AUROC:
      _target_: "torchmetrics.AUROC"
      task: "binary"
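To show how such a config might be consumed, here is a small sketch (assuming `hydra`, `omegaconf` and `torchmetrics` are installed; the template's actual Metrics API may differ in details):

```python
import torch
from hydra.utils import instantiate
from omegaconf import OmegaConf

# the metrics config from above, loaded as an OmegaConf object
cfg = OmegaConf.create(
    {
        "metrics": {
            "main": {"_target_": "torchmetrics.Accuracy", "task": "binary"},
            "valid_best": {"_target_": "torchmetrics.MaxMetric"},
        }
    }
)

main_metric = instantiate(cfg.metrics.main)       # torchmetrics.Accuracy(task="binary")
valid_best = instantiate(cfg.metrics.valid_best)  # torchmetrics.MaxMetric()

# update the main metric per validation batch...
preds = torch.tensor([0.8, 0.2, 0.6])
targets = torch.tensor([1, 0, 0])
main_metric.update(preds, targets)

# ...and track the best epoch-level value at validation epoch end
valid_best.update(main_metric.compute())
print(valid_best.compute())
```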
The template suggests the following Losses API:
- a `_target_` key with the loss class name, plus any other required parameters
- any parameter with `weight` in its name will be wrapped by `torch.tensor` and cast to the `torch.float` type before being passed to the loss, due to the requirements of most losses

The template allows you to use any losses, for example from PyTorch or implemented by yourself. See more details about the implemented Losses API and the `loss` config as part of the `network` configs in the configs/module/network folder.
Loss config examples:
loss:
  _target_: "torch.nn.CrossEntropyLoss"

loss:
  _target_: "torch.nn.BCEWithLogitsLoss"
  pos_weight: [0.25]

loss:
  _target_: "src.modules.losses.VicRegLoss"
  sim_loss_weight: 25.0
  var_loss_weight: 25.0
  cov_loss_weight: 1.0
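A hypothetical sketch of how such weight handling could be implemented before instantiating the loss (not the template's exact code; `build_loss` is an illustrative helper name):

```python
import torch
from hydra.utils import instantiate
from omegaconf import DictConfig, OmegaConf


def build_loss(loss_cfg: DictConfig) -> torch.nn.Module:
    """Wrap any parameter whose name contains "weight" in a float tensor."""
    params = OmegaConf.to_container(loss_cfg, resolve=True)
    overrides = {
        key: torch.tensor(value, dtype=torch.float)
        for key, value in params.items()
        if key != "_target_" and "weight" in key
    }
    # kwargs passed to instantiate() override the corresponding config values
    return instantiate(loss_cfg, **overrides)


loss = build_loss(
    OmegaConf.create({"_target_": "torch.nn.BCEWithLogitsLoss", "pos_weight": [0.25]})
)
```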
Also, the template includes a few manually implemented losses:
The template offers the following Model API; the model config should contain:
- `_target_`: key with the model class name
- `model_name`: model name
- `model_repo` (optional): model repository
By default, a model can be loaded from:
- `model_name` as `torchvision.models/<model-name>`, for example `torchvision.models/mobilenet_v3_large`
- `model_name` as `segmentation_models_pytorch/<model-name>`, for example `segmentation_models_pytorch/Unet`
- `model_name` as `timm/<model-name>`, for example `timm/mobilenetv3_100`
- `model_name` as `torch.hub/<model-name>` together with `model_repo`, for example `model_name="torch.hub/resnet18"` and `model_repo="pytorch/vision"`
See more details about the implemented Model API and the `model` config as part of the `network` configs in the configs/module/network folder.
Model config example:
model:
  _target_: "src.modules.models.classification.Classifier"
  model_name: "torchvision.models/mobilenet_v3_large"
  model_repo: null
  weights: "IMAGENET1K_V2"
  num_classes: 1
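For illustration only, a resolver for such `model_name` strings might look roughly like this (a hypothetical sketch, not the template's actual Classifier implementation):

```python
from typing import Any, Optional

import segmentation_models_pytorch as smp
import timm
import torch
import torchvision


def build_backbone(model_name: str, model_repo: Optional[str] = None, **kwargs: Any):
    """Resolve "<source>/<name>" strings into a model instance."""
    source, name = model_name.split("/", maxsplit=1)
    if source == "torchvision.models":
        return getattr(torchvision.models, name)(**kwargs)
    if source == "segmentation_models_pytorch":
        return smp.create_model(name, **kwargs)
    if source == "timm":
        return timm.create_model(name, **kwargs)
    if source == "torch.hub":
        return torch.hub.load(model_repo, name, **kwargs)
    raise ValueError(f"Unknown model source: {source}")


backbone = build_backbone("torchvision.models/mobilenet_v3_large", weights="IMAGENET1K_V2")
```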
By default, the template comes with the following LightningModules:
See examples of `module` configs in the configs/module folder. A LightningModule config example:
_target_: src.modules.single_module.MNISTLitModule

defaults:
  - _self_
  - network: mnist.yaml

optimizer:
  _target_: torch.optim.Adam
  lr: 0.001
  weight_decay: 0.0

scheduler:
  scheduler:
    _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
    mode: "max"
    factor: 0.1
    min_lr: 1.0e-9
    patience: 10
    verbose: True
  extras:
    monitor: ${replace:"__metric__/valid"}
    interval: "epoch"
    frequency: 1

logging:
  on_step: False
  on_epoch: True
  sync_dist: False
  prog_bar: True
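For illustration, such an optimizer/scheduler config could be consumed in `configure_optimizers` roughly as sketched below (a hypothetical helper; it assumes the template's custom `replace` resolver is registered, and the actual module may differ):

```python
import torch
from hydra.utils import instantiate
from omegaconf import DictConfig


def configure_optimizers_from_cfg(cfg: DictConfig, model: torch.nn.Module):
    """Sketch mirroring LightningModule.configure_optimizers() for such a config."""
    optimizer = instantiate(cfg.optimizer, params=model.parameters())
    if cfg.get("scheduler") is None:
        return optimizer
    scheduler = instantiate(cfg.scheduler.scheduler, optimizer=optimizer)
    # Lightning expects monitor/interval/frequency alongside the scheduler instance
    return {
        "optimizer": optimizer,
        "lr_scheduler": {"scheduler": scheduler, **cfg.scheduler.extras},
    }
```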
Training loop in the template consists of the following stages:
See more details in training loop and configs/train.yaml.
Evaluation loop in the template consists of the following stages:
See more details in evaluation loop and configs/eval.yaml.
The template contains the following Prediction API:
- Set `predict: True` in configs/eval.yaml to turn on prediction mode.
- Set up predict datasets in the `datasets` config, for example:

  datasets:
    predict:
      dataset1:
        _target_: src.datamodules.datasets.ClassificationDataset
        json_path: ${paths.data_dir}/predict/data1.json
      dataset2:
        _target_: src.datamodules.datasets.ClassificationDataset
        json_path: ${paths.data_dir}/predict/data2.json

- Predictions are returned as a list of batch predictions when `LightningDataModule.predict_dataloader()` returns a single dataloader, and as a list of lists of batch predictions when `LightningDataModule.predict_dataloader()` returns multiple dataloaders.
- Predictions are saved to the `{cfg.paths.output_dir}/predictions/` folder.
- Predictions for each dataloader are saved with the `_<dataloader_idx>` postfix. It isn't possible to use dataset names, because PyTorch Lightning doesn't allow returning a dict of dataloaders from the `LightningDataModule.predict_dataloader()` method.
- Two output formats are supported: `csv` and `json`. The `json` format is used by default, but it might be more effective to use the `csv` format for a large number of predictions: it can help to avoid RAM overflow, because `csv` allows writing row by row and doesn't require keeping the whole dict in RAM as in the case of `json`. To change the output format, set the `predictions_saving_params.output_format` variable in the `configs/extras/default.yaml` config file.
- For any other output format, for example `parquet`, you can easily modify the `src.utils.saving_utils.save_predictions()` method.
See more details about the Prediction API and `predict_step` in the LightningModule.
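For reference, a minimal `predict_step` could look like this (a sketch assuming (inputs, targets) batches for a classification task; the template's actual implementation may differ):

```python
from typing import Any

import torch
from pytorch_lightning import LightningModule


class ExamplePredictModule(LightningModule):
    """Hypothetical module fragment showing predict_step."""

    def __init__(self, model: torch.nn.Module):
        super().__init__()
        self.model = model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.model(x)

    def predict_step(self, batch: Any, batch_idx: int, dataloader_idx: int = 0) -> Any:
        # called by trainer.predict(); the returned values are collected per
        # dataloader and can later be saved by the template's saving utilities
        x, _ = batch
        logits = self(x)
        return {"logits": logits, "preds": logits.argmax(dim=-1)}
```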
PyTorch Lightning has a lot of built-in callbacks, which can be used just by adding them to the callbacks config, thanks to Hydra. See examples in callbacks config folder.
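For example, a `ModelCheckpoint` entry in a callbacks config might look like the sketch below (the monitored metric name and paths key are illustrative and depend on your setup):

```yaml
model_checkpoint:
  _target_: pytorch_lightning.callbacks.ModelCheckpoint
  dirpath: ${paths.output_dir}/checkpoints
  monitor: "Accuracy/valid"  # hypothetical metric name
  mode: "max"
  save_last: True
  save_top_k: 1
```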
By default, the template contains a few of them:
However, there is an additional `LightProgressBar` callback, which might be a more elegant and useful alternative to `RichProgressBar`.
Hydra creates a new output directory in `logs/` for every executed run.
Furthermore, the template saves additional metadata for better reproducibility and debugging, including:
- `pip` logs
- `git` logs
- environment logs: CPU, GPU (nvidia-smi)
- full copies of the `src/` and `configs/` directories
Default logging structure:
├── logs
│ ├── task_name
│ │ ├── runs <- Logs generated by runs
│ │ │ ├── YYYY-MM-DD_HH-MM-SS <- Datetime of the run
│ │ │ │ ├── .hydra <- Hydra logs
│ │ │ │ ├── csv <- Csv logs
│ │ │ │ ├── wandb <- Weights & Biases logs
│ │ │ │ ├── checkpoints <- Training checkpoints
│ │ │ │ ├── metadata <- Metadata
│ │ │ │ │ ├── pip.log <- Pip logs
│ │ │ │ │ ├── git.log <- Git logs
│ │ │ │ │ ├── env.log <- Environment logs
│ │ │ │ │ ├── src <- Full copy of `src/`
│ │ │ │ │ └── configs <- Full copy of `configs/`
│ │ │ │ └── ... <- Any other saved files
│ │ │ └── ...
│ │ │
│ │ └── multiruns <- Logs generated by multiruns
│ │ ├── YYYY-MM-DD_HH-MM-SS <- Datetime of the multirun
│ │ │ ├── 1 <- Multirun job number
│ │ │ ├── 2
│ │ │ └── ...
│ │ └── ...
│ │
│ └── debugs <- Logs generated during debug
│ └── ...
Usually, images or any other data files are simply stored on disk in folders. It is a simple and convenient way.
However, there are other methods; one of them is the Hierarchical Data Format, HDF5 (via h5py). There are a few reasons why it might be more beneficial to store images in HDF5 files instead of plain folders:
This template contains a tool which might be used to easily create and read HDF5 files.
To create HDF5 file:
from src.datamodules.components.h5_file import H5PyFile
H5PyFile().create(
filename="/path/to/dataset_train_set_v1.h5",
content=["/path/to/image_0.png", "/path/to/image_1.png", ...],
# each content item loads as np.fromfile(filepath, dtype=np.uint8)
)
To read HDF5 file in the wild:
import matplotlib.pyplot as plt
from src.datamodules.components.h5_file import H5PyFile
h5py_file = H5PyFile(filename="/path/to/dataset_train_set_v1.h5")
image = h5py_file[0]
plt.imshow(image)
To read HDF5 file in Dataset.__getitem__
:
def __getitem__(self, index: int) -> Any:
key = self.keys[index] # get the image key, e.g. path
data_file = self.data_file
source = data_file[key] # get the raw image bytes
image = io.BytesIO(source) # wrap the bytes in a file-like buffer for decoding
...
Hydra provides out-of-the-box hyperparameter sweepers: Optuna, Nevergrad or Ax.
You may define a hyperparameter search by adding a new config file to configs/hparams_search.
See the example hyperparameter search config. With this method, there is no need to add extra code; everything is specified in a single configuration file. The only requirement is to return the optimized metric value from the launch file.
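A minimal `hparams_search` config could look roughly like the sketch below (based on the hydra-optuna-sweeper plugin; the `optimized_metric` key and parameter names are illustrative and may differ from the template's mnist_optuna config):

```yaml
# @package _global_
defaults:
  - override /hydra/sweeper: optuna

# metric returned by the launch file, which Optuna will optimize
optimized_metric: "Accuracy/valid_best"  # hypothetical metric name

hydra:
  mode: MULTIRUN
  sweeper:
    _target_: hydra_plugins.hydra_optuna_sweeper.optuna_sweeper.OptunaSweeper
    direction: maximize
    n_trials: 20
    sampler:
      _target_: optuna.samplers.TPESampler
      seed: 42
    params:
      module.optimizer.lr: interval(0.0001, 0.1)
      datamodule.batch_size: choice(32, 64, 128)
```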
Execute it with:
python src/train.py -m hparams_search=mnist_optuna
The `optimization_results.yaml` file will be available under the `logs/task_name/multirun` folder.
Docker is an essential part of environment reproducibility: it makes it possible to package a machine learning pipeline and its dependencies into a single container that can be easily deployed and run in any environment. This is particularly useful because it helps to ensure that the code will run consistently, regardless of the environment in which it is deployed.
The Docker image could require some additional packages depending on which device is used for running. For example, running on a cluster with NVIDIA GPUs requires the CUDA Toolkit from NVIDIA. The CUDA Toolkit provides everything you need to develop GPU-accelerated applications, including GPU-accelerated libraries, a compiler, development tools and the CUDA runtime.
In general, there are many ways to set this up, but to simplify the process you can use the proposed Dockerfile here.
Moreover, it can be advantageous to use Docker run options for resource limiting, such as `--cpuset-cpus`, `--gpus`, etc.
Here is an example of running a container based on the proposed Dockerfile and .dockerignore:
set -o errexit
export DOCKER_BUILDKIT=1
export PROGRESS_NO_TRUNC=1
docker build --tag <project-name> \
--build-arg OS_VERSION="22.04" \
--build-arg CUDA_VERSION="11.7.0" \
--build-arg PYTHON_VERSION="3.10" \
--build-arg USER_ID=$(id -u) \
--build-arg GROUP_ID=$(id -g) \
--build-arg NAME="<your-name>" \
--build-arg WORKDIR_PATH=$(pwd) .
docker run \
--name <task-name> \
--rm \
-u $(id -u):$(id -g) \
-v $(pwd):$(pwd):rw \
--gpus '"device=0,1,3,4"' \
--cpuset-cpus "0-47" \
-it \
--entrypoint /bin/bash \
<project-name>:latest
Tests are an important aspect of software development in general, and especially in machine learning, because it can be much more difficult to understand whether the code is working correctly without testing. Consequently, the template contains some generic tests implemented with pytest.
For this purpose MNIST is used. It is a small dataset, so it is possible to run all tests on the CPU. However, it is easy to implement tests for your own dataset if required.
As a baseline, the tests cover:
All these implemented tests are created to verify that the main pipeline modules and utils are executable and work as expected. However, sometimes this may not be enough to ensure that the code is working correctly, especially in the case of more complex pipelines and models.
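For instance, a quick smoke test could compose the training config and run a single `fast_dev_run` batch (a sketch; it assumes the template exposes a `train(cfg)` function in src/train.py and that the test lives in tests/):

```python
from hydra import compose, initialize

from src.train import train  # assumed entrypoint taking a DictConfig


def test_train_fast_dev_run() -> None:
    # compose the main training config and override the trainer for a dry run
    with initialize(version_base="1.3", config_path="../configs"):
        cfg = compose(config_name="train.yaml", overrides=["++trainer.fast_dev_run=true"])
    train(cfg)
```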
For running:
# run all tests
pytest
# run tests from specific file
pytest tests/test_train.py
# run tests from specific test
pytest tests/test_train.py::test_train_ddp_sim
# run all tests except the ones marked as slow
pytest -k "not slow"
The template contains a few initial CI workflows via the GitHub Actions platform. It makes it easy to automate and streamline development workflows, which can help to save time and effort, increase efficiency, and improve the overall quality of the code. In particular, it includes:
- `.github/workflows/test.yaml`: running all tests from `tests/` with pytest on Linux, Mac and Windows platforms
- `.github/workflows/code-quality-main.yaml`: running pre-commits on the main branch for all files
- `.github/workflows/code-quality-pr.yaml`: running pre-commits on pull requests for modified files only
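A trimmed-down version of such a test workflow might look like this (a sketch, not the exact workflow shipped with the template):

```yaml
name: Tests

on:
  push:
    branches: [main]
  pull_request:

jobs:
  pytest:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - run: pip install -r requirements.txt
      - run: pytest -k "not slow"
```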
Note: You need to enable the GitHub Actions from the settings in your repository.
See more about GitHub Actions for CI.
In the case of using GitLab, it is easy to set up GitLab CI based on the GitHub Actions workflows. It is managed by the `.gitlab-ci.yml` file. See more here.
Also published here.