Papers & Links | EfficientNet: Neural Architecture Search + Smart Scaling of CNNs. A very brief introduction to EfficientNets: what they are, how they were developed, and how to use them.
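The compound-scaling idea behind EfficientNets can be sketched in a few lines. The coefficients below (alpha=1.2 for depth, beta=1.1 for width, gamma=1.15 for resolution) are the grid-searched values reported in the EfficientNet paper for the B0 baseline; this is an illustrative sketch, not the paper's implementation.

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    """Return (depth, width, resolution) multipliers for scaling factor phi.

    The coefficients satisfy alpha * beta**2 * gamma**2 ~= 2, so total
    FLOPs roughly double for each unit increase in phi.
    """
    return alpha ** phi, beta ** phi, gamma ** phi

# Scaling the baseline up by phi=2 (roughly EfficientNet-B2 territory):
depth_mult, width_mult, res_mult = compound_scale(2)
```

The point of the constraint is that a single knob (phi) trades compute for accuracy while keeping depth, width, and input resolution in balance, instead of scaling only one dimension.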
Papers & Links | How to Measure Algorithmic Progress and Efficiency in Deep Learning. How can we disentangle algorithmic progress from hardware- and data-driven progress in deep learning? Measuring the Algorithmic Efficiency of Neural Networks tries to answer that question, showing how much more efficient image classification models have become over the last few years.
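As a back-of-the-envelope check on the kind of result that measurement produces: OpenAI's paper found that by 2019, roughly 44x less training compute was needed to reach AlexNet-level ImageNet accuracy than in 2012. Assuming just those two headline numbers, the implied doubling time of algorithmic efficiency works out as below (an arithmetic sketch, not the paper's methodology):

```python
import math

# Assumed headline numbers from "Measuring the Algorithmic Efficiency of
# Neural Networks": ~44x compute reduction between 2012 and 2019 to reach
# AlexNet-level accuracy on ImageNet.
improvement_factor = 44
years = 2019 - 2012

doublings = math.log2(improvement_factor)       # ~5.5 halvings of compute
doubling_time_months = 12 * years / doublings   # ~15-16 months per doubling
```

That is a faster improvement rate than Moore's law over the same period, which is the paper's central observation.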
Papers & Links | Train Large, Then Compress – Making Transformer Training and Inference More Efficient. A paper suggesting that increasing model size can actually reduce both training and inference time.
Training | Training 10x Larger Models and Accelerating Training on a Single GPU with ZeRO-Offloading. A brief introduction to what ZeRO-Offloading is, when it's useful, and how to use it.
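ZeRO-Offload is exposed through the DeepSpeed library, and enabling it is largely a configuration change. A minimal sketch of the relevant fragment of a DeepSpeed JSON config, assuming ZeRO stage 2 with optimizer state offloaded to CPU (field names as in DeepSpeed's documentation; check your installed version for the exact options):

```json
{
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu"
    }
  }
}
```

Offloading the optimizer states and their updates to CPU memory is what frees enough GPU memory to fit models roughly an order of magnitude larger on a single device.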
Papers & Links | An Overview of Research on Resource-Efficient Neural Networks for Embedded Systems. Resource-Efficient Neural Networks for Embedded Systems, a paper by Roth et al. from January 2020, summarizes recent research on making neural networks more resource-efficient.
Papers & Links | Training a ResNet to 94% Accuracy on CIFAR-10 in 26 Seconds on a Single GPU – a Summary. A summary of Myrtle.ai's How to Train Your ResNet, a series of blog posts describing how to train a ResNet to 94% accuracy on CIFAR-10 in 26 seconds!
Training | Faster Deep Learning Training with PyTorch – a 2021 Guide. An overview of some of the lowest-effort, highest-impact ways to accelerate the training of deep learning models in PyTorch.