Phil 4.26.2022

Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer

  • Hyperparameter (HP) tuning in deep learning is an expensive process, prohibitively so for neural networks (NNs) with billions of parameters. We show that, in the recently discovered Maximal Update Parametrization (muP), many optimal HPs remain stable even as model size changes. This leads to a new HP tuning paradigm we call muTransfer: parametrize the target model in muP, tune the HPs indirectly on a smaller model, and zero-shot transfer them to the full-sized model, i.e., without directly tuning the latter at all. We verify muTransfer on Transformer and ResNet. For example, 1) by transferring pretraining HPs from a model of 13M parameters, we outperform published numbers of BERT-large (350M parameters), with a total tuning cost equivalent to pretraining BERT-large once; 2) by transferring from 40M parameters, we outperform published numbers of the 6.7B GPT-3 model, with tuning cost only 7% of total pretraining cost. A PyTorch implementation of our technique is available and installable via `pip install mup`.
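
A minimal sketch of how the `mup` package is meant to be used, as I recall it from the project README: the output layer is swapped for a `MuReadout`, `set_base_shapes` records how each dimension scales with width, and a muP-aware optimizer like `MuAdam` applies the per-layer learning-rate scaling. Names and signatures are from memory and should be checked against the repo.

```python
import torch.nn as nn
from mup import MuReadout, set_base_shapes, MuAdam  # API names as I recall them

class MLP(nn.Module):
    def __init__(self, width=128, d_in=784, d_out=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(d_in, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        # Output layer uses MuReadout so its init/LR scale follows muP
        self.readout = MuReadout(width, d_out)

    def forward(self, x):
        return self.readout(self.body(x))

# Base and delta models tell mup which dimensions grow with width;
# the third model is the one actually trained.
base, delta, model = MLP(width=8), MLP(width=16), MLP(width=1024)
set_base_shapes(model, base, delta=delta)

# With muP scaling in place, an LR tuned on a small proxy model
# can be reused at the large width (the point of muTransfer).
opt = MuAdam(model.parameters(), lr=1e-3)
```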

sktime

sktime features a unified interface for multiple time series learning tasks. Currently, it supports forecasting, time series classification, and time series regression, with experimental support for time series clustering and time series annotation.

Features:

  • A unified API for machine learning with time series: specifying, fitting, applying, and validating models
  • Interactive user experience with scikit-learn-like syntax conventions (see the forecasting sketch below)
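
For reference, a small forecasting example in the scikit-learn-style fit/predict idiom. This follows the sktime quickstart as I remember it, so the exact imports (`load_airline`, `NaiveForecaster`) and parameters are worth double-checking against the current docs.

```python
from sktime.datasets import load_airline
from sktime.forecasting.naive import NaiveForecaster

# Univariate monthly airline-passengers series, a standard sktime example dataset
y = load_airline()

# Seasonal-naive baseline: repeat the value from the same month one year earlier
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y)

# Forecast the next three periods (relative forecasting horizon)
y_pred = forecaster.predict(fh=[1, 2, 3])
print(y_pred)
```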

Book

  • More epilogue
  • Chase TODOs?

SBIRs

  • 10:00 proposal meeting. Make slides on background and concept
  • Add story for SimAccel library productization
  • 2:00 Sprint planning

GPT Agents

  • Put together study documents
  • 3:30 Meeting