Phil 4.27.2023

Calibrated Chaos: Variance Between Runs of Neural Network Training is Harmless and Inevitable

  • Typical neural network trainings have substantial variance in test-set performance between repeated runs, impeding hyperparameter comparison and training reproducibility. We present the following results towards understanding this variation. (1) Despite having significant variance on their test-sets, we demonstrate that standard CIFAR-10 and ImageNet trainings have very little variance in their performance on the test-distributions from which those test-sets are sampled, suggesting that variance is less of a practical issue than previously thought. (2) We present a simplifying statistical assumption which closely approximates the structure of the test-set accuracy distribution. (3) We argue that test-set variance is inevitable in the following two senses. First, we show that variance is largely caused by high sensitivity of the training process to initial conditions, rather than by specific sources of randomness like the data order and augmentations. Second, we prove that variance is unavoidable given the observation that ensembles of trained networks are well-calibrated. (4) We conduct preliminary studies of distribution-shift, fine-tuning, data augmentation and learning rate through the lens of variance between runs.
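Point (2) of the abstract posits a simplifying statistical assumption about the test-set accuracy distribution. One assumption of that flavor — that each run classifies each test example correctly independently with some per-example probability — can be illustrated with a short simulation. Note this is my own sketch, not the paper's code: the per-example probabilities, test-set size, and run count below are arbitrary illustrative choices.

```python
import random

random.seed(0)
n = 10_000  # hypothetical test-set size
# Hypothetical per-example "correctness probabilities": assume each run
# gets example i right independently with probability p_i (drawn
# arbitrarily here purely for illustration).
p = [random.random() for _ in range(n)]

# Simulate many independent "training runs" and record test-set accuracy.
accs = []
for _ in range(200):
    correct = sum(random.random() < pi for pi in p)
    accs.append(correct / n)

mean = sum(accs) / len(accs)
var = sum((a - mean) ** 2 for a in accs) / len(accs)

# Analytic variance under the independence assumption: sum p_i(1-p_i) / n^2
analytic = sum(pi * (1 - pi) for pi in p) / n**2
print(f"empirical std {var**0.5:.5f} vs analytic {analytic**0.5:.5f}")
```

Under this assumption the between-run standard deviation of test-set accuracy is fully determined by the per-example probabilities, which matches the abstract's point that variance is unavoidable given calibrated ensembles.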


  • Spending the day at the Explore Azure OpenAI & ChatGPT for Federal Agencies event
  • Need to get back to slides

GPT Agents

  • After getting lists to work in the TopicNode class yesterday, I realize that I need separate ListExplorer and SequenceExplorer apps. It would be too confusing to stuff everything into NarrativeExplorer.
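The split above might look like the following minimal sketch: a shared base class holding the common plumbing, with thin per-content-type subclasses. Everything here except the class names mentioned in the note (ListExplorer, SequenceExplorer, NarrativeExplorer, TopicNode) is a hypothetical assumption about the codebase, not its actual structure.

```python
class ExplorerBase:
    """Hypothetical shared scaffolding for the explorer apps."""

    def __init__(self, title: str):
        self.title = title
        self.nodes: list = []  # would hold TopicNode instances

    def add_node(self, node) -> None:
        self.nodes.append(node)

    def describe(self) -> str:
        return f"{self.title}: {len(self.nodes)} node(s)"


class ListExplorer(ExplorerBase):
    """Explores flat lists of topics (hypothetical)."""


class SequenceExplorer(ExplorerBase):
    """Explores ordered sequences of topics (hypothetical)."""


class NarrativeExplorer(ExplorerBase):
    """Keeps its original narrative-only scope (hypothetical)."""


app = ListExplorer("test list")
app.add_node("placeholder node")
print(app.describe())
```

The design choice this illustrates: each content type gets its own small app, so NarrativeExplorer stays focused instead of accumulating list- and sequence-handling code.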