Phil 8.30.21

If you want to summarize your research in a sentence… have an AI do it. SciTLDR sums up papers given an abstract, intro & conclusion. And it works impressively well: https://scitldr.apps.allenai.org (Via Twitter)

The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers

  • Recently, many datasets have been proposed to test the systematic generalization ability of neural networks. The companion baseline Transformers, typically trained with default hyper-parameters from standard tasks, are shown to fail dramatically. Here we demonstrate that by revisiting model configurations as basic as scaling of embeddings, early stopping, relative positional embedding, and Universal Transformer variants, we can drastically improve the performance of Transformers on systematic generalization. We report improvements on five popular datasets: SCAN, CFQ, PCFG, COGS, and Mathematics dataset. Our models improve accuracy from 50% to 85% on the PCFG productivity split, and from 35% to 81% on COGS. On SCAN, relative positional embedding largely mitigates the EOS decision problem (Newman et al., 2020), yielding 100% accuracy on the length split with a cutoff at 26. Importantly, performance differences between these models are typically invisible on the IID data split. This calls for proper generalization validation sets for developing neural networks that generalize systematically. We publicly release the code to reproduce our results.

SBIRs

  • Got the client communicating with the server using WebSockets and the server relaying those messages to RabbitMQ! (A sketch of the relay pattern follows this list.)
https://viztales.files.wordpress.com/2021/08/image-20.png
  • Sprint Demos and story writing today
  • Starting to look at Docker for this effort
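
The relay is a thin bridge: the WebSocket handler just republishes each incoming client message onto a RabbitMQ queue. A minimal sketch of that pattern, assuming the `websockets` and `pika` packages, a broker on localhost, and a made-up queue name ("client_messages"):

```python
# Minimal sketch: relay WebSocket messages to RabbitMQ.
# Assumes `pip install websockets pika` and a broker on localhost;
# the queue name "client_messages" is hypothetical.
import asyncio

import pika
import websockets

# Blocking RabbitMQ connection; fine for a sketch since everything runs
# on one event-loop thread. Use aio-pika for a production async bridge.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="client_messages")

async def relay(websocket, path):
    # Forward every message from the client straight onto the queue.
    async for message in websocket:
        channel.basic_publish(
            exchange="", routing_key="client_messages", body=message
        )

async def main():
    async with websockets.serve(relay, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```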

GPT Agents

  • Finish the 1-5 star parser and start a run on GPT-large, then GPT. Curious what we’ll get.
    • Verified that everything seems to be working on a small run. Lots of parsing to get star values (a sketch of the parsing approach follows this list).
    • Trying a full-sized run of 100 batches of 10 experiments with 10 return sequences.
  • OpenAI: The fine-tuning endpoint is now ready, and we’re excited to share it with you! Here’s how to get started: link
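
The star parser itself isn’t shown here, but the gist is pulling a 1-5 rating out of each generated sequence. A hypothetical sketch, with a made-up regex and a HuggingFace `pipeline` call standing in for the actual run configuration (the model name and prompt are assumptions):

```python
# Hypothetical 1-5 star parser over generated text; the regex is a guess
# at the approach, not the actual parser from this effort.
import re

from transformers import pipeline

STAR_PATTERN = re.compile(r"\b([1-5])\s*stars?\b", re.IGNORECASE)

def parse_stars(text):
    """Return the first 1-5 star rating found in text, or None."""
    match = STAR_PATTERN.search(text)
    return int(match.group(1)) if match else None

# One "experiment" with 10 return sequences, matching the run shape above.
# The model and prompt are placeholders for illustration.
generator = pipeline("text-generation", model="gpt2-large")
outputs = generator("The food at this place is", max_length=50,
                    num_return_sequences=10, do_sample=True)
print([parse_stars(o["generated_text"]) for o in outputs])
```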
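
For reference, the fine-tuning flow in the 2021-era `openai` Python package looked roughly like the sketch below; the file name and base model are placeholders, and the API has since been revised:

```python
# Rough sketch of the legacy OpenAI fine-tuning flow (circa 2021).
# "train.jsonl" and the "curie" base model are placeholder choices.
import openai

openai.api_key = "sk-..."  # set your API key

# Upload a JSONL file of {"prompt": ..., "completion": ...} records.
upload = openai.File.create(file=open("train.jsonl", "rb"),
                            purpose="fine-tune")

# Start a fine-tune job against a base model.
job = openai.FineTune.create(training_file=upload["id"], model="curie")
print(job["id"], job["status"])
```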