Phil 8.13.21

This looks super interesting for building domain-specific belief maps:

https://twitter.com/ssgrn/status/1425615542837075968?s=12

Here’s the paper: DEMix Layers: Disentangling Domains for Modular Language Modeling

  • We introduce a new domain expert mixture (DEMix) layer that enables conditioning a language model (LM) on the domain of the input text. A DEMix layer is a collection of expert feedforward networks, each specialized to a domain, that makes the LM modular: experts can be mixed, added or removed after initial training. Extensive experiments with autoregressive transformer LMs (up to 1.3B parameters) show that DEMix layers reduce test-time perplexity, increase training efficiency, and enable rapid adaptation with little overhead. We show that mixing experts during inference, using a parameter-free weighted ensemble, allows the model to better generalize to heterogeneous or unseen domains. We also show that experts can be added to iteratively incorporate new domains without forgetting older ones, and that experts can be removed to restrict access to unwanted domains, without additional training. Overall, these results demonstrate benefits of explicitly conditioning on textual domains during language modeling.
  • Git repo: github.com/kernelmachine/demix (rough sketch of the expert/ensemble idea below)
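
A minimal sketch of the core idea, not the paper’s code: one feedforward expert per domain, routed by a known domain id during training and combined with a parameter-free weighted ensemble at inference. Class name, sizes, and the routing interface here are my own illustration.

```python
import torch
import torch.nn as nn

class DEMixLayer(nn.Module):
    """Toy domain-expert mixture layer: one independent FFN per domain."""
    def __init__(self, d_model: int, d_hidden: int, num_domains: int):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_domains)
        ])

    def forward(self, x, domain=None, weights=None):
        # Training: the domain label is known, so route to that one expert.
        if domain is not None:
            return self.experts[domain](x)
        # Inference on mixed or unseen domains: weighted ensemble of all experts.
        stacked = torch.stack([expert(x) for expert in self.experts])  # (E, B, T, D)
        return (weights.view(-1, 1, 1, 1) * stacked).sum(dim=0)

layer = DEMixLayer(d_model=64, d_hidden=256, num_domains=4)
x = torch.randn(2, 10, 64)                          # (batch, tokens, d_model)
y_train = layer(x, domain=1)                        # known domain at train time
y_mix = layer(x, weights=torch.full((4,), 0.25))    # uniform mixture at inference
```

Adding a new domain would just mean appending another expert to the list, which is the modularity the abstract is describing.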

GPT Agents

  • Get the review extraction working and produce some content. Got everything running and generating 10,000 reviews. We’ll look at the distribution of star ratings first, then do a sentiment run on the stored data.
  • Export the DB and run sentiment analysis (rough sketch of that step after this list)
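
A rough sketch of the export-and-analyze step, assuming the generated reviews land in a SQLite table named "reviews" with "stars" and "text" columns. The file name, table name, and columns are guesses, not the actual schema.

```python
import sqlite3
import pandas as pd
from transformers import pipeline

# Pull the generated reviews out of the database (names are placeholders).
conn = sqlite3.connect("gpt_reviews.db")
df = pd.read_sql_query("SELECT stars, text FROM reviews", conn)
conn.close()

# First look: how the generated star ratings are distributed.
print(df["stars"].value_counts().sort_index())

# Then a sentiment pass over the stored review text.
sentiment = pipeline("sentiment-analysis")
results = sentiment(df["text"].tolist(), truncation=True)
df["sentiment"] = [r["label"] for r in results]

# Sanity check: sentiment labels should track the star ratings.
print(df.groupby("stars")["sentiment"].value_counts())
```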

SBIR(s)

  • Had a long talk yesterday with Aaron about what to do with MARE. I think it becomes the framework for training and using our enhanced simulation scenario explorer. Basically, AlphaZero but for physics-based games like tennis (loose self-play sketch at the end of this list).
  • Got Andrew to buy off on the LAIC stories and show me how to put them properly(!) in Jira, so I’ll do that today
  • Endless, mind-numbing training
  • EXPENSE REPORT
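
A very loose sketch of the “AlphaZero for a physics game” framing from the MARE discussion, showing only the self-play data-collection loop. The environment, action set, and action chooser are placeholders of my own; the real version would replace the random chooser with search guided by a learned policy/value network.

```python
import random

class TennisSim:
    """Stand-in physics environment: step() returns (state, reward, done)."""
    def reset(self):
        self.t = 0
        return 0.0

    def step(self, action):
        self.t += 1
        return random.random(), random.choice([-1, 0, 1]), self.t >= 50

    def legal_actions(self):
        return [0, 1, 2]   # e.g. move left, stay, swing

def choose_action(state, env):
    # Placeholder for policy/value-guided tree search (MCTS in AlphaZero).
    return random.choice(env.legal_actions())

def self_play_episode(env):
    """Play one game against the simulator and record the trajectory."""
    trajectory, state, done = [], env.reset(), False
    while not done:
        action = choose_action(state, env)
        state, reward, done = env.step(action)
        trajectory.append((state, action, reward))
    return trajectory

env = TennisSim()
replay = [self_play_episode(env) for _ in range(10)]   # training data for the nets
print(len(replay), "episodes collected")
```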

Book

  • Skipping this week – Michelle has meetings