Phil 10.9.20

Glasses!

Batteries in kitty feeder

Tweaked my resume some.

The ultimate guide to Encoder-Decoder Models

  • The goal of the blog post is to give an in-depth explanation of how the transformer-based encoder-decoder architecture models sequence-to-sequence problems. We will focus on the mathematical model defined by the architecture and how the model can be used for inference. Along the way, we will give some background on sequence-to-sequence models in NLP and break down the transformer-based encoder-decoder architecture into its encoder and decoder parts. We provide many illustrations and establish the link between the theory of transformer-based encoder-decoder models and their practical usage in 🤗Transformers for inference. Note that this blog post does not explain how such models can be trained – this will be the topic of a future blog post.
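
  A minimal sketch of the practical inference usage the post covers, assuming the transformers library is installed and using the public t5-small checkpoint as a stand-in model (my choice, not the post's):

    # Encoder-decoder inference with Hugging Face Transformers.
    # "t5-small" is used here purely as an example checkpoint.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained("t5-small")
    model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

    # The encoder maps the input tokens to hidden states; the decoder then
    # generates output tokens autoregressively, attending to those states.
    input_ids = tokenizer("translate English to German: I love NLP.",
                          return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_length=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))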

JuryRoom

  • Need to respond to Tony’s email. Talk about how text entropy works at the limit: “all work and no play makes Jack a dull boy” at one end and random text at the other. Different populations will produce different probabilities of (possibly the same) words. These can further be clustered using word2vec. Additionally, doc2vec could cluster different rooms. Toy entropy sketch below.
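
  A toy sketch of the entropy point for the reply, assuming naive whitespace tokenization and word-level Shannon entropy (both my simplifications):

    # Word-level Shannon entropy of a text. Repetitive text ("all work and
    # no play...") sits near the low end; uniformly random text approaches
    # log2(vocabulary size) at the high end.
    import math
    from collections import Counter

    def word_entropy(text: str) -> float:
        words = text.lower().split()   # naive whitespace tokenization
        counts = Counter(words)
        total = len(words)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    dull = "all work and no play makes Jack a dull boy " * 100
    print(word_entropy(dull))   # low: ten distinct words, fixed probabilities

  The word2vec/doc2vec clustering of populations and rooms would be a separate step (e.g., with gensim), not shown here.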

Book

  • More work on the Money section. Set up the discussion in the emergence section around what happens once money is established.
  • Add content from Ted Hiebert
  • Working on the cult section. Thinking about the Stonekettle thread on Nazis as victims. It sounds a bit like the chapter I’m reading on Jonestown in Cults and New Religious Movements. Maybe interview him?
  • 2:00 Meeting with Michelle
  • 7:00 Drinking with historians is going to cover “patriotism”. This might tie into dimension reduction and cults. Going to see if it’s possible to ask questions

GOES

  • Still thinking about the hiccup in the calculations. Maybe test which of the four vectors (X, Y, Z, and combined) is closest to the previous vector and choose that? Rough sketch below.
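
  A quick sketch of that check, assuming the vectors are numpy arrays and "closest" means smallest Euclidean distance (both assumptions on my part):

    # Compare each candidate vector to the previous vector and keep the
    # closest one. Euclidean distance is assumed as the closeness measure.
    import numpy as np

    def pick_closest(prev: np.ndarray, candidates: dict) -> str:
        # candidates maps a label ("X", "Y", "Z", "combined") to its vector
        dists = {name: float(np.linalg.norm(vec - prev))
                 for name, vec in candidates.items()}
        return min(dists, key=dists.get)

    prev = np.array([0.0, 1.0, 0.0])
    candidates = {
        "X": np.array([1.0, 0.0, 0.0]),
        "Y": np.array([0.1, 0.9, 0.0]),
        "Z": np.array([0.0, 0.0, 1.0]),
        "combined": np.array([0.5, 0.5, 0.5]),
    }
    print(pick_closest(prev, candidates))   # -> "Y" for this toy example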