Phil 6.3.21

Decision Transformer: Reinforcement Learning via Sequence Modeling

  • We present a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
  • I think this means that a "backwards" transformer, conditioned on the answer, could be trained to write the questions most likely to produce that answer (rough sketches of both ideas below).
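To make the conditioning concrete, here's a minimal sketch of the Decision Transformer idea in PyTorch: each timestep is interleaved as (return-to-go, state, action) tokens, and a causally masked transformer predicts the action from the state token. All module names, sizes, and the toy data here are my own illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class TinyDecisionTransformer(nn.Module):
    def __init__(self, state_dim, act_dim, d_model=64, n_heads=2, n_layers=2, max_len=20):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)            # return-to-go token
        self.embed_state = nn.Linear(state_dim, d_model)  # state token
        self.embed_action = nn.Linear(act_dim, d_model)   # action token
        self.pos = nn.Embedding(3 * max_len, d_model)     # one position per interleaved token
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T, _ = states.shape
        # Interleave tokens per timestep as (R_t, s_t, a_t).
        tokens = torch.stack(
            (self.embed_rtg(rtg), self.embed_state(states), self.embed_action(actions)),
            dim=2,
        ).reshape(B, 3 * T, -1)
        tokens = tokens + self.pos(torch.arange(3 * T, device=tokens.device))
        # Causal mask: each token attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(3 * T)
        h = self.encoder(tokens, mask=mask)
        # Predict a_t from the hidden state at the s_t token (positions 1, 4, 7, ...).
        return self.predict_action(h[:, 1::3])

model = TinyDecisionTransformer(state_dim=4, act_dim=2)
rtg = torch.ones(1, 5, 1)             # condition on a desired return
states = torch.randn(1, 5, 4)
actions = torch.randn(1, 5, 2)
pred = model(rtg, states, actions)    # (1, 5, 2): one predicted action per timestep
```

At test time you would feed in the return you *want* and roll the model forward, so dialing the target return up or down steers the generated behavior.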
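The question-writing idea from my note is the same trick applied to text: put the answer first so a standard left-to-right language model learns p(question | answer). The [ANSWER]/[QUESTION] delimiters and the example pairs here are made up for illustration.

```python
# Hypothetical data formatting: answer-first sequences, so an autoregressive
# model learns to generate the question that leads to a given answer
# (analogous to conditioning on the desired return).
pairs = [
    ("What color is a clear daytime sky?", "Blue"),
    ("What is 2 + 2?", "4"),
]
train_texts = [f"[ANSWER] {a} [QUESTION] {q}" for q, a in pairs]
# At inference: prompt with "[ANSWER] Blue [QUESTION]" and sample the rest.
```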

Book

  • Did a little fixing of the maps and chapters when I realized that the government is not like a large company. Companies are much more tied up in money, which makes sense. The government is about the power to protect, punish, and hide knowledge. It’s much closer to Greek/Roman gods?
  • Need to respond to Uprenda today

SBIR

  • More final report writing
  • 9:15 standup
  • 10:30 proposal meeting

GPT-Agents

  • More slide re-working
  • At 600k updates, so this will take about two weeks.