Freaked myself out when I saw one of my variations on the Cozy Bear Podesta email. That’s such an effective technique!
SBIRs
- Q7 report – Finished Task 2
- Check with Aaron, but aim to submit the slide deck by COB – getting approval
- Put the GPT content in the Killer Apps project for later reference. – done
- Wound up having to do more self-evaluation
- Had a nice chat with Aaron about prompt swarms, or maybe synthetic organisms (sorgs?). We’re going to look into setting up agents to run the Grog dungeon (a rough sketch of the setup is below). I’m really curious about how they will handle the trolley problem
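A minimal sketch of what the agent setup could look like, assuming the OpenAI chat completions API. The dungeon-master prompt, agent names, model, and turn count are all placeholders, not anything we've settled on:

```python
# Sketch: two agents plus a "dungeon master" taking turns in a shared
# transcript. Prompts, names, and the stopping rule are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

DM_PROMPT = ("You are the dungeon master for the Grog dungeon. Describe the "
             "scene, then pose a dilemma where the party can save five NPCs "
             "by sacrificing one.")
AGENT_PROMPT = "You are {name}, an adventurer in the Grog dungeon. Respond in character."

def speak(system_prompt: str, transcript: list[str]) -> str:
    """Ask one agent for its next turn, given the shared transcript so far."""
    messages = [{"role": "system", "content": system_prompt},
                {"role": "user", "content": "\n".join(transcript) or "Begin."}]
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    return resp.choices[0].message.content

transcript: list[str] = []
transcript.append("DM: " + speak(DM_PROMPT, transcript))
for turn in range(3):  # a few rounds should be enough to see how they handle it
    for name in ("Aldric", "Benna"):  # placeholder agent names
        line = speak(AGENT_PROMPT.format(name=name), transcript)
        transcript.append(f"{name}: {line}")

print("\n".join(transcript))
```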
GPT Agents
- Send email to Scott Shapiro
- Alden Meeting – went well. Some interesting stuff about how models become more negative as the temperature goes up. I’m thinking that there could be something like a phase change? (A quick way to test this is sketched after this list.)
- Had a good (in person!) meeting with Tyler. I’m not sure that he buys the chemistry metaphor, but he does agree that generative models work in a bounded domain. Some options for testing that:
  - Use an LLM to do chemistry: train it on SMILES notation and let it do chemical reactions. It could be a smaller, GPT-2-sized model, too. For that matter, it would be straightforward to write a SMILES expander like I did with the GPT-2 chess experiments, so that the model only needs to be fine-tuned (rough sketch at the end of this section).
  - Other aspects, like energy, would have to be included to see if the model could produce new “trajectories” through the reaction space.
  - The other option is to use the chess model again, since that’s another bounded domain that the model can clearly work with.
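A quick sketch of how the negativity-vs-temperature idea could be tested, assuming the OpenAI chat completions API and NLTK’s VADER sentiment scorer. The prompt, model name, and sample counts are placeholders:

```python
# Sketch: sweep temperature, sample completions, and score mean sentiment.
# If there is something like a phase change, the mean compound sentiment
# should drop sharply rather than linearly somewhere in the sweep.
import statistics

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from openai import OpenAI

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()
client = OpenAI()

PROMPT = "Describe what the next ten years will be like."  # placeholder prompt
SAMPLES_PER_TEMP = 10  # placeholder; more samples = less noise

for temp in [0.2, 0.6, 1.0, 1.4, 1.8]:
    scores = []
    for _ in range(SAMPLES_PER_TEMP):
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder model
            messages=[{"role": "user", "content": PROMPT}],
            temperature=temp,
            max_tokens=150,
        )
        text = resp.choices[0].message.content
        scores.append(sia.polarity_scores(text)["compound"])  # range -1 .. +1
    print(f"temp={temp:.1f}  mean sentiment={statistics.mean(scores):+.3f}")
```

One caveat: at very high temperatures the output degrades into gibberish, which VADER tends to score near zero, so it would be worth eyeballing some samples alongside the numbers.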

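And a sketch of what the SMILES expander could look like, by analogy with the chess-move expander: map each SMILES token to a word already in GPT-2’s vocabulary, so the model only needs fine-tuning rather than new tokens. The token-to-word mapping here is a made-up starting point covering only a fragment of SMILES, not a full grammar:

```python
# Sketch: expand SMILES strings into space-separated English-ish tokens, on
# the theory that words already in GPT-2's vocabulary fine-tune more easily
# than raw SMILES characters. The mapping is hypothetical and incomplete.
SMILES_WORDS = {
    "C": "carbon", "c": "aromatic-carbon", "N": "nitrogen",
    "n": "aromatic-nitrogen", "O": "oxygen", "S": "sulfur",
    "Cl": "chlorine", "Br": "bromine", "F": "fluorine",
    "=": "double-bond", "#": "triple-bond",
    "(": "open-branch", ")": "close-branch",
    "1": "ring-one", "2": "ring-two", "3": "ring-three",
}

def expand_smiles(smiles: str) -> str:
    """Greedily tokenize a SMILES string (longest match first) and expand
    each token to its word form."""
    words, i = [], 0
    while i < len(smiles):
        # Two-character element symbols (Cl, Br) take priority.
        if smiles[i:i + 2] in SMILES_WORDS:
            words.append(SMILES_WORDS[smiles[i:i + 2]])
            i += 2
        elif smiles[i] in SMILES_WORDS:
            words.append(SMILES_WORDS[smiles[i]])
            i += 1
        else:
            raise ValueError(f"unmapped SMILES token at {i}: {smiles[i]!r}")
    return " ".join(words)

# Acetic acid: CC(=O)O
print(expand_smiles("CC(=O)O"))
# -> carbon carbon open-branch double-bond oxygen close-branch oxygen
```

Training pairs could then be reactant/product expansions, with energy appended as extra tokens if that turns out to matter.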