
This is true! I’ve put together a spreadsheet so you can see for yourself.
SBIRs
- More FOM stuff. Maybe a meeting at 2:00?
- MORS paper with Aaron. Nope, but did finish the second draft.
GPT Agents
- 4:00 Meeting
- Went on a bit of a tangent discussing Bostrom’s paperclip conjecture and how recommender algorithms could be that, but from a human/AI source, not AGI. The problem is that at the scales where these systems might have effects, it is not clear what the objective function means, and whether we are, in fact, destroying the world by creating an algorithm that optimizes for one thing but does so in ways that are ultimately destructive to humans. A possible venue is the 5th AAAI/ACM Conference on AI, Ethics, and Society; papers are due on March 5.
Book