Carbon credits!
SBIRs
- Submit request to charge for work on SIGCHI and 5th Workshop on Human-AI Co-Creation with Generative Models – done
- 9:00 Standup – done
- AI ethics meeting? Cancelled
- OpenAI lays out plan for dealing with dangers of AI
- OpenAI’s “Preparedness” team, led by MIT AI professor Aleksander Madry, will hire AI researchers, computer scientists, national security experts, and policy professionals to monitor the tech, continually test it, and warn the company if the team believes any of its AI capabilities are becoming dangerous.
- OpenAI’s post
- There are openings, but I can’t find any job descriptions?

GPT Agents
- Write a query that pulls all examples of “with context” hallucinations and see what’s going on. Done. There aren’t many, and I suspect user problems; the errors seem to be associated with unusual formatting. Now working the unhelpful and partial responses. I think the angle is going to be that context helps a lot, but it’s not a panacea: errors leak through when the model can’t navigate the context effectively. At scale, that’s millions of responses. Mitigation is going to be a version of Zeno’s paradox.
- Started framing out the paper. Created the appropriate Overleaf doc.
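The “pull all ‘with context’ hallucinations” step could be sketched as a query like the one below. The real database, table, and column names aren’t in these notes, so the schema here (`responses` with `context_provided` and `label` columns) is entirely hypothetical, shown against an in-memory SQLite DB:

```python
import sqlite3

# Hypothetical schema and sample rows -- the actual experiment DB
# layout is not recorded in these notes; this is illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE responses (
        id INTEGER PRIMARY KEY,
        context_provided INTEGER,  -- 1 = "with context" run, 0 = without
        label TEXT                 -- 'accurate' | 'hallucination' | 'unhelpful' | 'partial'
    )
""")
conn.executemany(
    "INSERT INTO responses (context_provided, label) VALUES (?, ?)",
    [(1, "accurate"), (1, "hallucination"), (0, "hallucination"), (1, "partial")],
)

# Pull every "with context" hallucination for manual review.
rows = conn.execute(
    "SELECT id, label FROM responses "
    "WHERE context_provided = 1 AND label = 'hallucination'"
).fetchall()
print(len(rows))  # small relative to the table, matching the note above
```

The same `WHERE` clause with `label IN ('unhelpful', 'partial')` would pull the follow-up categories mentioned in the note.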
