Woke up nice and relaxed after a good night’s sleep. The night before a presentation is not easy for me.
I’ve been thinking about this slide from the talk yesterday:

I think that AI researchers are in a place that nuclear researchers were in in the 1930s. There is this amazing technology that is going to change the world, but no one is sure how. Then the world engages in a total war that depends on technology, and the Allies are not doing well. Some of the researchers think that a nuclear weapon might turn the tide. It works, but in retrospect it was too much, too late. But for ten years the chance of a broad nuclear war was high, and the bomb was taken as just an extension of current developments – a bigger bomb. It took decades for that viewpoint to shift. AI weapons are probably here already, and there are nations and organizations working out the best ways to use them – as an extension of current “active measures” strategies and tactics. And like the atomic bomb, we really have no idea where this will go.
SBIRs
- Read a bunch of stuff for upcoming meetings
- Fire up the NNM instance and see if I can remember how to use it. Add an instruction section to the notebook – got sidetracked into doing a detailed read of a BAA
- 9:00 standup
- 11:30 AI Ethics training discussion with Hall Research. They are as legit as it gets. Let’s see what kind of training they put together, but for now I give a ringing endorsement
- 3:30 meeting on the Phase IIe. We have three weeks to respond, but it doesn’t seem like they are asking for much? Very confusing. Maybe because it’s an extension?
