Can the “hallucination” / invention / lying problem be fixed? No. These are prediction systems, and predictions made from insufficient data are effectively random. The problem is that the same property that makes them so useful – they learn culture, e.g. language, at many different levels – also makes them deeply inhuman: there is no way to tell from the syntax or tone of a sentence how correct the model is. Nothing in modelling done this way retains information about how much data underlies a given prediction.
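A quick way to see this (a minimal sketch, assuming the Hugging Face transformers package and the small GPT-2 checkpoint; the two sentences are made up for illustration): score a factual and a fabricated sentence by the average log-probability the model assigns to their tokens. Whatever the numbers come out to, the score only reflects how plausible the text sounds to the model, not how much data backs the claim.

```python
# Sketch: per-token log-probability says nothing about factual support.
# Assumes the Hugging Face transformers library and the "gpt2" checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def mean_token_logprob(text: str) -> float:
    """Average log-probability the model assigns to each token in `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per token
    return -out.loss.item()

factual = "Typhoon Jebi struck Osaka, Japan in September 2018."
invented = "Typhoon Jebi struck Lisbon, Portugal in September 2018."

for sentence in (factual, invented):
    print(f"{mean_token_logprob(sentence):6.2f}  {sentence}")
```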
SBIRs
- Slides for NIST talk
- “A final example of possible Chinese disinformation came when Typhoon Jebi hit Osaka, Japan and stranded thousands of tourists at Kansai International Airport. A fabricated story spread on social media alleging that Su Chii-cherng, director of the Taipei Economic and Cultural Representatives Office did nothing to help stranded Taiwanese citizens, while the PRC Consulate in Osaka dispatched buses to help rescue stranded Taiwan citizens. Shortly after the story began circulating, Su came under intense criticism online and ultimately hung himself, with the Ministry of Foreign Affairs claiming he left a suicide note blaming the disinformation surrounding his office’s incompetence. The Taiwan government found no evidence to support the rumors of Chinese assistance during the typhoon, ostensibly illustrating that this was another case of China-linked disinformation. However, in December 2019, two Taiwanese citizens were charged with creating and spreading the rumor online. Although China might have played a role in furthering the rumors spread, it still remains unclear and again highlights the challenge of definitive attribution.” Via Geopolitical Monitor
- Sent the above to Kyle
- 3:00 meeting with Rukan – didn’t happen
- Meeting with Protima about a generic madlibs JSON generator (rough sketch of the idea after this list)
- SimAccel review/refactor meeting
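
The actual design from the Protima meeting isn’t recorded here; the following is a minimal sketch of what a generic madlibs-style JSON generator could look like, assuming both the templates and the word banks live in one JSON spec. All the names (SPEC, slots, templates, fill_template) are hypothetical placeholders, not the agreed design.

```python
# Sketch of a generic madlibs-style generator driven entirely by a JSON spec.
# The spec defines word banks ("slots") and templates with {slot} markers;
# generation picks a template and fills each marker from the matching bank.
import json
import random
import re

SPEC = json.loads("""
{
  "slots": {
    "adjective": ["stranded", "fabricated", "generic"],
    "noun":      ["typhoon", "poster", "meeting"],
    "verb":      ["spread", "refactor", "generate"]
  },
  "templates": [
    "The {adjective} {noun} will {verb} tomorrow.",
    "Please {verb} the {adjective} {noun}."
  ]
}
""")

def fill_template(spec: dict, rng: random.Random) -> str:
    """Pick a template and fill each {slot} with a random word-bank entry."""
    template = rng.choice(spec["templates"])
    return re.sub(
        r"\\{(\\w+)\\}",
        lambda m: rng.choice(spec["slots"][m.group(1)]),
        template,
    )

if __name__ == "__main__":
    rng = random.Random(42)  # fixed seed so runs are repeatable
    for _ in range(3):
        print(fill_template(SPEC, rng))
```

Keeping the slots and templates in data rather than code is what makes it “generic”: new domains only need a new JSON file, not new Python.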
GPT Agents
- Poster for IUI. Going to play with generative features
