
A good example of Bostrom Pollution (https://bsky.app/profile/justicar.xyz/post/3kn4eim4f622c). This is what I’ve been calling The Pancake Printer Economy, which I’ve been dreading since seeing this:

- As Large Language Models (LLMs) become more proficient, their misuse in large-scale viral disinformation campaigns is a growing concern. This study explores the capability of ChatGPT to generate unconditioned claims about the war in Ukraine, an event beyond its knowledge cutoff, and evaluates whether such claims can be differentiated by human readers and automated tools from human-written ones. We compare war-related claims from ClaimReview, authored by IFCN-registered fact-checkers, and similar short-form content generated by ChatGPT. We demonstrate that ChatGPT can produce realistic, target-specific disinformation cheaply, fast, and at scale, and that these claims cannot be reliably distinguished by humans or existing automated tools.
SBIRs
- Work on the white paper
- 9:00 Standup
- 10:00 SimAccel code review
- 11:30 SST dataset tagup
- 3:30 USNA
GPT Agents
- Working on the poster and expanding the discussion section in the KA paper to talk about White Hat AI, since that idea went over well at NIST
- 2:00 Meeting with Shimei to go over SIGCHI reviews. I do want to discuss the idea of constructing White Hat AIs that use an understanding of individual and group psychology to detect dangerous manipulation by AI and human actors alike, since we will soon reach the point where human and AI manipulation are indistinguishable. We also need to do this in a way that respects individual agency, with controls and opt-out/in approaches. There could easily be a “bias knob” that could be built to…
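To make the “bias knob” idea a little more concrete, here is a minimal, purely hypothetical Python sketch of what such a detector’s interface might look like. Nothing here is an actual implementation: the WhiteHatDetector class, the keyword heuristic standing in for a real psychology-informed model, the sensitivity parameter playing the role of the bias knob, and the opt-in registry are all illustrative assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "White Hat AI" manipulation screen.
# The keyword list below is a crude placeholder for a model that would
# actually draw on individual and group psychology. The `sensitivity`
# parameter is the "bias knob" discussed above, and the opt-in registry
# implements the opt-out/in control. All names here are illustrative.

MANIPULATION_CUES = {"urgent", "everyone knows", "they don't want you", "act now"}

@dataclass
class WhiteHatDetector:
    sensitivity: float = 0.5                      # the "bias knob": 0 = permissive, 1 = strict
    opted_in: set = field(default_factory=set)    # users who consented to screening

    def opt_in(self, user_id: str) -> None:
        self.opted_in.add(user_id)

    def score(self, text: str) -> float:
        """Crude stand-in: fraction of known manipulation cues present."""
        lowered = text.lower()
        hits = sum(cue in lowered for cue in MANIPULATION_CUES)
        return hits / len(MANIPULATION_CUES)

    def flag(self, user_id: str, text: str) -> bool:
        """Flag only for users who opted in, honoring their agency."""
        if user_id not in self.opted_in:
            return False
        # Higher sensitivity lowers the threshold for flagging.
        return self.score(text) >= (1.0 - self.sensitivity)

detector = WhiteHatDetector(sensitivity=0.8)
detector.opt_in("alice")
print(detector.flag("alice", "Act now! Everyone knows they don't want you to see this."))  # True
print(detector.flag("bob", "Act now!"))  # False: bob never opted in, so he is never flagged
```

The point of the knob is the tradeoff it exposes: turning sensitivity up flags more potential manipulation at the cost of more false positives, and keeping that setting, along with the opt-in itself, in the user’s hands is what preserves agency.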
