How malicious AI swarms can threaten democracy
Advances in artificial intelligence (AI) offer the prospect of manipulating beliefs and behaviors at a population-wide level (1). Large language models (LLMs) and autonomous agents (2) let influence campaigns reach unprecedented scale and precision. Generative tools can expand propaganda output without sacrificing credibility (3) and can inexpensively create falsehoods that are rated as more human-like than those written by humans (3, 4). Techniques meant to refine AI reasoning, such as chain-of-thought prompting, can also be used to generate more convincing falsehoods. Enabled by these capabilities, a disruptive threat is emerging: swarms of collaborative, malicious AI agents. Fusing LLM reasoning with multiagent architectures (2), these systems can coordinate autonomously, infiltrate communities, and fabricate consensus efficiently. By adaptively mimicking human social dynamics, they threaten democracy. Because the resulting harms stem from design choices, commercial incentives, and gaps in governance, we prioritize interventions at multiple leverage points, favoring pragmatic mechanisms over voluntary compliance.
