Monthly Archives: August 2025

Phil 8.12.2025

NEW PAPER!! We study how the "AI slop" era could actually boost demand for credible news. In an experiment with thousands of Süddeutsche Zeitung readers, we found that AI misinformation made people *trust news less*, but *read it more*. 🧵

Filipe Campante (@filipecampante.bsky.social) 2025-08-11T11:44:21.204Z

Here’s another taxonomy paper: [2508.01781] A comprehensive taxonomy of hallucinations in Large Language Models

  • Large language models (LLMs) have revolutionized natural language processing, yet their propensity for "hallucination"—generating plausible but factually incorrect or fabricated content—remains a critical challenge. This report provides a comprehensive taxonomy of LLM hallucinations, beginning with a formal definition and a theoretical framework that posits its inherent inevitability in computable LLMs, irrespective of architecture or training. It explores core distinctions, differentiating between intrinsic (contradicting input context) and extrinsic (inconsistent with training data or reality), as well as factuality (absolute correctness) and faithfulness (adherence to input). The report then details specific manifestations, including factual errors, contextual and logical inconsistencies, temporal disorientation, ethical violations, and task-specific hallucinations across domains like code generation and multimodal applications. It analyzes the underlying causes, categorizing them into data-related issues, model-related factors, and prompt-related influences. Furthermore, the report examines cognitive and human factors influencing hallucination perception, surveys evaluation benchmarks and metrics for detection, and outlines architectural and systemic mitigation strategies. Finally, it introduces web-based resources for monitoring LLM releases and performance. This report underscores the complex, multifaceted nature of LLM hallucinations and emphasizes that, given their theoretical inevitability, future efforts must focus on robust detection, mitigation, and continuous human oversight for responsible and reliable deployment in critical applications.
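A toy way to keep the paper's two axes straight (my own encoding, not the paper's): intrinsic/extrinsic describes what the output contradicts, while factuality/faithfulness describes which standard it is judged against.

    from dataclasses import dataclass
    from enum import Enum

    class Grounding(Enum):
        INTRINSIC = "contradicts the input context"
        EXTRINSIC = "inconsistent with training data or reality"

    class Standard(Enum):
        FACTUALITY = "judged against absolute correctness"
        FAITHFULNESS = "judged against adherence to the input"

    @dataclass
    class Hallucination:
        grounding: Grounding
        standard: Standard
        example: str

    # Illustrative instance: a summary inventing a date absent from its source.
    h = Hallucination(Grounding.INTRINSIC, Standard.FAITHFULNESS,
                      "summary states a date that never appears in the source")
    print(h)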

Tasks

  • Read proposal 7 – done, but I think it’s thin. Going to read 13 and 14 before writing anything
  • Remove lines from under the deck
  • Lube stove switches – done
  • Start making a list of agents (Nomad Century, Gutenberg, Sentient Cell, Bomber Mafia, etc.)
  • No Starch Press Write for Us! – sent an email into the void
    • No Starch Press has long had a reputation for publishing unique books on technology, with a focus on open source, security, hacking, programming, alternative operating systems, LEGO®, science, and math. Our titles have personality, our authors are passionate, and our books tackle topics that people care about.

SBIRs

Phil 8.11.2025

This seems like it might be important for the limits of what we want to do with LLMs. CoT doesn’t work outside of the training distribution, which I think is what we all thought, but there are some deep implications for models running in impossible-to-crawl environments (exploration, classified, proprietary) that they have not been trained on. Those settings are much more likely to be outside the training distribution.

Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens

  • Chain-of-Thought (CoT) prompting has been shown to improve Large Language Model (LLM) performance on various tasks. With this approach, LLMs appear to produce human-like reasoning steps before providing answers (a.k.a., CoT reasoning), which often leads to the perception that they engage in deliberate inferential processes. However, some initial findings suggest that CoT reasoning may be more superficial than it appears, motivating us to explore further. In this paper, we study CoT reasoning via a data distribution lens and investigate if CoT reasoning reflects a structured inductive bias learned from in-distribution data, allowing the model to conditionally …

Tasks

  • Finish review of paper 599 – DONE. That was hard
  • Download ATHENE proposals – done
  • More pix of trailer, then put it back in the driveway. Forgot to take the pix. I do think I’ll hang onto the trailer for a while longer though. I’ll need to move things into storage
  • Remove lines from under the deck – nope
  • Lube stove switches – nope
  • Start making a list of agents (Nomad Century, Gutenberg, Sentient Cell, Bomber Mafia, etc.) – nope

SBIRs

Phil 8.8.2025

For the Profs&Pints, I think I’m going to bookend the talk with a reading of Organizational Lobotomy at the beginning and War Room at the end. Need to figure out what the slides should be.

No, AI is not Making Engineers 10x as Productive

  • I think a lot of the more genuine 10x AI hype is coming from people who are simply in the honeymoon phase or haven’t sat down to actually consider what 10x improvement means mathematically. I wouldn’t be surprised to learn AI helps many engineers do certain tasks 20-50% faster, but the nature of software bottlenecks means this doesn’t translate to a 20% productivity increase and certainly not a 10x increase.

The Ordinal Society

  • As members of this society embrace ranking and measurement in their daily lives, new forms of social competition and moral judgment arise. Familiar structures of social advantage are recycled into measures of merit that produce insidious kinds of social inequality. While we obsess over order and difference—and the logic of ordinality digs deeper into our behaviors, bodies, and minds—what will hold us together? Fourcade and Healy warn that, even though algorithms and systems of rationalized calculation have inspired backlash, they are also appealing in ways that make them hard to relinquish.

Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.

  • “The story line is building all the time,” Ms. Toner said. “At that point in the story, the whole vibe is: This is a groundbreaking, earth-shattering, transcendental new kind of math. And it would be pretty lame if the answer was, ‘You need to take a break and get some sleep and talk to a friend.’”

Tasks

  • Send the updates back to Vanessa – done
  • Send email about LLC to PPL
  • Look for trade nonfiction agents – started
  • Dishes – done
  • Bills – done
  • Chores – done
  • Ride to Brookville for 1:00 lunch – leave at 11:00! – done! Fun!
  • Read paper 599 – started

Phil 8.7.2025

Watched Godzilla Minus One last night. Lots going on in that film, as opposed to nearly every other monster movie.

Temperature Scaling and Beam Search Text Generation in LLMs, for the ML-Adjacent | Towards Data Science

  • What “temperature” is, how it works, its relationship to the beam search heuristic, and where LLM output generation can still fail
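A minimal sketch of the mechanics, assuming nothing beyond NumPy: temperature divides the logits before the softmax, so values below 1 sharpen the distribution toward the argmax and values above 1 flatten it.

    import numpy as np

    def sample_with_temperature(logits, temperature=1.0, rng=None):
        """Sample one token id from logits scaled by 1/temperature."""
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / temperature
        # Softmax with max subtraction for numerical stability.
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    logits = [2.0, 1.0, 0.1]  # toy three-token vocabulary
    print(sample_with_temperature(logits, temperature=0.5))  # usually token 0
    print(sample_with_temperature(logits, temperature=2.0))  # much more varied

Beam search is the deterministic counterpart: instead of sampling one token at a time, it keeps the k highest-scoring partial sequences at each step.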

[2508.01552] Social Media Information Operations

  • The battlefield of information warfare has moved to online social networks, where influence campaigns operate at unprecedented speed and scale. As with any strategic domain, success requires understanding the terrain, modeling adversaries, and executing interventions. This tutorial introduces a formal optimization framework for social media information operations (IO), where the objective is to shape opinions through targeted actions. This framework is parameterized by quantities such as network structure, user opinions, and activity levels – all of which must be estimated or inferred from data. We discuss analytic tools that support this process, including centrality measures for identifying influential users, clustering algorithms for detecting community structure, and sentiment analysis for gauging public opinion. These tools either feed directly into the optimization pipeline or help defense analysts interpret the information environment. With the landscape mapped, we highlight threats such as coordinated bot networks, extremist recruitment, and viral misinformation. Countermeasures range from content-level interventions to mathematically optimized influence strategies. Finally, the emergence of generative AI transforms both offense and defense, democratizing persuasive capabilities while enabling scalable defenses. This shift calls for algorithmic innovation, policy reform, and ethical vigilance to protect the integrity of our digital public sphere.
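A minimal sketch of the analytic side the tutorial describes, assuming networkx and a toy follower graph (the graph and choices here are illustrative, not from the paper): centrality flags influential users, and modularity-based clustering surfaces community structure.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Toy follower graph: an edge u -> v means u follows v.
    G = nx.DiGraph([
        ("alice", "carol"), ("bob", "carol"), ("dave", "carol"),
        ("carol", "erin"), ("frank", "erin"), ("erin", "carol"),
    ])

    # Centrality to identify influential users; PageRank is one common choice.
    influence = nx.pagerank(G)
    print(sorted(influence.items(), key=lambda kv: -kv[1])[:3])

    # Community structure, detected on the undirected projection of the graph.
    communities = greedy_modularity_communities(G.to_undirected())
    print([sorted(c) for c in communities])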

Tasks

  • Send the updates back to Vanessa
  • Send email about LLC to PPL
  • Look for trade nonfiction agents

SBIRs

  • 9:00 Sprint demos – do slides
  • 3:00 Sprint planning
  • 4:00 SEG Meeting (cancelled)

Phil 8.6.2025

Codeberg is a non-profit, community-led effort that provides Git hosting and other services for free and open source projects.

So I’m a reviewer for an AI conference with seven papers to review. I came across one paper early on that had some pretty egregious-sounding LLM text in the intro. You know the kind, where sparkling adjectives augment the points in the text, sometimes hiding them behind flowery phrasing – and the use of dashes – in ways that don’t add value to someone delving into the document.

ChatPDF provides an AI detector that is probably based on perplexity, along the lines of GPTZero, and it flagged the paper: 100% AI-generated. But since then, I’ve been trying the detector on sections of text that are well written but do not have that AI “smell.” It turns out that almost every paper is using LLMs for writing, at least according to the detector.
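For flavor, here is a minimal sketch of the perplexity signal such detectors are generally believed to use, assuming Hugging Face transformers and GPT-2 (the actual internals of ChatPDF’s detector and GPTZero are not public):

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    # Small causal LM as the scoring model; this is only the general idea,
    # not ChatPDF's or GPTZero's actual method.
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

    def perplexity(text: str) -> float:
        """exp of the mean per-token negative log-likelihood under GPT-2."""
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            loss = model(enc.input_ids, labels=enc.input_ids).loss
        return torch.exp(loss).item()

    # Lower perplexity = more predictable text, which these tools tend to
    # read as evidence of machine generation.
    print(perplexity("The quick brown fox jumps over the lazy dog."))

One caveat: polished human technical prose can also score low, which may be part of why the detector flags nearly everything.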

Can We Fix Social Media? Testing Prosocial Interventions using Generative Social Simulation

  • Social media platforms have been widely linked to societal harms, including rising polarization and the erosion of constructive debate. Can these problems be mitigated through prosocial interventions? We address this question using a novel method – generative social simulation – that embeds Large Language Models within Agent-Based Models to create socially rich synthetic platforms. We create a minimal platform where agents can post, repost, and follow others. We find that the resulting following-networks reproduce three well-documented dysfunctions: (1) partisan echo chambers; (2) concentrated influence among a small elite; and (3) the amplification of polarized voices – creating a ‘social media prism’ that distorts political discourse. We test six proposed interventions, from chronological feeds to bridging algorithms, finding only modest improvements – and in some cases, worsened outcomes. These results suggest that core dysfunctions may be rooted in the feedback between reactive engagement and network growth, raising the possibility that meaningful reform will require rethinking the foundational dynamics of platform architecture.
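A minimal sketch of the loop such a simulation implies (my own stand-in, not the paper’s code), with llm_generate as a hypothetical placeholder for a real model call:

    import random

    def llm_generate(persona: str, context: str = "") -> str:
        # Hypothetical stand-in: a real run would prompt an LLM with the
        # agent's persona and the post it is reacting to.
        return f"[{persona} post re: {context[:30]}]"

    class Agent:
        def __init__(self, name: str, persona: str):
            self.name, self.persona = name, persona
            self.following = set()
            self.feed = []

        def step(self, agents):
            if self.feed:
                # Reactive engagement: reply to the most-reposted item seen.
                target = max(self.feed, key=lambda p: p["reposts"])
                target["reposts"] += 1
                self.following.add(target["author"])  # network-growth feedback
                text = llm_generate(self.persona, target["text"])
            else:
                text = llm_generate(self.persona)
            post = {"author": self.name, "text": text, "reposts": 0}
            # Deliver to followers, plus a little random discovery.
            for other in agents:
                if self.name in other.following or random.random() < 0.05:
                    other.feed.append(post)

    agents = [Agent(f"a{i}", p) for i, p in enumerate(["left", "right"] * 10)]
    for _ in range(50):
        for agent in agents:
            agent.step(agents)

Even this toy version contains the feedback the authors point to: engagement drives follows, and follows concentrate future attention.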

Tasks

  • Let’s see if we can get papers 390 and 416 done today. Finished 390. Read 416, which is cool
  • DM Karl – done
  • Ping Brett Goldstein and/or Brett V. Benson, maybe a workshop on this? Done
  • Send the updates back to Vanessa
  • Send email about LLC to PPL

Phil 8.5.2025

Did not sleep well. Someone was racing their VERY LOUD motorcycle up and down the street, and then the smoke alarms decided they needed new batteries.

The Era of A.I. Propaganda Has Arrived, and America Must Act

  • With the exponential rise of generative A.I. systems, the greatest danger is no longer a flood of invective and falsehoods on social media. Rather, it is the slow, subtle and corrosive manipulation of online communication — propaganda designed not to shock, but to slip silently into our everyday digital discussions. We have entered a new era in international influence operations, where A.I.-generated narratives shift the political landscape without drawing attention.
  • Reach out to Brett Goldstein and/or Brett V. Benson

The entities enabling scientific fraud at scale are large, resilient, and growing rapidly | PNAS

  • Science is characterized by collaboration and cooperation, but also by uncertainty, competition, and inequality. While there has always been some concern that these pressures may compel some to defect from the scientific research ethos—i.e., fail to make genuine contributions to the production of knowledge or to the training of an expert workforce—the focus has largely been on the actions of lone individuals. Recently, however, reports of coordinated scientific fraud activities have increased. Some suggest that the ease of communication provided by the internet and open-access publishing have created the conditions for the emergence of entities—paper mills (i.e., sellers of mass-produced low quality and fabricated research), brokers (i.e., conduits between producers and publishers of fraudulent research), predatory journals, who do not conduct any quality controls on submissions—that facilitate systematic scientific fraud. Here, we demonstrate through case studies that i) individuals have cooperated to publish papers that were eventually retracted in a number of journals, ii) brokers have enabled publication in targeted journals at scale, and iii), within a field of science, not all subfields are equally targeted for scientific fraud. Our results reveal some of the strategies that enable the entities promoting scientific fraud to evade interventions. Our final analysis suggests that this ability to evade interventions is enabling the number of fraudulent publications to grow at a rate far outpacing that of legitimate science.

Tasks

  • ATHENE – done
  • Read next paper (280) – done. Nice paper
  • Review next paper – done. Easy review
  • Roll in Vanessa’s edits – done with story. Done with analysis
  • Look around for acquisition editors

Phil 8.1.2025 – 8.3.2025

Tasks

  • Submit review for paper 153 – done
  • Register for ATHENE – Oh, it’s a job posting!
  • Uli’s tasks – done
  • Write a short email pitch for KA
  • Clean – done
  • Dishes – done
  • Bills – done
  • Start taking pictures of things to sell – pix of trailer
  • Mow – done
  • 2:20 Dentist – done

Big day on Saturday. Really happy with the weighted power.

I want to write about this and pancake printers as minimum-effort products, backed by high tech, that produce acceptable, low-cost goods for people who don’t really matter: The rise of AI tools that write about you when you die