
Phil 8.15.2025

Tasks

  • Switch UPS
  • Bills – done
  • Chores
  • Dishes
  • Weed
  • Mow
  • Schedule power wash
  • Read Proposal 14

Peter Turchin

  • is a complexity scientist who works in the field of historical social science that he and his colleagues call Cliodynamics
  • His research interests lie at the intersection of social and cultural evolution, historical macrosociology, economic history, mathematical modeling of long-term social processes, and the construction and analysis of historical databases.
  • Currently he investigates a set of broad and interrelated questions: How do human societies evolve? Why do we see such a staggering degree of inequality in effectiveness of governance and economic performance among nations? In particular, what processes explain the evolution of ultrasociality—our capacity to cooperate in huge anonymous societies of millions?
  • Peter’s main research effort at the moment is directed at coordinating the Seshat Databank—a massive historical database of cultural evolution that is gathering and systematically organizing the vast amount of knowledge about past human societies, held collectively by thousands of historians and archaeologists.

Phil 8.14.2025

[2507.21206] Agentic Web: Weaving the Next Web with AI Agents

  • The emergence of AI agents powered by large language models (LLMs) marks a pivotal shift toward the Agentic Web, a new phase of the internet defined by autonomous, goal-driven interactions. In this paradigm, agents interact directly with one another to plan, coordinate, and execute complex tasks on behalf of users. This transition from human-driven to machine-to-machine interaction allows intent to be delegated, relieving users from routine digital operations and enabling a more interactive, automated web experience. In this paper, we present a structured framework for understanding and building the Agentic Web. We trace its evolution from the PC and Mobile Web eras and identify the core technological foundations that support this shift. Central to our framework is a conceptual model consisting of three key dimensions: intelligence, interaction, and economics. These dimensions collectively enable the capabilities of AI agents, such as retrieval, recommendation, planning, and collaboration. We analyze the architectural and infrastructural challenges involved in creating scalable agentic systems, including communication protocols, orchestration strategies, and emerging paradigms such as the Agent Attention Economy. We conclude by discussing the potential applications, societal risks, and governance issues posed by agentic systems, and outline research directions for developing open, secure, and intelligent ecosystems shaped by both human intent and autonomous agent behavior. A continuously updated collection of relevant studies for agentic web is available at: this https URL.

Tasks

  • Finish reading proposal 13 (Done! Better than 7. Much better detail) and read 14 before writing anything
  • Remove lines from under the deck
  • Start making a list of agents (Nomad Century, Gutenberg, Sentient Cell, Bomber Mafia, etc.)
  • 10:30 and 3:00 for shop pickup – everything ran late, but the saw, the welder, the grinders, and a shop vac are gone

SBIRs

  • 9:00 Standup – done
  • Work with Ron on socket code – done! Works!
  • FedEx shipment today maybe? Managed to change the delivery options. They still didn’t leave it
  • 4:00 SEG meeting – skipped for garage-emptying

Phil 8.13.2025

I need husband: AI beauty standards, fascism and the proliferation of bot-driven content

  • Generative AI is proliferating on social media at an alarming rate. Images are generated and disseminated with political agendas, particularly in right-wing spheres. These AI-generated images often depict soldiers, sad children, or interior designs. Of particular note are the catfishing-style “I need husband” posts featuring women with impossible proportions, ostensibly seeking partners. These chimeric creations are bot-driven posts designed to farm engagement, but they also hint at something more sinister. These posts reflect a mechanical view of the male gaze. However, an AI cannot truly comprehend the male gaze, and in its attempt to mimic it, it creates beings beyond understanding. This research aims to analyze the patterns in these images, explore posting methods and engagement, and examine the meaning behind the images. It culminates in an artistic piece in progress critiquing both the images and their creation and dissemination methods. By rendering these AI-generated images as classical Greek statues through Gaussian splatting and 3D printing, I aim to create a visual commentary on the intersection of AI, the male gaze and fascism. This artistic approach not only highlights the absurdity of these digital constructs but also invites viewers to critically examine AI’s role in shaping contemporary perceptions of beauty and gender roles.

‘It’s a robot war’: eastern Ukraine faces onslaught of Russian glide bombs, rockets and kamikaze drones

  • Over the past week, from 4 to 10 August, the Russian military deployed more than 1,000 aerial bombs and nearly 1,400 kamikaze drones against Ukraine. The current record is 728 drones and 13 missiles sent in a single night in July, most directed at the western city of Lutsk. By autumn, German experts predict Moscow could send 2,000 drones a day.
  • Ukrainian manufacturers have been working on a solution, too: a cheap, scalable interceptor drone that can knock out incoming Shaheds. Last month Zelenskyy toured a factory where they are being made. “A clear task has been set for the manufacturers: Ukraine must be capable of deploying at least 1,000 interceptors per day within a defined timeframe,” he told engineers and officials, saying they “protected lives”.
  • My thoughts on where this is going from 2023

Tasks

  • Read proposals 13 and 14 before writing anything
  • Remove lines from under the deck
  • Start making a list of agents (Nomad Century, Gutenberg, Sentient Cell, Bomber Mafia, etc.)

SBIRs

  • Laptop crap

GPT Agents

  • 2:30 meeting
    • Talk about all the AI in the papers I reviewed, and how most of it was good, with one “slop” paper. What we need in many cases is just an AI slop detector, and there are particular patterns in slop – local repetition, drift, etc. Maybe trajectories over sentence-level embeddings? (See the sketch after this list.)
    • Also, how the bot invasion of social media and the robot war for Ukraine are distorted reflections of each other.
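A minimal sketch of what I mean by trajectories over sentence embeddings, assuming the sentence-transformers package; the model name, the naive sentence split, and the two signals (adjacent-sentence similarity for local repetition, distance from the opening sentence for drift) are placeholder choices, not a validated detector:

```python
# Sketch of "trajectories over sentence-level embeddings" as slop signals.
# Model choice, sentence splitting, and both metrics are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

def slop_signals(text: str, model_name: str = "all-MiniLM-L6-v2") -> dict:
    sentences = [s.strip() for s in text.split(".") if s.strip()]  # naive split
    emb = SentenceTransformer(model_name).encode(sentences)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit vectors
    # Local repetition: cosine similarity between consecutive sentences
    adjacent = np.sum(emb[:-1] * emb[1:], axis=1)
    # Drift: how far the trajectory wanders from the opening sentence
    drift = 1.0 - emb @ emb[0]
    return {"mean_adjacent_sim": float(adjacent.mean()),
            "max_drift": float(drift.max())}
```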

Phil 8.12.2025

NEW PAPER!! We study how the "AI slop" era could actually boost demand for credible news. In an experiment with thousands of Süddeutsche Zeitung readers, we found that AI misinformation made people *trust news less*, but *read it more*. 🧵

Filipe Campante (@filipecampante.bsky.social) 2025-08-11T11:44:21.204Z

Here’s another taxonomy paper: [2508.01781] A comprehensive taxonomy of hallucinations in Large Language Models

  • Large language models (LLMs) have revolutionized natural language processing, yet their propensity for “hallucination”—generating plausible but factually incorrect or fabricated content—remains a critical challenge. This report provides a comprehensive taxonomy of LLM hallucinations, beginning with a formal definition and a theoretical framework that posits its inherent inevitability in computable LLMs, irrespective of architecture or training. It explores core distinctions, differentiating between intrinsic (contradicting input context) and extrinsic (inconsistent with training data or reality), as well as factuality (absolute correctness) and faithfulness (adherence to input). The report then details specific manifestations, including factual errors, contextual and logical inconsistencies, temporal disorientation, ethical violations, and task-specific hallucinations across domains like code generation and multimodal applications. It analyzes the underlying causes, categorizing them into data-related issues, model-related factors, and prompt-related influences. Furthermore, the report examines cognitive and human factors influencing hallucination perception, surveys evaluation benchmarks and metrics for detection, and outlines architectural and systemic mitigation strategies. Finally, it introduces web-based resources for monitoring LLM releases and performance. This report underscores the complex, multifaceted nature of LLM hallucinations and emphasizes that, given their theoretical inevitability, future efforts must focus on robust detection, mitigation, and continuous human oversight for responsible and reliable deployment in critical applications.

Tasks

  • Read proposal 7 – done, but I think it’s thin. Going to read 13 and 14 before writing anything
  • Remove lines from under the deck
  • Lube stove switches – done
  • Start making a list of agents (Nomad Century, Gutenberg, Sentient Cell, Bomber Mafia, etc.)
  • No Starch Press Write for Us! – sent an email into the void
    • No Starch Press has long had a reputation for publishing unique books on technology, with a focus on open source, security, hacking, programming, alternative operating systems, LEGO®, science, and math. Our titles have personality, our authors are passionate, and our books tackle topics that people care about.

SBIRs

Phil 8.11.2025

This seems like it might be important for the limits of what we want to do with LLMs. CoT doesn’t work outside of the training distribution, which I think is what we all thought, but I think there are some deep implications for models running in impossible-to-crawl environments (exploration, classified, proprietary) that they have not been trained on. Those are much more likely to be outside the training distribution.

Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens

  • Chain-of-Thought (CoT) prompting has been shown to improve Large Language Model (LLM) performance on various tasks. With this approach, LLMs appear to produce human-like reasoning steps before providing answers (a.k.a., CoT reasoning), which often leads to the perception that they engage in deliberate inferential processes. However, some initial findings suggest that CoT reasoning may be more superficial than it appears, motivating us to explore further. In this paper, we study CoT reasoning via a data distribution lens and investigate if CoT reasoning reflects a structured inductive bias learned from in-distribution data, allowing the model to conditionally…

Tasks

  • Finish review of paper 599 – DONE. That was hard
  • Download ATHENE proposals – done
  • More pix of trailer, then put it back in the driveway. Forgot to take the pix. I do think I’ll hang onto the trailer for a while longer though. I’ll need to move things into storage
  • Remove lines from under the deck – nope
  • Lube stove switches – nope
  • Start making a list of agents (Nomad Century, Gutenberg, Sentient Cell, Bomber Mafia, etc.) – nope

SBIRs

Phil 8.8.2025

For the Profs&Pints, I think I’m going to bookend the talk with a reading of Organizational Lobotomy at the beginning and War Room at the end. Need to figure out what the slides should be.

No, AI is not Making Engineers 10x as Productive

  • I think a lot of the more genuine 10x AI hype is coming from people who are simply in the honeymoon phase or haven’t sat down to actually consider what 10x improvement means mathematically. I wouldn’t be surprised to learn AI helps many engineers do certain tasks 20-50% faster, but the nature of software bottlenecks mean this doesn’t translate to a 20% productivity increase and certainly not a 10x increase.
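The bottleneck argument is basically Amdahl’s law, and the arithmetic is easy to check; the 30% share of time spent actually writing code below is an assumed illustration, not a measurement:

```python
# Amdahl's law: speeding up only a fraction of the work bounds the total gain.
def overall_speedup(fraction: float, speedup: float) -> float:
    return 1.0 / ((1.0 - fraction) + fraction / speedup)

# If coding is 30% of the job and AI makes that part 50% faster:
print(overall_speedup(0.30, 1.5))   # ~1.11x, an 11% overall gain
# Even infinitely fast coding caps the gain well short of 10x:
print(overall_speedup(0.30, 1e9))   # ~1.43x
```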

The Ordinal Society

  • As members of this society embrace ranking and measurement in their daily lives, new forms of social competition and moral judgment arise. Familiar structures of social advantage are recycled into measures of merit that produce insidious kinds of social inequality. While we obsess over order and difference—and the logic of ordinality digs deeper into our behaviors, bodies, and minds—what will hold us together? Fourcade and Healy warn that, even though algorithms and systems of rationalized calculation have inspired backlash, they are also appealing in ways that make them hard to relinquish.

Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.

  • “The story line is building all the time,” Ms. Toner said. “At that point in the story, the whole vibe is: This is a groundbreaking, earth-shattering, transcendental new kind of math. And it would be pretty lame if the answer was, ‘You need to take a break and get some sleep and talk to a friend.’”

Tasks

  • Send the updates back to Vanessa – done
  • Send email about LLC to PPL
  • Look for trade nonfiction agents – started
  • Dishes – done
  • Bills – done
  • Chores – done
  • Ride to Brookville for 1:00 lunch – leave at 11:00! – done! Fun!
  • Read paper 599 – started

Phil 8.7.2025

Watched Godzilla Minus One last night. Lots going on in that film, as opposed to nearly every other monster movie.

Temperature Scaling and Beam Search Text Generation in LLMs, for the ML-Adjacent | Towards Data Science

  • What “temperature” is, how it works, its relationship to the beam search heuristic, and where LLM output generation can still fail
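A minimal sketch of the temperature part, with made-up logits; dividing by T before the softmax sharpens (T < 1) or flattens (T > 1) the distribution:

```python
# Temperature-scaled sampling over a hypothetical next-token logit vector.
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    scaled = logits / temperature
    scaled -= scaled.max()                        # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, -1.0])          # made-up scores
for t in (0.2, 1.0, 2.0):
    picks = [sample_with_temperature(logits, t, rng) for _ in range(1000)]
    print(t, np.bincount(picks, minlength=4) / 1000)  # low T concentrates mass
```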

[2508.01552] Social Media Information Operations

  • The battlefield of information warfare has moved to online social networks, where influence campaigns operate at unprecedented speed and scale. As with any strategic domain, success requires understanding the terrain, modeling adversaries, and executing interventions. This tutorial introduces a formal optimization framework for social media information operations (IO), where the objective is to shape opinions through targeted actions. This framework is parameterized by quantities such as network structure, user opinions, and activity levels – all of which must be estimated or inferred from data. We discuss analytic tools that support this process, including centrality measures for identifying influential users, clustering algorithms for detecting community structure, and sentiment analysis for gauging public opinion. These tools either feed directly into the optimization pipeline or help defense analysts interpret the information environment. With the landscape mapped, we highlight threats such as coordinated bot networks, extremist recruitment, and viral misinformation. Countermeasures range from content-level interventions to mathematically optimized influence strategies. Finally, the emergence of generative AI transforms both offense and defense, democratizing persuasive capabilities while enabling scalable defenses. This shift calls for algorithmic innovation, policy reform, and ethical vigilance to protect the integrity of our digital public sphere.
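Two of the analytic tools the abstract mentions are easy to picture in code. A toy networkx sketch on an invented follower graph (nothing here comes from the paper):

```python
# Toy versions of two tools from the abstract: centrality to rank
# influential accounts, community detection to find group structure.
import networkx as nx

G = nx.DiGraph([("a", "b"), ("c", "b"), ("a", "c"),   # one cluster
                ("d", "e"), ("f", "e"), ("d", "f"),   # another cluster
                ("b", "d")])                          # a single bridge
influence = nx.pagerank(G)
print(sorted(influence, key=influence.get, reverse=True))  # most central first
print(nx.community.louvain_communities(G.to_undirected(), seed=0))
```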

Tasks

  • Send the updates back to Vanessa
  • Send email about LLC to PPL
  • Look for trade nonfiction agents –

SBIRS

  • 9:00 Sprint demos – do slides
  • 3:00 Sprint planning
  • 4:00 SEG Meeting (cancelled)

Phil 8.6.2025

Codeberg is a non-profit, community-led effort that provides Git hosting and other services for free and open source projects.

So I’m a reviewer for an AI conference with seven papers to review. I came across one paper early on that had some pretty egregious-sounding LLM text in the intro. You know the kind, where the sparkling adjectives augment the points in the text, sometimes hiding them behind flowery language – and the use of dashes – in ways that don’t add value to someone delving into the document.

ChatPDF provides an AI detector that is probably based on perplexity along the lines of GPTZero, and it flagged it – 100% AI generated. But since then, I’ve been trying it out on sections of text that are well written but do not have that AI “smell.” Turns out that almost every paper is using LLMs for writing, at least according to the detector.
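The perplexity idea is simple enough to sketch, assuming GPT-2 via Hugging Face transformers as the reference model (the actual models and thresholds behind these detectors aren’t public). Low perplexity under the reference model is weak evidence of machine-generated text:

```python
# Perplexity of a passage under a reference LM, the rough basis of
# detectors like GPTZero. GPT-2 and any cutoff are assumptions here.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean next-token negative log-likelihood
    return float(torch.exp(loss))

print(perplexity("The quick brown fox jumps over the lazy dog."))
```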

Can We Fix Social Media? Testing Prosocial Interventions using Generative Social Simulation

  • Social media platforms have been widely linked to societal harms, including rising polarization and the erosion of constructive debate. Can these problems be mitigated through prosocial interventions? We address this question using a novel method – generative social simulation – that embeds Large Language Models within Agent-Based Models to create socially rich synthetic platforms. We create a minimal platform where agents can post, repost, and follow others. We find that the resulting following-networks reproduce three well-documented dysfunctions: (1) partisan echo chambers; (2) concentrated influence among a small elite; and (3) the amplification of polarized voices – creating a ‘social media prism’ that distorts political discourse. We test six proposed interventions, from chronological feeds to bridging algorithms, finding only modest improvements – and in some cases, worsened outcomes. These results suggest that core dysfunctions may be rooted in the feedback between reactive engagement and network growth, raising the possibility that meaningful reform will require rethinking the foundational dynamics of platform architecture.
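A toy skeleton of that simulation loop, with a trivial agreement rule standing in for the LLM policy (the paper embeds actual LLMs in the agents; every number here is invented for illustration):

```python
# Reactive engagement -> network growth feedback, in miniature.
import random

random.seed(0)
N = 50
opinions = [random.uniform(-1, 1) for _ in range(N)]   # partisan lean
follows = {i: set() for i in range(N)}

for _ in range(2000):
    reader, author = random.sample(range(N), 2)
    post = opinions[author] + random.gauss(0, 0.1)     # the author's "post"
    if abs(post - opinions[reader]) < 0.3:             # feels agreeable
        follows[reader].add(author)                    # follow -> echo chamber

gaps = [abs(opinions[r] - opinions[a]) for r, fs in follows.items() for a in fs]
print(sum(gaps) / len(gaps))   # well below the ~0.67 expected for random pairs
```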

Tasks

  • Let’s see if we can get papers 390 and 416 done today. Finished 390. Read 416, which is cool
  • DM Karl – done
  • Ping Brett Goldstein and/or Brett V. Benson, maybe a workshop on this? Done
  • Send the updates back to Vanessa
  • Send email about LLC to PPL

Phil 8.5.2025

Did not sleep well. Someone was racing their VERY LOUD motorcycle up and down the street, and then the smoke alarms decided they needed new batteries.

The Era of A.I. Propaganda Has Arrived, and America Must Act

  • With the exponential rise of generative A.I. systems, the greatest danger is no longer a flood of invective and falsehoods on social media. Rather, it is the slow, subtle and corrosive manipulation of online communication — propaganda designed not to shock, but to slip silently into our everyday digital discussions. We have entered a new era in international influence operations, where A.I.-generated narratives shift the political landscape without drawing attention.
  • Reach out to Brett Goldstein and/or Brett V. Benson

The entities enabling scientific fraud at scale are large, resilient, and growing rapidly | PNAS

  • Science is characterized by collaboration and cooperation, but also by uncertainty, competition, and inequality. While there has always been some concern that these pressures may compel some to defect from the scientific research ethos—i.e., fail to make genuine contributions to the production of knowledge or to the training of an expert workforce—the focus has largely been on the actions of lone individuals. Recently, however, reports of coordinated scientific fraud activities have increased. Some suggest that the ease of communication provided by the internet and open-access publishing have created the conditions for the emergence of entities—paper mills (i.e., sellers of mass-produced low quality and fabricated research), brokers (i.e., conduits between producers and publishers of fraudulent research), predatory journals, who do not conduct any quality controls on submissions—that facilitate systematic scientific fraud. Here, we demonstrate through case studies that i) individuals have cooperated to publish papers that were eventually retracted in a number of journals, ii) brokers have enabled publication in targeted journals at scale, and iii), within a field of science, not all subfields are equally targeted for scientific fraud. Our results reveal some of the strategies that enable the entities promoting scientific fraud to evade interventions. Our final analysis suggests that this ability to evade interventions is enabling the number of fraudulent publications to grow at a rate far outpacing that of legitimate science.

Tasks

  • ATHENE – done
  • Read next paper (280) – done. Nice paper
  • Review next paper – done. Easy review
  • Roll in Vanessa’s edits – done with story. Done with analysis
  • Look around for acquisition editors

Phil 8.1.2025 – 8.3.2025

Tasks

  • Submit review for paper 153 – done
  • Register for ATHENE – Oh, it’s a job posting!
  • Uli’s tasks – done
  • Write a short email pitch for KA
  • Clean – done
  • Dishes – done
  • Bills – done
  • Start taking pictures of things to sell – pix of trailer
  • Mow – done
  • 2:20 Dentist – done

Big day on Saturday. Really happy with the weighted power.

I want to write about this and pancake printers as minimum-effort products backed by high tech that produce acceptable, low-cost output for people who don’t really matter: The rise of AI tools that write about you when you die

Phil 7.31.2025

Put in Skyland ride for Saturday – done

Ping Aaron B for Friday?

Something that might be cool for white hat AI

  • Zentropi instantly helps you build intelligent text labelers that are accurate, flexible, and fast. No subscription required.

SBIRS

  • 9:00 standup – done
  • 4:00 SEG meeting – done. Ron needs to look at FFTs, I need to write a python socket. No meeting next week.

GPT Agents

  • Worked on the CACM abstract yesterday and made good progress. Send pdfs of the V5 chapter to Shimei and Jimmy – done

Phil 7.30.2025

Got the FCUL VPN and webmail working!

Early ride today. It’s going to get hot fast

SBIRs

  • Set up a Vrlgrl the grlble project in Overleaf – already there!
  • Tweak the slides – done

GPT Agents

  • 2:30 meeting – went well. Need to finish the abstract by Monday and send it in
  • Send new chapters to Vanessa and update spreadsheet – done

P33

Phil 7.29.2025

OpenAI’s ChatGPT Agent casually clicks through “I am not a robot” verification test – Ars Technica

  • The evidence came from Reddit, where a user named “logkn” of the r/OpenAI community posted screenshots of the AI agent effortlessly clicking through the screening step before it would otherwise present a CAPTCHA (short for “Completely Automated Public Turing tests to tell Computers and Humans Apart”) while completing a video conversion task—narrating its own process as it went.

SBIRs

  • Day trip to NJ done!

Tasks

  • Finished rolling in corrections to vignette 2 analysis

Phil 7.28.2025

One of the things that could be interesting for WH/AI to do is to recognize questions and LLM responses, point out what could be hallucinations, and maybe(?) point to sources so that the user can look them up.

Pinged pbump about his acquisition editor. Never hurts to try.

LLM Visualization

  • A visualization and walkthrough of the LLM algorithm that backs OpenAI’s ChatGPT. Explore the algorithm down to every add & multiply, seeing the whole process in action.

Exploring Activation Patterns of Parameters in Language Models

  • Most work treats large language models as black boxes without an in-depth understanding of their internal working mechanism. To explain the internal representations of LLMs, we utilize a gradient-based metric to assess the activation level of model parameters. Based on this metric, we obtain three preliminary findings. (1) When the inputs are in the same domain, parameters in the shallow layers will be activated densely, which means a larger portion of parameters will have great impacts on the outputs. In contrast, parameters in the deep layers are activated sparsely. (2) When the inputs are across different domains, parameters in shallow layers exhibit higher similarity in the activation behavior than in deep layers. (3) In deep layers, the similarity of the distributions of activated parameters is positively correlated to the empirical data relevance. Further, we develop three validation experiments to solidify these findings. (1) Firstly, starting from the first finding, we attempt to configure different sparsities for different layers and find this method can benefit model pruning. (2) Secondly, we find that a pruned model based on one calibration set can better handle tasks related to the calibration task than those not related, which validates the second finding. (3) Thirdly, based on the STS-B and SICK benchmarks, we find that two sentences with consistent semantics tend to share similar parameter activation patterns in deep layers, which aligns with our third finding. Our work sheds light on the behavior of parameter activation in LLMs, and we hope these findings will have the potential to inspire more practical applications.
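A rough sketch of a gradient-based activation score per layer, in the spirit of the abstract; the exact metric in the paper may differ, and GPT-2 plus the cutoff are just convenient stand-ins:

```python
# Fraction of parameters per transformer block with "large" gradients,
# a crude proxy for the paper's activation level (cutoff is arbitrary).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

ids = tokenizer("The cat sat on the mat.", return_tensors="pt").input_ids
model(ids, labels=ids).loss.backward()

for i, block in enumerate(model.transformer.h):        # shallow -> deep
    grads = torch.cat([p.grad.abs().flatten() for p in block.parameters()])
    print(i, float((grads > 1e-4).float().mean()))     # "activated" share
```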

llamafile lets you distribute and run LLMs with a single file. (announcement blog post)

  • Our goal is to make open LLMs much more accessible to both developers and end users. We’re doing that by combining llama.cpp with Cosmopolitan Libc into one framework that collapses all the complexity of LLMs down to a single-file executable (called a “llamafile”) that runs locally on most computers, with no installation.

Tasks

  • Try Outlook fix – No joy, but made a bunch of screenshots and sent them off.
  • Fill out LASIGE profile info – done
  • Write up review for first paper – done
  • First pass of Abstract for ACM opinion – done
  • Delete big model from svn
  • Reschedule dentist – done

SBIRS

  • Write up notes from last Friday – done
  • Send SOW to Dr. J and tell him that we are going to ask for a NCE – done