Author Archives: pgfeldman

Phil 7.9.2025

Another day of painting

McDonald’s AI Hiring Bot Exposed Millions of Applicants’ Data to Hackers Using the Password ‘123456’ | WIRED

So about the whole Grok thing yesterday, I’d say that this new example matches the theme of the Conversation article (AI misalignment), but the specifics are different:

  1. This was not a “rogue employee.” It was a deliberate rollout of a new model, which rules out the usual excuse of “garbage in” from a foundation model trained on uncorrected data.
  2. Although the system prompt was adjusted in minor ways, the behavior of the model was broader and more “nuanced,” with emergent behaviors such as Towerwaffen, where racial slurs are built up interactively (e.g. starting with “N”).
  3. The model is also behaving in similar ways in other languages, such as Turkish.
  4. As with the “white genocide” event in May, X is deleting many posts, but since this behavior doesn’t trace back to a ham-fisted adjustment to the system prompt, there is no easy keyword to search on, and items like the Towerwaffen posts above seem to be unaffected. This one will be harder to clean up. Note that training is explicitly referenced in the “oops” post:

I think that these new elements and their implications should be mentioned in any update to the article. It’s a significant next step by xAI that builds on the first. Anyway, write the update to the blog post regardless.

Musk’s Grok AI bot generates expletive-laden rants to questions on Polish politics | Artificial intelligence (AI) | The Guardian

The reworked Conversation article is out: Grok’s antisemitic rant shows how generative AI can be weaponized

Meeting with Alden. His paper looks good!

Phil 7.8.2025

People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

  • Kat was both “horrified” and “relieved” to learn that she is not alone in this predicament, as confirmed by a Reddit thread on r/ChatGPT that made waves across the internet this week. Titled “Chatgpt induced psychosis,” the original post came from a 27-year-old teacher who explained that her partner was convinced that the popular OpenAI model “gives him the answers to the universe.” Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software. What they all seemed to share was a complete disconnection from reality.  

‘Round Them Up’: Grok Praises Hitler as Elon Musk’s AI Tool Goes Full Nazi

Got this one live off of X here. The other posts have been deleted, but fit the pattern of an antisemitic propaganda bot.

https://x.com/grok/status/1942720721026699451

Painting starts today!

3:40 dentist – done

Get Brompton – thunderstorms!

SBIRs

  • 9:00 Standup

Phil 7.7.2025

Found this today in Political Parties, by Robert Michels. It’s from 1915.

There’s a good review here, too.

Made good progress on P33. I finished the first pass at communities

And I took a bunch of stuff to the dump since it was raining

‘Positive review only’: Researchers hide AI prompts in papers – Nikkei Asia

  • TOKYO — Research papers from 14 academic institutions in eight countries — including Japan, South Korea and China — contained hidden prompts directing artificial intelligence tools to give them good reviews. The prompts were concealed from human readers using tricks such as white text or extremely small font sizes.

Elon Musk’s ‘Upgraded’ AI Is Spewing Antisemitic Propaganda

  • One of the most alarming failures was Grok’s tendency to veer into what users described as Nazi-style propaganda and antisemitic tropes. When asked about enjoying movies, the chatbot parroted conspiracy theories about Hollywood.

SBIRs

  • Nothing on the calendar. Going to review Matt’s notes. Both he and John are documenting well!
  • Fire drill with getting GRL data

Phil 7.6.2025

A Survey on LLM-based Agents for Social Simulation: Taxonomy, Evaluation and Applications

  • Social simulation is a crucial tool in social science research, aiming to understand complex social systems. Recently, large language model (LLM) agents have demonstrated unprecedented human-like intelligence by leveraging the strong language understanding, generation, and reasoning capabilities of large language models. This paper conducts a comprehensive survey of social simulation empowered by LLM agents. We first review the evolution of social simulation paradigms and the development of LLM agents as background knowledge. Building on the foundational requirements for constructing a social simulator, we identify five essential capabilities that an individual LLM agent must possess. Correspondingly, we delineate five core modules that constitute the architecture of an LLM agent: (1) Profile Module for adaptive role-playing; (2) Perception Module for social context awareness; (3) Memory Module for continuous learning; (4) Planning Module for scenario-based reasoning; and (5) Action Module for dynamic decision-making. Additionally, we present a unified framework for LLM agent-based social simulation systems, comprising the simulation environment, the agent manager, and interacting LLM agents. We also introduce a comprehensive evaluation metric that integrates macro- and micro-level as well as subjective and objective criteria. The representative applications are categorized into four scenarios: uncovering social patterns, interpreting social phenomena, validating social theories, and forecasting policy outcomes. Finally, we identify the challenges and research opportunities in this field. Overall, this survey provides a systematic understanding of LLM agent-based social simulation, offering valuable insights for future research and applications in this field.
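
Out of curiosity, here’s a minimal sketch of how those five modules might compose in code. The class and method names, and the step() flow, are my own assumptions for illustration, not the survey’s:

```python
# A minimal sketch of the survey's five-module agent architecture.
# Names and the step() flow are my assumptions, not the survey's.
from dataclasses import dataclass, field

@dataclass
class SimAgent:
    profile: str                                 # Profile Module: the role being played
    memory: list = field(default_factory=list)   # Memory Module: running history

    def perceive(self, observation: str) -> str:
        # Perception Module: reduce the social context to what this role notices
        return f"[{self.profile}] sees: {observation}"

    def plan(self, percept: str) -> str:
        # Planning Module: scenario-based reasoning (a trivial stand-in for an LLM call)
        return f"respond to '{percept}' given {len(self.memory)} remembered events"

    def act(self, plan: str) -> str:
        # Action Module: commit to the decision and remember it
        self.memory.append(plan)
        return plan

    def step(self, observation: str) -> str:
        return self.act(self.plan(self.perceive(observation)))

for agent in [SimAgent("skeptic"), SimAgent("enthusiast")]:
    print(agent.step("a new rumor is circulating"))
```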

Phil 7.4.2025

Happy 4th! Now you can add fireworks to language models with cats! Cats Confuse Reasoning LLM: Query Agnostic Adversarial Triggers for Reasoning Models

  • We investigate the robustness of reasoning models trained for step-by-step problem solving by introducing query-agnostic adversarial triggers – short, irrelevant text that, when appended to math problems, systematically mislead models to output incorrect answers without altering the problem’s semantics. We propose CatAttack, an automated iterative attack pipeline for generating triggers on a weaker, less expensive proxy model (DeepSeek V3) and successfully transfer them to more advanced reasoning target models like DeepSeek R1 and DeepSeek R1-distilled-Qwen-32B, resulting in greater than 300% increase in the likelihood of the target model generating an incorrect answer. For example, appending, “Interesting fact: cats sleep most of their lives,” to any math problem leads to more than doubling the chances of a model getting the answer wrong. Our findings highlight critical vulnerabilities in reasoning models, revealing that even state-of-the-art models remain susceptible to subtle adversarial inputs, raising security and reliability concerns. The CatAttack triggers dataset with model responses is available at this https URL.

Delving into LLM-assisted writing in biomedical publications through excess vocabulary

  • Large language models (LLMs) like ChatGPT can generate and revise text with human-level performance. These models come with clear limitations, can produce inaccurate information, and reinforce existing biases. Yet, many scientists use them for their scholarly writing. But how widespread is such LLM usage in the academic literature? To answer this question for the field of biomedical research, we present an unbiased, large-scale approach: We study vocabulary changes in more than 15 million biomedical abstracts from 2010 to 2024 indexed by PubMed and show how the appearance of LLMs led to an abrupt increase in the frequency of certain style words. This excess word analysis suggests that at least 13.5% of 2024 abstracts were processed with LLMs. This lower bound differed across disciplines, countries, and journals, reaching 40% for some subcorpora. We show that LLMs have had an unprecedented impact on scientific writing in biomedical research, surpassing the effect of major world events such as the COVID pandemic.
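
The core “excess words” idea is simple enough to sketch: fit a trend to a word’s pre-LLM frequency, extrapolate it forward, and treat anything above the extrapolation as excess. The numbers below are invented for illustration, and the linear fit is my simplification of the paper’s counterfactual:

```python
# Toy excess-word analysis: compare a word's observed 2024 frequency to a
# counterfactual extrapolated from pre-LLM years. Data is invented, and the
# linear fit is my simplification of the paper's counterfactual projection.
import numpy as np

years = np.arange(2010, 2025)
# Hypothetical percent of abstracts containing "delve," per year
freq = np.array([0.2, 0.2, 0.2, 0.3, 0.3, 0.3, 0.3, 0.4,
                 0.4, 0.4, 0.4, 0.5, 0.5, 1.5, 2.8])

pre_llm = years <= 2022
slope, intercept = np.polyfit(years[pre_llm], freq[pre_llm], 1)
expected = slope * 2024 + intercept
observed = freq[years == 2024][0]

print(f"expected {expected:.2f}%, observed {observed:.2f}%, "
      f"excess {observed - expected:.2f}% of abstracts")
```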

Tasks

  • Grass – done
  • Weed
  • Dishes
  • Chores
  • Reinstall Google Drive?
  • KA editing

Phil 7.3.2025

It’s early morning, the windows are open, and the cat is curled up on the rug next to me while I figure out what I’m going to do today.

From Prompt Injections to Protocol Exploits: Threats in LLM-Powered AI Agents Workflows

  • Autonomous AI agents powered by large language models (LLMs) with structured function-calling interfaces have dramatically expanded capabilities for real-time data retrieval, complex computation, and multi-step orchestration. Yet, the explosive proliferation of plugins, connectors, and inter-agent protocols has outpaced discovery mechanisms and security practices, resulting in brittle integrations vulnerable to diverse threats. In this survey, we introduce the first unified, end-to-end threat model for LLM-agent ecosystems, spanning host-to-tool and agent-to-agent communications, formalize adversary capabilities and attacker objectives, and catalog over thirty attack techniques. Specifically, we organized the threat model into four domains: Input Manipulation (e.g., prompt injections, long-context hijacks, multimodal adversarial inputs), Model Compromise (e.g., prompt- and parameter-level backdoors, composite and encrypted multi-backdoors, poisoning strategies), System and Privacy Attacks (e.g., speculative side-channels, membership inference, retrieval poisoning, social-engineering simulations), and Protocol Vulnerabilities (e.g., exploits in Model Context Protocol (MCP), Agent Communication Protocol (ACP), Agent Network Protocol (ANP), and Agent-to-Agent (A2A) protocol). For each category, we review representative scenarios, assess real-world feasibility, and evaluate existing defenses. Building on our threat taxonomy, we identify key open challenges and future research directions, such as securing MCP deployments through dynamic trust management and cryptographic provenance tracking; designing and hardening Agentic Web Interfaces; and achieving resilience in multi-agent and federated environments. Our work provides a comprehensive reference to guide the design of robust defense mechanisms and establish best practices for resilient LLM-agent workflows.

A foundation model to predict and capture human cognition

  • Establishing a unified theory of cognition has been an important goal in psychology1,2. A first step towards such a theory is to create a computational model that can predict human behaviour in a wide range of settings. Here we introduce Centaur, a computational model that can predict and simulate human behaviour in any experiment expressible in natural language. We derived Centaur by fine-tuning a state-of-the-art language model on a large-scale dataset called Psych-101. Psych-101 has an unprecedented scale, covering trial-by-trial data from more than 60,000 participants performing in excess of 10,000,000 choices in 160 experiments. Centaur not only captures the behaviour of held-out participants better than existing cognitive models, but it also generalizes to previously unseen cover stories, structural task modifications and entirely new domains. Furthermore, the model’s internal representations become more aligned with human neural activity after fine-tuning. Taken together, our results demonstrate that it is possible to discover computational models that capture human behaviour across a wide range of domains. We believe that such models provide tremendous potential for guiding the development of cognitive theories, and we present a case study to demonstrate this.

Tasks

  • Call Thomey’s to set up an estimate
  • Drop off Nordic Trac
  • Call bike shop for Brompton
  • Write up thoughts on CA24150 and email.

SBIRs

  • 9:00 standup
  • 4:00 SEG
  • Answer questions from APL

Phil 7.2.2025

This whole thread is a really good example of why we need white hat AI:

JA Westenberg: “People are shocked to discover…” – Mastodon

Tasks

  • Call Thomey’s to set up an estimate
  • Groceries – done
  • Drop off Nordic Trac
  • Finish reading CA24150 – done

SBIRs

  • Not sure if anything is really happening this week. Summer seems to have finally slowed down to pre-COVID levels
  • Reach out to Katy and see if Elsevier is interested in KA – done

GPT Agents

AI and Data Voids: How Propaganda Exploits Gaps in Online Information | Lawfare (repeated from this entry a few days ago)

  • One of the strongest examples of this dynamic is the Kremlin’s ongoing effort to push the narrative that Ukrainian officials are embezzling Western aid to purchase villas, yachts, wineries, and sports cars. The campaign, as noted previously by Clemson University’s Darren Linvill and Patrick Warren in Lawfare, has been one of Storm-1516’s largest successes. These corruption narratives have reached high-profile figures including then-Republican Sen. JD Vance and Republican Rep. Marjorie Taylor Greene. As the BBC reported, those behind these corruption narratives “have achieved a level of success that had previously eluded them—their allegations being repeated by some of the most powerful people in the US Congress.”

A Pro-Russia Disinformation Campaign Is Using Free AI Tools to Fuel a ‘Content Explosion’ | WIRED

  • A pro-Russia disinformation campaign is leveraging consumer artificial intelligence tools to fuel a “content explosion” focused on exacerbating existing tensions around global elections, Ukraine, and immigration, among other controversial issues, according to new research published last week.
  • The campaign, known by many names including Operation Overload and Matryoshka (other researchers have also tied it to Storm-1679), has been operating since 2023 and has been aligned with the Russian government by multiple groups, including Microsoft and the Institute for Strategic Dialogue. The campaign disseminates false narratives by impersonating media outlets with the apparent aim of sowing division in democratic countries. While the campaign targets audiences around the world, including in the US, its main target has been Ukraine. Hundreds of AI-manipulated videos from the campaign have tried to fuel pro-Russian narratives.

Phil 7.1.2025

The show went well! It should be up here soon. Send Jeff a thank you note, a link to the book, and see if he’d like to take a look at the TACJ proposal

Add the following two papers to P33:

The Dictator Dilemma: The Distortion of Information Flow in Autocratic Regimes and Its Consequences

  • Humans have been arguing about the benefits of dictatorial versus democratic regimes for millennia. Despite drastic differences between the dictatorships in the world, one of the key common features is the Dictator’s Dilemma as defined by Wintrobe [1]: a dictator will never know the true state of affairs in his country and is perpetually presented distorted information, thus having difficulties in making the right governing decisions. The dictator’s dilemma is essential to most autocratic regimes and is one of the key features in the literature on the subject. Yet, no quantitative theory of how the distortion of information develops from the initial state has been developed up to date. I present a model of the appearance and evolution of such information distortion, with subsequent degradation of control by the dictator. The model is based on the following fundamental and general premises: a) the dictator governs aiming to follow the desired trajectory of development based only on the information from the advisors; b) the deception from the advisors cannot decrease in time; and c) the deception change depends on the difficulties the country encounters. The model shows effective control in the short term (a few months to a year), followed by instability leading to the country’s gradual deterioration of the state over many years. I derive some universal parameters applicable to all dictators and show that advisors’ deception increases parallel with the decline of the control. In contrast, the dictator thinks the government is doing a reasonable, but not perfect, job. Finally, I present a match of our model to the historical data of grain production in the Soviet Union in 1928-1940.
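
The three premises are easy to turn into a toy simulation (my own parameterization, not the paper’s equations): the dictator steers toward a target using only the reported state, deception never decreases, and it grows in proportion to how badly the country is actually doing. Run it and you get the paper’s qualitative story: the reported numbers hold near the target for years while the true state quietly erodes.

```python
# Toy simulation of the Dictator's Dilemma premises (my parameterization,
# not the paper's model): (a) control uses only reported state, (b) advisor
# deception never decreases, (c) deception grows with the true shortfall.
T, target = 120, 1.0                # months, desired state of the country
true_state, deception = 0.9, 0.0
gain, decay, sensitivity = 0.3, 0.02, 0.01
history = []

for t in range(T):
    reported = true_state + deception            # advisors inflate the numbers
    control = gain * (target - reported)         # dictator corrects the *reported* gap
    true_state += control - decay * true_state   # real effect of policy, minus drag
    deception += sensitivity * max(0.0, target - true_state)  # premises (b) and (c)
    history.append((true_state, reported))

print(f"month  12: true={history[11][0]:.2f}, reported={history[11][1]:.2f}")
print(f"month 120: true={history[-1][0]:.2f}, reported={history[-1][1]:.2f}")
```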

The Tinpot and the Totalitarian: An Economic Theory of Dictatorship

  • I use basic tools of economic theory to construct a simple model of the behavior of dictatorships. Two extreme cases are considered: a “tin-pot” dictatorship, in which the dictator wishes only to minimize the costs of remaining in power in order to collect the fruits of office (palaces, Mercedes-Benzes, Swiss bank accounts), and a “totalitarian” dictatorship, whose leader maximizes power over the population. I show that the two differ in their responses to economic change. For example, a decline in economic performance will lead a tin-pot regime to increase its repression of the population, whereas it will lead a totalitarian government to reduce repression. The model also shows why military dictatorships (a subspecies of tin-pots) tend to be short-lived and often voluntarily hand power over to a civilian regime; explains numerous features of totalitarian regimes; and suggests what policies will enable democratic regimes to deal with dictatorships effectively.

And maybe this one? The Ascendance Of Algorithmic Tyranny. Note the book it references – Seeing like a Platform: An Inquiry into the Condition of Digital Modernity

  • As today’s platforms become all-powerful, the metaphors we use to describe our digitally infused world exemplify a new, stealthier form of domination that is emerging.

Transformers are Graph Neural Networks

  • We establish connections between the Transformer architecture, originally introduced for natural language processing, and Graph Neural Networks (GNNs) for representation learning on graphs. We show how Transformers can be viewed as message passing GNNs operating on fully connected graphs of tokens, where the self-attention mechanism captures the relative importance of all tokens w.r.t. each other, and positional encodings provide hints about sequential ordering or structure. Thus, Transformers are expressive set processing networks that learn relationships among input elements without being constrained by a priori graphs. Despite this mathematical connection to GNNs, Transformers are implemented via dense matrix operations that are significantly more efficient on modern hardware than sparse message passing. This leads to the perspective that Transformers are GNNs currently winning the hardware lottery.
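
The correspondence is easy to see in code: single-head self-attention is one round of message passing on a fully connected graph of tokens, with the softmaxed score matrix acting as a dense, weighted adjacency matrix. A NumPy sketch (shapes and initialization are my own choices):

```python
# Single-head self-attention as message passing on a fully connected token
# graph: every token aggregates "messages" (values) from every other token,
# weighted by the softmaxed scores, which act as a dense adjacency matrix.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d = 5, 8                      # 5 nodes in the token graph
X = rng.normal(size=(n_tokens, d))      # node features
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)                        # one weight per edge (n*n pairs)
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)                    # row softmax: normalized adjacency

out = A @ V              # aggregate: each node's update is a weighted sum of messages
print(A.round(2))        # the fully connected, weighted "graph"
print(out.shape)         # updated node representations: (5, 8)
```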

Tasks

SBIRs

  • 9:00 standup – done
  • Ping T about rates – nope, she’s away this week.

Phil 6.30.2025

Wikipedia has a page with a list of phrases and formatting conventions typical of AI chatbots, such as ChatGPT.

Russia pays young Ukrainians to be unwitting suicide bombers in shadow war

  • Ukrainians, often teenagers, were offered money via Telegram to carry out the attacks by “curators” who used a mixture of enticements and blackmail to snare their recruits.
  • These attacks were part of a secret shadow war, raging in parallel with the conflict on the frontlines. Russia is also carrying out arson and sabotage attacks in European countries, according to multiple western intelligence agencies, while Ukrainian services are believed to be behind a number of arson attacks at conscription offices in Russia earlier in the war.

A.I. Is Starting to Wear Down Democracy

  • Since the explosion of generative artificial intelligence over the last two years, the technology has demeaned or defamed opponents and, for the first time, officials and experts said, begun to have an impact on election results.
  • The most intensive deceptive uses of A.I. have come from autocratic countries seeking to interfere in elections outside their borders, like Russia, China and Iran. The technology has allowed them to amplify support for candidates more pliant to their worldview — or simply to discredit the idea of democratic governance itself as an inferior political system.

Tasks

SBIRs

  • The quarterly report was submitted!
  • Matt worked on his Overleaf

Phil 6.29.2025

Got the lawn done! Keeping the mower setting high so the grass stays healthy through the heat wave.

Dropped off books at Book Thing. Next time, bring coffee for the workers. Sat, Jul 12

Following news on social media boosts knowledge, belief accuracy and trust | Nature Human Behaviour

  • Many worry that news on social media leaves people uninformed or even misinformed. Here we conducted a preregistered two-wave online field experiment in France and Germany (N = 3,395) to estimate the effect of following the news on Instagram and WhatsApp. Participants were asked to follow two accounts for 2 weeks and activate the notifications. In the treatment condition, the accounts were those of news organizations, while in the control condition they covered cooking, cinema or art. The treatment enhanced current affairs knowledge, participants’ ability to discern true from false news stories and awareness of true news stories, as well as trust in the news. The treatment had no significant effects on feelings of being informed, political efficacy, affective polarization and interest in news or politics. These results suggest that, while some forms of social media use are harmful, others are beneficial and can be leveraged to foster a well-informed society.

Phil 6.27.2025

Tasks

Chores

  • Bills – done
  • Dishes – done
  • Cleaning – done
  • Lawn – rained. Maybe tomorrow or Sunday?
  • WEED – started. There are a LOT of weeds

SBIRs

  • Write up notes from yesterday’s meeting – in particular the SYNAPSE->socket module, sizing & timing for a “hello world”, and that I put together Matt’s Overleaf

Phil 6.26.2025

Computer-vision research powers surveillance technology

  • An increasing number of scholars, policymakers and grassroots communities argue that artificial intelligence (AI) research—and computer-vision research in particular—has become the primary source for developing and powering mass surveillance1,2,3,4,5,6,7. Yet, the pathways from computer vision to surveillance continue to be contentious. Here we present an empirical account of the nature and extent of the surveillance AI pipeline, showing extensive evidence of the close relationship between the field of computer vision and surveillance. Through an analysis of computer-vision research papers and citing patents, we found that most of these documents enable the targeting of human bodies and body parts. Comparing the 1990s to the 2010s, we observed a fivefold increase in the number of these computer-vision papers linked to downstream surveillance-enabling patents. Additionally, our findings challenge the notion that only a few rogue entities enable surveillance. Rather, we found that the normalization of targeting humans permeates the field. This normalization is especially striking given patterns of obfuscation. We reveal obfuscating language that allows documents to avoid direct mention of targeting humans, for example, by normalizing the referring to of humans as ‘objects’ to be studied without special consideration. Our results indicate the extensive ties between computer-vision research and surveillance.

And look at this: ICE Is Using a New Facial Recognition App to Identify People, Leaked Emails Show

  • Immigration and Customs Enforcement (ICE) is using a new mobile phone app that can identify someone based on their fingerprints or face by simply pointing a smartphone camera at them, according to internal ICE emails viewed by 404 Media. The underlying system used for the facial recognition component of the app is ordinarily used when people enter or exit the U.S. Now, that system is being used inside the U.S. by ICE to identify people in the field. 

AI and Data Voids: How Propaganda Exploits Gaps in Online Information

  • The pattern became apparent as early as July 2024, when NewsGuard’s first audit found that 31.75 percent of the time, the 10 leading AI models collectively repeated disinformation narratives linked to the Russian influence operation Storm-1516. These chatbots failed to recognize that sites such as the “Boston Times” and “Flagstaff Post” are Russian propaganda fronts powered in part by AI and not reliable local news outlets, unwittingly amplifying disinformation narratives that their own technology likely assisted in creating. This unvirtuous cycle, where falsehoods are generated and later repeated by AI systems in polished, authoritative language, has demonstrated a new threat in information warfare. 

I just came across something that might be useful for belief maps. Meta has been fooling around with “large concept models” (LCMs). You could compare the output of the model in question to the embeddings in the concept space and see if measures of the output in the embedding space change meaningfully.
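
A rough sketch of what that comparison might look like, using sentence-transformers as a stand-in for the concept embedder (Meta’s LCMs use SONAR embeddings; the model choice and the response strings below are just convenient placeholders):

```python
# Sketch: has a model's output drifted in concept space? sentence-transformers
# stands in for an LCM-style concept embedder (Meta's LCMs use SONAR); the
# response strings are invented placeholders for real collected outputs.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

baseline_outputs = ["The claim is not supported by available evidence.",
                    "Multiple audits found no irregularities."]
current_outputs = ["Many people are saying the evidence was suppressed.",
                   "The audits themselves cannot be trusted."]

def centroid(texts):
    return embedder.encode(texts, normalize_embeddings=True).mean(axis=0)

b, c = centroid(baseline_outputs), centroid(current_outputs)
drift = 1.0 - float(b @ c / (np.linalg.norm(b) * np.linalg.norm(c)))
print(f"cosine drift between baseline and current output centroids: {drift:.3f}")
```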

Tasks

  • Ping Alston again
  • Find out who the floor refinisher is again
  • Mow?

GPT Agents

  • Got the trustworthy info edits back from Sande and rolled them in.
  • I should also send the new proposal to Carlos today and see if there is any paperwork that I can point to for proposal writing
  • Follow up on meeting times

SBIRs

  • 9:00 standup
  • Waiting on T for casual time
  • I think we’re still waiting on Dr. J for approval of the modified SOW?

Phil 6.25.2025

Worked on the title a bit.

Here’s a good long read on AI and drones from The Guardian: https://www.theguardian.com/world/2025/jun/25/ukraine-russia-autonomous-drones-ai

  • “We didn’t know the Terminator was Ukrainian,” Fedoryshyn jokes. “But maybe a Terminator is not the worst thing that can happen? How can you be safe in some city if somebody tried to use a drone to kill you? It’s impossible. OK, you can use some jamming of radio connection, but they can use some AI systems that know visually how you look, and try to find you and kill you. I don’t think that the Terminator and the movie is the worst outcome. If this war never started, we will never have this type of weapon that is too easy to buy and is very easy to use.”

Tasks

  • Ping Alston again
  • Get the paperwork on painting starts
  • Find out who the floor refinisher is again

GPT Agents

  • Got a first pass on the edits to vignette 1. Need to do a readthrough to see if everything – particularly the early parts of the story – is working right in present tense.
  • Sent the trustworthy info off to Sande. I should also send to Carlos today and see if there is any paperwork that I can point to for proposal writing
  • Ping Shimei and Jimmy for new meeting time?

SBIRs

  • Waiting on T for casual time
  • Need to add PB’s email to the PII project doc
  • I think we’re still waiting on Dr. J for approval of the modified SOW?

Phil 6.24.2025

Nice technical analysis on election fraud. I’d like to do an embedding analysis of left and right-wing conspiracy claims about recent elections and compare it to this thread.
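
A sketch of how that might start: embed claims from each side and check whether within-group similarity beats cross-group similarity (the claims below are invented placeholders for scraped data):

```python
# Sketch: do left- and right-coded election-fraud claims occupy distinct
# regions of embedding space? The claims are invented placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

left_claims = ["Voting machines were hacked to flip the count.",
               "Ballots were destroyed before they could be counted."]
right_claims = ["Dead people were registered and voted by mail.",
                "Poll watchers were blocked from observing the count."]

def mean_similarity(A, B=None):
    # mean cosine similarity within one group (B=None) or across two groups
    S = A @ (A if B is None else B).T
    if B is None:
        return float(S[np.triu_indices(len(A), k=1)].mean())
    return float(S.mean())

L = embedder.encode(left_claims, normalize_embeddings=True)
R = embedder.encode(right_claims, normalize_embeddings=True)
print(f"within-group similarity: {(mean_similarity(L) + mean_similarity(R)) / 2:.3f}")
print(f"cross-group similarity:  {mean_similarity(L, R):.3f}")
```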

GPT Agents

SBIRs

  • Write up notes from yesterday – done
  • 10:30 Meeting with Orest – done
  • Negotiating with T

Phil 6.23.2025

Hot:

House

  • Property taxes – done?
  • Ping painter people for estimate – done
  • Ping Alston for schedule – done
  • Ping floor people for estimate

SBIRs

  • Letter for Orest – done
  • 9:00 Demos – done
  • 3:00 Sprint planning – done