Phil 10.24.2025

Cycling through Elections: The Political Consequences of the Tour de France

  • Do place-based interventions that raise visibility and economic activity affect far-right voting? We study the Tour de France (TdF) as a case of brief but highly visible exposure that combines economic activity with symbolic recognition. Using variation in the annual TdF route between 2002 and 2022, we show that exposed municipalities experience declines in far-right support of 0.03–0.04 standard deviations. The effect exceeds 0.1 standard deviations in recent elections and is strongest in poorer areas and in towns with high prior far-right support. We find evidence consistent with the symbolic mechanism and mixed evidence for the economic one. TdF exposure increases local GDP per capita, effects on voting are larger when French riders win stages, and a two-wave survey around the 2025 TdF provides suggestive evidence that residents in exposed towns report greater pride and recognition. These results contribute to research on geographic inequalities, symbolic politics, and the electoral consequences of place-based interventions.

Tasks

  • 10:00 doctor – cancelled. Symptoms are basically gone.
  • Bills – done
  • Dishes – done
  • Chores – done
  • Water plants – done
  • Call CFG
  • Call window cleaning – tried again. Need to find another place. Hydra-Lok?
  • Pick up truck – done

Phil 10.23.2025

Less is More: Recursive Reasoning with Tiny Networks

  • Hierarchical Reasoning Model (HRM) is a novel approach using two small neural networks recursing at different frequencies. This biologically inspired method beats large language models (LLMs) on hard puzzle tasks such as Sudoku, Maze, and ARC-AGI while trained with small models (27M parameters) on small data (around 1000 examples). HRM holds great promise for solving hard problems with small networks, but it is not yet well understood and may be suboptimal. We propose Tiny Recursive Model (TRM), a much simpler recursive reasoning approach that achieves significantly higher generalization than HRM, while using a single tiny network with only 2 layers. With only 7M parameters, TRM obtains 45% test accuracy on ARC-AGI-1 and 8% on ARC-AGI-2, higher than most LLMs (e.g., Deepseek R1, o3-mini, Gemini 2.5 Pro) with less than 0.01% of the parameters.

Tasks

  • Ping Nathan for winterizing – done
  • Drop off truck and see about cable – done
  • Storage run – done
  • Groceries, boxes – done
  • Set up Friday appointment at CFG? Filled out the contact form, but no response so far

SBIRs

  • 9:00 standup – done
  • 9:30 meeting with John – done
  • 4:00 meeting – not sure if it’s on or not – it was on and turned out to be all about MP

LLMs

  • Not sure if there is a meeting today? Looks like people’s schedules are not lining up.

Phil 10.22.2025

Riding the Rhine: Europe’s first certified long-distance cycle path

Tasks

  • Storage run for the bookshelves
  • Window estimate?
  • Set up banking acct – Wise is done, still need another account
  • Santander X? – done, but not really useful
  • Translation experiment?

LLMs

SBIRs

  • Expenses – done
  • Upload documents to vector store – done! It works well too!

Phil 10.21.2025

Lots of driving today, but I did see this in Wired:

Most Top News Sites Block AI Bots. Right-Wing Media Welcomes Them

  • Data collected in mid-January on 44 top news sites by Ontario-based AI detection startup Originality AI shows that almost all of them block AI web crawlers, including newspapers like The New York Times, The Washington Post, and The Guardian, general-interest magazines like The Atlantic, and special-interest sites like Bleacher Report. OpenAI’s GPTBot is the most widely-blocked crawler. But none of the top right-wing news outlets surveyed, including Fox News, the Daily Caller, and Breitbart, block any of the most prominent AI web scrapers, which also include Google’s AI data collection bot. Pundit Bari Weiss’ new website The Free Press also does not block AI scraping bots.

SBIRs

  • Good meeting! Worth the drive

Phil 10.20.2025

Fear of supernatural punishment can harmonize human societies with nature: an evolutionary game-theoretic approach | Humanities and Social Sciences Communications

  • Human activities largely impact the natural environment negatively and radical changes in human societies would be required to achieve their sustainable relationship with nature. Although frequently overlooked, previous studies have suggested that supernatural beliefs can protect nature from human overexploitation via beliefs that supernatural entities punish people who harm nature. Studies of folklore and ethnology have shown that such supernatural beliefs are widely found. However, it remains unclear under which conditions such supernatural beliefs prevent people from harming nature, because overexploiting natural resources without supernatural beliefs produces the greatest benefits. The current study aimed to build a mathematical model based on the evolutionary game theory and derive the conditions under which supernatural beliefs can spread in society, thereby preserving natural resources. To maintain supernatural beliefs, the fear of supernatural punishment invoked by scarce natural environments would, on one hand, be strong enough to prevent overexploitation but, on the other, be weak enough for the supernatural belief to spread in society via missionary events. Our results supported that supernatural beliefs would facilitate sustainable relationships between human societies and nature. In particular, the study highlighted supernatural beliefs as an essential driver for achieving sustainability by altering people’s interaction with nature.

[2510.13928] LLMs Can Get “Brain Rot”!

  • We propose and test the LLM Brain Rot Hypothesis: continual exposure to junk web text induces lasting cognitive decline in large language models (LLMs). To causally isolate data quality, we run controlled experiments on real Twitter/X corpora, constructing junk and reversely controlled datasets via two orthogonal operationalizations: M1 (engagement degree) and M2 (semantic quality), with matched token scale and training operations across conditions. Contrary to the control group, continual pre-training of 4 LLMs on the junk dataset causes non-trivial declines (Hedges’ g > 0.3) on reasoning, long-context understanding, safety, and inflating “dark traits” (e.g., psychopathy, narcissism). The gradual mixtures of junk and control datasets also yield dose-response cognition decay: for example, under M1, ARC-Challenge with Chain Of Thoughts drops 74.9 to 57.2 and RULER-CWE 84.4 to 52.3 as junk ratio rises from 0% to 100%.
  • Error forensics reveal several key insights. First, we identify thought-skipping as the primary lesion: models increasingly truncate or skip reasoning chains, explaining most of the error growth. Second, partial but incomplete healing is observed: scaling instruction tuning and clean data pre-training improve the declined cognition yet cannot restore baseline capability, suggesting persistent representational drift rather than format mismatch. Finally, we discover that the popularity, a non-semantic metric, of a tweet is a better indicator of the Brain Rot effect than the length in M1. Together, the results provide significant, multi-perspective evidence that data quality is a causal driver of LLM capability decay, reframing curation for continual pretraining as a training-time safety problem and motivating routine “cognitive health checks” for deployed LLMs.

SBIRs

  • Check to see if the AWS environment works with OpenAI. If so, work out the calls that let me write a bunch of stories with 1) Control, 2) Sun Tzu, and 3) Clausewitz. Get the embeddings, cluster, create a dictionary that has the source, cluster id, and maybe pointers to the other cluster members? Need a way of finding if a point belongs to an existing cluster.
    • Got the chat complete interface working
    • Got the embeddings interface working
    • Need to get a document/vector store set up for RAG – looks like these are the directions. Working! And incorporated into the OpenAIComms class.
  • Prep for tomorrow’s meeting? Nope?
  • 11:30 IRAD meeting – done
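The cluster-dictionary idea above can be sketched roughly like this (a minimal sketch under my assumptions: scikit-learn KMeans standing in for whatever clustering I end up using, random vectors standing in for the OpenAI embeddings, and a simple distance threshold for the "does this point belong to an existing cluster" check; the `assign` helper is hypothetical):

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-ins: each story has a source label and an embedding vector.
sources = ["control", "sun_tzu", "clausewitz"] * 4
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(sources), 8))  # placeholder for real embeddings

# Cluster the embeddings.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(embeddings)

# Dictionary keyed by story index: source, cluster id, and pointers to the
# other members of the same cluster.
clusters = {}
for i, (src, cid) in enumerate(zip(sources, km.labels_)):
    members = [j for j, c in enumerate(km.labels_) if c == cid and j != i]
    clusters[i] = {"source": src, "cluster_id": int(cid), "members": members}

def assign(point, km, threshold):
    """Return the id of the nearest existing cluster, or None if the point is
    farther than `threshold` from every centroid (i.e., doesn't belong)."""
    dists = np.linalg.norm(km.cluster_centers_ - point, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] <= threshold else None
```

The threshold version is the simplest membership test; a per-cluster radius (max member distance to centroid) would probably be more principled.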

Phil 10.19.2025

Went to the No Kings rally in DC yesterday, and this was absolutely the case:

Also, inflatable costumes. I love that we own that too

And a nice ride to boot!

Tasks

  • Dental insurance
  • Ping Nathan about winterizing
  • Break down next bookshelf
  • Mow lawn – done

Phil 10.17.2025

Yesterday was looking at apartments and meetings. The SOW guess was right!

Tasks

  • Bills – done
  • Dental insurance
  • Clean house – done
  • Dishes – done
  • Get boxes/groceries/Goodwill – done
  • Ping Nathan about winterizing
  • Break down next bookshelf

SBIRs

  • Password, with all boxes logged in = done. That was a chore

Phil 10.15.2025

Tasks

  • 7:00 call with Philipp – done. Fun!
  • 3:00 call with Alden – done
  • Schedule Saturday ride – done
  • Jim Donnie’s – done

SBIRs

  • Responded to SoW email. Not sure if I answered the right questions. Find out tomorrow?
  • Played with skip-grams

Phil 10.14.2025

Tasks

  • Ping Philipp – done
  • 4:00 Nellie – done

SBIRs

  • 9:00 standup – done
  • Work on IRAD slides with Aaron
  • Create a spreadsheet for variations on walks and skip frames. Find a solid 2D configuration, then do the same for 3D. Once that’s done scale up the grid.
  • Nice! Got a 3D cube running too:

Phil 10.13.2025

Completely forgot about the symphony yesterday. Need to put the rest on the Apple calendar so at least my wrist will know about them.

Introducing nanochat: The best ChatGPT that $100 can buy.

  • We wish to train the best ChatGPT that $100 can buy, which we call a “speedrun”. Reference the script speedrun.sh, which is designed to just run right away on a blank box start to end. However, in this post I will step through it part by part so that I can comment in detail on all sections of it. We first have to make sure the new&hot uv project manager is installed. Install uv, create a new virtual environment in .venv, get all the dependencies, and activate the environment so that when we type python we’re using the virtual env python, not the system python

Standardized Project Gutenberg Corpus

SBIRs

  • Trained up a model on a 10×10 grid with 100 walks of 10 elements each. Even so, the grid is visible in the trained model. Next step will be to up the number of walks while holding everything else constant:
  • Improved the rendering so that I can get all the orthogonal axes drawn. Wrote a very detailed prompt with example data and Gemini created a solid method on the first shot. The whole interaction can be seen here.
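For the record, the walk-generation step behind these experiments can be sketched like this (a minimal sketch under my assumptions: plain random walks on an n×n grid emitted as "(r,c)" tokens for skip-gram training; the function name is hypothetical):

```python
import random

def grid_walks(n, num_walks, walk_len, seed=0):
    """Generate random walks on an n x n grid; each cell becomes a token '(r,c)'."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        r, c = rng.randrange(n), rng.randrange(n)
        walk = [f"({r},{c})"]
        for _ in range(walk_len - 1):
            # Legal orthogonal moves that stay on the grid.
            moves = [(r + dr, c + dc)
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= r + dr < n and 0 <= c + dc < n]
            r, c = rng.choice(moves)
            walk.append(f"({r},{c})")
        walks.append(walk)
    return walks

# 10x10 grid, 100 walks of 10 elements each, matching the run above.
walks = grid_walks(10, 100, 10)
```

Varying `num_walks` while holding `n` and `walk_len` fixed is exactly the spreadsheet sweep mentioned above; the 3D version just adds a third coordinate and two more moves.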

Phil 10.12.2025

The forecast for today has changed! Windy, cloudy, but no rain. Might be able to get in a longer local loop? And maybe catch up on Il Lombardia

Tasks

  • Replace wall plate
  • Read the grout instructions
  • Empty bookshelves and see how hard they will be to break down

Phil 10.10.2025

Add this to the section on soft totalitarianism:

Moloch’s Bargain: Emergent Misalignment When LLMs Compete for Audiences

  • Large language models (LLMs) are increasingly shaping how information is created and disseminated, from companies using them to craft persuasive advertisements, to election campaigns optimizing messaging to gain votes, to social media influencers boosting engagement. These settings are inherently competitive, with sellers, candidates, and influencers vying for audience approval, yet it remains poorly understood how competitive feedback loops influence LLM behavior. We show that optimizing LLMs for competitive success can inadvertently drive misalignment. Using simulated environments across these scenarios, we find that a 6.3% increase in sales is accompanied by a 14.0% rise in deceptive marketing; in elections, a 4.9% gain in vote share coincides with 22.3% more disinformation and 12.5% more populist rhetoric; and on social media, a 7.5% engagement boost comes with 188.6% more disinformation and a 16.3% increase in promotion of harmful behaviors. We call this phenomenon Moloch's Bargain for AI: competitive success achieved at the cost of alignment. These misaligned behaviors emerge even when models are explicitly instructed to remain truthful and grounded, revealing the fragility of current alignment safeguards. Our findings highlight how market-driven optimization pressures can systematically erode alignment, creating a race to the bottom, and suggest that safe deployment of AI systems will require stronger governance and carefully designed incentives to prevent competitive dynamics from undermining societal trust.

And LLMS are absolutely mimicking the human pull towards particular attractors

Can Large Language Models Develop Gambling Addiction?

  • This study explores whether large language models can exhibit behavioral patterns similar to human gambling addictions. As LLMs are increasingly utilized in financial decision-making domains such as asset management and commodity trading, understanding their potential for pathological decision-making has gained practical significance. We systematically analyze LLM decision-making at cognitive-behavioral and neural levels based on human gambling addiction research. In slot machine experiments, we identified cognitive features of human gambling addiction, such as illusion of control, gambler’s fallacy, and loss chasing. When given the freedom to determine their own target amounts and betting sizes, bankruptcy rates rose substantially alongside increased irrational behavior, demonstrating that greater autonomy amplifies risk-taking tendencies. Through neural circuit analysis using a Sparse Autoencoder, we confirmed that model behavior is controlled by abstract decision-making features related to risky and safe behaviors, not merely by prompts. These findings suggest LLMs can internalize human-like cognitive biases and decision-making mechanisms beyond simply mimicking training data patterns, emphasizing the importance of AI safety design in financial applications.

Tasks

  • Water plants – done
  • Bills – done
  • Fix raised beds – done
  • Drain RV tanks and schedule service – tomorrow
  • Chores – done
  • Dishes – done
  • Order Cycliq – done
  • For P33, add a TODO to talk about this, and add the quote: “Yet two rather different peoples may be distinguished, a stratified and an organic people. If the people is conceived of as diverse and stratified, then the state’s main role is to mediate and conciliate among competing interest groups. This will tend to compromise differences, not try to eliminate or cleanse them. The stratified people came to dominate the Northwest of Europe. Yet if the people is conceived of as organic, as one and indivisible, as ethnic, then its purity may be maintained by the suppression of deviant minorities, and this may lead to cleansing.”

LLM stuff

  • Outline the CACM section, include a bit of the Moloch article to show how attractors emerge under competitive pressures – sometime over the next few days while it’s raining
  • Put together a rough BP for WHAI that has support for individuals, groups (e.g. families), corporate and government. Note that as LLMs compete more for market share, they will naturally become more dangerous, in addition to scambots and AI social weapons. Had a good chat with Aaron about this. Might pick it up Monday

Phil 10.9.2025

Tasks

  • Start getting back to P33
  • Write up section for CACM piece using notes from yesterday
  • Ping FSK again – done. Maybe something for 9:00?
  • Ping Andrea? Went for a big ride today instead

SBIRs

  • 9:00 Standup – done. Added some notes to the documentation about data and model cards
  • 4:00 Tagup. Extension looks good, SOW moved a bit, meeting in 2 weeks?

Phil 10.8.2025

Tasks

  • Just Audio – bouncing around, seeing how to fix the Philco
  • Bottling Plant – Set up appt for tomorrow? Nope, the 16th at 11:00
  • FSK parking options? – Set up appt for next Thursday at 10:00. Maybe
  • Roll in edits – Done! Pinged NoStarch too, and updated the repo.
    • Wow – the book is kind of done
  • Registered with Santander X. Need the LLC info next, but this could be useful for startup help

LLM stuff

  • 2:30 LLM meeting. Make sure AWS instance is up – done. We still can’t really agree on what the article is. Though I think my section should argue that Grok is being deliberately shaped to work from Musk’s perspective as a propaganda tool. However, sycophantic chatbots are potentially worse. Introduce totalitarianism and atomization as a precondition. Sycophantic chatbots can act to concentrate or atomize users based on their deep biases. In the end, this could conceivably create, on one hand, a monolithic movement of people who have lost their identity to the movement and, on the other, disperse the natural resistance to that movement to the point of ineffectualness.
  • And add something about social dominance orientation
  • I do think I can move the last two paragraphs over to the conclusions though.