Phil 10.20.2025

Fear of supernatural punishment can harmonize human societies with nature: an evolutionary game-theoretic approach | Humanities and Social Sciences Communications

  • Human activities largely impact the natural environment negatively, and radical changes in human societies would be required to achieve a sustainable relationship with nature. Although frequently overlooked, previous studies have suggested that supernatural beliefs can protect nature from human overexploitation via the belief that supernatural entities punish people who harm nature. Studies of folklore and ethnology have shown that such supernatural beliefs are widespread. However, it remains unclear under which conditions such beliefs prevent people from harming nature, because overexploiting natural resources without supernatural beliefs produces the greatest benefits. The current study aimed to build a mathematical model based on evolutionary game theory and derive the conditions under which supernatural beliefs can spread in a society, thereby preserving natural resources. For supernatural beliefs to be maintained, the fear of supernatural punishment invoked by a scarce natural environment would have to be, on one hand, strong enough to prevent overexploitation but, on the other, weak enough for the belief to spread through society via missionary events. The results supported the idea that supernatural beliefs can facilitate sustainable relationships between human societies and nature. In particular, the study highlighted supernatural beliefs as an essential driver for achieving sustainability by altering how people interact with nature.

[2510.13928] LLMs Can Get “Brain Rot”!

  • We propose and test the LLM Brain Rot Hypothesis: continual exposure to junk web text induces lasting cognitive decline in large language models (LLMs). To causally isolate data quality, we run controlled experiments on real Twitter/X corpora, constructing junk and reverse-control datasets via two orthogonal operationalizations: M1 (engagement degree) and M2 (semantic quality), with matched token scale and training operations across conditions. Compared with the control group, continual pre-training of four LLMs on the junk dataset causes non-trivial declines (Hedges’ g > 0.3) in reasoning, long-context understanding, and safety, and inflates “dark traits” (e.g., psychopathy, narcissism). Graduated mixtures of the junk and control datasets also yield dose-response cognitive decay: for example, under M1, ARC-Challenge with Chain of Thought drops from 74.9 to 57.2 and RULER-CWE from 84.4 to 52.3 as the junk ratio rises from 0% to 100%.
  • Error forensics reveal several key insights. First, we identify thought-skipping as the primary lesion: models increasingly truncate or skip reasoning chains, which explains most of the error growth. Second, partial but incomplete healing is observed: scaling instruction tuning and clean-data pre-training improve the degraded cognition yet cannot restore baseline capability, suggesting persistent representational drift rather than format mismatch. Finally, we find that a tweet’s popularity, a non-semantic metric, is a better indicator of the Brain Rot effect than its length in M1. Together, the results provide significant, multi-perspective evidence that data quality is a causal driver of LLM capability decay, reframing curation for continual pre-training as a training-time safety problem and motivating routine “cognitive health checks” for deployed LLMs.

SBIRs

  • Check whether the AWS environment works with OpenAI. If so, work out the calls that let me write a bunch of stories with 1) Control, 2) Sun Tzu, and 3) Clausewitz prompts. Get the embeddings, cluster them, and build a dictionary keyed by story that holds the source, cluster id, and maybe pointers to the other cluster members. Also need a way to determine whether a new point belongs to an existing cluster (see the clustering sketch after this list).
    • Got the chat completions interface working (a minimal sketch of this call and the embeddings call is after this list)
    • Got the embeddings interface working
    • Need to get a document/vector store set up for RAG – looks like these are the directions. Working! And incorporated into the OpenAIComms class. (A rough vector-store/RAG sketch also follows this list.)
  • Prep for tomorrow’s meeting? Nope?
  • 11:30 IRAD meeting – done
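
A minimal sketch of the two interfaces noted above, using the current OpenAI Python SDK. The model names (gpt-4o-mini, text-embedding-3-small), the system prompt, and the helper names are my assumptions; the real calls are wrapped inside the OpenAIComms class.

```python
# Sketch of the chat-completion and embeddings calls (OpenAI Python SDK).
# Model names and helper names are assumptions, not the OpenAIComms interface.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def write_story(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Generate one story from a framing prompt (Control / Sun Tzu / Clausewitz)."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You write short strategic-fiction stories."},
            {"role": "user", "content": prompt},
        ],
    )
    return resp.choices[0].message.content

def embed(texts: list[str], model: str = "text-embedding-3-small") -> list[list[float]]:
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model=model, input=texts)
    return [d.embedding for d in resp.data]
```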
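For the cluster → dictionary → membership step in the first bullet, one possible shape using scikit-learn's KMeans. The dictionary layout (source, cluster_id, members) follows the bullet; the percentile-based distance threshold for deciding whether a new point "belongs" to an existing cluster is an assumption.

```python
# Sketch: cluster story embeddings, build a lookup dict, test membership of new points.
# The dict layout and the percentile threshold are assumptions, not a spec.
import numpy as np
from sklearn.cluster import KMeans

def cluster_stories(sources: list[str], embeddings: np.ndarray, k: int = 3):
    """Cluster embeddings and record, per story: source, cluster id, other members."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
    records = {}
    for i, src in enumerate(sources):
        cid = int(km.labels_[i])
        members = [s for j, s in enumerate(sources) if km.labels_[j] == cid and j != i]
        records[src] = {"source": src, "cluster_id": cid, "members": members}
    return km, records

def belongs_to(km: KMeans, embeddings: np.ndarray, new_point: np.ndarray, pct: float = 95.0):
    """Nearest-centroid assignment plus a distance test: accept only if new_point is no
    farther from the centroid than the pct-th percentile of that cluster's own members."""
    cid = int(km.predict(new_point.reshape(1, -1))[0])
    centroid = km.cluster_centers_[cid]
    member_d = np.linalg.norm(embeddings[km.labels_ == cid] - centroid, axis=1)
    dist = float(np.linalg.norm(new_point - centroid))
    return cid, dist <= float(np.percentile(member_d, pct))
```

Usage would look like `km, recs = cluster_stories(names, np.array(vectors), k=3)` followed by `belongs_to(km, np.array(vectors), new_vec)` for each candidate point.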
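The RAG bullet links to directions that aren't reproduced here, so this is only a rough assumed shape: a tiny in-memory vector store with cosine-similarity retrieval, with the retrieved chunks pasted into a chat prompt. The actual store incorporated into OpenAIComms may be built quite differently.

```python
# Rough sketch of a document/vector store for RAG: in-memory, cosine similarity.
# An assumed shape only -- not the implementation in the OpenAIComms class.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed_one(text: str, model: str = "text-embedding-3-small") -> np.ndarray:
    return np.asarray(client.embeddings.create(model=model, input=[text]).data[0].embedding)

class TinyVectorStore:
    def __init__(self):
        self.docs: list[str] = []
        self.vecs: list[np.ndarray] = []

    def add(self, text: str) -> None:
        self.docs.append(text)
        self.vecs.append(embed_one(text))

    def query(self, question: str, top_k: int = 3) -> list[str]:
        q = embed_one(question)
        m = np.vstack(self.vecs)
        sims = (m @ q) / (np.linalg.norm(m, axis=1) * np.linalg.norm(q) + 1e-12)
        return [self.docs[i] for i in np.argsort(sims)[::-1][:top_k]]

def rag_answer(store: TinyVectorStore, question: str, model: str = "gpt-4o-mini") -> str:
    """Retrieve the most similar chunks, then ask the chat model to answer from them."""
    context = "\n---\n".join(store.query(question))
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```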