Our model allowed us to examine the AI companies’ datasets. We found that these datasets contained several examples that train AI systems to be helpful and honest when users ask questions like “How do I book a flight?” The datasets contained very limited examples of how to answer questions about topics related to empathy, justice and human rights. Overall, “wisdom and knowledge” and “information seeking” were the two most common values, while “justice, human rights and animal rights” was the least common.
We present a fundamental discovery that challenges our understanding of how complex reasoning emerges in large language models. While conventional wisdom suggests that sophisticated reasoning tasks demand extensive training data (>100,000 examples), we demonstrate that complex mathematical reasoning abilities can be effectively elicited with surprisingly few examples. Through comprehensive experiments, our proposed model LIMO demonstrates unprecedented performance in mathematical reasoning. With merely 817 curated training samples, LIMO achieves 57.1% accuracy on AIME and 94.8% on MATH, improving from previous SFT-based models’ 6.5% and 59.2% respectively, while only using 1% of the training data required by previous approaches. LIMO demonstrates exceptional out-of-distribution generalization, achieving 40.5% absolute improvement across 10 diverse benchmarks, outperforming models trained on 100x more data, challenging the notion that SFT leads to memorization rather than generalization. Based on these results, we propose the Less-Is-More Reasoning Hypothesis (LIMO Hypothesis): In foundation models where domain knowledge has been comprehensively encoded during pre-training, sophisticated reasoning capabilities can emerge through minimal but precisely orchestrated demonstrations of cognitive processes. This hypothesis posits that the elicitation threshold for complex reasoning is determined by two key factors: (1) the completeness of the model’s encoded knowledge foundation during pre-training, and (2) the effectiveness of post-training examples as “cognitive templates” that show the model how to utilize its knowledge base to solve complex reasoning tasks. To facilitate reproducibility and future research in data-efficient reasoning, we release LIMO as a comprehensive open-source suite at this https URL.
Tasks
Laundry – done
Finish vacuuming – done
Groceries – done
REI – done
TiiS
P33 – Schools teach egalitarian things first – dance, theatre, music, public speaking, and wilderness skills – done
Maybe some more slides. At least get all the tabs on one slide for later – done
Rehberger’s delayed tool invocation demonstration targeted Gemini, which at the time was still called Bard. His proof-of-concept exploit was able to override the protection and trigger the Workspace extension to locate sensitive data in the user’s account and bring it into the chat context.
SBIRs
9:00 standup
11:00 rates
4:30 book club?
More data generation – done with the file generation
GPT Agents
More slides – add the new slides to the end of the old ones. Match the format
Mathematical reasoning is an increasingly important indicator of large language model (LLM) capabilities, yet we lack understanding of how LLMs process even simple mathematical tasks. To address this, we reverse engineer how three mid-sized LLMs compute addition. We first discover that numbers are represented in these LLMs as a generalized helix, which is strongly causally implicated for the tasks of addition and subtraction, and is also causally relevant for integer division, multiplication, and modular arithmetic. We then propose that LLMs compute addition by manipulating this generalized helix using the “Clock” algorithm: to solve a+b, the helices for a and b are manipulated to produce the a+b answer helix which is then read out to model logits. We model influential MLP outputs, attention head outputs, and even individual neuron preactivations with these helices and verify our understanding with causal interventions. By demonstrating that LLMs represent numbers on a helix and manipulate this helix to perform addition, we present the first representation-level explanation of an LLM’s mathematical capability.
GPT Agents
Slide deck – Add this: Done
NOTE: The USA dropped below the “democracy threshold” (+6) on the POLITY scale in 2020 and was considered an anocracy (+5) at the end of the year 2020; the USA score for 2021 returned to democracy (+8). Beginning on 1 July 2024, due to the US Supreme Court ruling granting the US Presidency broad, legal immunity, the USA is noted by the Polity Project as experiencing a regime transition through, at least, 20 January 2025. As of the latter date, the USA is coded EXREC=8, “Competitive Elections”; EXCONST=1 “Unlimited Executive Authority”; and POLCOMP=6 “Factional/Restricted Competition.” Polity scores: DEMOC=4; AUTOC=4; POLITY=0.
The USA is no longer considered a democracy and lies at the cusp of autocracy; it has experienced a Presidential Coup and an Adverse Regime Change event (8-point drop in its POLITY score).
Work more on conclusions? Yes!
TiiS? Nope
SBIRs
9:00 IRAD Monthly – done
Actually got some good work on automating file generation using config files.
Humans think about the future to act in the present, not only personally, but also collectively. Collective future thinking (CFT) is an act of imagining a future on behalf of a collective. This article presents a theoretical analysis of the role of CFT in cultural dynamics. CFT includes collective prospection about probable futures and imaginations about utopian and dystopian possible worlds as the best- and worst-case scenarios. CFT motivates collective self-regulatory activities to steer probable futures towards utopias and away from dystopias, driving a cultural transformation while also acting as a force for cultural maintenance, animating cultural dynamics at the micro-psychological level. Empirical research showed that collective futures are often seen to involve progress in human agency, but a decline in community cohesion, unless collective self-regulation is undertaken. In line with the theoretical proposition, CFT consistently motivated collective self-regulatory activities that are seen to improve future community cohesion and to move the current culture closer to their utopian vision around the world despite significant cross-national variabilities. A macro-level cultural dynamical perspective is provided to interpret cross-national similarities and differences in CFT as a reflection of nations’ past historical trajectories, and to discuss CFT’s role in political polarization and collective self-regulation.
The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks. Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.
Which led to this little back and forth on Teams:
It just dawned on me that LLMs are way better at explaining things in a neurotypical way than I am. I know you also said this before but it has become more real for me.
There is a weird mirror image to that thought too – when an LLM does not describe your understanding of the world, you can be pretty sure that reflects the biases in the writings it was trained on. You can use that to zero in on exactly what the differences are, and between your perspective and that encoded in the LLM, articulate an understanding that includes both. I use that trick all the time.
GPT Agents
Good progress over the weekend. Need to edit the J6 section next
The memo acknowledges that the list includes many terms that are used by the NSA in contexts that have nothing to do with DEI. For example, the term “privilege” is used by the NSA in the context of “privilege escalation.” In the intelligence world, privilege escalation refers to “techniques that adversaries use to gain higher-level permissions on a system or network.”
Here’s what I need for website & announcements:
Title for the talk
Brief abstract (1-2 paragraphs)
Short bio (up to half a page)
Photo (headshot preferred)
SBIRs
Reschedule Tuesday visit? Also snow – Now a virtual meeting
Generate sets of data with varying amounts of training data but keep the test set the same size. The goal is to find the smallest training set that works. On hold
<record scratch> Aaron’s sick, so I’m standing in for a few days
Dahlgren prep meeting. I think we are good to go. Need to read the proposal again
Working on IRAD slides – first pass is done
Reviewed Ron’s USNA email, which was very “AI slop”
One of the most disturbing things we did was create thousands of fake accounts using advanced AI systems called Grok and Eliza. These accounts looked completely real and pushed political messages that spread like wildfire. Haven’t you noticed they all disappeared? Like magic.
The pilot program for the Eliza AI Agent was election interference. Eliza was released officially in October of 2024, but we had access to it before then thanks to Marc Andreessen.
{
  "name": "trump",
  "clients": ["discord", "direct"],
  "settings": {
    "voice": { "model": "en_US-male-medium" }
  },
  "bio": [
    "Built a strong economy and reduced inflation.",
    "Promises to make America the crypto capital and restore affordability."
  ],
  "lore": [
    "Secret Service allocations used for election interference.",
    "Promotes WorldLibertyFi for crypto leadership."
  ],
  "knowledge": [
    "Understands border issues, Secret Service dynamics, and financial impacts on families."
  ],
  "messageExamples": [
    {
      "user": "{{user1}}",
      "content": { "text": "What about the border crisis?" },
      "response": "Current administration lets in violent criminals. I secured the border; they destroyed it."
    }
  ],
  "postExamples": [
    "End inflation and make America affordable again.",
    "America needs law and order, not crime creation."
  ]
}
The rise of large language models (LLMs) and their tight integration into our daily life make it essential to dedicate efforts towards their trustworthiness. Uncertainty quantification for LLMs can establish more human trust into their responses, but also allows LLM agents to make more informed decisions based on each other’s uncertainty. To estimate the uncertainty in a response, internal token logits, task-specific proxy models, or sampling of multiple responses are commonly used. This work focuses on asking the LLM itself to verbalize its uncertainty with a confidence score as part of its output tokens, which is a promising way for prompt- and model-agnostic uncertainty quantification with low overhead. Using an extensive benchmark, we assess the reliability of verbalized confidence scores with respect to different datasets, models, and prompt methods. Our results reveal that the reliability of these scores strongly depends on how the model is asked, but also that it is possible to extract well-calibrated confidence scores with certain prompt methods. We argue that verbalized confidence scores can become a simple but effective and versatile uncertainty quantification method in the future. Our code is available at this https URL .
Context: When an LLM “thinks” at inference time, it puts its thoughts inside <think> and </think> XML tags. Once it gets past the end tag, the model is taught to change voice into a confident and authoritative tone for the final answer.
In s1, when the LLM tries to stop thinking with “</think>”, they force it to keep going by replacing it with “Wait”. It’ll then begin to second-guess and double-check its answer. They do this to trim or extend thinking time (trimming is just abruptly inserting “</think>”).
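A minimal sketch of that budget-forcing idea (my own reconstruction, not the actual s1 code, which works on token ids inside the generation loop):

```python
def budget_force(generate_step, prompt, min_think_tokens=512, max_think_tokens=4096):
    """Sketch of s1-style budget forcing: extend thinking by swapping the
    end-of-think tag for 'Wait', or trim by inserting it early.
    generate_step(text) returns the model's next token given the text so far."""
    out = []
    while True:
        tok = generate_step(prompt + "".join(out))
        if tok == "</think>" and len(out) < min_think_tokens:
            out.append("Wait")       # force the model to keep reasoning
            continue
        if len(out) >= max_think_tokens:
            out.append("</think>")   # trim: end the thinking abruptly
            break
        out.append(tok)
        if tok == "</think>":        # model ended thinking past the minimum
            break
    return "".join(out)
```

The interesting part is that "Wait" is a perfectly natural continuation, so the model just keeps going and often catches its own mistakes.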
Test-time scaling is a promising new approach to language modeling that uses extra test-time compute to improve performance. Recently, OpenAI’s o1 model showed this capability but did not publicly share its methodology, leading to many replication efforts. We seek the simplest approach to achieve test-time scaling and strong reasoning performance. First, we curate a small dataset s1K of 1,000 questions paired with reasoning traces relying on three criteria we validate through ablations: difficulty, diversity, and quality. Second, we develop budget forcing to control test-time compute by forcefully terminating the model’s thinking process or lengthening it by appending “Wait” multiple times to the model’s generation when it tries to end. This can lead the model to double-check its answer, often fixing incorrect reasoning steps. After supervised finetuning the Qwen2.5-32B-Instruct language model on s1K and equipping it with budget forcing, our model s1-32B exceeds o1-preview on competition math questions by up to 27% (MATH and AIME24). Further, scaling s1-32B with budget forcing allows extrapolating beyond its performance without test-time intervention: from 50% to 57% on AIME24. Our model, data, and code are open-source at this https URL
Despite reporting that suggests that Musk’s so-called Department of Government Efficiency (DOGE) task force has access to these Treasury systems on a “read-only” level, sources say Elez, who has visited a Kansas City office housing BFS systems, has many administrator-level privileges. Typically, those admin privileges could give someone the power to log into servers through secure shell access, navigate the entire file system, change user permissions, and delete or modify critical files. That could allow someone to bypass the security measures of, and potentially cause irreversible changes to, the very systems they have access to.
P33
Found some good papers about modern sortition
GPT Agents
More Discussion section
SBIRs
9:00 standup
Write two-way rotate/translate method. Need to build a 4×4 matrix. Looks like I might have something in my old PyBullet code. Nothing there, but wrote a nice little class:
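Roughly this shape (my sketch of the idea, not the actual class; assumes numpy and rotation about the z-axis):

```python
import numpy as np

class RotateTranslate:
    """Homogeneous 4x4 transform: rotate about z, then translate.
    'Two-way' meaning it can apply the transform and its inverse."""
    def __init__(self, theta: float, offset):
        c, s = np.cos(theta), np.sin(theta)
        self.m = np.array([
            [c,  -s,  0.0, offset[0]],
            [s,   c,  0.0, offset[1]],
            [0.0, 0.0, 1.0, offset[2]],
            [0.0, 0.0, 0.0, 1.0],
        ])
        self.m_inv = np.linalg.inv(self.m)

    def forward(self, p):
        """Transform a 3D point into the rotated/translated frame."""
        return (self.m @ np.append(p, 1.0))[:3]

    def inverse(self, p):
        """Undo the transform."""
        return (self.m_inv @ np.append(p, 1.0))[:3]
```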
Send note to Kat – done. She is not interested. Darn.
Edit Detection section – done
Add TACJ overview to P33. This is part of living with smart machines
Make a ppt that has a web page in it
Bills – done (dentist! tomorrow)
Dishes – done
Chores – done
Laundry – nope, but the dryer is hooked up now!
Review paper – finished reading and it’s much better. Still too wordy, but that’s not the sort of thing that is critical, since it has no impact on the findings, which are solid. Basically some formatting issues at this point.
Today, the U.S. Copyright Office is releasing Part 2 of its Report on the legal and policy issues related to copyright and artificial intelligence (AI). This Part of the Report addresses the copyrightability of outputs created using generative AI. The Office affirms that existing principles of copyright law are flexible enough to apply to this new technology, as they have applied to technological innovations in the past. It concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements. This can include situations where a human-authored work is perceptible in an AI output, or a human makes creative arrangements or modifications of the output, but not the mere provision of prompts. The Office confirms that the use of AI to assist in the process of creation or the inclusion of AI-generated material in a larger human-generated work does not bar copyrightability. It also finds that the case has not been made for changes to existing law to provide additional protection for AI-generated outputs.
An AI research team from the University of California, Berkeley, led by Ph.D. candidate Jiayi Pan, claims to have reproduced DeepSeek R1-Zero’s core technologies for just $30, showing how advanced models could be implemented affordably. According to Jiayi Pan on Nitter, their team reproduced DeepSeek R1-Zero in the Countdown game, and the small language model, with its 3 billion parameters, developed self-verification and search abilities through reinforcement learning.
Made a gif of a root growing as a metaphor for an LLM generating text from the same prompt four times (from this video):
P33
Added no confidence voting
GPT Agents
Arms control – finished!
SBIRs
9:00 standup
12:50 – 1:20 USNA
4:30 book club
More RTAT – Worked out how to iterate along the line segments as a function of t.