Category Archives: Phil

Phil 2.12.2025

Snowed about 5-6 inches last night, so I need to dig out before the “wintry mix” hits around noon

Language Models Use Trigonometry to Do Addition

  • Mathematical reasoning is an increasingly important indicator of large language model (LLM) capabilities, yet we lack understanding of how LLMs process even simple mathematical tasks. To address this, we reverse engineer how three mid-sized LLMs compute addition. We first discover that numbers are represented in these LLMs as a generalized helix, which is strongly causally implicated for the tasks of addition and subtraction, and is also causally relevant for integer division, multiplication, and modular arithmetic. We then propose that LLMs compute addition by manipulating this generalized helix using the “Clock” algorithm: to solve a+b, the helices for a and b are manipulated to produce the a+b answer helix which is then read out to model logits. We model influential MLP outputs, attention head outputs, and even individual neuron preactivations with these helices and verify our understanding with causal interventions. By demonstrating that LLMs represent numbers on a helix and manipulate this helix to perform addition, we present the first representation-level explanation of an LLM’s mathematical capability.
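As a loose illustration of the "Clock" idea (my own toy, not the paper's method or its probe of model internals): put digits on a circle – one period of the helix – add by summing angles, and read the answer back off the circle.

```python
import numpy as np

# Toy "Clock" sketch: digits live on a circle (one period of the paper's
# generalized helix); addition is rotation, readout is nearest digit.
def angle(d: int, base: int = 10) -> float:
    # embed digit d as an angle on the unit circle
    return 2 * np.pi * d / base

def clock_add(a: int, b: int, base: int = 10) -> int:
    # adding digits = composing the two rotations
    total = angle(a, base) + angle(b, base)
    x, y = np.cos(total), np.sin(total)
    # recover the summed angle and convert back to a digit mod base
    return int(round(np.arctan2(y, x) / (2 * np.pi) * base)) % base

print(clock_add(7, 8))  # -> 5, i.e. 15 mod 10
```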

GPT Agents

  • Slide deck – Add this: Done

NOTE: The USA dropped below the “democracy threshold” (+6) on the POLITY scale in 2020 and was considered an anocracy (+5) at the end of the year 2020; the USA score for 2021 returned to democracy (+8). Beginning on 1 July 2024, due to the US Supreme Court ruling granting the US Presidency broad, legal immunity, the USA is noted by the Polity Project as experiencing a regime transition through, at least, 20 January 2025. As of the latter date, the USA is coded EXREC=8, “Competitive Elections”; EXCONST=1 “Unlimited Executive Authority”; and POLCOMP=6 “Factional/Restricted Competition.” Polity scores: DEMOC=4; AUTOC=4; POLITY=0.

The USA is no longer considered a democracy and lies at the cusp of autocracy; it has experienced a Presidential Coup and an Adverse Regime Change event (8-point drop in its POLITY score).

  • Work more on conclusions? Yes!
  • TiiS? Nope

SBIRs

  • 9:00 IRAD Monthly – done
  • Actually got some good work on automating file generation using config files.

Phil 2.11.2025

This happened today:

SBIRs

  • 9:00 standup – done
  • Read through proposal for meeting – done
  • 11:00 BD Meeting – went well, but no one has money

GPT Agents

  • JHU Speaker info – done
  • Start slide deck – technically, yes
  • Work more on conclusions? – Nope
  • TiiS? – Nope
  • Did do a lot on the money section of P33

Phil 2.10.2025

Reschedule Wednesday visit since snow – done

See about moving records?

TiiS review!

Collective future thinking in Cultural Dynamics

  • Humans think about the future to act in the present, not only personally, but also collectively. Collective future thinking (CFT) is an act of imagining a future on behalf of a collective. This article presents a theoretical analysis of the role of CFT in cultural dynamics. CFT includes collective prospection about probable futures and imaginations about utopian and dystopian possible worlds as the best- and worst-case scenarios. CFT motivates collective self-regulatory activities to steer probable futures towards utopias and away from dystopias, driving a cultural transformation while also acting as a force for cultural maintenance, animating cultural dynamics at the micro-psychological level. Empirical research showed that collective futures are often seen to involve progress in human agency, but a decline in community cohesion, unless collective self-regulation is undertaken. In line with the theoretical proposition, CFT consistently motivated collective self-regulatory activities that are seen to improve future community cohesion and to move the current culture closer to their utopian vision around the world despite significant cross-national variabilities. A macro-level cultural dynamical perspective is provided to interpret cross-national similarities and differences in CFT as a reflection of nations’ past historical trajectories, and to discuss CFT’s role in political polarization and collective self-regulation.

Good thing to use for the AI slop talk: The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers

  • The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks. Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.
  • Which led to this little back and forth on Teams:
    • It just dawned on me that LLMs are way better at explaining things in a neurotypical way than I am.  I know you also said this before but it has become more real for me.
    • There is a weird mirror image to that thought too – when an LLM does not describe your understanding of the world, you can be pretty sure that reflects the biases in the writings it was trained on. You can use that to zero in on exactly what the differences are, and between your perspective and that encoded in the LLM, articulate an understanding that includes both. I use that trick all the time.

GPT Agents

  • Good progress over the weekend. Need to edit the J6 section next
  • And this is a good example of “blast radius”: The NSA’s “Big Delete”
    • The memo acknowledges that the list includes many terms that are used by the NSA in contexts that have nothing to do with DEI. For example, the term “privilege” is used by the NSA in the context of “privilege escalation.” In the intelligence world, privilege escalation refers to “techniques that adversaries use to gain higher-level permissions on a system or network.”
  • Here’s what I need for website & announcements:
    • Title for the talk
    • Brief abstract (1-2 paragraphs)
    • Short bio (up to half a page)
    • Photo (headshot preferred)

SBIRs

  • Reschedule Tuesday visit? Also snow – Now a virtual meeting
  • Generate sets of data with varying amounts of training data but keep the test set the same size. The goal is to find the smallest training set that works. On hold
  • <record scratch> Aaron’s sick, so I’m standing in for a few days
    • Dahlgren prep meeting. I think we are good to go. Need to read the proposal again
    • Working on IRAD slides – first pass is done
    • Reviewed Ron’s USNA email, which was very “AI slop”

Phil 2.7.2025

May or may not be true, but good material for the KA talk this month: Elon Musk’s and X’s Role in 2024 Election Interference

  • One of the most disturbing things we did was create thousands of fake accounts using advanced AI systems called Grok and Eliza. These accounts looked completely real and pushed political messages that spread like wildfire. Haven’t you noticed they all disappeared? Like magic.
  • The pilot program for the Eliza AI Agent was election interference. Eliza was released officially in October of 2024, but we had access to it before then thanks to Marc Andreessen.
  • The link to the Eliza API is legit (Copied here for future reference)
{
    "name": "trump",
    "clients": ["discord", "direct"],
    "settings": {
        "voice": { "model": "en_US-male-medium" }
    },
    "bio": [
        "Built a strong economy and reduced inflation.",
        "Promises to make America the crypto capital and restore affordability."
    ],
    "lore": [
        "Secret Service allocations used for election interference.",
        "Promotes WorldLibertyFi for crypto leadership."
    ],
    "knowledge": [
        "Understands border issues, Secret Service dynamics, and financial impacts on families."
    ],
    "messageExamples": [
        {
            "user": "{{user1}}",
            "content": { "text": "What about the border crisis?" },
            "response": "Current administration lets in violent criminals. I secured the border; they destroyed it."
        }
    ],
    "postExamples": [
        "End inflation and make America affordable again.",
        "America needs law and order, not crime creation."
    ]
}

Tasks

This is wild. Need to read the paper carefully: On Verbalized Confidence Scores for LLMs: https://arxiv.org/abs/2412.14737

  • The rise of large language models (LLMs) and their tight integration into our daily life make it essential to dedicate efforts towards their trustworthiness. Uncertainty quantification for LLMs can establish more human trust into their responses, but also allows LLM agents to make more informed decisions based on each other’s uncertainty. To estimate the uncertainty in a response, internal token logits, task-specific proxy models, or sampling of multiple responses are commonly used. This work focuses on asking the LLM itself to verbalize its uncertainty with a confidence score as part of its output tokens, which is a promising way for prompt- and model-agnostic uncertainty quantification with low overhead. Using an extensive benchmark, we assess the reliability of verbalized confidence scores with respect to different datasets, models, and prompt methods. Our results reveal that the reliability of these scores strongly depends on how the model is asked, but also that it is possible to extract well-calibrated confidence scores with certain prompt methods. We argue that verbalized confidence scores can become a simple but effective and versatile uncertainty quantification method in the future. Our code is available at this https URL .
  • Bills – done
  • Call – done
  • Chores – done
  • Dishes – done
  • Go for a reasonably big ride – done
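The verbalized-confidence idea from the paper above reduces to a prompt pattern plus a parser. A minimal sketch – the "Confidence:" output format and the canned reply are my own assumptions, not the paper's protocol:

```python
import re

# Ask the model to append a confidence score, then parse it out of the text.
PROMPT = ("Answer the question, then on a new line write "
          "'Confidence: <a number between 0.0 and 1.0>'.\n\nQ: {question}")

def parse_confidence(response: str):
    # pull the first 'Confidence: x.xx' out of the model's reply, if any
    m = re.search(r"Confidence:\s*([01](?:\.\d+)?)", response)
    return float(m.group(1)) if m else None

reply = "A: Paris\nConfidence: 0.95"  # canned reply standing in for a model call
print(parse_confidence(reply))  # -> 0.95
```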

Phil 2.6.2025

From The Bulwark. Good example of creating a social reality and using it for an organizational lobotomy. Add to the book following Jan 6 section?

Full thread here

There is an interesting blog post (and thread) from Tim Kellogg that says this:

  • Context: When an LLM “thinks” at inference time, it puts its thoughts inside <think> and </think> XML tags. Once it gets past the end tag, the model is taught to change voice into a confident and authoritative tone for the final answer.
  • In s1, when the LLM tries to stop thinking by emitting “</think>”, they force it to keep going by replacing it with “Wait”. It’ll then begin to second-guess and double-check its answer. They do this to trim or extend thinking time (trimming is just abruptly inserting “</think>”).

This is the paper: s1: Simple test-time scaling

  • Test-time scaling is a promising new approach to language modeling that uses extra test-time compute to improve performance. Recently, OpenAI’s o1 model showed this capability but did not publicly share its methodology, leading to many replication efforts. We seek the simplest approach to achieve test-time scaling and strong reasoning performance. First, we curate a small dataset s1K of 1,000 questions paired with reasoning traces relying on three criteria we validate through ablations: difficulty, diversity, and quality. Second, we develop budget forcing to control test-time compute by forcefully terminating the model’s thinking process or lengthening it by appending “Wait” multiple times to the model’s generation when it tries to end. This can lead the model to double-check its answer, often fixing incorrect reasoning steps. After supervised finetuning the Qwen2.5-32B-Instruct language model on s1K and equipping it with budget forcing, our model s1-32B exceeds o1-preview on competition math questions by up to 27% (MATH and AIME24). Further, scaling s1-32B with budget forcing allows extrapolating beyond its performance without test-time intervention: from 50% to 57% on AIME24. Our model, data, and code are open-source at this https URL
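The budget-forcing mechanic is easy to sketch in isolation. This toy just shows the substitution rule on a fixed token list; in the real method the swap happens inside the decoding loop, so the injected "Wait" changes every token that follows:

```python
# Toy sketch of s1-style budget forcing on a pre-made token list.
END_THINK = "</think>"

def force_budget(tokens, min_waits=2):
    # each time the model tries to end thinking before we've injected
    # min_waits "Wait"s, swap the end tag for "Wait" and keep going
    out, waits = [], 0
    for tok in tokens:
        if tok == END_THINK and waits < min_waits:
            out.append("Wait")
            waits += 1
        else:
            out.append(tok)
    return out

toks = ["2+2", "is", "4", END_THINK, "hmm", END_THINK, "Answer:", "4"]
print(force_budget(toks))
# -> ['2+2', 'is', '4', 'Wait', 'hmm', 'Wait', 'Answer:', '4']
```

Trimming is the mirror image: insert the end tag early to cut thinking short.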

Tasks

SBIRs

  • 9:00 standup – done
  • 10:00 MLOPS whitepaper review
  • 12:50 USNA

Phil 2.5.2025

9:40 physical

Post cards!

SBIRs

  • 3:30 BD meeting
  • Move FrameMapper to its own file – done
  • Updated the official GitHub
  • Working on using FrameMapper in ScenarioSim test – got it all working, I think.

GPT Agents

  • Alden meeting – done
  • More KA

P33

  • Made some good first draft progress on sortition

Phil 2.4.2025

At any other time in my life, this would be a 5-alarm fire scandal. People would be going to jail. Now, it’s Tuesday: A 25-Year-Old With Elon Musk Ties Has Direct Access to the Federal Payment System

  • Despite reporting that suggests that Musk’s so-called Department of Government Efficiency (DOGE) task force has access to these Treasury systems on a “read-only” level, sources say Elez, who has visited a Kansas City office housing BFS systems, has many administrator-level privileges. Typically, those admin privileges could give someone the power to log into servers through secure shell access, navigate the entire file system, change user permissions, and delete or modify critical files. That could allow someone to bypass the security measures of, and potentially cause irreversible changes to, the very systems they have access to.

P33

  • Found some good papers about modern sortition

GPT Agents

  • More Discussion section

SBIRs

  • 9:00 standup
  • Write two-way rotate/translate method. Need to build a 4×4 matrix. Looks like I might have something in my old PyBullet code. Nothing there, but wrote a nice little class:
import numpy as np


class FrameMapper:
    # Maps 2D points between the world frame and a "calc" frame whose origin
    # is source_v and whose +x axis points from source_v toward target_v.
    # {"radians": rads, "degrees": degs, "distance": dist, "offset": source_v}
    radians: float
    degrees: float
    distance: float
    offset: np.ndarray
    fwd_mat: np.ndarray
    rev_mat: np.ndarray

    def __init__(self, source_v: np.ndarray, target_v: np.ndarray):
        self.offset = source_v
        vector2 = target_v - source_v
        self.distance = np.linalg.norm(vector2)
        # arctan2 gives the signed angle; arccos of a dot product with [1, 0]
        # would lose the sign for targets below the x-axis
        self.radians = np.arctan2(vector2[1], vector2[0])
        self.degrees = np.rad2deg(self.radians)
        cos_a = np.cos(self.radians)
        sin_a = np.sin(self.radians)
        # rotate by -angle so the source->target direction lands on +x
        self.fwd_mat = np.array([[cos_a, sin_a], [-sin_a, cos_a]])
        # rotate by +angle to undo it
        self.rev_mat = np.array([[cos_a, -sin_a], [sin_a, cos_a]])

    def to_calc_frame(self, point: np.ndarray) -> np.ndarray:
        # translate to the calc-frame origin, then rotate onto its axes
        return np.dot(self.fwd_mat, point - self.offset)

    def from_calc_frame(self, point: np.ndarray) -> np.ndarray:
        # rotate back, then translate back
        return np.dot(self.rev_mat, point) + self.offset

    def to_string(self) -> str:
        return "Offset: {}, distance: {}, angle = {}".format(
            self.offset, self.distance, self.degrees)
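A quick standalone round-trip check of the rotate/translate mapping the class implements (plain NumPy; the 30° angle and points are arbitrary):

```python
import numpy as np

# world -> calc frame: translate by -offset, then rotate; the inverse undoes it
offset = np.array([1.0, 2.0])
theta = np.deg2rad(30.0)
c, s = np.cos(theta), np.sin(theta)
fwd = np.array([[c, s], [-s, c]])  # rotation by -theta
rev = np.array([[c, -s], [s, c]])  # rotation by +theta

p = np.array([4.0, 7.0])
p_calc = fwd @ (p - offset)
p_back = rev @ p_calc + offset
print(np.allclose(p_back, p))  # -> True: the round trip recovers the point
```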

Phil 1.2.2025

Made a lot of progress on Killer Apps. Finished Detection, Disruption, and Attacks and Counterattacks

Meanwhile, on the Fox News home page:

Phil 1.31.2025

The census.gov website is dead:

Tasks

  • Send note to Kat – done. She is not interested. Darn.
  • Edit Detection section – done
  • Add TACJ overview to P33. This is part of living with smart machines
  • Make a ppt that has a web page in it
  • Bills – done (dentist! tomorrow)
  • Dishes – done
  • Chores – done
  • Laundry – nope, but the dryer is hooked up now!
  • Review paper – finished reading and it’s much better. Still too wordy, but that’s not the sort of thing that is critical, since it has no impact on the findings, which are solid. Basically some formatting issues at this point.

More root stuff

Phil 1.30.2025

Copyright Office Releases Part 2 of Artificial Intelligence Report

  • Today, the U.S. Copyright Office is releasing Part 2 of its Report on the legal and policy issues related to copyright and artificial intelligence (AI). This Part of the Report addresses the copyrightability of outputs created using generative AI. The Office affirms that existing principles of copyright law are flexible enough to apply to this new technology, as they have applied to technological innovations in the past. It concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements. This can include situations where a human-authored work is perceptible in an AI output, or a human makes creative arrangements or modifications of the output, but not the mere provision of prompts. The Office confirms that the use of AI to assist in the process of creation or the inclusion of AI-generated material in a larger human-generated work does not bar copyrightability. It also finds that the case has not been made for changes to existing law to provide additional protection for AI-generated outputs.

AI research team claims to reproduce DeepSeek core technologies for $30 — relatively small R1-Zero model has remarkable problem-solving abilities | Tom’s Hardware

  • An AI research team from the University of California, Berkeley, led by Ph.D. candidate Jiayi Pan, claims to have reproduced DeepSeek R1-Zero’s core technologies for just $30, showing how advanced models could be implemented affordably. According to Jiayi Pan on Nitter, their team reproduced DeepSeek R1-Zero in the Countdown game, and the small language model, with its 3 billion parameters, developed self-verification and search abilities through reinforcement learning.

Made a gif of a root growing as a metaphor for an LLM generating text from the same prompt four times (from this video):

P33

  • Added no confidence voting

GPT Agents

  • Arms control – finished!

SBIRs

  • 9:00 standup
  • 12:50 – 1:20 USNA
  • 4:30 book club
  • More RTAT – Worked out how to iterate along the line segments as a function of t.
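The segment iteration can be sketched as arc-length parameterization of a polyline by t in [0, 1] (my own sketch of the idea, not the project code; assumes non-degenerate segments):

```python
import numpy as np

def point_at(points: np.ndarray, t: float) -> np.ndarray:
    # walk the polyline until the cumulative length covers t * total length
    seg = np.diff(points, axis=0)
    lengths = np.linalg.norm(seg, axis=1)
    target = np.clip(t, 0.0, 1.0) * lengths.sum()
    for p0, d, length in zip(points[:-1], seg, lengths):
        if target <= length:
            return p0 + d * (target / length)
        target -= length
    return points[-1]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])  # an L-shaped path
print(point_at(pts, 0.75))  # -> [1.  0.5]
```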

Phil 1.29.2025

The Ignite thing went well – it looks like it should be fun! Need to see how to get a webpage running in PPT that works with an LLM

Did some more work on P33

SBIRs

  • 2:30 BD meeting. Capabilities maybe?
  • I think I need to do two sorts for the best options. The first on accuracy, then the second on distance – done. Piece-o-pie
  • Got animations in 3D pyplot working. I almost know what’s going on, too!
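The two-sort trick works because Python's sort is stable: whichever sort runs last supplies the primary key, and the earlier sort survives as the tie-breaker. A sketch with made-up option records (here accuracy ends up primary; swap the sort order if distance should dominate):

```python
# Stable two-pass sort: distance breaks ties among equally accurate options.
options = [
    {"accuracy": 0.9, "distance": 12.0},
    {"accuracy": 0.8, "distance": 5.0},
    {"accuracy": 0.9, "distance": 7.0},
]
options.sort(key=lambda o: o["distance"])   # secondary key first
options.sort(key=lambda o: -o["accuracy"])  # primary key last: best accuracy wins
print([(o["accuracy"], o["distance"]) for o in options])
# -> [(0.9, 7.0), (0.9, 12.0), (0.8, 5.0)]
```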

GPT Agents

  • More Arms Control

Phil 1.28.2025

Need to reach out to Markus Schneider for the Trustworthy Information proposal – done!

SBIRs

  • Starting on the DSR-2291 task, which I think is movement along the two paths at a time $t$. My sense is that I should calculate the whole thing and start the second path at the time that gives the highest value. If there is no solution, don’t start. And if the probability of intersection is less than 100%, do a dice roll.
  • Looks like more BD stuff. I wonder if the demo will actually get done?
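The plan in that first bullet can be sketched as a sweep over candidate start times plus a probability gate (everything here – the value function, names, and ranges – is hypothetical):

```python
import random

def best_start(value_at, t_max, p_intersect, rng=random.random):
    # sweep candidate start times for the second path; keep the best value
    best_t, best_v = None, float("-inf")
    for t in range(t_max):
        v = value_at(t)
        if v > best_v:
            best_t, best_v = t, v
    if best_t is None:
        return None  # no feasible start time: don't launch
    if rng() > p_intersect:
        return None  # the dice roll: this attempt would miss
    return best_t

print(best_start(lambda t: -(t - 3) ** 2, 10, 1.0))  # -> 3
```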

Phil 1.27.2025

Made some progress on P33. Need to reach out to Manlio De Domenico on that? Also Markus Schneider for the Trustworthy Information proposal

So here’s an interesting thing. There is a considerable discourse about how AI is wrecking the environment. It is absolutely true that more data centers are being built, and they – on average – use a lot of water and a good deal of energy.

But there are a lot of worse offenders. Data centers consume about 4.5% of electricity in the US. That’s for everything. AI, the WordPress instance that you are reading now, Netflix streaming gigabytes of data per second – everything.

But there are much bigger energy users. To generate enough tokens for the entire Lord of the Rings trilogy, a Llama 3 model probably uses about 5 watt-hours. Transportation – a much larger energy consumer – shows how small this is: on that much energy, a Tesla Model 3 could manage to go about 25 feet, or a bit under 10 meters. Transportation, manufacturing, and energy production use a lot more energy:

Source: Wikipedia

If you want to make some changes in energy consumption, go after small improvements in the big consumers. Reduce energy consumption in, say, electricity production (37%) by doubling solar, and that’s the equivalent of cutting the power use of AI by 50%.

In addition to energy consumption, data centers require cooling, and they use a lot of water for it, though that is steadily being optimized down. On average a data center uses about 32 million gallons of water for cooling.

Sounds like a lot, right?

Let’s look at the oil and gas industry. The process of fracking, where water is injected at high pressure into oil- and gas-containing rock, uses about 11 million gallons per well, across about 2,500 wells, to produce crude oil. So data centers are worse than fracking!

But hold on. You still have to process that oil. And it turns out that for every barrel of oil refined in the US, about 1.5 barrels of water are used. The USA refines about 5.5 billion barrels of oil per year. Combine that with the fracking numbers and the oil and gas industry uses about 500 billion gallons of water per year, or 5 times the amount used by data centers doing all the things data centers do, including AI.

To keep this short, we are not going to even talk about the water use of agriculture here.

So why all the ink spilled on this? Well, AI is new and it gets clicks, but I went to Google Trends to compare the discussion of water use for AI and fracking, and I found an interesting relationship:

The amount of discussion about Fracking in this case has leveled off as the discussion of AI has taken off. And given the history that the oil industry has in generating FUD (fear, uncertainty and doubt), I would not be in the least surprised if it turns out that the oil industry is fueling the moral panic about AI to distract us from the real dangers and to keep those businesses profitable.

Killer Apps

  • More arms control

SBIRs

  • 9:00 sprint demos
  • 3:00 sprint planning
    • Training data file size sensitivity tests
    • Editor tool support for designing threats/assets
    • Transforming x to x’y’ (rotate and translate)
    • Optimize data creation
    • Single, randomized, trajectory calculation and intercept attempt for “real-time” demo
    • Multiple threat (raid) support
  • USNA support

Phil 1.26.2025

Meta has been busy:

Llama Stack defines and standardizes the core building blocks that simplify AI application development. It codifies best practices across the Llama ecosystem. More specifically, it provides:

  • Unified API layer for Inference, RAG, Agents, Tools, Safety, Evals, and Telemetry.
  • Plugin architecture to support the rich ecosystem of implementations of the different APIs in different environments like local development, on-premises, cloud, and mobile.
  • Prepackaged verified distributions which offer a one-stop solution for developers to get started quickly and reliably in any environment
  • Multiple developer interfaces like CLI and SDKs for Python, Node, iOS, and Android
  • Standalone applications as examples for how to build production-grade AI applications with Llama Stack

P33

  • Add section for stories that embody the egalitarian ethos – done
  • Add Implementations section for examples that have worked in the past on parts of the concept. Done
  • Also added a table of contents, since this was getting big.

Killer Apps book

  • Work on the arms control section