Monthly Archives: February 2025

Phil 2.11.2025

This happened today:

SBIRs

  • 9:00 standup – done
  • Read through proposal for meeting – done
  • 11:00 BD Meeting – went well, but no one has money

GPT Agents

  • JHU Speaker info – done
  • Start slide deck – technically, yes
  • Work more on conclusions? – Nope
  • TiiS? – Nope
  • Did do a lot on the money section of P33

Phil 2.10.2025

Reschedule Wednesday visit since snow – done

See about moving records?

TiiS review!

Collective future thinking in Cultural Dynamics

  • Humans think about the future to act in the present, not only personally, but also collectively. Collective future thinking (CFT) is an act of imagining a future on behalf of a collective. This article presents a theoretical analysis of the role of CFT in cultural dynamics. CFT includes collective prospection about probable futures and imaginations about utopian and dystopian possible worlds as the best- and worst-case scenarios. CFT motivates collective self-regulatory activities to steer probable futures towards utopias and away from dystopias, driving a cultural transformation while also acting as a force for cultural maintenance, animating cultural dynamics at the micro-psychological level. Empirical research showed that collective futures are often seen to involve progress in human agency, but a decline in community cohesion, unless collective self-regulation is undertaken. In line with the theoretical proposition, CFT consistently motivated collective self-regulatory activities that are seen to improve future community cohesion and to move the current culture closer to their utopian vision around the world despite significant cross-national variabilities. A macro-level cultural dynamical perspective is provided to interpret cross-national similarities and differences in CFT as a reflection of nations’ past historical trajectories, and to discuss CFT’s role in political polarization and collective self-regulation.

Good thing to use for the AI slop talk: The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers

  • The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks. Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.
  • Which led to this little back and forth on Teams:
    • It just dawned on me that LLMs are way better at explaining things in a neurotypical way than I am.  I know you also said this before but it has become more real for me.
    • There is a weird mirror image to that thought too – when  an LLM does not describe your understanding of the world, you can be pretty sure that reflects the biases in the writings it was trained on. You can use that to zero in on exactly what the differences are, and between your perspective and that encoded in the LLM, articulate an understanding that includes both. I use that trick all the time.

GPT Agents

  • Good progress over the weekend. Need to edit the J6 section next
  • And this is a good example of “blast radius”: The NSA’s “Big Delete” (a toy sketch of the keyword-collision problem is at the end of this list)
    • The memo acknowledges that the list includes many terms that are used by the NSA in contexts that have nothing to do with DEI. For example, the term “privilege” is used by the NSA in the context of “privilege escalation.” In the intelligence world, privilege escalation refers to “techniques that adversaries use to gain higher-level permissions on a system or network.”
  • Here’s what I need for website & announcements:
    • Title for the talk
    • Brief abstract (1-2 paragraphs)
    • Short bio (up to half a page)
    • Photo (headshot preferred)
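As an aside, a toy sketch of why that kind of keyword purge has such a large blast radius (the term list and snippets here are made up; the point is that bare substring matching flags legitimate technical usage like “privilege escalation” right along with the intended targets):

flagged_terms = ["privilege", "bias", "inclusion"]  # made-up fragment of a keyword list

documents = [
    "Training on unconscious bias in hiring panels.",
    "Adversaries used privilege escalation to gain admin rights.",  # technical, not DEI
    "Mitigating sampling bias in the test harness.",                # technical, not DEI
]

for doc in documents:
    hits = [term for term in flagged_terms if term in doc.lower()]
    if hits:
        print(f"WOULD DELETE {hits}: {doc}")
# All three get flagged; a bare keyword match cannot tell the contexts apart.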

SBIRs

  • Reschedule Tuesday visit? Also snow – Now a virtual meeting
  • Generate sets of data with varying amounts of training data but keep the test set the same size. The goal is to find the smallest training set that works (rough sketch of the split logic at the end of this list). On hold
  • <record scratch> Aaron’s sick, so I’m standing in for a few days
    • Dahlgren prep meeting. I think we are good to go. Need to read the proposal again
    • Working on IRAD slides – first pass is done
    • Reviewed Ron’s USNA email, which was very “AI slop”
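A rough sketch of the varying-train-size idea from a couple of bullets up (pure numpy, made-up names and sizes; the test split is carved out once and frozen while the training prefix grows):

import numpy as np


def make_splits(features: np.ndarray, labels: np.ndarray,
                test_size: int, train_sizes: list[int], seed: int = 0):
    """Yield (train_x, train_y, test_x, test_y) with one fixed test set and nested, growing train sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    test_idx, pool = idx[:test_size], idx[test_size:]
    for n in sorted(train_sizes):
        train_idx = pool[:n]  # nested prefixes, so each training set contains the smaller ones
        yield (features[train_idx], labels[train_idx],
               features[test_idx], labels[test_idx])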

Phil 2.7.2025

May or may not be true, but good material for the KA talk this month: Elon Musk’s and X’s Role in 2024 Election Interference

  • One of the most disturbing things we did was create thousands of fake accounts using advanced AI systems called Grok and Eliza. These accounts looked completely real and pushed political messages that spread like wildfire. Haven’t you noticed they all disappeared? Like magic.
  • The pilot program for the Eliza AI Agent was election interference. Eliza was released officially in October of 2024, but we had access to it before then thanks to Marc Andreessen.
  • The link to the Eliza API is legit (Copied here for future reference)
{
    "name": "trump",
    "clients": ["discord", "direct"],
    "settings": {
        "voice": { "model": "en_US-male-medium" }
    },
    "bio": [
        "Built a strong economy and reduced inflation.",
        "Promises to make America the crypto capital and restore affordability."
    ],
    "lore": [
        "Secret Service allocations used for election interference.",
        "Promotes WorldLibertyFi for crypto leadership."
    ],
    "knowledge": [
        "Understands border issues, Secret Service dynamics, and financial impacts on families."
    ],
    "messageExamples": [
        {
            "user": "{{user1}}",
            "content": { "text": "What about the border crisis?" },
            "response": "Current administration lets in violent criminals. I secured the border; they destroyed it."
        }
    ],
    "postExamples": [
        "End inflation and make America affordable again.",
        "America needs law and order, not crime creation."
    ]
}

Tasks

This is wild. Need to read the paper carefully: On Verbalized Confidence Scores for LLMs: https://arxiv.org/abs/2412.14737 (a rough sketch of the prompting idea is below the task list)

  • The rise of large language models (LLMs) and their tight integration into our daily life make it essential to dedicate efforts towards their trustworthiness. Uncertainty quantification for LLMs can establish more human trust into their responses, but also allows LLM agents to make more informed decisions based on each other’s uncertainty. To estimate the uncertainty in a response, internal token logits, task-specific proxy models, or sampling of multiple responses are commonly used. This work focuses on asking the LLM itself to verbalize its uncertainty with a confidence score as part of its output tokens, which is a promising way for prompt- and model-agnostic uncertainty quantification with low overhead. Using an extensive benchmark, we assess the reliability of verbalized confidence scores with respect to different datasets, models, and prompt methods. Our results reveal that the reliability of these scores strongly depends on how the model is asked, but also that it is possible to extract well-calibrated confidence scores with certain prompt methods. We argue that verbalized confidence scores can become a simple but effective and versatile uncertainty quantification method in the future. Our code is available at this https URL .
  • Bills – done
  • Call – done
  • Chores – done
  • Dishes – done
  • Go for a reasonably big ride – done
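For future reference, a rough sketch of the verbalized-confidence idea from the paper above (the prompt wording and parsing are my own guesses, not the paper’s code, and call_llm stands in for whatever chat client is actually in use):

import re
from typing import Callable


def ask_with_confidence(question: str, call_llm: Callable[[str], str]) -> tuple[str, float]:
    """Ask a question and have the model verbalize a 0-100 confidence score in its own answer."""
    prompt = (question + "\n\n"
              "After your answer, add one final line of the form "
              "'Confidence: <number between 0 and 100>' giving how sure you are the answer is correct.")
    reply = call_llm(prompt)  # call_llm is a placeholder for the actual chat/completions call
    match = re.search(r"Confidence:\s*([0-9]+(?:\.[0-9]+)?)", reply)
    confidence = float(match.group(1)) / 100.0 if match else float("nan")
    answer = reply[:match.start()].strip() if match else reply.strip()
    return answer, confidence

The paper’s point is that how well these scores are calibrated depends heavily on how the model is asked, so the prompt above is the part that would need the real tuning.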

Phil 2.6.2025

From The Bulwark. Good example of creating a social reality and using it for an organizational lobotomy. Add to the book following the Jan 6 section?

Full thread here

There is an interesting blog post (and thread) from Tim Kellogg that says this:

  • Context: When an LLM “thinks” at inference time, it puts its thoughts inside <think> and </think> XML tags. Once it gets past the end tag, the model is taught to change voice to a confident and authoritative tone for the final answer.
  • In s1, when the LLM tries to stop thinking with “</think>”, they force it to keep going by replacing it with “Wait”. It’ll then begin to second-guess and double-check its answer. They do this to trim or extend thinking time (trimming is just abruptly inserting “</think>”).

This is the paper: s1: Simple test-time scaling

  • Test-time scaling is a promising new approach to language modeling that uses extra test-time compute to improve performance. Recently, OpenAI’s o1 model showed this capability but did not publicly share its methodology, leading to many replication efforts. We seek the simplest approach to achieve test-time scaling and strong reasoning performance. First, we curate a small dataset s1K of 1,000 questions paired with reasoning traces relying on three criteria we validate through ablations: difficulty, diversity, and quality. Second, we develop budget forcing to control test-time compute by forcefully terminating the model’s thinking process or lengthening it by appending “Wait” multiple times to the model’s generation when it tries to end. This can lead the model to double-check its answer, often fixing incorrect reasoning steps. After supervised finetuning the Qwen2.5-32B-Instruct language model on s1K and equipping it with budget forcing, our model s1-32B exceeds o1-preview on competition math questions by up to 27% (MATH and AIME24). Further, scaling s1-32B with budget forcing allows extrapolating beyond its performance without test-time intervention: from 50% to 57% on AIME24. Our model, data, and code are open-source at this https URL
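Roughly, the budget-forcing loop looks like this (string-level sketch only; generate_step is a stand-in for whatever incremental generation call is available, and the real s1 code works on tokens and a token budget rather than characters):

from typing import Callable


def budget_forced_generate(prompt: str,
                           generate_step: Callable[[str], str],
                           min_waits: int = 2,
                           max_chars: int = 8000) -> str:
    """Crude string-level version of s1-style budget forcing."""
    text = prompt + "<think>"
    waits = 0
    while True:
        chunk = generate_step(text)  # model continues from everything generated so far
        text += chunk
        if len(text) > max_chars:
            # Over budget: trim thinking by force-closing the tag; a real
            # implementation would then decode the final answer.
            if "</think>" not in chunk:
                text += "</think>"
            break
        if "</think>" in chunk:
            if waits < min_waits:
                # The model tried to stop thinking early: swap the close tag for "Wait"
                # so it second-guesses and keeps reasoning (the extend case).
                text = text.replace("</think>", "Wait", 1)
                waits += 1
            else:
                break  # enough forced extensions; let the thinking end
    return text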

Tasks

SBIRs

  • 9:00 standup – done
  • 10:00 MLOPS whitepaper review
  • 12:50 USNA

Phil 2.5.2025

9:40 physical

Post cards!

SBIRs

  • 3:30 BD meeting
  • Move FrameMapper to its own file – done
  • Updated the official GitHub
  • Working on using FrameMapper in ScenarioSim test – got it all working, I think.

GPT Agents

  • Alden meeting – done
  • More KA

P33

  • Made some good first draft progress on sortition

Phil 2.4.2025

At any other time in my life, this would be a 5-alarm fire scandal. People would be going to jail. Now, it’s Tuesday: A 25-Year-Old With Elon Musk Ties Has Direct Access to the Federal Payment System

  • Despite reporting that suggests that Musk’s so-called Department of Government Efficiency (DOGE) task force has access to these Treasury systems on a “read-only” level, sources say Elez, who has visited a Kansas City office housing BFS systems, has many administrator-level privileges. Typically, those admin privileges could give someone the power to log into servers through secure shell access, navigate the entire file system, change user permissions, and delete or modify critical files. That could allow someone to bypass the security measures of, and potentially cause irreversible changes to, the very systems they have access to.

P33

  • Found some good papers about modern sortition

GPT Agents

  • More Discussion section

SBIRs

  • 9:00 standup
  • Write two-way rotate/translate method. Need to build a 4×4 matrix. Looks like I might have something in my old PyBullet code. Nothing there, but wrote a nice little class:
import numpy as np


class FrameMapper:
    """Maps 2D points between the world frame and a 'calc' frame anchored at source_v,
    with the rotation angle set by the direction from source_v to target_v."""
    radians: float
    degrees: float
    distance: float
    offset: np.ndarray
    fwd_mat: np.ndarray
    rev_mat: np.ndarray

    def __init__(self, source_v: np.ndarray, target_v: np.ndarray):
        self.offset = source_v
        unit_vector1 = np.array([1, 0])
        vector2 = target_v - source_v
        self.distance = np.linalg.norm(vector2)
        unit_vector2 = vector2 / self.distance
        dot_product = np.dot(unit_vector1, unit_vector2)
        self.radians = np.arccos(dot_product)  # unsigned angle (in [0, pi]) between the x-axis and the source->target vector
        self.degrees = np.rad2deg(self.radians)
        # 2x2 rotation matrices: fwd_mat rotates by +radians, rev_mat by -radians (its inverse)
        cos_a = np.cos(self.radians)
        sin_a = np.sin(self.radians)
        self.fwd_mat = np.array([[cos_a, -sin_a], [sin_a, cos_a]])
        cos_a = np.cos(-self.radians)
        sin_a = np.sin(-self.radians)
        self.rev_mat = np.array([[cos_a, -sin_a], [sin_a, cos_a]])

    def to_calc_frame(self, point: np.ndarray) -> np.ndarray:
        # Translate so the source point is the origin, then rotate
        p = np.copy(point) - self.offset
        p = np.dot(self.fwd_mat, p)
        return p

    def from_calc_frame(self, point: np.ndarray) -> np.ndarray:
        # Apply the inverse rotation, then translate back
        p = np.copy(point)
        p = np.dot(self.rev_mat, p)
        p += self.offset
        return p

    def to_string(self) -> str:
        return "Offset: {}, distance: {}, angle = {}".format(self.offset, self.distance, self.degrees)

Phil 2.3.2025

Made a lot of progress on Killer Apps. Finished Detection, Disruption, and Attacks and Counterattacks

Meanwhile, on the Fox News home page: