Category Archives: Phil

Phil 1.6.2026

John Feeley, a career diplomat and former ambassador to Panama who resigned in protest during Trump’s first term, said that to understand what’s unfolding in Venezuela, look to the mob, not traditional foreign policy doctrines. “When Donald Trump says, ‘We’re going to run the place,’ I want you to think of the Gambino family taking over the Colombo family’s business out in Queens,” he said. “They don’t actually go out and run it. They just get an envelope.”

The Imitation Game: Using Large Language Models as Chatbots to Combat Chat-Based Cybercrimes

  • Chat-based cybercrime has emerged as a pervasive threat, with attackers leveraging real-time messaging platforms to conduct scams that rely on trust-building, deception, and psychological manipulation. Traditional defense mechanisms, which operate on static rules or shallow content filters, struggle to identify these conversational threats, especially when attackers use multimedia obfuscation and context-aware dialogue.
    In this work, we ask a provocative question inspired by the classic Imitation Game: Can machines convincingly pose as human victims to turn deception against cybercriminals? We present LURE (LLM-based User Response Engagement), the first system to deploy Large Language Models (LLMs) as active agents, not as passive classifiers, embedded within adversarial chat environments.
    LURE combines automated discovery, adversarial interaction, and OCR-based analysis of image-embedded payment data. Applied to the setting of illicit video chat scams on Telegram, our system engaged 53 actors across 98 groups. In over 56 percent of interactions, the LLM maintained multi-round conversations without being noticed as a bot, effectively “winning” the imitation game. Our findings reveal key behavioral patterns in scam operations, such as payment flows, upselling strategies, and platform migration tactics.

Tasks

  • Light cleaning
  • 4:00 Showing
  • Working with Terry on getting our hotel sorted

SBIRs

  • Created an enormous tar file of all the pkl files
  • Start on the UMAP recoding
    • Reading in the lists of lists and extracting the embeddings
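  • A minimal sketch of that loading step, assuming each pkl holds a list of lists (one embedding vector per chunk of a book); the directory layout is my assumption:
    import glob
    import pickle

    import numpy as np

    chunks = []
    for path in sorted(glob.glob("embeddings/*.pkl")):
        with open(path, "rb") as f:
            rows = pickle.load(f)  # list of lists of floats
        chunks.append(np.asarray(rows, dtype=np.float32))

    X = np.vstack(chunks)  # one (num_vectors, dim) matrix, ready for UMAP
    print("loaded {} vectors of dimension {}".format(X.shape[0], X.shape[1]))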

Phil 1.5.2026

The theme for 2026 continues:

  • “In some cases, one of the biggest problems Venezuelans have is they have to declare independence from Cuba,” Rubio added. “They tried to basically colonize it from a security standpoint. So, yeah, look, if I lived in Havana and I was in the government, I’d be concerned at least a little bit.”

The Sortition Foundation organizes democratic lotteries for citizens’ assemblies and supports the 858 Project: a campaign to replace the House of Lords with a House of Citizens.

Washington Post: Recovering from AI delusions means learning to chat with humans again

Tasks

  • Write email for ACM book proposal and send it off. DONE! Acknowledged, even!

SBIRs

  • 9:00 Sprint demos. Need to make some slides! Done.
  • 3:00 Sprint planning. Done
  • Kick off the next round, but in the background so I can use the IDE – running. Done! 44,297 files
  • Rewrite the UMAP app (see the sketch after this list) so that it:
    • Reads through a specified number of series of files to get the embeddings (-1 == ALL FILES)
    • Builds the UMAP structure and saves it out
    • Time/memory checks for different numbers of files. Let’s not start with 70k books
    • Visualization of files. We can probably use the spreadsheet if we want more information than the title.
  • Maybe work on the white paper for Dr. J?
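  • A sketch of how the rewritten app’s core could look, assuming per-book pkl files like the loading step in the 1.6.2026 entry and umap-learn for the mapping; the paths, function name, and 2-D output are my assumptions, with the -1 == ALL FILES convention taken from the list above:
    import glob
    import pickle
    import time

    import numpy as np
    import umap  # umap-learn

    def build_and_save(pkl_dir: str, max_files: int = -1, out_path: str = "umap_reducer.pkl"):
        paths = sorted(glob.glob(pkl_dir + "/*.pkl"))
        if max_files != -1:
            paths = paths[:max_files]  # -1 means use everything
        chunks = []
        for p in paths:
            with open(p, "rb") as f:
                chunks.append(np.asarray(pickle.load(f), dtype=np.float32))
        X = np.vstack(chunks)

        t0 = time.perf_counter()  # rough time check for this file count
        reducer = umap.UMAP(n_components=2)
        points = reducer.fit_transform(X)
        print("fit {} vectors in {:.1f}s".format(X.shape[0], time.perf_counter() - t0))

        with open(out_path, "wb") as f:
            pickle.dump(reducer, f)  # save the structure for later mapping
        return points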

Phil 1.3.2026

I see a theme emerging for 2026:

US attacks Venezuela, captures president Maduro and says he will face criminal charges in America

Tasks

  • Light cleaning – done
  • 12:30 Showing – I think that might turn into a nibble?
  • Laundry – done
  • MTB spin through the woods – fun and done

What Drives Success in Physical Planning with Joint-Embedding Predictive World Models?

  • A long-standing challenge in AI is to develop agents capable of solving a wide range of physical tasks and generalizing to new, unseen tasks and environments. A popular recent approach involves training a world model from state-action trajectories and subsequently using it with a planning algorithm to solve new tasks. Planning is commonly performed in the input space, but a recent family of methods has introduced planning algorithms that optimize in the learned representation space of the world model, with the promise that abstracting irrelevant details yields more efficient planning. In this work, we characterize models from this family as JEPA-WMs and investigate the technical choices that make algorithms from this class work. We propose a comprehensive study of several key components with the objective of finding the optimal approach within the family. We conducted experiments using both simulated environments and real-world robotic data, and studied how the model architecture, the training objective, and the planning algorithm affect planning success. We combine our findings to propose a model that outperforms two established baselines, DINO-WM and V-JEPA-2-AC, in both navigation and manipulation tasks. Code, data and checkpoints are available at this https URL.
  • However, on real-world data (DROID and Robocasa), both larger encoders and deeper predictors yield consistent improvements, suggesting that scaling benefits depend on task complexity. We introduced an interface for planning with Nevergrad optimizers, leaving room for exploration of optimizers and hyperparameters. On the planning side, we found that CEM L2 performs best overall. The NG planner performs similarly to CEM on real-world manipulation data (DROID and Robocasa) while requiring less hyperparameter tuning, making it a practical alternative when transitioning to new tasks or datasets.

Phil 1.2.2026

A little less cold today. Going to try for my first ride of the year.

Tasks

  • Bills – done
  • Cleaning – done
  • Abstract. No, really. And done! Need to finalize into a nice email

SBIRs

  • Kick off a 10,000 book embedding run – done!

Gotta read this: Deep sequence models tend to memorize geometrically; it is unclear why

  • Deep sequence models are said to store atomic facts predominantly in the form of associative memory: a brute-force lookup of co-occurring entities. We identify a dramatically different form of storage of atomic facts that we term as geometric memory. Here, the model has synthesized embeddings encoding novel global relationships between all entities, including ones that do not co-occur in training. Such storage is powerful: for instance, we show how it transforms a hard reasoning task involving an l-fold composition into an easy-to-learn 1-step navigation task.
    From this phenomenon, we extract fundamental aspects of neural embedding geometries that are hard to explain. We argue that the rise of such a geometry, as against a lookup of local associations, cannot be straightforwardly attributed to typical supervisory, architectural, or optimizational pressures. Counterintuitively, a geometry is learned even when it is more complex than the brute-force lookup.
    Then, by analyzing a connection to Node2Vec, we demonstrate how the geometry stems from a spectral bias that – in contrast to prevailing theories – indeed arises naturally despite the lack of various pressures. This analysis also points out to practitioners a visible headroom to make Transformer memory more strongly geometric. We hope the geometric view of parametric memory encourages revisiting the default intuitions that guide researchers in areas like knowledge acquisition, capacity, discovery, and unlearning.

Phil 1.1.2026

New Year’s in Lisbon!

Tasks

  • Stew! Done! Yummy!
  • Hike! Done
  • Abstract! Touched it but mostly to make sure that elements from this piece are mentioned.
  • Went to see Avatar. It is in fact a lot like a fireworks display. Some of the CGI of human faces is crazy good – basically no uncanny valley of any kind. And I like the themes of the series, even if it’s all a bit heavy-handed and not subtle.

Phil 12.31.2025

Well, that was a year. Or something. Let’s try to do better, everyone!

Got myself some Therm-IC heated socks, and am going to try them out on the sub-freezing end-of-year ride today. Looks like I will have moved over 8k miles under my own power this year.

Tasks

  • Groceries
  • Work on the abstracts

SBIRs

  • Kick off a run and call it a day. At 26k books done

Universally Converging Representations of Matter Across Scientific Foundation Models

  • Machine learning models of vastly different modalities and architectures are being trained to predict the behavior of molecules, materials, and proteins. However, it remains unclear whether they learn similar internal representations of matter. Understanding their latent structure is essential for building scientific foundation models that generalize reliably beyond their training domains. Although representational convergence has been observed in language and vision, its counterpart in the sciences has not been systematically explored. Here, we show that representations learned by nearly sixty scientific models, spanning string-, graph-, 3D atomistic, and protein-based modalities, are highly aligned across a wide range of chemical systems. Models trained on different datasets have highly similar representations of small molecules, and machine learning interatomic potentials converge in representation space as they improve in performance, suggesting that foundation models learn a common underlying representation of physical reality. We then show two distinct regimes of scientific models: on inputs similar to those seen during training, high-performing models align closely and weak models diverge into local sub-optima in representation space; on vastly different structures from those seen during training, nearly all models collapse onto a low-information representation, indicating that today’s models remain limited by training data and inductive bias and do not yet encode truly universal structure. Our findings establish representational alignment as a quantitative benchmark for foundation-level generality in scientific models. More broadly, our work can track the emergence of universal representations of matter as models scale, and can help in selecting and distilling models whose learned representations transfer best across modalities, domains of matter, and scientific tasks.

Phil 12.30.2025

Neuronpedia is a free and open source platform for AI interpretability. It may be a nice way of getting at the layer activations that I’ve been looking for.

Tasks

  • Set up the example chapters in the ACM book format – done. That took a while. I can’t get the whole book to work either.
  • Cleaning/Organizing – done
  • 7:00 – 7:45 showing – done
  • Pick up Barbara 9:50 – done
  • Terry at 6:00 – done

SBIRs

  • 9:00 standup – done
  • More runs started
  • Start looking at how to run UMAP across multiple pickle files. Probably just iterate over the files to create the mapping and save it, then a second stage to calculate the mapped points (see the sketch after this list)
  • Send the Gemini analysis to Clay and CC Aaron – done. Clay says “Sounds like a good shrooms date, but not a collaborator” L.O.L.
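  • A minimal sketch of that second stage, assuming a reducer pickled by a build step like the one in the 1.5.2026 entry above; the file paths are my assumptions (umap-learn reducers support transform() on new data after fitting):
    import glob
    import pickle

    import numpy as np

    with open("umap_reducer.pkl", "rb") as f:
        reducer = pickle.load(f)

    # calculate the mapped points file by file with the saved mapping
    for p in sorted(glob.glob("embeddings/*.pkl")):
        with open(p, "rb") as f:
            X = np.asarray(pickle.load(f), dtype=np.float32)
        points = reducer.transform(X)
        with open(p.replace(".pkl", "_2d.pkl"), "wb") as f:
            pickle.dump(points, f)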

Phil 12.29.2025

Looks like a 9:00 ride at 50+ degrees!

Polar Cruises & Expeditions

Tasks

  • Laundry – done
  • Air in tires – done
  • Winterize mower – done
  • ACM Book form – done, I think. Need to (maybe?) set up the example chapters in the ACM book format
  • 5:00 Terry

SBIRs

  • More runs. At $25 burned and something like 17k books processed.

Phil 12.28.2025

Beyond Context: Large Language Models Failure to Grasp Users Intent

  • Current Large Language Models (LLMs) safety approaches focus on explicitly harmful content while overlooking a critical vulnerability: the inability to understand context and recognize user intent. This creates exploitable vulnerabilities that malicious users can systematically leverage to circumvent safety mechanisms. We empirically evaluate multiple state-of-the-art LLMs, including ChatGPT, Claude, Gemini, and DeepSeek. Our analysis demonstrates the circumvention of reliable safety mechanisms through emotional framing, progressive revelation, and academic justification techniques. Notably, reasoning-enabled configurations amplified rather than mitigated the effectiveness of exploitation, increasing factual precision while failing to interrogate the underlying intent. The exception was Claude Opus 4.1, which prioritized intent detection over information provision in some use cases. This pattern reveals that current architectural designs create systematic vulnerabilities. These limitations require paradigmatic shifts toward contextual understanding and intent recognition as core safety capabilities rather than post-hoc protective mechanisms.
  • My reaction to this is that it cuts two ways: 1) bad actors can learn to manipulate intent recognition, or 2) bad actors can use the same mechanism to probe potential candidates for deeper intentions that align with their own goals.
  • Also interesting implications for WH/AI filtering. What is the intent behind a scam, post, or news article?

74 suicide warnings and 243 mentions of hanging: What ChatGPT said to a suicidal teen

  • The Raines’ lawsuit alleges that OpenAI caused Adam’s death by distributing ChatGPT to minors despite knowing it could encourage psychological dependency and suicidal ideation. His parents were the first of five families to file wrongful-death lawsuits against OpenAI in recent months, alleging that the world’s most popular chatbot had encouraged their loved ones to kill themselves. A sixth suit filed this month alleges that ChatGPT led a man to kill his mother before taking his own life.

Tasks

  • 10:00 showing, and a 2:00 showing, which kinda upended the day
  • Finish script that goes through all the URLs in a file and looks for 404 errors (see the sketch after this list) – done. Found one too!
  • Finish ACM proposal – not done, but closer
  • Winterize mower – tomorrow?
  • 1:00 ride. Looks less cold. Monday looks nice, then BRRR.
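  • A minimal sketch of that link checker, assuming one URL per line in a file called urls.txt (the filename is my assumption); HEAD is lighter than GET, though some servers only answer GET:
    import requests

    with open("urls.txt", encoding="utf-8") as f:
        urls = [line.strip() for line in f if line.strip()]

    for url in urls:
        try:
            r = requests.head(url, allow_redirects=True, timeout=10)
            if r.status_code == 404:
                print("404: {}".format(url))
        except requests.RequestException as e:
            print("error for {}: {}".format(url, e))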

Phil 12.26.2025

Tasks

  • Bills – done
  • Carbon credits – done. 300 tons
  • Groceries. Got the fixings for another beef stew.

SBIRs

  • Kick off a (5k book?) run and go for a hike – done and done. Also started another run. I have managed to spend $10!
  • Tried to load Linux on the dev box, but was thwarted by the inability to boot from the thumb drive. Rather than struggle, I dropped it off with actual professionals. Should be ready in a few days. They were fixing a KitchenAid mixer when I arrived. Was not expecting that.

Phil 12.25.2025

Merryhappyjoyous Ramahanukwanzamas!

Windjammer cruises reborn? Star Clippers Sailing Tall Ship Cruises

This seems insightful: How in the hell did Donald Trump convince evangelicals he’s a God-fearing man? – Quora. Need to validate.

The diverse way that languages convey emotion

  • Many human languages have words for emotions such as “anger” and “fear,” yet it is not clear whether these emotions have similar meanings across languages, or why their meanings might vary. We estimate emotion semantics across a sample of 2474 spoken languages using “colexification”—a phenomenon in which languages name semantically related concepts with the same word. Analyses show significant variation in networks of emotion concept colexification, which is predicted by the geographic proximity of language families. We also find evidence of universal structure in emotion colexification networks, with all families differentiating emotions primarily on the basis of hedonic valence and physiological activation. Our findings contribute to debates about universality and diversity in how humans understand and experience emotion.

These people look interesting (Unbreaking). They are documenting the disintegration of USA norms(?) using a timeline of summaries, among other things. Once I get the embedding mapping done, their timeline would be a good thing to run through the system. One of their founding members wrote this:

Landslide; a ghost story

  • All this year, as I have chewed my way along the edges of this almost unfathomable problem, what happened in Valdez came to feel less like a metaphor and more like a model. That’s how I’ll work with it here. Not because the circumstances of megathrust earthquakes in fjords are literally the same as the societal problem of collective derangement, but because the model gives me new ways to take the problem apart and see how the pieces interact.

Phil 12.24.2025

10:00 ride! Nice weather!

Tasks

  • Light cleaning – done
  • 3:00 Showing
  • 5:00 Ross

SBIRs

  • Try another run of 1,000 books and see what breaks. Need to integrate the bad files list too. Edits went in smoothly. 1,000 books are cooking along, and I’m picking up new bad files
  • Looks like that’s working! Started another 1,000 books

Phil 12.23.2025

Tasks

  • Mop kitchen – done
  • 10:30 – 11:30 showing
  • Tomorrow looks like it might be a nice day!

SBIRs

  • 9:00 Standup
  • Working on getting the embedding to work in batches – done (see the sketch after this list)
  • There are some files that are too big to send to OpenAI, and they throw an error. I just keep going, but I’ll need(?) to revisit these files. Saving them out.
  • Finally got through the first 1k files. I have spent $2.70 on embeddings. Including all the errors.
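  • A minimal sketch of the batched calls with skip-on-error, assuming the openai>=1.0 Python client; the model name, batch size, and helper name are all my assumptions:
    from openai import OpenAI, BadRequestError

    client = OpenAI()

    def embed_chunks(chunks: list[str], bad_files: list[str], name: str) -> list[list[float]]:
        vectors = []
        for i in range(0, len(chunks), 100):  # 100 inputs per request
            batch = chunks[i:i + 100]
            try:
                resp = client.embeddings.create(model="text-embedding-3-small", input=batch)
                vectors.extend(d.embedding for d in resp.data)
            except BadRequestError:
                # usually an input over the token limit; record the name and keep going
                bad_files.append(name)
                break
        return vectors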

Phil 12.22.2025

Strategic Engineering Workshop on LLMs and Game Theory

  • We invite submissions exploring how large language models (LLMs) / foundation models (FMs) and game theory can enable strategic, interpretable AI agents for real-world scenarios.
  • The workshop is seeking submissions of research and industrial papers, including work on modelling, evaluation, algorithmic design, human data collection, and applications in negotiation, coordination, and everyday social intelligence, as well as demonstrations of agents succeeding (or failing) in strategic interactions.
  • Note: While the primary focus of the workshop is on leveraging LLMs to translate real-world scenarios to rigorous game-theoretic models, we will also consider papers that investigate other creative applications of LLMs to game theory or vice versa.

Tasks

  • 10:00 Showing
  • Get lists of the txt and csv directories, delete every txt item from the list that matches a csv item, then finish the parsing – done
    # iterate over the csv files and delete any matching name from the txt_files
    # list. That way we can pick up where we left off when the connection gets interrupted
    tnum = len(txt_files)
    cnum = len(csv_files)
    print("there are {} text files and {} csv files. After this, there should be {} text files".format(tnum, cnum, tnum - cnum))
    csv_name: str
    for csv_name in csv_files:
        s = csv_name.replace(".csv", ".txt")  # swap only the extension, not any "csv" inside the name
        try:
            txt_files.remove(s)
        except ValueError:
            print("{} is not in the text file list????".format(s))

    tnum = len(txt_files)
    print("Processing {} text files".format(tnum))
  • Start on embedding. Got all the pieces working. Rather than do one large pkl file, I’m going to do the embeddings on a per-book basis. This will be much more resilient to interruptions and support restarts
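  • A minimal sketch of that restart logic, assuming one txt file in and one pkl file out per book; embed_fn stands in for whatever batched embedding call gets used, and all names here are my assumptions:
    import os
    import pickle

    def embed_book(txt_path: str, out_dir: str, embed_fn) -> None:
        name = os.path.basename(txt_path).replace(".txt", ".pkl")
        out_path = os.path.join(out_dir, name)
        if os.path.exists(out_path):
            return  # already embedded; safe to skip on a restart
        with open(txt_path, encoding="utf-8") as f:
            text = f.read()
        vectors = embed_fn(text)  # returns the per-chunk embedding vectors
        with open(out_path, "wb") as f:
            pickle.dump(vectors, f)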