
Phil 1.15.2026

Wikipedia is 25 years old today! I support them with a monthly contribution, and you should too if you can afford it. Or get some fun anniversary merch

Olivier Simard-Casanova: I am a French economist. I study how humans influence each other in organizations, especially in the workplace, in scientific communities, and on the Internet. More specifically, I am interested in personnel and organizational economics, in the interaction of monetary and non-monetary incentives in the workplace, in the diffusion of opinion, in network theory, in automated text processing, and in the meta-science of economics. I am also interested in making science more open.

Google DeepMind is thrilled to invite you to the Gemini 3 global hackathon. We are pushing the boundaries of what AI can do by enhancing reasoning capabilities, unlocking multimodal experiences and reducing latency. Now, we want to see what you can create with our most capable and intelligent model family to date.

Tasks

  • Need to send a preliminary response to ACM books (done), then work on the reviewer responses. In particular, I think I need a foreword for each story that sets up the nature of the attack.
  • Groceries – nope
  • Bank stuff? Probably better tomorrow after the showing

SBIRs

  • 9:00 standup – done
  • 4:00 ADS – very relaxed meeting
  • UMAP will only work with the 100k set
  • 3D UMAP with the 100k set
    • Go through the pkl files and add the 2D and 3D embeddings – got the code running, will kick it off tomorrow
    • While iterating over the pkl files, create a new table of 2D data that can be used to train the clusterer. Same reservoir technique. I can probably just reuse the existing code, so I'll do that (see the sketch after this list).
  • More Linux box set up – nope, just coding
  • Security things! Copied files over
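
Roughly what that pass could look like. A minimal sketch only – the file layout, record keys, and the saved 2D/3D reducer paths are assumptions, not the actual pipeline:

```python
# Minimal sketch, not the actual pipeline: the file layout, record keys,
# and the saved 2D/3D UMAP reducers are all assumptions.
import glob
import pickle
import random

import numpy as np

# Hypothetical reducers, fit earlier on the 100k subsample
with open("umap_2d.pkl", "rb") as f:
    reducer_2d = pickle.load(f)
with open("umap_3d.pkl", "rb") as f:
    reducer_3d = pickle.load(f)

RESERVOIR_SIZE = 100_000
reservoir = []   # 2D rows for training the clusterer
seen = 0         # total vectors seen so far

for path in glob.glob("embeddings/*.pkl"):
    with open(path, "rb") as f:
        record = pickle.load(f)   # assumed shape: {"embeddings": [...], ...}
    vecs = np.asarray(record["embeddings"], dtype=np.float32)
    record["embeddings_2d"] = reducer_2d.transform(vecs)
    record["embeddings_3d"] = reducer_3d.transform(vecs)
    with open(path, "wb") as f:
        pickle.dump(record, f)

    # Classic reservoir sampling over the 2D points
    for row in record["embeddings_2d"]:
        seen += 1
        if len(reservoir) < RESERVOIR_SIZE:
            reservoir.append(row)
        else:
            j = random.randrange(seen)
            if j < RESERVOIR_SIZE:
                reservoir[j] = row

train_2d = np.asarray(reservoir)   # the new 2D table
```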

Phil 1.14.2026

This is such a dumb timeline

Tasks

  • Got a response from ACM books! Need to respond to some questions
  • 4:30: Barbara – done

SBIRs

  • Kicked off a 1M point run which didn’t crash the instance. After that I’m going to switch over to fitting UMAP to these vector lists
  • UMAP will only work with the 100k set
  • Starting to get the Linux box set up
  • Security things

Phil 1.13.2026

I vote perfidy to be 2026’s word of the year

  • In the context of war, perfidy is a deceptive tactic where one side pretends to act in good faith, such as signaling a truce (e.g., raising a white flag), but does so with the deliberate intention of breaking that promise. The goal is to trick the enemy into lowering its guard, such as stepping out of cover to accept a supposed surrender, only to exploit its vulnerability.

Tasks

  • Tim at 1:00 – done
  • Is there an LLM meeting tomorrow? Still not sure
  • Got my Linux’ed box back and started setup. Ubuntu has gotten nice

SBIRs

  • 9:00 standup – done
  • Write a script that fills a pre-allocated ndarray of arbitrary size by randomly sampling from the list of embeddings, then pickles it. I think 250k and 500k. Mostly vibe-coded and it works like a charm. FAST! (Something like the sketch after this list.)
  • Look into using the UMBC HPCF?
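
A minimal sketch of what that sampling script might look like – the embedding width, record shape, and file layout are assumptions, not the vibe-coded original:

```python
# Minimal sketch, assuming embedding width, file layout, and record shape.
import glob
import pickle
import random

import numpy as np

DIM = 384            # assumed embedding width
N_SAMPLES = 250_000  # also try 500_000

rng = np.random.default_rng(42)
out = np.empty((N_SAMPLES, DIM), dtype=np.float32)  # pre-allocated, never grows
filled = 0

paths = glob.glob("embeddings/*.pkl")
random.shuffle(paths)   # visit files in random order
for path in paths:
    if filled >= N_SAMPLES:
        break
    with open(path, "rb") as f:
        vecs = np.asarray(pickle.load(f)["embeddings"], dtype=np.float32)
    # Random subset of each file so no single book dominates the sample
    take = min(len(vecs), N_SAMPLES - filled)
    idx = rng.choice(len(vecs), size=take, replace=False)
    out[filled:filled + take] = vecs[idx]
    filled += take

with open(f"sample_{N_SAMPLES}.pkl", "wb") as f:
    pickle.dump(out[:filled], f)
```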

Phil 1.12.2026

It’s been a busy week

US will have Greenland ‘one way or the other’, says Trump – Europe live

Criminal investigation into Fed chair Powell has ‘reinforced’ concerns over independence, Goldman Sachs warns – business live

Trump tells Cuba to ‘make a deal’ or face the consequences

Homeland security sends more agents to Minneapolis as protests erupt in US

There is a developing consensus that this is the tail wagging the dog: US justice department has released less than 1% of Epstein files, filing reveals

Tasks

  • Trash – done
  • Look through the bank stuff and see if there is enough to open an account – completely forgot
  • Progress on getting the Alienware set up as a Linux box. I also asked them how much RAM they could stuff in since that seems to be an issue these days for me

SBIRs

  • Work on loading arrays.
    • See how big all the files are. Iterate over all the pkl files but don't keep anything, just increment the memory size value and the number of vectors. Create a list of dicts (see the sketch after this list).
      • And the answer is: 262,626,464 bytes, in 30,556,975 vectors
    • Sort the list by memory, and try loading up all the small ones until 14GB is passed. See if that works. If it does, use those to create a mapping
    • Based on the overall size of the pkl footprint, determine an optimal subsampling strategy – looks like 1:50 ratio. That’s not bad
    • See how much it would cost to use a bigger box – at least $110/hr. Or I could get a box for about $15k that could handle this. It would pay for itself with a week of compute. Hmmm
    • Maybe try the NN approach? Possibly in steps until the array is the size that can fit in memory? Talked to Aaron about this. Some neat ideas.
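
A sketch of that inventory-and-sort pass – the record structure and the details of the 14GB budget check are assumptions about how I'd wire it up:

```python
# Sketch of the inventory pass; the record structure is an assumption.
import glob
import pickle

import numpy as np

stats = []   # one dict per pkl file
total_bytes = 0
total_vectors = 0

for path in glob.glob("embeddings/*.pkl"):
    with open(path, "rb") as f:
        vecs = np.asarray(pickle.load(f)["embeddings"], dtype=np.float32)
    stats.append({"path": path, "bytes": vecs.nbytes, "vectors": len(vecs)})
    total_bytes += vecs.nbytes
    total_vectors += len(vecs)
    del vecs   # keep nothing but the counts

print(f"{total_bytes:,} bytes in {total_vectors:,} vectors")

# Sort by memory and count how many small files fit under a 14GB budget
stats.sort(key=lambda d: d["bytes"])
budget = 14 * 2**30
used, keep = 0, []
for d in stats:
    if used + d["bytes"] > budget:
        break
    used += d["bytes"]
    keep.append(d["path"])
print(f"{len(keep)} files fit in {used / 2**30:.1f} GB")
```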

Phil 1.9.2026

Tasks

  • Bills – done
  • Finish chores – done
  • Groceries – done

SBIRs

  • Kicked off the run on the adjusted UMAP. Let's see what happens. Blew up immediately. I need to refactor so I'm storing things smarter. Fixed
    • Still killed the box at 160 files though
    • I think Monday I’m going to try the batch version of the code and see if I can get something reasonable
    • I should be able to just use the last UMAP model that was saved out
  • Also, just for kicks, I'd like to see if a NN could be trained to do manifold mapping by maintaining the distance between high-dimensional points in lower-dimensional spaces. The distance function (linear, log, exponential, etc.) would adjust the learning behavior, and since the data could be loaded in batches, the memory issues are better. It's basically an autoencoder? In fact, training an autoencoder that necks down to the desired number of dimensions (e.g., 2 or 3) and then attempts to reconstruct the original vector could be an interesting approach too (see the sketch after this list).
  • Lunch with Aaron. Fun! Discussed many things
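
A minimal sketch of the neck-down autoencoder idea, in PyTorch – the layer sizes, embedding width, and training setup are assumptions, not a worked-out design. The distance-preserving variant would add a loss term penalizing the gap between pairwise distances in the input space and in the bottleneck, with the distance function shaping the learning behavior:

```python
# Minimal sketch of the neck-down autoencoder, in PyTorch. Layer sizes,
# embedding width, and training setup are assumptions.
import torch
import torch.nn as nn

class NeckAutoencoder(nn.Module):
    """Compress dim-wide embeddings to a 2D/3D bottleneck and back."""
    def __init__(self, dim: int = 384, neck: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, neck),        # the 2D/3D map coordinates
        )
        self.decoder = nn.Sequential(
            nn.Linear(neck, 32), nn.ReLU(),
            nn.Linear(32, 128), nn.ReLU(),
            nn.Linear(128, dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = NeckAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(batch: torch.Tensor) -> float:
    """One update; batches can stream from pkl files, so memory stays bounded."""
    recon, _ = model(batch)
    loss = loss_fn(recon, batch)   # reconstruction objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```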

Phil 1.8.2026

The leopard expands the circle of faces it will eat:

I remember overhearing a conversation in a grocery store a few days before election day. An older white guy was telling a young woman that he was sure prices would come down “real soon.” She was looking concerned and trying to edge away. He was giddy. I am pretty sure that not enough has happened to change that dynamic.

Life Under a Clicktatorship

  • But I want to suggest that what we are witnessing from the Trump administration is not just skillful manipulation of social media—it’s something more profoundly worrying. Today, we live in a clicktatorship, ruled by a LOLviathan. Our algothracy is governed by poster brains.

Ties nicely into this WIRED piece from last year: The ‘Contentification’ of Trump Policy

  • But instead, the “contentification” of President Donald Trump’s policy is indeed the logical next step for a team that won the election with the help of influencers and content creators. Following suit, Trump’s cabinet has basically created the White House’s own cinematic universe.

Tasks

  • Looks like an offer might be forthcoming?
  • Lunch with Aaron? – Nope, tomorrow
  • Light cleaning – done
  • 5:00 Showing – done

SBIRs

  • Work on getting UMAP working better – done
  • 9:00 Standup – done
  • 9:30 SEG pre-meeting – done
  • 3:00 SEG meeting – done
  • 4:00 ADS tagup – done

Phil 1.7.2026

Warm today!

Grok Is Pushing AI ‘Undressing’ Mainstream

  • Elon Musk hasn’t stopped Grok, the chatbot developed by his artificial intelligence company xAI, from generating sexualized images of women. After reports emerged last week that the image generation tool on X was being used to create sexualized images of children, Grok has created potentially thousands of nonconsensual images of women in “undressed” and “bikini” photos.

LNEC – Laboratório Nacional de Engenharia Civil (National Laboratory for Civil Engineering) is a public institute of Science and Technology (S&T), with the status of a State Laboratory that carries out research in all fields of civil engineering, giving it a unique multidisciplinary perspective. (Research fellowships)

Tasks

  • Lunch ride. Nice!
  • 3:00 Alden meeting – just chatting. More stuff in 2 weeks.
  • Added a section about community financial instruments to P33

SBIRs

  • Kick off embedding timing run – and pretty promptly killed the machine. Need to see how to minimize memory use. Had a chat with Gemini that produced some things worth trying.
  • 9:00 Meeting with Aaron. Time to revisit these charts.
  • Done! Looks pretty good too.

Phil 1.6.2026

John Feeley, a career diplomat and former ambassador to Panama who resigned in protest during Trump’s first term, said that to understand what’s unfolding in Venezuela, look to the mob, not traditional foreign policy doctrines. “When Donald Trump says, ‘We’re going to run the place,’ I want you to think of the Gambino family taking over the Colombo family’s business out in Queens,” he said. “They don’t actually go out and run it. They just get an envelope.”

The Imitation Game: Using Large Language Models as Chatbots to Combat Chat-Based Cybercrimes

  • Chat-based cybercrime has emerged as a pervasive threat, with attackers leveraging real-time messaging platforms to conduct scams that rely on trust-building, deception, and psychological manipulation. Traditional defense mechanisms, which operate on static rules or shallow content filters, struggle to identify these conversational threats, especially when attackers use multimedia obfuscation and context-aware dialogue.
    In this work, we ask a provocative question inspired by the classic Imitation Game: Can machines convincingly pose as human victims to turn deception against cybercriminals? We present LURE (LLM-based User Response Engagement), the first system to deploy Large Language Models (LLMs) as active agents, not as passive classifiers, embedded within adversarial chat environments.
    LURE combines automated discovery, adversarial interaction, and OCR-based analysis of image-embedded payment data. Applied to the setting of illicit video chat scams on Telegram, our system engaged 53 actors across 98 groups. In over 56 percent of interactions, the LLM maintained multi-round conversations without being noticed as a bot, effectively “winning” the imitation game. Our findings reveal key behavioral patterns in scam operations, such as payment flows, upselling strategies, and platform migration tactics.

Now, to be clear, those workers haven’t been laid off because their jobs are now being done by AI, and they’ve been replaced by bots. Instead, they’ve been laid off by execs who now have AI to use as an excuse for going after workers they’ve wanted to cut all along. (From Anil Dash)

Tasks

  • Light cleaning
  • 4:00 Showing
  • Working with Terry on getting our hotel sorted

SBIRs

  • Created an enormous tar file of all the pkl files
  • Start on the UMAP recoding
    • Reading in the lists of lists and extracting the embeddings (see the sketch below)
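
That extraction step might look something like this – the nested-list pkl shape is an assumption:

```python
# Sketch of the extraction step; the nested-list pkl shape is an assumption.
import pickle

import numpy as np

def extract_embeddings(path: str) -> np.ndarray:
    """Flatten a pkl of lists-of-lists into one (n, dim) float32 array."""
    with open(path, "rb") as f:
        lists = pickle.load(f)   # assumed: [[vec, vec, ...], [vec, ...], ...]
    flat = [vec for sublist in lists for vec in sublist]
    return np.asarray(flat, dtype=np.float32)
```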

Phil 1.5.2026

The theme for 2026 continues:

  • “In some cases, one of the biggest problems Venezuelans have is they have to declare independence from Cuba,” Rubio added. “They tried to basically colonize it from a security standpoint. So, yeah, look, if I lived in Havana and I was in the government, I’d be concerned at least a little bit.”

The Sortition Foundation organizes democratic lotteries for citizens’ assemblies and supports the 858 Project: a campaign to replace the House of Lords with a House of Citizens.

Washington Post: Recovering from AI delusions means learning to chat with humans again

Tasks

  • Write email for ACM book proposal and send it off. DONE! Acknowledged, even!

SBIRs

  • 9:00 Sprint demos. Need to make some slides! Done.
  • 3:00 Sprint planning. Done
  • Kick off the next round, but in the background so I can use the IDE – running. Done! 44,297 files
  • Rewrite the UMAP app (rough skeleton after this list) so that it:
    • Reads through a specified number of files to get the embeddings (-1 == ALL FILES)
    • Builds the UMAP structure and saves it out
    • Runs time/memory checks for different numbers of files. Let’s not start with 70k books
    • Visualizes the files. We can probably use the spreadsheet if we want more information than the title.
  • Maybe work on the white paper for Dr. J?
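
A rough skeleton only – the flag name, file layout, and record keys are assumptions:

```python
# Rough skeleton; the flag name, file layout, and record keys are assumptions.
import argparse
import glob
import pickle
import time

import numpy as np
import umap  # umap-learn

def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("--num-files", type=int, default=-1, help="-1 == ALL FILES")
    args = ap.parse_args()

    paths = sorted(glob.glob("embeddings/*.pkl"))
    if args.num_files > 0:
        paths = paths[:args.num_files]

    chunks = []
    for path in paths:
        with open(path, "rb") as f:
            chunks.append(np.asarray(pickle.load(f)["embeddings"],
                                     dtype=np.float32))
    vecs = np.concatenate(chunks)

    # Time the fit for different file counts before trying all 70k books
    t0 = time.perf_counter()
    reducer = umap.UMAP(n_components=2).fit(vecs)
    print(f"fit {len(vecs):,} vectors in {time.perf_counter() - t0:.1f}s")

    with open("umap_model.pkl", "wb") as f:
        pickle.dump(reducer, f)

if __name__ == "__main__":
    main()
```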

Phil 1.3.2026

I see a theme emerging for 2026:

US attacks Venezuela, captures president Maduro and says he will face criminal charges in America

Tasks

  • Light cleaning – done
  • 12:30 Showing – I think that might turn into a nibble?
  • Laundry – done
  • MTB spin through the woods – fun and done

What Drives Success in Physical Planning with Joint-Embedding Predictive World Models?

  • A long-standing challenge in AI is to develop agents capable of solving a wide range of physical tasks and generalizing to new, unseen tasks and environments. A popular recent approach involves training a world model from state-action trajectories and subsequently use it with a planning algorithm to solve new tasks. Planning is commonly performed in the input space, but a recent family of methods has introduced planning algorithms that optimize in the learned representation space of the world model, with the promise that abstracting irrelevant details yields more efficient planning. In this work, we characterize models from this family as JEPA-WMs and investigate the technical choices that make algorithms from this class work. We propose a comprehensive study of several key components with the objective of finding the optimal approach within the family. We conducted experiments using both simulated environments and real-world robotic data, and studied how the model architecture, the training objective, and the planning algorithm affect planning success. We combine our findings to propose a model that outperforms two established baselines, DINO-WM and V-JEPA-2-AC, in both navigation and manipulation tasks. Code, data and checkpoints are available at this https URL.
  • However, on real-world data (DROID and Robocasa), both larger encoders and deeper predictors yield consistent improvements, suggesting that scaling benefits depend on task complexity. We introduced an interface for planning with Nevergrad optimizers, leaving room for exploration of optimizers and hyperparameters. On the planning side, we found that CEM L2 performs best overall. The NG planner performs similarly to CEM on real-world manipulation data (DROID and Robocasa) while requiring less hyperparameter tuning, making it a practical alternative when transitioning to new tasks or datasets.

Phil 1.2.2026

A little less cold today. Going to try for my first ride of the year.

Tasks

  • Bills – done
  • Cleaning – done
  • Abstract. No, really. And done! Need to finalize into a nice email

SBIRs

  • Kick off a 10,000 book embedding run – done!

Gotta read this: Deep sequence models tend to memorize geometrically; it is unclear why

  • Deep sequence models are said to store atomic facts predominantly in the form of associative memory: a brute-force lookup of co-occurring entities. We identify a dramatically different form of storage of atomic facts that we term as geometric memory. Here, the model has synthesized embeddings encoding novel global relationships between all entities, including ones that do not co-occur in training. Such storage is powerful: for instance, we show how it transforms a hard reasoning task involving an l-fold composition into an easy-to-learn 1-step navigation task.
    From this phenomenon, we extract fundamental aspects of neural embedding geometries that are hard to explain. We argue that the rise of such a geometry, as against a lookup of local associations, cannot be straightforwardly attributed to typical supervisory, architectural, or optimizational pressures. Counterintuitively, a geometry is learned even when it is more complex than the brute-force lookup.
    Then, by analyzing a connection to Node2Vec, we demonstrate how the geometry stems from a spectral bias that – in contrast to prevailing theories – indeed arises naturally despite the lack of various pressures. This analysis also points out to practitioners a visible headroom to make Transformer memory more strongly geometric. We hope the geometric view of parametric memory encourages revisiting the default intuitions that guide researchers in areas like knowledge acquisition, capacity, discovery, and unlearning.

Phil 1.1.2026

New Year’s in Lisbon!

Tasks

  • Stew! Done! Yummy!
  • Hike! Done
  • Abstract! Touched it but mostly to make sure that elements from this piece are mentioned.
  • Went to see Avatar. It is in fact a lot like a fireworks display. Some of the CGI of human faces is crazy good – basically no uncanny valley of any kind. And I like the themes of the series, even if it’s all a bit heavy-handed and not subtle.

Phil 12.31.2025

Well, that was a year. Or something. Let’s try to do better, everyone!

Got myself some Therm-IC heated socks, and am going to try them out on the sub-freezing end-of-year ride today. Looks like I will have moved over 8k miles under my own power this year.

Tasks

  • Groceries
  • Work on the abstracts

SBIRs

  • Kick off a run and call it a day. At 26k books done

Universally Converging Representations of Matter Across Scientific Foundation Models

  • Machine learning models of vastly different modalities and architectures are being trained to predict the behavior of molecules, materials, and proteins. However, it remains unclear whether they learn similar internal representations of matter. Understanding their latent structure is essential for building scientific foundation models that generalize reliably beyond their training domains. Although representational convergence has been observed in language and vision, its counterpart in the sciences has not been systematically explored. Here, we show that representations learned by nearly sixty scientific models, spanning string-, graph-, 3D atomistic, and protein-based modalities, are highly aligned across a wide range of chemical systems. Models trained on different datasets have highly similar representations of small molecules, and machine learning interatomic potentials converge in representation space as they improve in performance, suggesting that foundation models learn a common underlying representation of physical reality. We then show two distinct regimes of scientific models: on inputs similar to those seen during training, high-performing models align closely and weak models diverge into local sub-optima in representation space; on vastly different structures from those seen during training, nearly all models collapse onto a low-information representation, indicating that today’s models remain limited by training data and inductive bias and do not yet encode truly universal structure. Our findings establish representational alignment as a quantitative benchmark for foundation-level generality in scientific models. More broadly, our work can track the emergence of universal representations of matter as models scale, and for selecting and distilling models whose learned representations transfer best across modalities, domains of matter, and scientific tasks.

Phil 12.30.2025

Neuronpedia is a free and open source platform for AI interpretability. It may be a nice way of getting at the layer activations that I’ve been looking for.

Tasks

  • Set up the example chapters in the ACM book format – done. That took a while. I can’t get the whole book to work either.
  • Cleaning/Organizing – done
  • 7:00 – 7:45 showing – done
  • Pick up Barbara 9:50 – done
  • Terry at 6:00 – done

SBIRs

  • 9:00 standup – done
  • More runs started
  • Start looking at how to run UMAP across multiple pickle files. Probably just iterate over the files to create the mapping and save it, then a second stage to calculate the mapped points (see the sketch after this list)
  • Send the Gemini analysis to Clay and CC Aaron – done. Clay says “Sounds like a good shrooms date, but not a collaborator” L.O.L.
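
A sketch of that two-stage approach – fit once on a manageable sample, then stream each pkl through transform() so only one file’s vectors are in memory at a time. The sample path, file layout, and record shape are assumptions:

```python
# Sketch of the two-stage idea; sample path, file layout, and record
# shape are assumptions.
import glob
import pickle

import numpy as np
import umap  # umap-learn

# Stage 1: fit the mapping on a manageable subsample and save it
with open("sample_100k.pkl", "rb") as f:
    sample = pickle.load(f)
reducer = umap.UMAP(n_components=2).fit(sample)
with open("umap_model.pkl", "wb") as f:
    pickle.dump(reducer, f)

# Stage 2: stream each pkl file through the saved model one at a time,
# so only a single file's vectors are ever in memory
for path in glob.glob("embeddings/*.pkl"):
    with open(path, "rb") as f:
        vecs = np.asarray(pickle.load(f)["embeddings"], dtype=np.float32)
    points_2d = reducer.transform(vecs)
    with open(path.replace(".pkl", "_2d.pkl"), "wb") as f:
        pickle.dump(points_2d, f)
```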

Phil 12.29.2025

Looks like a 9:00 ride at 50+ degrees!

Polar Cruises & Expeditions

Tasks

  • Laundry – done
  • Air in tires – done
  • Winterize mower – done
  • ACM Book form – done, I think. Need to (maybe?) set up the example chapters in the ACM book format
  • 5:00 Terry

SBIRs

  • More runs. At $25 burned and something like 17k books processed.