
Phil 9.10.2025

Tasks

  • Start packing electronics lab

SBIRs

  • Method to write out walklists as CSV files – done
  • Method to plot walklist “source embeddings” – done
  • Maybe start the Word2Vec model code? Nope, just visualization. Had to write out the coordinates as arrays of x=[], y=[], z=[]
  • Got multiple lines working and have lists for the walk coordinates. I’ll put all that together tomorrow (a sketch of the plotting approach is below)
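
A minimal sketch of how the multi-line plot could work, assuming matplotlib; the walk data here is a placeholder, and each walk is the x=[], y=[], z=[] arrays mentioned above:

import matplotlib.pyplot as plt

# each walk is three parallel coordinate lists: x=[], y=[], z=[]
walks = [
    {"x": [0, 1, 2], "y": [0, 1, 0], "z": [0, 0, 1]},
    {"x": [2, 2, 3], "y": [1, 0, 0], "z": [1, 2, 2]},
]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")  # 3D axes for the source embeddings
for w in walks:
    ax.plot(w["x"], w["y"], w["z"])    # one line per walk
plt.show()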

Phil 9.9.2025

Antonio Gulli, one of the bigger Google people, seems to be on a writing tear, and he puts his drafts online. Here are two: Agentic Design Patterns and Reasoning Engines. Looks like interesting stuff. No idea how he finds the time.

Oh, that’s how:

Still probably worth looking at

Registered for the Bartz, et al. v. Anthropic PBC settlement

Tasks

  • Add some quotes from yesterday. I think I’m going to drop the P33 part of the title and focus more on the essentials of the piece, which is: People don’t really change – we will always have the struggle between dominance and inverse dominance. Ideas change, and that can change the way we swing the balance in that struggle. Technology, as a mediator of communication (which is the medium of ideas), has a profound impact on that balance. Given this, what is an effective structure for building resilient Egalitarian communities in an age of instantaneous communication, smart machines, and infinite money? Illustrated with stories. I like that pattern.
  • Roll in edits – done
  • Dentist
  • W9 for Peter – done

SBIRs

  • Submit the quarterly report at COB – done
  • Work on the generator, using YAML – good progress, got a walk list! Need to make a list of lists and put them in a CSV (see the sketch after the walk output below).
walk_list = [51]
walk_list = [51, 75]
walk_list = [51, 75, 50]
walk_list = [51, 75, 50, 51]
walk_list = [51, 75, 50, 51, 32]
walk_list = [51, 75, 50, 51, 32, 27]
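
A minimal sketch of the list-of-lists-to-CSV step, using the standard csv module; the walks here are placeholders:

import csv

# each completed walk gets appended, giving a list of lists
walk_lists = [
    [51, 75, 50, 51, 32, 27],
    [12, 13, 14, 12],
]

with open("walk_lists.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerows(walk_lists)  # one walk per row; rows can be ragged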

Phil 9.8.2025

You shall know a piece by the company it keeps. Chess plays as a data for word2vec models

  • In this paper, I apply linguistic methods of analysis to non-linguistic data, chess plays, metaphorically equating one with the other and seeking analogies. Chess game notations are also a kind of text, and one can consider the records of moves or positions of pieces as words and statements in a certain language. In this article I show how word embeddings (word2vec) can work on chess game texts instead of natural language texts. I don’t see how this representation of chess data can be used productively. It’s unlikely that these vector models will help engines or people choose the best move. But in a purely academic sense, it’s clear that such methods of information representation capture something important about the very nature of the game, which doesn’t necessarily lead to a win.

tSNE visualisation of endgame moves
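
A minimal sketch of the paper’s idea, assuming python-chess and gensim are available and that games.pgn (hypothetical) holds a collection of games; each game’s move list is treated as a “sentence” and each move as a “word”:

import chess.pgn
from gensim.models import Word2Vec

games = []
with open("games.pgn") as f:  # hypothetical PGN collection
    while (game := chess.pgn.read_game(f)) is not None:
        board, moves = game.board(), []
        for mv in game.mainline_moves():
            moves.append(board.san(mv))  # e.g. "e4", "Nf3"
            board.push(mv)
        games.append(moves)

# skip-gram over the move sequences: moves are "words", games are "sentences"
model = Word2Vec(games, vector_size=64, window=5, min_count=2, sg=1)
print(model.wv.most_similar("Nf3"))  # moves that keep similar company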

Applying word2vec to Recommenders and Advertising

Tasks

  • Doctor Appt 8:40 – done! PT too. Love Kaiser Permanente.
  • Trash
  • Roll in edits for book

SBIRs

Phil 9.5.2025

Human munitions:

  • At dawn on May 8, 2023, a 17-year-old Russian teenager named Pavel Solovyov climbed through a hole in the fence of an aircraft plant in Novosibirsk, Russia. He and two friends were looking for a warplane that could be set on fire. An anonymous Telegram account had promised them one million rubles, around $12,500, to do so — a surreal amount of money for the boys.

Tasks

  • Bills – done
  • Clean – done
  • Weed?
  • Dishes – done
  • LLC call – done
  • Dentist
  • Load up truck

SBIRs

  • 2:00 meeting today
  • Send Matt the code review paragraph. Done
  • Thinking more about the maps as a W2V approach. I think I’m going to make an X by Y (by more?) grid that has vector “labels”, which can also be of arbitrary size. Then pick a random starting point and do a random walk for a number of steps. That set of vectors becomes the input for the skip-gram calculation. Once the model is trained, re-run the random walk data to get the new vectors and see if the embeddings match the relationships of the original grid. The nice thing is that we can start very simply, with the index for each cell as the input and a 2-neuron final layer that should approximate the XY. Then we can play with the size of the “index” and the size of the final layer as independent variables (a sketch of the simple version is below).
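
A minimal sketch of the simplest version (cell index in, 2-d embedding out), assuming gensim supplies the skip-gram; the grid size and walk parameters are placeholders:

import random
from gensim.models import Word2Vec

W, H = 10, 10  # X by Y grid; each cell is identified by its integer index

def neighbors(ix):
    x, y = ix % W, ix // W
    return [(y + dy) * W + (x + dx)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < W and 0 <= y + dy < H]

def random_walk(start, steps):
    walk = [start]
    for _ in range(steps):
        walk.append(random.choice(neighbors(walk[-1])))
    return walk

# each walk is a "sentence" of cell indices
walks = [[str(c) for c in random_walk(random.randrange(W * H), 20)]
         for _ in range(1000)]

# sg=1 selects skip-gram; vector_size=2 stands in for the 2-neuron final layer
model = Word2Vec(walks, vector_size=2, window=2, min_count=1, sg=1)
print(model.wv["51"])  # should roughly recover cell 51's XY relationships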

Phil 9.4.2025

I have a thought about an easier way to build NNMs. What if I took a topic model and created embeddings for, say, every sentence in the Gutenberg collection and the English Wikipedia (as a start), then ran the word2vec algorithm on those embeddings in sequence? I think I should get a new embedding space that has the sequential relationships between topics and should be able to accommodate trajectories. This could be validated by drawing trajectories on a UMAP-reduced representation of the data. I think. (A rough sketch of one interpretation is below.)
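
Since word2vec needs discrete tokens, one rough interpretation (assuming sentence-transformers, scikit-learn, and gensim; the model name and cluster count are placeholders) is to cluster the sentence embeddings into topic IDs first:

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from gensim.models import Word2Vec

# stand-in corpus; in practice, every sentence of a text in reading order
sentences = ["First sentence of the text.", "Second sentence.", "Third one."]
embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(sentences)

# discretize the embedding space into "topics" so word2vec has tokens
n_topics = min(50, len(sentences))  # 50 is a placeholder topic count
topic_ids = KMeans(n_clusters=n_topics).fit_predict(embeddings)

# the document becomes a sequence of topic tokens; skip-gram learns their
# sequential relationships
model = Word2Vec([[str(t) for t in topic_ids]],
                 vector_size=32, window=5, min_count=1, sg=1)
# model.wv could then be UMAP-reduced to draw trajectories over it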

And boy are there a lot of embedding models

Tasks

SBIRs

  • Change AWS password – done
  • 9:00 standup – done
  • 3:30 ODIN? For some meeting tomorrow?
  • 4:00 SEG – done. Good progress.

Phil 9.3.2025

Inside the AI Revolution: A Two-Day Odyssey at École Polytechnique

  • The article is annoying AI writing, but if you can get past that, the content is pretty good
  • Oddly, in some ways I disagree with this. It is the hallucinatory elements of LLMs that are the most interesting.

A language model built for the public good

  • ETH Zurich and EPFL will release a large language model (LLM) developed on public infrastructure. Trained on the “Alps” supercomputer at the Swiss National Supercomputing Centre (CSCS), the new LLM marks a milestone in open-source AI and multilingual excellence.

Apertus: A fully open, transparent, multilingual language model

  • EPFL, ETH Zurich and the Swiss National Supercomputing Centre (CSCS) released Apertus today, Switzerland’s first large-scale, open, multilingual language model — a milestone in generative AI for transparency and diversity.
  • The model on HF

Tasks

  • Pinged Dr. Veronica Riniolo, and she responded that I need to apply for the activity. So:
    • Set up COST account – done
    • Finish application. I think I need to bundle all the relevant abstracts and use them for the 150-word descriptions. Done! That took a while
  • Groceries – return the fuzzy tomatoes, drop off the plastic, and swing by TJ
  • Rolling in V’s edits – done!
  • Need to see about LLC tomorrow

SBIRs

  • Roll in text as it becomes available
  • Start looking at GPT-2 layers again. I want to make animations first

Phil 9.2.2025

It’s September, and I think we can say goodbye (for now?) to HazyHot&Humid.

How Elon Musk Is Remaking Grok in His Image – The New York Times

Non-zero-sum games

This looks really interesting: AIces 2026 – 1st INTERNATIONAL SCHOOL ON THE COGNITIVE, ETHICAL AND SOCIETAL DIMENSIONS OF ARTIFICIAL INTELLIGENCE

  • The event will have a global scope along 3 thematic lines: cognition, ethics, and society. It will cover current debates about: AI and philosophy of mind; cognitive architectures; machine learning and cognitive development; large language models and visual information; robotics and embodied cognition; neuroscience-inspired AI; algorithmic bias and fairness; transparency and explainability; accountability and responsibility; privacy and surveillance; autonomy and control; AI impact on human values and social inequalities; the future of work and automation; governance, regulation and public policies; AI, human rights and democracy; AI and global development; information and AI education.

Tasks

  • Groceries – done, but I accidentally bought some fuzzy tomatoes
  • Storage – done. Sigh
  • Get rolling on CA24150 – done!
  • Roll in more edits
  • Add some quotes, and give Democracy, a Reader a shout-out in the quotation section.
  • Write a note in the comments on the ICTAI AI(?) review

SBIRs

  • 9:00 Standup – done
  • Work on the Q2 report – done, except for the communication section and Matt’s part on software reviews
  • Looks like I still have a lot of training to do