Antonio Gulli, one of the bigger Google people, seems to be on a writing tear, and he puts his drafts online. Here are two: Agentic Design Patterns and Reasoning Engines. Looks like interesting stuff. No idea how he finds the time.
Oh, that’s how:
Still probably worth looking at.
Registered for the Bartz, et al. v. Anthropic PBC settlement
Tasks
Add some quotes from yesterday. I think I’m going to drop the P33 part of the title and focus more on the essentials of the piece, which is: people don’t really change – we will always have the struggle between dominance and inverse dominance. Ideas change, and that can change the way we swing the balance in that struggle. Technology, as a mediator of communication, which is the medium of ideas, has a profound impact on that balance. Given this, what is an effective structure for building resilient egalitarian communities in an age of instantaneous communication, smart machines, and infinite money? Illustrated with stories. I like that pattern.
Roll in edits – done
Dentist
W9 for Peter – done
SBIRs
Submit the quarterly report at COB – done
Work on the generator, using YAML – good progress, got a walk list! Need to make a list of lists and put them in a CSV (a quick sketch of that step is below).
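A minimal sketch of the CSV step, assuming Python’s csv module; the file name and rows are placeholders, not the generator’s actual data:

```python
# Write a list of lists (one walk per row) out as CSV.
# "walks.csv" and the sample rows are placeholders for illustration.
import csv

walk_lists = [
    ["start", "node_a", "node_b"],
    ["start", "node_c", "node_d"],
]

with open("walks.csv", "w", newline="") as f:
    csv.writer(f).writerows(walk_lists)
```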
In this paper, I apply linguistic methods of analysis to non-linguistic data (chess games), metaphorically equating one with the other and seeking analogies. Chess game notations are also a kind of text: the records of moves or positions of pieces can be treated as words and statements in a certain language. In this article I show how word embeddings (word2vec) can work on chess game texts instead of natural language texts. I don’t see how this representation of chess data can be used productively; it’s unlikely that these vector models will help engines or people choose the best move. But in a purely academic sense, it’s clear that such methods of information representation capture something important about the very nature of the game, which doesn’t necessarily lead to a win.
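The core trick is easy to sketch: treat each move as a “word” and each game as a “sentence.” A minimal version, assuming gensim’s Word2Vec, with toy openings as stand-in games and guessed hyperparameters:

```python
# Skip-gram word2vec over chess move sequences, treating moves as words.
# The games and hyperparameters are toy stand-ins, not the paper's data.
from gensim.models import Word2Vec

games = [
    "e4 e5 Nf3 Nc6 Bb5 a6 Ba4 Nf6".split(),
    "d4 d5 c4 e6 Nc3 Nf6 Bg5 Be7".split(),
    "e4 c5 Nf3 d6 d4 cxd4 Nxd4 Nf6".split(),
]

# sg=1 selects skip-gram; each game is a "sentence" of move tokens.
model = Word2Vec(games, vector_size=16, window=3, sg=1, min_count=1, epochs=50)

# Moves that occur in similar contexts should land near each other.
print(model.wv.most_similar("Nf3"))
```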
At dawn on May 8, 2023, a 17-year-old Russian teenager named Pavel Solovyov climbed through a hole in the fence of an aircraft plant in Novosibirsk, Russia. He and two friends were looking for a warplane that could be set on fire. An anonymous Telegram account had promised them one million rubles, around $12,500, to do so — a surreal amount of money for the boys.
Tasks
Bills – done
Clean – done
Weed?
Dishes – done
LLC call – done
Dentist
Load up truck
SBIRs
2:00 meeting today
Send Matt the code review paragraph – done
Thinking more about the maps-as-W2V approach. I think I’m going to make an X-by-Y (by more?) grid that has vector “labels” that can be arbitrary size. Then pick a random starting point and do a random walk for some number of steps. That set of vectors becomes the input for the skip-gram calculation. Once the model is trained, re-run the random walk data to get the new vectors and see if the embeddings match the relationships of the original grid. The nice thing is that we can start very simply, with the index for each cell as the input and a 2-neuron final layer that should approximate the X,Y coordinates. Then we start playing with the size of the “index” and the size of the final layer as independent variables (sketch below).
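A minimal sketch of the simple version, assuming gensim’s Word2Vec as the skip-gram trainer; vector_size=2 stands in for the 2-neuron final layer, and the grid size, walk counts, and window are placeholder guesses:

```python
# Grid cells are "words," random walks are "sentences," skip-gram learns
# embeddings; if it works, adjacent cells should get nearby vectors.
import random
from gensim.models import Word2Vec

X, Y = 10, 10            # grid dimensions (placeholder)
WALKS, STEPS = 2000, 50  # number of walks and steps per walk (placeholder)

def cell_id(x, y):
    return f"{x}_{y}"    # the cell's index is its "word"

def random_walk(steps):
    x, y = random.randrange(X), random.randrange(Y)
    walk = [cell_id(x, y)]
    for _ in range(steps):
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        x = min(max(x + dx, 0), X - 1)
        y = min(max(y + dy, 0), Y - 1)
        walk.append(cell_id(x, y))
    return walk

corpus = [random_walk(STEPS) for _ in range(WALKS)]

# sg=1 selects skip-gram; vector_size=2 is the "2-neuron final layer."
model = Word2Vec(corpus, vector_size=2, window=5, sg=1, min_count=1, epochs=10)

# Spot-check: a cell's embedding versus its neighbor's. A fuller check would
# correlate embedding distance with grid distance across all cell pairs.
print(model.wv[cell_id(0, 0)], model.wv[cell_id(0, 1)])
```

From there, the size of the cell “index” and vector_size become the two independent variables to sweep.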
I have a thought about an easier way to build NNMs. What if I took a topic model and created embeddings for, say, every sentence in the Gutenberg collection and the English Wikipedia (as a start), then ran the word2vec algorithm on those embeddings in sequence? I think I should get a new embedding space that captures the sequential relationships between topics and should be able to accommodate trajectories. This could be validated by drawing trajectories on a UMAP-reduced representation of the data. I think. Something like the sketch below.
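A hedged sketch of that pipeline, with a big assumption up front: word2vec wants discrete tokens, so the sentence embeddings get clustered into “topic” tokens first. TF-IDF plus KMeans stand in for the real topic model, and two toy documents stand in for Gutenberg and Wikipedia:

```python
# Sentences -> embeddings -> topic tokens -> word2vec over token sequences,
# with a UMAP reduction at the end for eyeballing trajectories.
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
import umap  # pip install umap-learn

docs = [  # toy stand-ins for Gutenberg / Wikipedia documents
    ["the king gathered his army", "the battle began at dawn",
     "a treaty was signed in spring", "the kingdom knew peace"],
    ["the ship left the harbor", "a storm scattered the fleet",
     "survivors reached an island", "a rescue came years later"],
]
sentences = [s for d in docs for s in d]

# Embed every sentence, then discretize the embeddings into topic tokens.
emb = TfidfVectorizer().fit_transform(sentences).toarray()
labels = KMeans(n_clusters=4, n_init=10).fit_predict(emb)

# Each document becomes an ordered sequence of topic tokens; word2vec over
# those sequences should capture sequential relationships between topics.
seqs, i = [], 0
for d in docs:
    seqs.append([f"topic_{labels[i + j]}" for j in range(len(d))])
    i += len(d)
model = Word2Vec(seqs, vector_size=8, window=2, sg=1, min_count=1, epochs=50)

# Validation idea from above: UMAP-reduce the sentence embeddings and draw
# each document as a trajectory through the 2-D space.
coords = umap.UMAP(n_components=2, n_neighbors=3).fit_transform(emb)
print(coords)
```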
ETH Zurich and EPFL will release a large language model (LLM) developed on public infrastructure. Trained on the “Alps” supercomputer at the Swiss National Supercomputing Centre (CSCS), the new LLM marks a milestone in open-source AI and multilingual excellence.
EPFL, ETH Zurich and the Swiss National Supercomputing Centre (CSCS) released Apertus today, Switzerland’s first large-scale, open, multilingual language model — a milestone in generative AI for transparency and diversity.
The event will have a global scope along three thematic lines: cognition, ethics, and society. It will cover current debates about AI and philosophy of mind; cognitive architectures; machine learning and cognitive development; large language models and visual information; robotics and embodied cognition; neuroscience-inspired AI; algorithmic bias and fairness; transparency and explainability; accountability and responsibility; privacy and surveillance; autonomy and control; AI impact on human values and social inequalities; the future of work and automation; governance, regulation and public policies; AI, human rights and democracy; AI and global development; information and AI education.
Tasks
Groceries – done, but I accidentally bought some fuzzy tomatoes