Had a good discussion with Shimei and Jimmy yesterday about Language Models Represent Space and Time. Basically, the idea is that the model itself should contain a relative representation of the information in it, and that this representation could be made available. The token embeddings are a kind of direction, after all.
Tasks
Call Jim Donnie’s – done
Call Nathan – done
Chores
Load Garmin (done) and laptop (on thumb drive)
Pack!
Bennie note
SBIRs
Cancelled group dinner
GPT Agents
Wrote up some thoughts about mapping using the LLM itself here
A fundamental criticism of text-only language models (LMs) is their lack of grounding—that is, the ability to tie a word for which they have learned a representation, to its actual use in the world. However, despite this limitation, large pre-trained LMs have been shown to have a remarkable grasp of the conceptual structure of language, as demonstrated by their ability to answer questions, generate fluent text, or make inferences about entities, objects, and properties that they have never physically observed. In this work we investigate the extent to which the rich conceptual structure that LMs learn indeed reflects the conceptual structure of the non-linguistic world—which is something that LMs have never observed. We do this by testing whether the LMs can learn to map an entire conceptual domain (e.g., direction or colour) onto a grounded world representation given only a small number of examples. For example, we show a model what the word “left” means using a textual depiction of a grid world, and assess how well it can generalise to related concepts, for example, the word “right”, in a similar grid world. We investigate a range of generative language models of varying sizes (including GPT-2 and GPT-3), and see that although the smaller models struggle to perform this mapping, the largest model can not only learn to ground the concepts that it is explicitly taught, but appears to generalise to several instances of unseen concepts as well. Our results suggest an alternative means of building grounded language models: rather than learning grounded representations “from scratch”, it is possible that large text-only models learn a sufficiently rich conceptual structure that could allow them to be grounded in a data-efficient way.
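To make that setup concrete for myself, here's a rough sketch of the kind of grid-world prompt I imagine: a few grids labeled "left", then a held-out grid where the model should say "right". The grid encoding, labels, and few-shot format are my guesses, not the paper's actual materials.

```python
# Sketch of a grid-world prompt for probing conceptual grounding in an LM.
# The grid encoding and few-shot format are my own guesses, not the paper's.

def render_grid(rows: int, cols: int, marked: tuple[int, int]) -> str:
    """Return a text grid of 0s with a 1 at the marked (row, col) cell."""
    lines = []
    for r in range(rows):
        cells = ["1" if (r, c) == marked else "0" for c in range(cols)]
        lines.append(" ".join(cells))
    return "\n".join(lines)

def example(concept: str, marked: tuple[int, int]) -> str:
    return f"{render_grid(3, 3, marked)}\nThe 1 is on the {concept}.\n\n"

# Teach "left" with a few examples, then ask about an unseen "right" case.
prompt = (
    example("left", (0, 0))
    + example("left", (2, 0))
    + example("left", (1, 0))
    + render_grid(3, 3, (1, 2))
    + "\nThe 1 is on the"
)
print(prompt)
```

Feeding that prompt to a generative model and checking whether it completes with "right" is essentially the whole test.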
Neural network models have a reputation for being black boxes. We propose to monitor the features at every layer of a model and measure how suitable they are for classification. We use linear classifiers, which we refer to as “probes”, trained entirely independently of the model itself. This helps us better understand the roles and dynamics of the intermediate layers. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems. We apply this technique to the popular models Inception v3 and Resnet-50. Among other things, we observe experimentally that the linear separability of features increases monotonically along the depth of the model.
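For my own reference, the probing technique is simple to sketch: hook an intermediate layer, collect its activations, and train a linear classifier on them, entirely separately from the network. Something like the following, where the layer choice, input sizes, and random stand-in data are placeholders rather than anything from the paper:

```python
# Minimal sketch of a linear "probe" on an intermediate layer: capture a
# layer's activations with a forward hook and fit an independent linear
# classifier on them. Layer, sizes, and data are placeholders.
import torch
import torchvision
from sklearn.linear_model import LogisticRegression

# Use pretrained weights (e.g. torchvision.models.ResNet50_Weights.DEFAULT)
# in practice; random weights keep this sketch self-contained.
model = torchvision.models.resnet50(weights=None).eval()

features = []
def hook(_module, _inputs, output):
    # Average over the spatial dimensions -> one feature vector per image.
    features.append(output.mean(dim=(2, 3)).detach())

# Probe an intermediate block (layer2 here, chosen arbitrarily).
handle = model.layer2.register_forward_hook(hook)

# Stand-in data: replace with real images and labels.
images = torch.randn(32, 3, 96, 96)
labels = torch.randint(0, 2, (32,))

with torch.no_grad():
    model(images)
handle.remove()

X = torch.cat(features).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels.numpy())
print("probe accuracy on its own training batch:", probe.score(X, labels.numpy()))
```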
Slides for demos
GPT agents
2:00 Meeting
Send story to CACM and see if they would like to pursue and what the lead times are – done
The bidding phase of IUI 2024 is now open. Now my present/future self has to live up to the commitments my past self made.
Just got back from the excellent Digital Platforms and Societal Harms IEEE event at American University. Some of the significant points that were discussed over the past two days:
Moderation is hard. Determining, for example, what is hate speech in the ten seconds or so allocated to moderators is mostly straightforward, but often complicated and very dependent on locale and culture. I get the feeling that – based on examining content alone – machine learning could easily take care of 50% or so, particularly if you just decide to lump in satire and mockery. Add network analysis and you could probably be more sophisticated and get up to 70%? Handling the remaining 30% is a crushing job that would send most normal people running. Which means that the job of moderating for unacceptable content is its own form of exploitation.
Governments that were well positioned to detect and disrupt organizations like ISIS are no better prepared than a company like Meta when it comes to handling radical extremists from within the dominant culture that produced the company. In the US, that’s largely white and some variant of Christian. I’d assume that in China the same pattern exists for their dominant group.
There is a sense that all of our systems are reactive. They only come into play after something has happened, not before. Intervening with someone who is radicalizing requires human involvement, which means it's expensive and hard to scale. Moonshot is working to solve this problem, and has made surprisingly good progress, so there may be ways to make this work.
Militant accelerationism, or hastening societal collapse, is a thing. The exploitation of vulnerable people to become expendable munitions is being attempted by online actors. Generative AI will be a tool for these people, if it isn’t already.
There are quite a few good databases, but they are so toxic that they are largely kept in servers that are isolated from the internet to a greater or lesser degree. Public repositories are quite rare.
The transformation of Twitter to X is a new, very difficult problem. Twitter built up so much social utility – early warning, for example, or reports from disaster areas – that it can't be removed from an app store the way an app that permits similar toxic behavior but has only 25 users can be. No one seems to have a good answer for this.
The Fediverse also appears to complicate harm tracking and prevention. Since there is no single source, how do you pull your Mastodon App if some people are accessing (possibly blacklisted) servers hosting hate speech? Most people are using the app for productive reasons. Now what?
Removing content doesn’t remove the person making the content. Even without any ability to post, or even with full bans from a platform, they can still search for targets and buy items that can enable them to cause harm in the real world. This is why moderation is only the lowest bar. Detection and treatment should be a goal.
Of course, all these technologies are two-edged swords. Detection and treatment in an authoritarian situation might mean finding reporters or human rights activists and imprisoning them.
The organizers are going to make this a full conference next year, with a call for papers and publication, so keep an eye on this space if you’re interested: https://tech-forum.computer.org/societal-harms-2023/
SBIRs
The War Elephants paper got a hard reject. Need to talk to Aaron to see how to proceed. Done
Add ASRC to letterhead – Done
Expense report! Done
Had a good chat with Rukan about using the SimAccel for interactive analysis of trajectories and FOM curves
The capabilities of large language models (LLMs) have sparked debate over whether such systems just learn an enormous collection of superficial statistics or a coherent model of the data generating process — a world model. We find evidence for the latter by analyzing the learned representations of three spatial datasets (world, US, NYC places) and three temporal datasets (historical figures, artworks, news headlines) in the Llama-2 family of models. We discover that LLMs learn linear representations of space and time across multiple scales. These representations are robust to prompting variations and unified across different entity types (e.g. cities and landmarks). In addition, we identify individual “space neurons” and “time neurons” that reliably encode spatial and temporal coordinates. Our analysis demonstrates that modern LLMs acquire structured knowledge about fundamental dimensions such as space and time, supporting the view that they learn not merely superficial statistics, but literal world models.
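This is close to the mapping idea above, so here's a rough sketch of what I take the probing to look like: grab hidden states for place names at some layer and fit a single linear (ridge) map to latitude and longitude. GPT-2 stands in here for the gated Llama-2 weights, and the tiny city list, layer index, and prompt format are mine, not the paper's.

```python
# Rough sketch of a linear space probe over LLM hidden states. GPT-2 stands in
# for Llama-2; the city list, layer index, and prompt format are placeholders.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True).eval()

# Toy data: (place name, latitude, longitude). A real probe would use thousands.
places = [
    ("Paris", 48.86, 2.35), ("Tokyo", 35.68, 139.69), ("Nairobi", -1.29, 36.82),
    ("Sydney", -33.87, 151.21), ("Lima", -12.05, -77.04), ("Oslo", 59.91, 10.75),
]

def hidden_state(name: str, layer: int = 6) -> np.ndarray:
    """Hidden state of the last token of the place name at a chosen layer."""
    inputs = tokenizer(name, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[layer][0, -1].numpy()

X = np.stack([hidden_state(name) for name, _, _ in places])
y = np.array([[lat, lon] for _, lat, lon in places])

probe = Ridge(alpha=1.0).fit(X, y)   # one linear map from activations to (lat, lon)
print(probe.predict(X[:1]))          # sanity check on a training example
```

A real version would use far more places, held-out evaluation, and a sweep over layers; the point is just that the probe is a single linear map.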
Because the world is mean, the paper cites two papers from 2022 on reconstructing the game board from knowledge in the model for chess and Othello. My paper did this in 2020. Grumble.
Logically combines advanced AI with one of the world’s largest dedicated fact-checking teams. We help governments, businesses, and enterprise organizations uncover and address harmful misinformation and deliberate disinformation online.