Back from the MORS 92nd Symposium. Monterey is lovely. I got to ride my bike along the shore and into the hills. Good presentations. Some particularly good stuff from Sandia on finding markers for when online activity moves into the real world. In this case the data was about the GameStop short squeeze, but it might be more generalizable. Need to keep in touch.
Another, less interesting talk had a really good pointer: MITRE's ATT&CK knowledge base of adversary tactics and techniques based on real-world observations. I think it makes sense to start putting together a similar knowledge base for AI-based social hacking, covering theoretical and actual hacks and defenses. Some of these would still be human active measures, but they could be scaled.
These are some loooooong daylight hours here near the 39th parallel.
SBIRs
Received a notification from the CUI folks: if I'm not attending, I should prepare a video presentation of about 5 minutes, which is the same length as a poster. So I think the wise move is slides and a poster. Work on that today and maybe some tomorrow, otherwise while in CA.
9:00 Standup. Maybe go over the layer image and then go for a ride. I'll need to work on the poster & slides next week. I'll try to get started on UMAP today and have enough done that I can pick it up in two weeks.
1:00 Overmatch call – might have gone well. More later?
Got UMAP working with Plotly! Here’s the code. It’s based on this UMAP example here and the plotly scatterplot examples here:
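Roughly like this, with the sklearn digits dataset standing in for my actual vectors (the data and parameter values here are just placeholders):

```python
# Minimal UMAP + Plotly sketch. The digits dataset is a stand-in for the real
# embeddings; n_components and random_state are illustrative choices.
import umap
import plotly.express as px
from sklearn.datasets import load_digits

digits = load_digits()

# Reduce the high-dimensional vectors to 2D with UMAP
reducer = umap.UMAP(n_components=2, random_state=42)
embedding_2d = reducer.fit_transform(digits.data)

# Render the projection as an interactive Plotly scatterplot
fig = px.scatter(
    x=embedding_2d[:, 0],
    y=embedding_2d[:, 1],
    color=digits.target.astype(str),
    labels={"x": "UMAP-1", "y": "UMAP-2", "color": "label"},
    title="UMAP projection rendered with Plotly",
)
fig.show()
```

Swapping in the model activations for `digits.data` should be the only real change.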
What if the internet were public interest technology? I mean “internet” the way most people understand it, which is to say our whole digital sphere, and by “public interest” I don’t mean tinkering at the margins to reduce harm from some bad actors or painting some glossy ethics principles atop a pile of exploitative rent-seeking — I mean through and through, warts and all, an internet that works in support of a credible, pragmatic definition of the common good.
Tasks
Carlos email
SBIRs
9:00 standup
Write up what I’ve discovered about the hidden layer info in output vs the activation info. Done
Connect the heatmap to the running model. Done
Add the running code to the documentation. Done
Start figuring out UMAP. Not even started! I blame meetings
Large Language Models (LLMs) have revolutionized natural language processing but can exhibit biases and may generate toxic content. While alignment techniques like Reinforcement Learning from Human Feedback (RLHF) reduce these issues, their impact on creativity, defined as syntactic and semantic diversity, remains unexplored. We investigate the unintended consequences of RLHF on the creativity of LLMs through three experiments focusing on the Llama-2 series. Our findings reveal that aligned models exhibit lower entropy in token predictions, form distinct clusters in the embedding space, and gravitate towards “attractor states”, indicating limited output diversity. Our findings have significant implications for marketers who rely on LLMs for creative tasks such as copywriting, ad creation, and customer persona generation. The trade-off between consistency and creativity in aligned models should be carefully considered when selecting the appropriate model for a given application. We also discuss the importance of prompt engineering in harnessing the creative potential of base models.
They were able to do this by comparing the chat vs. completion models. There are all kinds of implications here.
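A rough way to poke at the entropy claim with the Hugging Face Llama-2 base and chat checkpoints; the prompt and the whole setup are placeholders, not the paper's actual experiment:

```python
# Compare next-token prediction entropy for a base vs. a chat-tuned model.
# Model names are the public Llama-2 7B checkpoints; the prompt is arbitrary.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def next_token_entropy(model_name: str, prompt: str) -> float:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]            # logits for the next token
    probs = torch.softmax(logits, dim=-1)
    return float(-(probs * torch.log(probs + 1e-12)).sum())  # Shannon entropy in nats

prompt = "Write a tagline for a new coffee shop:"
for name in ["meta-llama/Llama-2-7b-hf", "meta-llama/Llama-2-7b-chat-hf"]:
    print(name, next_token_entropy(name, prompt))
```

If the paper is right, the chat model should show noticeably lower entropy on creative prompts like this one.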
At the height of the COVID-19 pandemic, the U.S. military launched a secret campaign to counter what it perceived as China’s growing influence in the Philippines, a nation hit especially hard by the deadly virus. The clandestine operation has not been previously reported. It aimed to sow doubt about the safety and efficacy of vaccines and other life-saving aid that was being supplied by China, a Reuters investigation found. Through phony internet accounts meant to impersonate Filipinos, the military’s propaganda efforts morphed into an anti-vax campaign. Social media posts decried the quality of face masks, test kits and the first vaccine that would become available in the Philippines – China’s Sinovac inoculation.
I’ve been wondering about how to map regions of a model that are reached through an extensive prompt, like the kind you see with RAG. The problem is that the prompt gets very large, and it may be difficult to see how the trajectory works. There appear to be several approaches to dealing with this, so here’s an ongoing list of things to try:
Just plot the whole prompt including context. This assumes that the model is big enough and public, like Llama or Gemma. I assume that as the prompt grows, the head will go to different regions. But the vector that we’re trying to plot will continue to grow, and I’m not sure how that gets plotted.
Just save off the N vectors that are closest to the head. The full text can also be saved, so there is some correlation.
Use a prompt to fire off N responses and use those to finetune a small (e.g. GPT-2) model. Then create a "small" window of tokens that travels through the finetuned space (a rough sketch is below). The nice thing is that this lets us indirectly explore closed-source models in a narrative context.
I’d also like to see if there is any way to use dictionary learning on these narrative elements. It seems that there is no mathematical reason that you can’t have “narrative features” like tropes, possibly.
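A rough sketch of the small-model idea, using off-the-shelf GPT-2 and treating the per-token hidden states as the trajectory. The text, UMAP parameters, and the lack of finetuning are all placeholders, just to show the shape of the pipeline:

```python
# Trace a prompt's path through GPT-2's hidden-state space and project it to 2D.
import torch
import umap
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

text = "Once upon a time, a rumor began to spread across the network"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state[0]   # (seq_len, 768) per-token states

# Project the per-token states so the prompt traces a path through the space
path_2d = umap.UMAP(n_components=2, n_neighbors=5).fit_transform(hidden.numpy())
for tok, (x, y) in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), path_2d):
    print(f"{tok:>12s}  ({x:6.2f}, {y:6.2f})")
```

The real version would swap in the finetuned model and a sliding window over a much longer RAG-style prompt.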
Well, the vacation is fading, and I’m back to what I’ve been calling “lockdown lite.” Going from a steady interaction with people in the real world to work-from-home where my interaction with people is a few online meetings… isn’t healthy.
Mamba, however, is one of an alternative class of models called State Space Models (SSMs). Importantly, for the first time, Mamba promises similar performance (and crucially similar scaling laws) as the Transformer whilst being feasible at long sequence lengths (say 1 million tokens). To achieve this long context, the Mamba authors remove the “quadratic bottleneck” in the Attention Mechanism. Mamba also runs fast – like “up to 5x faster than Transformer fast.”
SBIRs
Write email to Anthropic – done
Write up notes on the Scaling Monosemanticity paper and put that in the NNM documentation. Done
Update the Overleaf book content – done! I even expanded the Senate testimony. Look at me go!
Got my slot for MORS – 6/26 at 8:30-9:00 in GL113. That should give me some time for riding around Monterey 🙂
Follow up with Carlos. Maybe discuss with Shimei & Jimmy first?
SBIRs
Performance goals – done
Letter to Anthropic
I spent the whole day reading Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet. It’s very good and very interesting. The Anthropic folks are looking at features, not sequences. That being said, their feature work is really good, and the UMAP relationships they show are very map-like, in kind of the same way that text embedding is. Which makes sense, as those embeddings are also coming from LLMs (frequently OpenAI’s text-embedding-ada-002). There is also some really interesting work in using the features to help the model detect manipulative content, which aligns with my White Hat AI concept.
I was thinking that it might make sense to wait to contact Anthropic after getting some layer mapping done, but I think it might make sense to reach out as planned. Particularly since they have a concept of features that are “smeared” across layers, which I hadn’t thought about before but makes sense. They call this Cross-Layer Superposition.
Anyway, I’ll write up an email tomorrow. Note – Include a picture from the conspiracy theory map!
I really wonder if dictionary learning with sparse autoencoders can be applied to sequences rather than individual features.
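For reference, a generic sparse-autoencoder setup for dictionary learning over activations looks something like this. This is not Anthropic's code; the dimensions, the ReLU encoder, and the L1 penalty are just the usual placeholder choices:

```python
# Minimal sparse autoencoder: learn an overcomplete dictionary of features
# from model activations, with an L1 penalty to encourage sparsity.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 768, d_dict: int = 8192):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, x):
        features = torch.relu(self.encoder(x))   # sparse feature activations
        recon = self.decoder(features)
        return recon, features

sae = SparseAutoencoder()
activations = torch.randn(64, 768)               # stand-in for residual-stream activations
recon, feats = sae(activations)
loss = nn.functional.mse_loss(recon, activations) + 1e-3 * feats.abs().mean()
loss.backward()
```

The open question for me is what the equivalent would be when the unit of analysis is a sequence (a trope, say) rather than a single activation vector.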
Change out images in presentation and resubmit – done. Decided to leave the LLM embeddings
Work on book – nope
GPT-Agents
Ping Shimei & Jimmy to see if they’d like to meet over the next two weeks – done
Back from a great vacation in Portugal. We did non-touristy things too, but the National Palace of Pena in Sintra is an amazing 19th century Disneyland: