Continue poster. Made a lot of synthetic art “in the style of Francis Bacon,” which captures the mood nicely. Really like this one, even though it’s not what I was after:
Large Language Models (LLMs) excel in various tasks, but they rely on carefully crafted prompts that often demand substantial human effort. To automate this process, in this paper, we propose a novel framework for discrete prompt optimization, called EvoPrompt, which borrows the idea of evolutionary algorithms (EAs) as they exhibit good performance and fast convergence. To enable EAs to work on discrete prompts, which are natural language expressions that need to be coherent and human-readable, we connect LLMs with EAs. This approach allows us to simultaneously leverage the powerful language processing capabilities of LLMs and the efficient optimization performance of EAs. Specifically, abstaining from any gradients or parameters, EvoPrompt starts from a population of prompts and iteratively generates new prompts with LLMs based on the evolutionary operators, improving the population based on the development set. We optimize prompts for both closed- and open-source LLMs including GPT-3.5 and Alpaca, on 9 datasets spanning language understanding and generation tasks. EvoPrompt significantly outperforms human-engineered prompts and existing methods for automatic prompt generation by up to 25% and 14% respectively. Furthermore, EvoPrompt demonstrates that connecting LLMs with EAs creates synergies, which could inspire further research on the combination of LLMs and conventional algorithms.
The BBC has identified four episodes in recent months where disproportionate engagement on TikTok was connected to harmful behaviour:
An online obsession with a murder case in Idaho, USA, that led to innocent people being falsely accused
Interference in the police investigation of Nicola Bulley, who went missing in Lancashire, UK
School protests involving vandalism spreading across the UK
Fanning flames of riots in France, which spread at an unusual intensity and to unexpected locations
Ex-staffers at TikTok liken these frenzies to “wildfires” and describe them as “dangerous”, especially as the app’s audience can be young and impressionable.
SBIRs
10:00 meeting with Aaron
2:30 Tech Fellows interview
Finish Dahlgren WP draft 1
GPT Agents
Add pasting areas for Education history, Work history, and Publications. Cut publications when the entire text is > ~10,000 words
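The cut-publications rule above could be sketched like this; a minimal assumption-laden version, where the function name, the word-counting method (whitespace split), and the exact threshold behavior are all my guesses, not the actual implementation:

```python
def maybe_cut_publications(education, work, publications, max_words=10_000):
    """Drop the publications text entirely when the combined pasted
    text exceeds roughly max_words whitespace-separated words.

    Name, signature, and threshold handling are assumptions for
    illustration, not the real app's code."""
    total = len(f"{education} {work} {publications}".split())
    if total > max_words:
        publications = ""
    return education, work, publications
```

A whitespace split is a crude word count, but for a ~10,000-word cutoff that roughness probably doesn't matter.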
Need to make a poster and submit by the 28th to the Digital Platforms and Societal Harms event. Probably show the 3 types of attacks (email examples) and mitigation. I could bring a laptop with ContextExplorer too.
Work on MAST whitepaper, then get together with Aaron at 1:00. Made good progress. The goal is to have a first draft by Friday COB
10:00 JSC Data Review. There is a lot. Ron’s going to do some summary statistics.
Maybe more scale paper this evening? Yup, finished Arms Control
MDA meeting from yesterday because Zac is back now. Done. Need to find out from Bob what the best target is.
More scale paper. Got started on the Arms Control section, which is coming along nicely. It seems that arms control is most effective when powers are not in open conflict (e.g. the Cold War). Which is mostly the case now, though I wonder how much the Russia-Ukraine war would affect that. I think that there would be more focus on AI-enhanced weapons? Which might make an agreement on Societal AI weapons easier.
Need to get some work done on the MAST white paper
GPT Agents
Progress on getting lists of deans and chairs together to ask for participation.
…for 18 different tasks selected to be realistic samples of the kinds of work done at an elite consulting company, consultants using ChatGPT-4 outperformed those who did not, by a lot. On every dimension. Every way we measured performance.
I guess we’ll see what is going on with the server today?
9:00 Standup
GPT IRAD decision?
11:30 CSC
More scale paper. Need to start looking for some pix. Finished the disruption section. I think counterattack is an extension of disruption, and should be written that way. Of course, there’s a lot of groundwork that would have to be done in advance to put all the actors in place. That’s a tricky issue that’s worth discussing.
Tweaked the template for the Dahlgren paper and added some links to examples of prompt engineering to produce JSON files
Add a 0.5 point story for AI ethics
GPT Agents
2:00 UMBC Meeting. Test the new ContextTest and walk through the IRB – done with the latter. Need to tweak the former – done
Add education history to work history prompt – done
Add “I assert that I am at least 18 years old” – done
Add recruitment email and screenshots to attachments – done
Change REI to Amazon – done
Draft email for all department chairs that includes an introduction of what the study is and who we are.
Working on venues for the scale paper/book. Need to start filling out the “defense” section. Started. Finished “Detection.” Next is “Disruption.”
Wrote up a short Python script that runs the loops that we think would generate the trajectories that we (think?) we need. I just realized that there needs to be a “trim” function that removes the beginning and end so we only have computable data
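That trim function might look something like this; a minimal sketch, assuming a trajectory is just a list of samples and that the head/tail cutoff counts are fixed parameters (both assumptions, since the real data format isn't specified here):

```python
def trim(trajectory, head=10, tail=10):
    """Drop the first `head` and last `tail` samples from a trajectory,
    keeping only the stable middle portion as computable data.

    `head` and `tail` are assumed cutoff counts; tune to the data."""
    if head + tail >= len(trajectory):
        return []  # nothing computable survives the trim
    return trajectory[head:len(trajectory) - tail]

# Example: a 30-sample trajectory keeps only samples 10..19
data = list(range(30))
print(trim(data))  # → [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
```

The guard clause matters: without it, a too-short trajectory would silently return a misleading slice instead of an empty result.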
10:00 meeting with Rukan. The machine is hanging on file access because read permissions have been changed
3:00 AI Ethics meeting. Do homework! Done. Shiny, yet bad videos
Registered for the Digital Platforms and Societal Harms event
GPT Agents
Looks like we meet at 2:00 on Thursdays
Got a good start on the IRB! Need some guidance to finish
Our security people have decided that collaborative writing using Overleaf is too much of a threat, so they will not allow it. On top of all their other policies, I am very close to quitting.
We need another story. In this case, it’s another war room vignette, but this time from the defense’s side. Maybe with M again? Of course, part of this is figuring out what defenses might actually look like. One thing I’d like to re-use is the idea of diverse operator teams looking for misbehaving models. In this case though, the models are trained to be honeypots for attacks, maybe? They go along in their day-to-day, sending emails, running dummy companies, having dates, etc. When they start acting too aligned, then it’s time to start looking for trouble. Maybe digital twins of important people?