Downloaded the latest from Overleaf and converted to a Word document. In going through the Word doc and removing all the end-line hyphenations, I also found a few more grammar errors and misspellings. Going to prepare the package to send to Elsevier later today – DONE!
Need to get rid of all the footnotes, though
SBIRs
More work on the white paper
MDA meeting at 2:00
Yikes! Need to get done with the quarterly report by the 7th.
Set up Q4 writing space
GPT Agents
Sent Jimmy updates on everything for the “professor status meeting”
This Stability-AI repository contains Stable Diffusion models trained from scratch and will be continuously updated with new checkpoints. The following list provides an overview of all currently available models. More coming soon.
Adjective ordering preferences stand as perhaps one of the best candidates for a true linguistic universal: When multiple adjectives are strung together in service of modifying some noun, speakers of different languages—from English to Mandarin to Hebrew—exhibit robust and reliable preferences concerning the relative order of those adjectives. More importantly, despite the diversity of the languages investigated, the very same preferences surface over and over again. This tantalizing regularity has led to decades of research pursuing the source of these preferences. This article offers an overview of the findings and proposals that have resulted.
Disinformation Watch is a fortnightly newsletter covering the latest news about disinformation, including case studies, research and reporting from the BBC, international media and leading experts in the field.
Book
Working on chasing down pictures that I can use. Folks, I strongly suggest never using images that might have copyright issues as placeholders. You can get very attached to them!
Finished with figures. Here’s an example of what needed to be done. The before, with the placeholder:
And here’s the after, using assets from Wikimedia, and an hour or so with Illustrator
It looks better, I think, but it was a lot of work
The key to our achievement was developing new techniques at the intersection of two completely different areas of AI research: strategic reasoning, as used in agents like AlphaGo and Pluribus, and natural language processing, as used in models like GPT-3, BlenderBot 3, LaMDA, and OPT-175B. CICERO can deduce, for example, that later in the game it will need the support of one particular player, and then craft a strategy to win that person’s favor – and even recognize the risks and opportunities that that player sees from their particular point of view.
Need to look at the Mastodon API with an eye towards anonymous journalism
Had a good chat with Aaron about how population thinking is kind of like NN models, with all the odd artifacts and the dimension reduction the loss function requires. This helps explain how companies like Facebook approximate the canonical paperclip-maximizer AI, consuming everything to create engagement and grow the network
Discovered gbif.org, which has a lot of tracked wildlife data, including white storks. I can make a new map with Plotly Express maps.
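Something like this should work – a minimal, untested sketch against the public GBIF occurrence search API, plotted with Plotly Express (the field names are what GBIF returns for occurrence records):

```python
import requests
import pandas as pd
import plotly.express as px

# Minimal sketch: pull white stork (Ciconia ciconia) occurrence records
# from the public GBIF occurrence search API and plot them on a world map.
resp = requests.get(
    "https://api.gbif.org/v1/occurrence/search",
    params={"scientificName": "Ciconia ciconia", "limit": 300},
)
records = resp.json()["results"]

# Keep only the records that actually carry coordinates
df = pd.DataFrame(
    [r for r in records if "decimalLatitude" in r and "decimalLongitude" in r]
)

fig = px.scatter_geo(
    df,
    lat="decimalLatitude",
    lon="decimalLongitude",
    hover_name="species",
)
fig.show()
```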
Got my metronomes yesterday! Need to get a lightweight platform and see if the experiment works
SBIRs
9:15 standup
Great discussion with Aaron about the JMOR paper. Looking at War Elephants and Mahouts (mAIhouts? Nah) as a useful metaphor for models and handlers. Aaron’s going to write a short story introduction.
Galactica is an AI trained on humanity’s scientific knowledge. You can use it as a new interface to access and manipulate what we know about the universe. (Made by Papers with Code, Meta AI)
In “Emergent Abilities of Large Language Models,” recently published in the Transactions on Machine Learning Research (TMLR), we discuss the phenomena of emergent abilities, which we define as abilities that are not present in small models but are present in larger models. More specifically, we study emergence by analyzing the performance of language models as a function of language model scale, as measured by total floating point operations (FLOPs), or how much compute was used to train the language model. However, we also explore emergence as a function of other variables, such as dataset size or number of model parameters (see the paper for full details). Overall, we present dozens of examples of emergent abilities that result from scaling up language models. The existence of such emergent abilities raises the question of whether additional scaling could potentially further expand the range of capabilities of language models.
Book
More migration. Done with part one! Don is working on getting me a studio.
Tweaked my Twitter counts to work with other languages. Here are the trends for “world cup” in Persian:
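The language-specific count is roughly this – a sketch against the Twitter API v2 recent counts endpoint, with a placeholder bearer token (“جام جهانی” is “world cup” in Persian):

```python
import requests

# Sketch: daily tweet counts for "world cup" in Persian via the
# Twitter API v2 recent counts endpoint. BEARER_TOKEN is a placeholder.
BEARER_TOKEN = "YOUR_BEARER_TOKEN"

resp = requests.get(
    "https://api.twitter.com/2/tweets/counts/recent",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    params={
        "query": '"جام جهانی" lang:fa',  # the phrase, restricted to Persian-language tweets
        "granularity": "day",
    },
)
for bucket in resp.json()["data"]:
    print(bucket["start"], bucket["tweet_count"])
```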
Trying “fentanyl” again with more traps for zero-tweet responses – Done! Got all the user info as well. Total of 5,507,159 tweets and 2,402,215 users
Book
More migration
Order metronomes! Done!
SBIRs
Had a discussion with Aaron and Rukan about what our first model should be – commands or target priority. We’ll start with targets and the idea that there might be multiple NNs within a controller
I’ve found the terms I want, which are the top 10 keywords from my set. I really want to pull 1k/day for 3 years, which would be about 1,095,000 tweets. I should be able to do a clamped, balanced pull that will give me two samples of 500 (or maybe 3 samples of 500, depending on the rounding) per day. Going to start with one keyword at a time so I can time things. It will also produce unique experiment table entries, which is probably fine
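Here’s a sketch of the clamped, balanced pull I have in mind – one keyword at a time, at most 500 tweets per day, against the v2 full-archive search endpoint. The bearer token, date range, and the storage stub are placeholders:

```python
import datetime
import requests

# Sketch of a "clamped balanced pull": every day in the range contributes
# the same-sized sample for one keyword. BEARER_TOKEN and store_tweets()
# are placeholders for real credentials and database code.
BEARER_TOKEN = "YOUR_BEARER_TOKEN"
SEARCH_URL = "https://api.twitter.com/2/tweets/search/all"

def pull_day(keyword: str, day: datetime.date, clamp: int = 500) -> list:
    start = f"{day.isoformat()}T00:00:00Z"
    end = f"{(day + datetime.timedelta(days=1)).isoformat()}T00:00:00Z"
    resp = requests.get(
        SEARCH_URL,
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={
            "query": keyword,
            "start_time": start,
            "end_time": end,
            "max_results": clamp,  # the clamp: 500 is the endpoint's per-request max
        },
    )
    # The zero-tweet trap: "data" is absent when nothing matched that day
    return resp.json().get("data", [])

day = datetime.date(2020, 1, 1)  # illustrative 3-year range
while day < datetime.date(2023, 1, 1):
    tweets = pull_day("fentanyl", day)
    # store_tweets(tweets)  # placeholder for the experiment-table insert
    day += datetime.timedelta(days=1)
```

Clamping max_results is what keeps the per-day samples balanced, and returning an empty list on days with no matches is the same zero-tweet trap from the fentanyl pull.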
Send note back to the First Line
Started a balanced pull at 8:45am, finished at 10:45, so 2 hours for 500k tweets. Not bad!
Second pull at 10:48. Third pull at 1:30 – it seems to be running slower? Fourth pull at 4:00. Fifth pull at 5:45
Recent developments in natural language generation (NLG) using neural language models have brought us closer than ever to the goal of building AI-powered creative writing tools. However, most prior work on human-AI collaboration in the creative writing domain has evaluated new systems with amateur writers, typically in contrived user studies of limited scope. In this work, we commissioned 13 professional, published writers from a diverse set of creative writing backgrounds to craft stories using Wordcraft, a text editor with built-in AI-powered writing assistance tools. Using interviews and participant journals, we discuss the potential of NLG to have significant impact in the creative writing domain – especially with respect to brainstorming, generation of story details, world-building, and research assistance. Experienced writers, more so than amateurs, typically have well-developed systems and methodologies for writing, as well as distinctive voices and target audiences. Our work highlights the challenges in building for these writers; NLG technologies struggle to preserve style and authorial voice, and they lack deep understanding of story contents. In order for AI-powered writing assistants to realize their full potential, it is essential that they take into account the diverse goals and expertise of human writers.
SBIRs
More reading. Need to search each paper for “loop”, “centaur”, and “team” and check at least those paragraphs. As you might expect, the reality is more complex: all the papers contain some parts of the concepts, but they often don’t use the terms
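The search pass itself can be a quick script – a sketch assuming the papers have been exported as plain text into a papers/ directory:

```python
from pathlib import Path

# Quick-and-dirty sketch: scan plain-text versions of the papers for the
# terms I care about and print the paragraph around each hit. Assumes the
# papers live as .txt files in a "papers" directory.
TERMS = ["loop", "centaur", "team"]

for txt in Path("papers").glob("*.txt"):
    paragraphs = txt.read_text(encoding="utf-8", errors="ignore").split("\n\n")
    for i, para in enumerate(paragraphs):
        lowered = para.lower()
        for term in TERMS:
            if term in lowered:
                print(f"{txt.name} [para {i}] ({term}):\n{para.strip()}\n")
                break  # one print per paragraph is enough
```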
Chat with Aaron. Really good. I think I was able to explain my concept. We’re going to write some sections and worry about the structure later
9:15 standup
2:00 Presentation. Went ok. Steve needs to join Toastmasters
We attack the state-of-the-art Go-playing AI system, KataGo, by training an adversarial policy that plays against a frozen KataGo victim. Our attack achieves a >99% win-rate against KataGo without search, and a >50% win-rate when KataGo uses enough search to be near-superhuman. To the best of our knowledge, this is the first successful end-to-end attack against a Go AI playing at the level of a top human professional. Notably, the adversary does not win by learning to play Go better than KataGo — in fact, the adversary is easily beaten by human amateurs. Instead, the adversary wins by tricking KataGo into ending the game prematurely at a point that is favorable to the adversary. Our results demonstrate that even professional-level AI systems may harbor surprising failure modes. See this https URL for example games.
9:00 Sprint Review
More reading
Used the LMN tools to figure out what to emphasize and find more papers
GPT Agents
More documenting
Figure out some keywords for various groups and start pulling tweets. I think 10k per group per week would be manageable.
Watching Twitter implode. Maybe I should just use the Pushshift API?