These services had been shaved down to the point where most of us were only a hair’s breadth away from quitting, because all the surplus had been transferred from us and from business users to the companies.
And the incentives are different for different users: lurking is cheaper than posting, bot trolling is free, and so on. It would be interesting to try to model that.
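A first stab at what such a model might look like. All the effort costs here are made-up placeholders, just to make the asymmetry between user types concrete:

```python
# Toy per-action effort costs for different behaviors on a social platform.
# The numbers are invented for illustration, not measured from anything.
COSTS = {
    "post": 5.0,       # writing an original post takes real effort
    "lurk": 0.5,       # reading is cheap
    "bot_troll": 0.0,  # automated trolling is essentially free
}

def daily_cost(actions):
    """Total effort cost for a day's actions, e.g. {'post': 2, 'lurk': 10}."""
    return sum(COSTS[kind] * count for kind, count in actions.items())
```

Even this crude version shows the problem: a bot can emit a thousand troll actions for zero cost, while a human poster pays for every contribution.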
Shutterstock first – done
Finish with footnotes – done
More writing – rolling in Rukan’s work
Send a date in December for Lauren – done
Chat with Aaron about JMOR paper
Set up a weekly meeting with Jason for Tuesdays at 2:00
4:00 Meeting – going to do some pulls for COVID racism. I tried out some new prompts using OpenAI’s chatbot and got some good results that I need to test.
fediverse.space is a tool to visualize networks and communities on the fediverse. It works by crawling every instance it can find and aggregating statistics on communication between them.
Downloaded the latest from Overleaf and converted to a Word document. In going through the Word doc and removing all the end-line hyphenations, I also found a few more grammar errors and misspellings. Going to prepare the package to send to Elsevier later today – DONE!
Need to get rid of all the footnotes, though
More working on the white paper
MDA meeting at 2:00
Yikes! Need to get done with the quarterly report by the 7th.
Set up Q4 writing space
Sent Jimmy updates on everything for the “professor status meeting”
This Stability-AI repository contains Stable Diffusion models trained from scratch and will be continuously updated with new checkpoints. The following list provides an overview of all currently available models. More coming soon.
Adjective ordering preferences stand as perhaps one of the best candidates for a true linguistic universal: When multiple adjectives are strung together in service of modifying some noun, speakers of different languages—from English to Mandarin to Hebrew—exhibit robust and reliable preferences concerning the relative order of those adjectives. More importantly, despite the diversity of the languages investigated, the very same preferences surface over and over again. This tantalizing regularity has led to decades of research pursuing the source of these preferences. This article offers an overview of the findings and proposals that have resulted.
Disinformation Watch is a fortnightly newsletter covering the latest news about disinformation, including case studies, research and reporting from the BBC, international media and leading experts in the field.
Working on chasing down pictures that I can use. Folks, I strongly suggest never using images that might have copyright issues as placeholders. You can get very attached to them!
Finished with figures. Here’s an example of what needs to be done. Here’s the before with the placeholder:
And here’s the after, using assets from Wikimedia, and an hour or so with Illustrator
It looks better, I think, but it was a lot of work
The key to our achievement was developing new techniques at the intersection of two completely different areas of AI research: strategic reasoning, as used in agents like AlphaGo and Pluribus, and natural language processing, as used in models like GPT-3, BlenderBot 3, LaMDA, and OPT-175B. CICERO can deduce, for example, that later in the game it will need the support of one particular player, and then craft a strategy to win that person’s favor – and even recognize the risks and opportunities that that player sees from their particular point of view.
Need to look at the Mastodon API with an eye towards anonymous journalism
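For reference, Mastodon exposes public timelines through its documented REST API (GET /api/v1/timelines/public, no authentication needed on most instances). A minimal sketch that just builds the request URL, with no network call:

```python
from urllib.parse import urlencode

def public_timeline_url(instance, local=False, limit=20):
    """Build the URL for Mastodon's public timeline endpoint.

    `local=True` restricts results to posts originating on that instance;
    `limit` caps the number of statuses returned per request.
    """
    params = urlencode({"local": str(local).lower(), "limit": limit})
    return f"https://{instance}/api/v1/timelines/public?{params}"
```

Fetching anonymously from the public timeline like this, rather than through an authenticated account, seems like the relevant starting point for the anonymous-journalism angle.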
Had a good chat with Aaron about how population thinking is kind of like NN models, with all the odd artifacts and the dimension reduction required for the loss function. This tends to explain how companies like Facebook approximate the canonical paperclip AI, consuming everything to create engagement and grow the network.
In “Emergent Abilities of Large Language Models,” recently published in the Transactions on Machine Learning Research (TMLR), we discuss the phenomena of emergent abilities, which we define as abilities that are not present in small models but are present in larger models. More specifically, we study emergence by analyzing the performance of language models as a function of language model scale, as measured by total floating point operations (FLOPs), or how much compute was used to train the language model. However, we also explore emergence as a function of other variables, such as dataset size or number of model parameters (see the paper for full details). Overall, we present dozens of examples of emergent abilities that result from scaling up language models. The existence of such emergent abilities raises the question of whether additional scaling could potentially further expand the range of capabilities of language models.
More migration. Done with part one! Don is working on getting me a studio.
I’ve found the terms I want, which are the top 10 keywords from my set. I really want to pull 1k/day for 3 years, which would be about 1,095,000 tweets. I should be able to do a clamped balanced pull that will give me two samples of 500 (or maybe 3 samples of 500, depending on the rounding) per day. Going to start with one keyword at a time so I can time things. It will also produce unique experiment table entries, which is probably fine.
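A quick sketch of the schedule arithmetic, assuming fixed-size samples of 500 and rounding the per-day sample count up (which is where the "two or maybe three samples" question comes from):

```python
import math

def pull_schedule(total_tweets, days, sample_size=500):
    """Per-day quota and the number of fixed-size samples needed to cover it.

    Rounding up means days whose quota isn't a clean multiple of the
    sample size get an extra (partial) sample.
    """
    per_day = total_tweets / days
    samples_per_day = math.ceil(per_day / sample_size)
    return per_day, samples_per_day
```

At a clean 1,000 tweets/day this gives exactly two 500-tweet samples; any quota that spills past a multiple of 500 bumps it to three.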
Send note back to the First Line
Started a balanced pull at 8:45am, finished at 10:45, so 2 hours for 500k tweets. Not bad!
Second pull at 10:48. Third pull at 1:30 – it seems to be running slower? Fourth pull at 4:00. Fifth pull at 5:45.
Recent developments in natural language generation (NLG) using neural language models have brought us closer than ever to the goal of building AI-powered creative writing tools. However, most prior work on human-AI collaboration in the creative writing domain has evaluated new systems with amateur writers, typically in contrived user studies of limited scope. In this work, we commissioned 13 professional, published writers from a diverse set of creative writing backgrounds to craft stories using Wordcraft, a text editor with built-in AI-powered writing assistance tools. Using interviews and participant journals, we discuss the potential of NLG to have significant impact in the creative writing domain, especially with respect to brainstorming, generation of story details, world-building, and research assistance. Experienced writers, more so than amateurs, typically have well-developed systems and methodologies for writing, as well as distinctive voices and target audiences. Our work highlights the challenges in building for these writers; NLG technologies struggle to preserve style and authorial voice, and they lack deep understanding of story contents. In order for AI-powered writing assistants to realize their full potential, it is essential that they take into account the diverse goals and expertise of human writers.
More reading. Need to search each paper for “loop”, “centaur”, and “team” and check those paragraphs at least. As you might think, the reality is more complex. All the papers have some parts of the concepts, but they often don’t use the terms
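The per-paper search could be scripted along these lines. This assumes the papers are available as plain text; the keyword list is the one from the note above:

```python
import re

KEYWORDS = ("loop", "centaur", "team")

def matching_paragraphs(text, keywords=KEYWORDS):
    """Return the blank-line-separated paragraphs that mention any keyword.

    Matching is case-insensitive and substring-based, so 'teams' and
    'human-in-the-loop' both count as hits.
    """
    paragraphs = re.split(r"\n\s*\n", text)
    return [p for p in paragraphs if any(k in p.lower() for k in keywords)]
```

Substring matching is deliberately loose here, since the papers often use the concepts without the exact terms; the hits still need a manual read.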
Chat with Aaron. Really good. I think I was able to explain my concept. We’re going to write some sections and worry about the structure later
2:00 Presentation. Went ok. Steve needs to join Toastmasters.