Phil 7.2.2023

On Hate Scaling Laws For Data-Swamps

  • ‘Scale the model, scale the data, scale the GPU farms’ is the reigning sentiment in the world of generative AI today. While model scaling has been extensively studied, data scaling and its downstream impacts remain underexplored. This is especially critical for visio-linguistic datasets whose main source is the World Wide Web, condensed and packaged as the CommonCrawl dump. This large-scale data dump, which is known to have numerous drawbacks, is repeatedly mined and serves as the data motherlode for large generative models. In this paper, we: 1) investigate the effect of scaling datasets on hateful content through a comparative audit of the LAION-400M and LAION-2B-en datasets, containing 400 million and 2 billion samples respectively, and 2) evaluate the downstream impact of scale on visio-linguistic models trained on these dataset variants by measuring the racial bias of the resulting models, using the Chicago Face Dataset (CFD) as a probe. Our results show that 1) the presence of hateful content in datasets, when measured with a Hate Content Rate (HCR) metric on the inferences of the Pysentimiento hate-detection Natural Language Processing (NLP) model, increased by nearly 12%, and 2) societal biases and negative stereotypes were also exacerbated with scale on the models we evaluated. As scale increased, the tendency of the model to associate images of human faces with the ‘human being’ class over 7 other offensive classes was reduced by half. Furthermore, for the Black female category, the tendency of the model to associate their faces with the ‘criminal’ class doubled, while quintupling for Black male faces. We present a qualitative and historical analysis of the model audit results, reflect on our findings and their implications for dataset curation practice, and close with a summary of our findings and potential future work in this area.
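The HCR metric mentioned above boils down to the fraction of samples a hate-speech classifier flags. A minimal sketch of how it might be computed (the function name is illustrative; it assumes a classifier such as pysentimiento has already been run over each alt-text sample to produce binary labels):

```python
def hate_content_rate(labels):
    """Fraction of samples flagged as hateful.

    `labels` is an iterable of booleans (or 0/1), one per alt-text
    sample, produced by a hate-speech classifier such as pysentimiento.
    """
    labels = [bool(x) for x in labels]
    if not labels:
        raise ValueError("no samples to score")
    return sum(labels) / len(labels)

# Comparing two dataset variants is then just comparing their rates,
# e.g. hate_content_rate(labels_2b) vs. hate_content_rate(labels_400m).
```
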

Phil 6.29.2023

Welcome to the future (From the Washington Post)

Textbooks Are All You Need

  • We introduce phi-1, a new large language model for code, with significantly smaller size than competing models: phi-1 is a Transformer-based model with 1.3B parameters, trained for 4 days on 8 A100s, using a selection of “textbook quality” data from the web (6B tokens) and synthetically generated textbooks and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays surprising emergent properties compared to phi-1-base, our model before our finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller model with 350M parameters trained with the same pipeline as phi-1 that still achieves 45% on HumanEval.
  • This makes me think that smaller models trained on better data, combined with context prompting, might be a good approach for trustworthy agents. In addition to the context data, you could also provide style text in the prompt. Possibly few-shot prompting? I could try that with davinci.
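For reference, the pass@1 numbers quoted in the abstract are conventionally computed with the unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021). A minimal sketch:

```python
import math

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples generated per problem
    c: number of those samples that pass the unit tests
    k: budget; returns the probability that at least one of k
       samples drawn at random from the n is correct.
    """
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))
```

Note that pass@1 reduces to c/n, the empirical pass rate per sample.
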

Phil 6.23.2023

Chores today, pack tomorrow. I am so burned out.

I added the Quran to the db last night and am having a little trouble downloading it from svn. Seems to be working now

SequenceMatch: Imitation Learning for Autoregressive Sequence Modelling with Backtracking

  • In many domains, autoregressive models can attain high likelihood on the task of predicting the next observation. However, this maximum-likelihood (MLE) objective does not necessarily match a downstream use-case of autoregressively generating high-quality sequences. The MLE objective weights sequences proportionally to their frequency under the data distribution, with no guidance for the model’s behaviour out of distribution (OOD): leading to compounding error during autoregressive generation. In order to address this compounding error problem, we formulate sequence generation as an imitation learning (IL) problem. This allows us to minimize a variety of divergences between the distribution of sequences generated by an autoregressive model and sequences from a dataset, including divergences with weight on OOD generated sequences. The IL framework also allows us to incorporate backtracking by introducing a backspace action into the generation process. This further mitigates the compounding error problem by allowing the model to revert a sampled token if it takes the sequence OOD. Our resulting method, SequenceMatch, can be implemented without adversarial training or major architectural changes. We identify the SequenceMatch-χ2 divergence as a more suitable training objective for autoregressive models which are used for generation. We show that empirically, SequenceMatch training leads to improvements over MLE on text generation with language models.
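The backspace idea can be pictured with a toy decoding loop. This is a sketch of the mechanism only; the sampler and OOD detector below are stand-ins, not the paper's learned policy:

```python
def generate_with_backspace(sample_token, is_ood, max_len=20, max_steps=100):
    """Toy autoregressive loop with a backspace action.

    sample_token(prefix) -> next token (stands in for the model's sampler)
    is_ood(prefix)       -> True if the extended prefix looks
                            out-of-distribution
    When the new prefix is flagged OOD, the token is reverted (the
    'backspace' action) instead of letting the error compound.
    """
    seq = []
    for _ in range(max_steps):
        if len(seq) >= max_len:
            break
        tok = sample_token(seq)
        seq.append(tok)
        if is_ood(seq):
            seq.pop()  # backspace: revert the offending token
    return seq
```

A scripted example: a sampler that once emits a bad token, which the loop reverts before continuing.
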

GPT Agents

  • I tried having one of the LLMs describe my research, which it missed completely. I’m going to try to use my CV as context and see if that works as well. If it does, then I can use the faculty at UMBC to evaluate themselves, which should be kind of fun.
  • Works quite well, though the model sometimes can’t figure out the publications? Need to work on that. The context prompts are spot on, while the no context prompts are wildly hallucinatory.

SBIRs

  • Status report (again)
  • JSC meeting
  • More story

Phil 6.22.2023

Via Twitter

Trip

  • Cancel CA hotels – done
  • Get Astoria hotel – done
  • Get Seattle airport hotel with shuttle service – done
  • Tell Sande we’re getting home a day early

SBIRs

  • 9:00 standup
  • See what it takes to run JavaUtils in VS Code – got everything working. You need the Java extensions and to point to the jar files.
  • More reading, maybe start writing. Good start! Borrowing Nema

Phil 6.20.2023

TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI

  • While several recent works have identified societal-scale and extinction-level risks to humanity arising from artificial intelligence, few have attempted an exhaustive taxonomy of such risks. Many exhaustive taxonomies are possible, and some are useful — particularly if they reveal new risks or practical approaches to safety. This paper explores a taxonomy based on accountability: whose actions lead to the risk, are the actors unified, and are they deliberate? We also provide stories to illustrate how the various risk types could each play out, including risks arising from unanticipated interactions of many AI systems, as well as risks from deliberate misuse, for which combined technical and policy solutions are indicated.

SBIRs

  • 9:00 Sprint demos. Need to make slides – done
  • 10:30 Overleaf meeting – nope
  • 1:00 Q4/Q5 presentation – done, but I need to do it again
  • 2:00 Sprint planning – done
  • Working on the scale paper

Phil 6.17.2023

Back from New York! Seriously, West Point is Hogwarts:

Enabling delightful user experiences via predictive models of human attention

  • In this blog, we present two papers (one from CVPR 2022, and one just accepted to CVPR 2023) that highlight our recent research in the area of human attention modeling: “Deep Saliency Prior for Reducing Visual Distraction” and “Learning from Unique Perspectives: User-aware Saliency Modeling”, together with recent research on saliency-driven progressive loading for image compression. We showcase how predictive models of human attention can enable delightful user experiences such as image editing to minimize visual clutter, distraction or artifacts, image compression for faster loading of webpages or apps, and guiding ML models towards more intuitive human-like interpretation and model performance. We focus on image editing and image compression, and discuss recent advances in modeling in the context of these applications.

Phil 6.15.2023

We’re excited to introduce the first AI model based on a key component of LeCun’s vision. This model, the Image Joint Embedding Predictive Architecture (I-JEPA), learns by creating an internal model of the outside world, which compares abstract representations of images (rather than comparing the pixels themselves). I-JEPA delivers strong performance on multiple computer vision tasks, and it’s much more computationally efficient than other widely used computer vision models. The representations learned by I-JEPA can also be used for many different applications without needing extensive fine tuning. For example, we train a 632M parameter visual transformer model using 16 A100 GPUs in under 72 hours, and it achieves state-of-the-art performance for low-shot classification on ImageNet, with only 12 labeled examples per class. Other methods typically take two to 10 times more GPU-hours and achieve worse error rates when trained with the same amount of data.

Phil 6.14.2023

Sequels are lower-effort, but have sufficiently high value to be profitable

People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI by doing fundamental research, building tools, creating design frameworks, and working with diverse communities. We believe that for machine learning to achieve its positive potential, it needs to be participatory, involving the communities it affects and guided by a diverse set of citizens, policy-makers, activists, artists and more.

Center for Accelerating Operational Efficiency (CAOE) Fact Sheet

Found this example of Elephant Armor today

Phil 6.13.2023

The Curse of Recursion: Training on Generated Data Makes Models Forget

  • Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, data collected from genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.
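A toy way to see the tail-shrinking mechanism: refitting a Gaussian by maximum likelihood on n samples drawn from the previous generation's fit shrinks the expected variance by a factor of (n-1)/n per generation, so the tails progressively vanish. This is a sketch of the compounding effect only, not the paper's analysis:

```python
def expected_variance_after(generations, n_samples, var0=1.0):
    """Expected variance of a Gaussian after repeated refitting.

    Each generation draws n_samples from the previous generation's
    fitted Gaussian and refits with the biased MLE variance estimate,
    whose expectation is the true variance times (n_samples-1)/n_samples.
    Iterating the shrinkage shows the distribution's tails collapsing.
    """
    var = var0
    for _ in range(generations):
        var *= (n_samples - 1) / n_samples
    return var

# e.g. with n_samples=100, after 100 generations the expected variance
# has fallen to roughly (0.99)**100, about 0.37 of the original.
```
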

SBIRs

  • Out conferencing, so not so much posting.

GPT Agents

Phil 6.9.2023

Better air today. Sure is dry though:

GPT Agents

  • More work on the hallucination paper – done! Decided not to include the discussion section since it was getting long. Screwed up the title a bit, so I’ll have to fix that later

SBIRs

  • Trip prep
  • 11:30 overleaf meeting – delayed

Phil 6.8.2023

The smoke from the fires in Canada is much worse today:

generative AI learning path

  • This learning path guides you through a curated collection of content on generative AI products and technologies, from the fundamentals of Large Language Models to how to create and deploy generative AI solutions on Google Cloud.

SBIRs

  • 9:15 standup
  • 11:30 CSC touchpoint
  • Put slides on thumb drive and laptop

GPT Agents

  • Try to finish the first pass of the hallucination paper and get it up on ArXiv

Phil 6.7.2023

The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities

  • Evolution provides a creative fount of complex and subtle adaptations that often surprise the scientists who discover them. However, the creativity of evolution is not limited to the natural world: artificial organisms evolving in computational environments have also elicited surprise and wonder from the researchers studying them. The process of evolution is an algorithmic process that transcends the substrate in which it occurs. Indeed, many researchers in the field of digital evolution can provide examples of how their evolving algorithms and organisms have creatively subverted their expectations or intentions, exposed unrecognized bugs in their code, produced unexpected adaptations, or engaged in behaviors and outcomes uncannily convergent with ones found in nature. Such stories routinely reveal surprise and creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. Bugs are fixed, experiments are refocused, and one-off surprises are collapsed into a single data point. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.

SBIRs

  • Took the mid morning off: https://www.strava.com/activities/9221033759
  • 2:00 MDA slides meeting with Matt. Bring slides on thumb drive. Done. We discussed, and I sent him a copy for the slides as well as an update. We’ll get together again to discuss the presentation on the 19th
  • 3:00 AI ethics tagup. Read Eric’s sidebar. He gets the idea, but he can’t write worth a damn. Cleaned up and added a lot of text. Need to read it aloud tomorrow

GPT Agents

  • Good progress on the Methods section yesterday. Fill out the Results section.
  • 4:00 GPT meeting? Nope.