Category Archives: Phil

Phil 6.22.2023

Via Twitter

Trip

  • Cancel CA hotels – done
  • Get Astoria hotel – done
  • Get Seattle airport hotel with shuttle service – done
  • Tell Sande we’re getting home a day early

SBIRs

  • 9:00 standup
  • See what it takes to run JavaUtils in VS Code – got everything working. You need the Java extensions and to point the project at the jar files
  • More reading, maybe start writing. Good start! Borrowing Nema
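
For my own reference, the VS Code piece boils down to one workspace setting. This is a sketch only – the glob pattern below is a placeholder, not the actual JavaUtils layout. With the Java extension pack installed, `.vscode/settings.json` just needs to list the jars:

```json
{
  // Point the Java language server at the jar dependencies.
  // "lib/**/*.jar" is a placeholder pattern, not the real project layout.
  "java.project.referencedLibraries": [
    "lib/**/*.jar"
  ]
}
```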

Phil 6.20.2023

TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI

  • While several recent works have identified societal-scale and extinction-level risks to humanity arising from artificial intelligence, few have attempted an exhaustive taxonomy of such risks. Many exhaustive taxonomies are possible, and some are useful — particularly if they reveal new risks or practical approaches to safety. This paper explores a taxonomy based on accountability: whose actions lead to the risk, are the actors unified, and are they deliberate? We also provide stories to illustrate how the various risk types could each play out, including risks arising from unanticipated interactions of many AI systems, as well as risks from deliberate misuse, for which combined technical and policy solutions are indicated.

SBIRs

  • 9:00 Sprint demos. Need to make slides – done
  • 10:30 Overleaf meeting – nope
  • 1:00 Q4/Q5 presentation – done, but I need to do it again
  • 2:00 Sprint planning – done
  • Working on the scale paper

Phil 6.17.2023

Back from New York! Seriously, West Point is Hogwarts.

Enabling delightful user experiences via predictive models of human attention

  • In this blog, we present two papers (one from CVPR 2022, and one just accepted to CVPR 2023) that highlight our recent research in the area of human attention modeling: “Deep Saliency Prior for Reducing Visual Distraction” and “Learning from Unique Perspectives: User-aware Saliency Modeling”, together with recent research on saliency driven progressive loading for image compression (1, 2). We showcase how predictive models of human attention can enable delightful user experiences such as image editing to minimize visual clutter, distraction or artifacts, image compression for faster loading of webpages or apps, and guiding ML models towards more intuitive human-like interpretation and model performance. We focus on image editing and image compression, and discuss recent advances in modeling in the context of these applications.

Phil 6.15.2023

We’re excited to introduce the first AI model based on a key component of LeCun’s vision. This model, the Image Joint Embedding Predictive Architecture (I-JEPA), learns by creating an internal model of the outside world, which compares abstract representations of images (rather than comparing the pixels themselves). I-JEPA delivers strong performance on multiple computer vision tasks, and it’s much more computationally efficient than other widely used computer vision models. The representations learned by I-JEPA can also be used for many different applications without needing extensive fine tuning. For example, we train a 632M parameter visual transformer model using 16 A100 GPUs in under 72 hours, and it achieves state-of-the-art performance for low-shot classification on ImageNet, with only 12 labeled examples per class. Other methods typically take two to 10 times more GPU-hours and achieve worse error rates when trained with the same amount of data.

Phil 6.14.2023

Sequels are lower-effort, but have sufficiently high value to be profitable

People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI by doing fundamental research, building tools, creating design frameworks, and working with diverse communities. We believe that for machine learning to achieve its positive potential, it needs to be participatory, involving the communities it affects and guided by a diverse set of citizens, policy-makers, activists, artists and more.

Center for Accelerating Operational Efficiency (CAOE) Fact Sheet

Found this example of Elephant Armor today

Phil 6.13.2023

The Curse of Recursion: Training on Generated Data Makes Models Forget

  • Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet.

SBIRs

  • Out at a conference, so not much posting.

GPT Agents

Phil 6.9.2023

Better air today. Sure is dry though.

GPT Agents

  • More work on the hallucination paper – done! Decided not to include the discussion section since it was getting long. Screwed up the title a bit, so I’ll have to fix that later

SBIRs

  • Trip prep
  • 11:30 overleaf meeting – delayed

Phil 6.8.2023

The smoke from the fires in Canada is much worse today.

generative AI learning path

  • This learning path guides you through a curated collection of content on generative AI products and technologies, from the fundamentals of Large Language Models to how to create and deploy generative AI solutions on Google Cloud.

SBIRs

  • 9:15 standup
  • 11:30 CSC touchpoint
  • Put slides on thumb drive and laptop

GPT Agents

  • Try to finish the first pass of the hallucination paper and get it up on ArXiv

Phil 6.7.2023

The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities

  • Evolution provides a creative fount of complex and subtle adaptations that often surprise the scientists who discover them. However, the creativity of evolution is not limited to the natural world: artificial organisms evolving in computational environments have also elicited surprise and wonder from the researchers studying them. The process of evolution is an algorithmic process that transcends the substrate in which it occurs. Indeed, many researchers in the field of digital evolution can provide examples of how their evolving algorithms and organisms have creatively subverted their expectations or intentions, exposed unrecognized bugs in their code, produced unexpected adaptations, or engaged in behaviors and outcomes uncannily convergent with ones found in nature. Such stories routinely reveal surprise and creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. Bugs are fixed, experiments are refocused, and one-off surprises are collapsed into a single data point. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.

SBIRs

  • Took the mid morning off: https://www.strava.com/activities/9221033759
  • 2:00 MDA slides meeting with Matt. Bring slides on thumb drive. Done. We discussed, and I sent him a copy of the slides as well as an update. We’ll get together again to discuss the presentation on the 19th
  • 3:00 AI ethics tagup. Read Eric’s sidebar. He gets the idea, but he can’t write worth a damn. Cleaned up and added a lot of text. Need to read it aloud tomorrow

GPT Agents

  • Good progress on the Methods section yesterday. Fill out the Results section.
  • 4:00 GPT meeting? Nope.

Phil 6.6.2023

More than you’ve asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models

  • In this work, we show that augmenting LLMs with retrieval and API calling capabilities (so-called Application-Integrated LLMs) induces a whole new set of attack vectors. These LLMs might process poisoned content retrieved from the Web that contains malicious prompts pre-injected and selected by adversaries. We demonstrate that an attacker can indirectly perform such PI attacks. Based on this key insight, we systematically analyze the resulting threat landscape of Application-Integrated LLMs and discuss a variety of new attack vectors. To demonstrate the practical viability of our attacks, we implemented specific demonstrations of the proposed attacks within synthetic applications. In summary, our work calls for an urgent evaluation of current mitigation techniques and an investigation of whether new techniques are needed to defend LLMs against these threats.

AI chatbots lose money every time you use them. That is a problem.

  • Those costs may also be one reason Google has yet to build an AI chatbot into its flagship search engine, which fields billions of queries every day. When Google released its Bard chatbot in March, it opted not to use its largest language model. Dylan Patel, chief analyst at the semiconductor research firm SemiAnalysis, estimated that a single chat with ChatGPT could cost up to 1,000 times as much as a simple Google search.

GPT Agents

  • Working on getting the results into the paper

SBIRs

  • Sprint planning. Need to add MORS symposium and MDA management – done
  • Send MORS stuff – Done!

Phil 6.5.2023

Bringing Open Large Language Models to Consumer Devices

  • This post describes our effort on streamlining the deployment of Open LLMs through a versatile machine learning compilation infrastructure. We bring RedPajama, a permissive open language model to WebGPU, iOS, GPUs, and various other platforms. Furthermore, the workflow we have established can be easily adapted to support a wide range of models with fine-tuned (personalized) weights, promoting flexibility and customization in LLM deployment.

The overparameterized, paralyzed generation

SBIRs

  • Sprint demos. Need to make slides – done
  • Sent off the Q5 report

GPT Agents

  • Got a lot done in reading the json files and making spreadsheets
  • Created a rollup spreadsheet that I think I’ll use for the paper
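
The json-to-spreadsheet step is simple enough to sketch. This is a hypothetical reconstruction – the file layout and field names below are made up for illustration, and it writes a CSV via the stdlib rather than whatever spreadsheet library the real script uses:

```python
import csv
import json
from pathlib import Path

# Hypothetical layout: each experiment run wrote one JSON file with a
# "model" name and a list of per-question results to be rolled up.
def rollup(json_dir: Path, out_csv: Path) -> int:
    rows = []
    for path in sorted(json_dir.glob("*.json")):
        data = json.loads(path.read_text())
        for rec in data.get("results", []):
            rows.append({
                "source": path.name,
                "model": data.get("model", ""),
                "question": rec.get("question", ""),
                "hallucinated": rec.get("hallucinated", ""),
            })
    with out_csv.open("w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["source", "model", "question", "hallucinated"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)  # number of data rows written
```

One rollup CSV per batch of runs is easy to open in Excel or read back with pandas for the paper's tables.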

Phil 6.1.2023

June already! Probably taking tomorrow off since it looks like rain on Saturday

Book

  • Pinged On the Record and Midday. Probably screwed up one email. Nothing back from the Pratt

GPT Agents

  • Modified wikipedia_search.py to get page text that I will use to read into ContextExplorer. That will let me debug the loader and create context prompts for the hallucination project
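
The loader side can be sketched as a chunker. This is hypothetical – ContextExplorer’s real interface isn’t shown here; the function just splits fetched page text into overlapping word windows sized for a context prompt:

```python
# Split page text into overlapping chunks for context prompts.
# max_words and overlap are illustrative defaults, not tuned values.
def chunk_text(text: str, max_words: int = 300, overlap: int = 50) -> list[str]:
    words = text.split()
    step = max_words - overlap  # advance by window size minus overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break  # last window already covers the tail of the text
    return chunks
```

Overlapping windows keep a sentence that straddles a boundary fully visible in at least one chunk, which matters when the chunk is the model’s only evidence for answering.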

SBIRs

  • More Q4-5 slides
  • Talk about paper with Aaron? Rather than reworking current content, I’d prefer he work on new material until we have a first pass.

Phil 5.31.2023

Democratic Inputs to AI: Our nonprofit organization, OpenAI, Inc., is launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.

Book

  • Sent another thread into the void on Twitter and Mastodon
  • Contact On The Record
  • Contact Midday

GPT Agents

  • Got the first pass of the context tagging paper done except for the results section. Need to talk to Shimei and Jimmy about what to put in. Then rewriting and cleanup. Not sure what the venue would be, but literally all the references are on ArXiv, which really says something.
  • 2:30 Alden – interesting. Got some good thinking on prompts
  • 4:00 GPT meeting

SBIRs

  • Start on MDA slides. Looks like it’s going to be a combination of Q4 and Q5. Set up the templates.
  • 3:00 AI Ethics tagup? Oddly, I wrote up an email and sent it out to the team trying to work out what ethical autonomous systems might look like viewed through the Inupiat lens. Crickets. Colonialism dies hard, I guess.