
Phil 3.22.2023

Grace Hopper’s first bug!

Artificial Influence: An Analysis Of AI-Driven Persuasion

  • Persuasion is a key aspect of what it means to be human, and is central to business, politics, and other endeavors. Advancements in artificial intelligence (AI) have produced AI systems that are capable of persuading humans to buy products, watch videos, click on search results, and more. Even systems that are not explicitly designed to persuade may do so in practice. In the future, increasingly anthropomorphic AI systems may form ongoing relationships with users, increasing their persuasive power. This paper investigates the uncertain future of persuasive AI systems. We examine ways that AI could qualitatively alter our relationship to and views regarding persuasion by shifting the balance of persuasive power, allowing personalized persuasion to be deployed at scale, powering misinformation campaigns, and changing the way humans can shape their own discourse. We consider ways AI-driven persuasion could differ from human-driven persuasion. We warn that ubiquitous highly persuasive AI systems could alter our information environment so significantly as to contribute to a loss of human control of our own future. In response, we examine several potential responses to AI-driven persuasion: prohibition, identification of AI agents, truthful AI, and legal remedies. We conclude that none of these solutions will be airtight, and that individuals and governments will need to take active steps to guard against the most pernicious effects of persuasive AI.

This ties in to an earlier paper:

The Systemic Impact of Deplatforming on Social Media

  • Deplatforming, or banning malicious accounts from social media, is a key tool for moderating online harms. However, the consequences of deplatforming for the wider social media ecosystem have been largely overlooked so far, due to the difficulty of tracking banned users. Here, we address this gap by studying the ban-induced platform migration from Twitter to Gettr. With a matched dataset of 15M Gettr posts and 12M Twitter tweets, we show that users active on both platforms post similar content to users active on Gettr but banned from Twitter, but the latter have higher retention and are 5 times more active. Then, we reveal that matched users are more toxic on Twitter, where they can engage in abusive cross-ideological interactions, than on Gettr. Our analysis shows that the matched cohort are ideologically aligned with the far-right, and that the ability to interact with political opponents may be part of the appeal of Twitter to these users. Finally, we identify structural changes in the Gettr network preceding the 2023 Brasilia insurrections, highlighting how deplatforming from mainstream social media can fuel poorly-regulated alternatives that may pose a risk to democratic life.

GPT Agents

  • Reversed the model list so more recent ones come first
  • Finish the subsampling code
  • AI Ethics/Watermarking review

SBIRs

  • Download slide decks to laptop
  • Pick up Aaron at 3:00

Phil 3.21.2023

First full day of Spring!

This post only focuses on prompt engineering for autoregressive language models, so nothing with Cloze tests, image generation or multimodality models. At its core, the goal of prompt engineering is about alignment and model steerability. Check my previous post on controllable text generation.

GPT Agents

  • Sent the posters off to Staples
  • Getting the embeddings for MAGA. Because I really don’t want to spend over $100 at OpenAI this month, I’m going to stop there and work on clustering, then export the db to the laptop
  • Things are taking too long to get the clustering working. Wrote a procedure that takes random tweets within a range, which can then be used for the clustering. The trick here is that all the values will need to be updated once they are found:
DELIMITER $$
-- Return a random sample of num_vals tweets whose query_id falls in the
-- range [min_query_id, max_query_id]: the inner SELECT picks random ids,
-- and the outer JOIN pulls the full rows for those ids.
CREATE OR REPLACE PROCEDURE get_random_tweets(min_query_id INT, max_query_id INT, num_vals INT)
BEGIN
    SELECT * FROM table_tweet AS t1
    JOIN (
        SELECT id FROM table_tweet
        WHERE query_id >= min_query_id AND query_id <= max_query_id
        ORDER BY RAND()
        LIMIT num_vals
    ) AS t2 ON t1.id = t2.id;
END $$
DELIMITER ;
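
A minimal sketch of calling the procedure from Python, assuming mysql-connector-python and a local copy of the twitter_v2 schema; the connection settings here are placeholders:

import mysql.connector

# Placeholder credentials; the schema name follows the twitter_v2 references elsewhere in these notes
conn = mysql.connector.connect(host="localhost", user="user", password="pass", database="twitter_v2")
cursor = conn.cursor()

# Pull a random subsample of 500 tweets whose query_id falls in [1, 10]
cursor.callproc("get_random_tweets", (1, 10, 500))
for result in cursor.stored_results():
    rows = result.fetchall()
    print(f"retrieved {len(rows)} random tweets for clustering")

cursor.close()
conn.close()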

SBIRs

  • 9:15 Standup
  • 9:30 tuning discussions with Rukan
  • 10:30 meeting with Lauren and Aaron
  • 1:00 MDA meeting

Phil 3.20.2023

Looks like you can prompt-tune some of the overconfidence out of GPT-4
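
As a rough, hedged illustration (not the exact prompt), a system message that asks for calibrated confidence tends to help. This uses the openai-python ChatCompletion interface that was current at the time; the prompt wording and question are placeholders:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Illustrative system prompt only; the point is to ask the model to qualify
# its claims rather than state everything with full confidence
system_prompt = ("You are a careful assistant. Attach a confidence level "
                 "(high / medium / low) to each factual claim, and say "
                 "'I don't know' when you are unsure.")

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Who won the 1962 America's Cup?"},
    ],
    temperature=0,
)
print(response["choices"][0]["message"]["content"])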

GPT Agents

  • Pulling down embeddings and moderations. Kind of slow going. I think I need to do bigger pulls from the db, and then feed them slowly into the API. Done, after a few dumb errors. Much faster now

SBIRs

  • 2:00 MDA meeting. Make sure to get the ball rolling on the commercialization meeting
  • Prep for MC meeting. Need to do a first stab at slides – done

Phil 3.18.2023

GPT Agents

  • Getting embeddings and moderations for tweets. Because there are so many of them, I need to send them in batches. Currently 100, but I think they can be larger. Try 500 next? (A sketch of the batching loop follows this list.)
  • Fix NarrativeMaps2 to use the same algo
  • Change numbers in posters to bullets
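
A minimal sketch of the batching loop, assuming the openai-python Embedding interface of the time and the text-embedding-ada-002 model; the batch size and tweet list are placeholders:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def embed_in_batches(texts, batch_size=500):
    """Send texts to the embeddings endpoint in fixed-size batches."""
    embeddings = []
    for i in range(0, len(texts), batch_size):
        batch = texts[i:i + batch_size]
        response = openai.Embedding.create(
            model="text-embedding-ada-002", input=batch)
        # The API returns one embedding per input, in order
        embeddings.extend(item["embedding"] for item in response["data"])
    return embeddings

tweet_texts = ["example tweet one", "example tweet two"]  # placeholder data
vectors = embed_in_batches(tweet_texts, batch_size=500)
print(len(vectors), len(vectors[0]))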

Phil 3.17.2023

Tasks

  • Test new stem height and get it cut down if it works
  • Chores
  • Shopping (treats!)

GPT Agents

  • Add engine selection for ContextExplorer
  • Start (finish?) poster
  • Verify that all the parts including Gephi work on the laptop
  • Start weekend embedding run this evening

SBIRs

  • Assist Aaron on MC presentation

Phil 3.16.2023

Google is introducing a first set of AI-powered writing features in Docs and Gmail, rolling out first to trusted testers.

Lots of GPT-4 chatter. The demos look pretty incredible. Got my access!

SBIRs

  • Add a “to-GML” button that creates a network and saves a file to be imported into Gephi (see the sketch after this list)
  • Create some pix!
  • 3:00 USMC prep with Aaron. Slides?
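
The notes don't include the export code itself; here is a minimal sketch of writing a network to GML for Gephi with networkx, using placeholder nodes and edges:

import networkx as nx

# Small placeholder graph; in practice the nodes would be clusters and the
# edges the story trajectories that connect them
G = nx.Graph()
G.add_node("cluster_0", label="logistics", weight=12)
G.add_node("cluster_1", label="terrain", weight=7)
G.add_edge("cluster_0", "cluster_1", weight=3)

# Gephi can open GML files directly via File > Open
nx.write_gml(G, "narrative_network.gml")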

GPT Agents

  • Once pix are done. Start on poster. Try to send off tomorrow
  • Start processing embeddings for other tweets so there is more data to demo with
  • If there is time, make some videos as a backup

Book

  • 2:00 Meeting. Went well. I need to send some questions and answers that work in the context of the book – easy, medium, and hard

Phil 3.15.2023

What is Rumble? The streaming platform building an alternative internet

GPT Agents

  • I realized that now, with embeddings, I can revisit the “list of nearby xxx” approach to maps (a sketch follows this list)
  • 4:00 or 5:00 meeting. Time zone issues
  • Ping Antonio – done
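
A minimal sketch of the “list of nearby items” lookup over stored embeddings, using plain numpy cosine similarity; the vectors here are random placeholders:

import numpy as np

def nearest_neighbors(query_vec, embedding_matrix, k=5):
    """Return indices of the k embeddings closest to query_vec by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    m = embedding_matrix / np.linalg.norm(embedding_matrix, axis=1, keepdims=True)
    sims = m @ q
    # Note: if the query is itself in the matrix, it will be its own top hit
    return np.argsort(-sims)[:k]

# Placeholder data: 100 random 1536-d vectors standing in for tweet embeddings
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 1536))
print(nearest_neighbors(embeddings[0], embeddings, k=5))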

SBIRs

  • Run 100 stories each, cluster and label – done
  • Write the code that will connect the sentences for each of the stories and see if they pass through clusters in meaningful ways. I still wonder if just looking for distances between sentence (or summary) vectors would make more sense. Something to evaluate. Done
    • Another thought is to average the narrative trajectory using an adjustable window. Something to try (see the sketch after this list)
  • If things look reasonable, write code that creates a network graph out of the connected clusters, lay them out in Gephi, and render them. This all needs to be done before the 20th!
  • Looking at the plots, things look very promising. I’ll try to generate networks tomorrow
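
A minimal sketch of the adjustable-window idea: smooth one story's sequence of sentence embeddings with a sliding mean. The window size and embedding array are placeholders:

import numpy as np

def smooth_trajectory(sentence_vectors, window=3):
    """Average each embedding with its neighbors inside a sliding window."""
    smoothed = []
    for i in range(len(sentence_vectors)):
        lo = max(0, i - window // 2)
        hi = min(len(sentence_vectors), i + window // 2 + 1)
        smoothed.append(np.mean(sentence_vectors[lo:hi], axis=0))
    return np.array(smoothed)

# Placeholder: ten 1536-d sentence embeddings standing in for one story
story = np.random.default_rng(1).normal(size=(10, 1536))
print(smooth_trajectory(story, window=5).shape)  # (10, 1536)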

Phil 3.14.23

Happy PI day, for all those irrational folks

AC service

Modern language models refute Chomsky’s approach to language

  • The rise and success of large language models undermines virtually every strong claim for the innateness of language that has been proposed by generative linguistics. Modern machine learning has subverted and bypassed the entire theoretical framework of Chomsky’s approach, including its core claims to particular insights, principles, structures, and processes. I describe the sense in which modern language models implement genuine theories of language, including representations of syntactic and semantic structure. I highlight the relationship between contemporary models and prior approaches in linguistics, namely those based on gradient computations and memorized constructions. I also respond to several critiques of large language models, including claims that they can’t answer “why” questions, and skepticism that they are informative about real life acquisition. Most notably, large language models have attained remarkable success at discovering grammar without using any of the methods that some in linguistics insisted were necessary for a science of language to progress.

Alpaca: A Strong Open-Source Instruction-Following Model

  • We introduce Alpaca 7B, a model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations. Alpaca behaves similarly to OpenAI’s text-davinci-003, while being surprisingly small and easy/cheap to reproduce (<$600).

SBIRs

  • Sprint planning – done
    • Generate a set of prompts for stories using Sun Tzu and Clausewitz and save them out in prompts – done
    • Test the prompts in Narrative generator. Don’t forget about the later Babbage models
      • Ran prompt1 using the Clausewitz context
    • Run 100 stories each, cluster and label
  • Write the code that will connect the sentences for each of the stories and see if they pass through clusters in meaningful ways. I still wonder if just looking for distances between sentence (or summary) vectors would make more sense. Something to evaluate.
    • Another thought is to average the narrative trajectory using an adjustable window. Something to try
  • If things look reasonable, write code that creates a network graph out of the connected clusters, lay them out in Gephi, and render them. This all needs to be done before the 20th!
  • GPT BD tagup

GPT Agents

  • I’d like to include the tweet text in the Excel export, but keep it out of the boxplot. This seems to be the answer? (A sketch of one way to do it follows.)
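
One way to do it, as a minimal sketch assuming a pandas DataFrame (with matplotlib and openpyxl available); the column names are placeholders:

import pandas as pd

# Placeholder frame: moderation scores plus the raw tweet text
df = pd.DataFrame({
    "toxicity": [0.01, 0.40, 0.03],
    "threat": [0.00, 0.10, 0.02],
    "text": ["tweet one", "tweet two", "tweet three"],
})

# Full frame, text column included, goes to the spreadsheet
df.to_excel("tweets_with_scores.xlsx", index=False)

# Boxplot only over the numeric columns, so the text column is skipped
ax = df.select_dtypes(include="number").boxplot()
ax.figure.savefig("score_boxplot.png")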

Phil 3.13.2023

Pick up filters

GPT Agents

  • Get speech stats and charts working in TweetEmbedExplorer – done! Interesting too. Here’s moderated speech for paxlovid vs ivermectin tweets
  • Also fixed the embedding param storing code:

SBIRs

  • Abstract was accepted at MORS!
  • 9:00 Sprint demos – done. Need to put together stories
  • 10:30 USMC prep meeting – nope
  • 2:00 MDA meeting – done
  • 3:00 JSC meeting – done

Phil 3.12.2023

Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU

  • We are excited to officially release the integration of trl with peft to make Large Language Model (LLM) fine-tuning with Reinforcement Learning more accessible to anyone! In this post, we explain why this is a competitive alternative to existing fine-tuning approaches.
  • Note peft is a general tool that can be applied to many ML use-cases but it’s particularly interesting for RLHF as this method is especially memory-hungry!
  • If you want to directly deep dive into the code, check out the example scripts directly on the documentation page of TRL.
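
As a rough illustration of the idea (not the example from the post itself), here is a minimal sketch of wrapping a causal LM with a LoRA adapter via peft before handing it to trl; the model name and hyperparameters are placeholders:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # placeholder; the post targets 20B-parameter models
model = AutoModelForCausalLM.from_pretrained(model_name)

# Only the small LoRA matrices are trained, which is what lets RLHF
# fine-tuning fit on a single consumer GPU
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()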

Phil 3.10.2023

Tasks

  • Chores
  • Groceries
  • Yard – done

Book

  • 2:00 Meeting – introduced some things but the software people didn’t show

SBIRs

  • 10:00 GPT BD Meeting – demo went well

GPT Agents

  • Get the “*” working
  • Try the MCC memo to see how small texts work – pretty good!
  • Add “clear” button – done. Helpful

Phil 3.9.2023

ChatAug: Leveraging ChatGPT for Text Data Augmentation

  • Text data augmentation is an effective strategy for overcoming the challenge of limited sample sizes in many natural language processing (NLP) tasks. This challenge is especially prominent in the few-shot learning scenario, where the data in the target domain is generally much scarcer and of lowered quality. A natural and widely-used strategy to mitigate such challenges is to perform data augmentation on the training data to better capture the data invariance and increase the sample size. However, current text data augmentation methods either can not ensure the correct labeling of the generated data (lacking faithfulness) or can not ensure sufficient diversity in the generated data (lacking completeness), or both. Inspired by the recent success of large language models, especially the development of ChatGPT, which demonstrated improved language comprehension abilities, in this work, we propose a text data augmentation approach based on ChatGPT (named ChatAug). ChatGPT is trained on data with unparalleled linguistic richness and employs a reinforcement training process with large-scale human feedback, which endows the model with affinity to the naturalness of human language. Our text data augmentation approach ChatAug rephrases each sentence in the training samples into multiple conceptually similar but semantically different samples. The augmented samples can then be used in downstream model training. Experiment results on few-shot learning text classification tasks show the superior performance of the proposed ChatAug approach over state-of-the-art text data augmentation methods in terms of testing accuracy and distribution of the augmented samples.

GPT Agents

  • Add a “no context” checkbox – done. Super interesting how things change. The question is “what is the best way to hunt whales?”:

——————————— Without prompt tuning

The best way to hunt whales is to use a harpoon. Harpoons are designed to penetrate the thick skin of whales and are the most effective way to hunt them. Additionally, it is important to use a boat that is large enough to handle the size of the whale and to have a crew that is experienced in whale hunting.

———————————- With prompt tuning

The best way to hunt whales is to use two harpoons connected to the same line and throw them into the water, with the spare coils of box line making it possible for the harpooneer to pitch the second iron even if the whale runs away after receiving the first. Additionally, the whaleman must use the manoeuver of pitchpoling with a lance to accurately dart it from a violently rocking boat.

  • Make it so that the active tab in GPTContextFrame is switched to gen_tab when any of the “actions” buttons are pressed – done
  • Set the summary engine to chatGPT and evaluate
  • Add in charting of speech categories (and saving to spreadsheet)
  • Add moderation json field to narrative maps – done

SBIRs

  • Submitted Q4 report to Lauren. It looks good!
  • 9:15 standup. Need to close tasks
  • 9:30 GA discussion with Rukan
  • 10:00 GPT for BD
  • More UMAP with Aaron
  • Create a “military” group and add Clausewitz and Sun Tzu to begin. This means I need to add the * for multiple texts in one group
    • Downloaded, trimmed, and loaded

Phil 3.8.2023

Human heuristics for AI-generated language are flawed

  • Human communication is increasingly intermixed with language generated by AI. Across chat, email, and social media, AI systems suggest words, complete sentences, or produce entire conversations. AI-generated language is often not identified as such but presented as language written by humans, raising concerns about novel forms of deception and manipulation. Here, we study how humans discern whether verbal self-presentations, one of the most personal and consequential forms of language, were generated by AI. In six experiments, participants (N = 4,600) were unable to detect self-presentations generated by state-of-the-art AI language models in professional, hospitality, and dating contexts. A computational analysis of language features shows that human judgments of AI-generated language are hindered by intuitive but flawed heuristics such as associating first-person pronouns, use of contractions, or family topics with human-written language. We experimentally demonstrate that these heuristics make human judgment of AI-generated language predictable and manipulable, allowing AI systems to produce text perceived as “more human than human.” We discuss solutions, such as AI accents, to reduce the deceptive potential of language generated by AI, limiting the subversion of human intuition.

GPT Agents

SBIRs

  • Review and submit the Q4 report

Phil 3.7.2023

Open science involves sharing of code, and Python is a popular language for that code. Scientists may be reluctant, though, to try shared Python code when doing so involves many installation steps, like installing Conda, installing packages, installing other packages with Pip, possibly resolving package conflicts, etc.

An appealing alternative is to “bundle” the Python code and its dependencies into a single executable that can be downloaded from the “Releases” section of a GitHub site. This project is a test bed for working out the details of such an approach. This project is called a “demo” rather than a “test” just in case any of the tools involved implicitly assume that items with names including “test” are parts of an internal test suite.
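
The quoted project doesn't say which bundling tool it uses; as one hedged illustration of the approach, PyInstaller can be driven from Python to produce a single-file executable (the script and bundle names below are placeholders):

import PyInstaller.__main__

# Bundle analysis_script.py and its imported dependencies into one executable
# that can be attached to a GitHub release; --onefile produces a single binary
PyInstaller.__main__.run([
    "analysis_script.py",
    "--onefile",
    "--name", "analysis-demo",
])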

SBIRs

  • 9:15 standup
  • 9:30 USNA meeting
  • 1:00 BMD bi-weekly
  • More document loading, embedding, and storing to db
  • I also need a “*” option to load all groups added to the list when appropriate
  • There is a “moderation” endpoint on the OpenAI API. Add that to twitter_v2.table_tweet. Probably just the category_scores json object
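
A minimal sketch of pulling category_scores for one tweet with the openai-python Moderation interface of the time; the database write to twitter_v2.table_tweet is omitted and the text is a placeholder:

import json
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

tweet_text = "example tweet text"  # placeholder; would come from twitter_v2.table_tweet
response = openai.Moderation.create(input=tweet_text)

# category_scores is a dict of per-category floats; serialize it for a JSON column
category_scores = response["results"][0]["category_scores"]
scores_json = json.dumps(dict(category_scores))
print(scores_json)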

GPT Agents

  • Read in the King James Bible
  • Got sources working!

Phil 3.6.2023

Back from GSAW. It was nice to be at a conference physically again

10:00 Dentist

SBIRs

  • Working on the quarterly report. Also need to set up the Q5 files and folders on Overleaf – done
  • 2:00 MDA Meeting – done
  • Talk to Aaron about paper? Also trip to VA? Done

GPT Agents

  • Fixed a bunch of small bugs
  • Need to get the loading of data, summaries, and embeddings working – progress
  • Fix TweetEmbedExplorer to use BLOBs. Then re-embed and cluster
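
A minimal sketch of the BLOB round trip, assuming numpy and mysql-connector-python; the table and column names are placeholders rather than the actual TweetEmbedExplorer schema:

import numpy as np
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="user",
                               password="pass", database="twitter_v2")
cursor = conn.cursor()

# Serialize a float32 embedding to raw bytes and store it in a BLOB column
# ("embedding" is a placeholder column name)
embedding = np.random.default_rng(0).normal(size=1536).astype(np.float32)
cursor.execute("UPDATE table_tweet SET embedding = %s WHERE id = %s",
               (embedding.tobytes(), 42))
conn.commit()

# Read it back and reconstruct the numpy array for clustering
cursor.execute("SELECT embedding FROM table_tweet WHERE id = %s", (42,))
blob = cursor.fetchone()[0]
restored = np.frombuffer(blob, dtype=np.float32)

cursor.close()
conn.close()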