Phil 3.17.2023

Tasks

  • Test new stem height and get cut down if it works
  • Chores
  • Shopping (treats!)

GPT Agents

  • Add engine selection for ContextExplorer
  • Start (finish?) poster
  • Verify that all the parts including Gephi work on the laptop
  • Start weekend embedding run this evening

SBIRs

  • Assist Aaron on MC presentation

Phil 3.16.2023

Google is introducing a first set of AI-powered writing features in Docs and Gmail, with the rollout beginning with trusted testers.

Lots of GPT-4 chatter. The demos look pretty incredible. Got my access!

SBIRs

  • Add “to-GML” button that creates a network and saves a file to be imported into Gephi (a sketch follows this list)
  • Create some pix!
  • 3:00 USMC prep with Aaron. Slides?
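
A minimal sketch of what the “to-GML” button would write, assuming a hypothetical dict of cluster-to-cluster transition counts (the names and counts here are made up):

```python
# Build a directed graph from cluster-to-cluster transition counts and save
# it as GML, which Gephi imports directly. All names are illustrative.
import networkx as nx

def to_gml(transitions: dict, filename: str):
    """transitions maps (from_cluster, to_cluster) -> count."""
    g = nx.DiGraph()
    for (src, dst), count in transitions.items():
        g.add_edge(src, dst, weight=count)
    nx.write_gml(g, filename)

to_gml({("cluster_0", "cluster_3"): 5, ("cluster_3", "cluster_1"): 2}, "narrative.gml")
```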

GPT Agents

  • Once the pix are done, start on the poster. Try to send it off tomorrow
  • Start processing embeddings for other tweets so there is more data to demo with
  • If there is time, make some videos as a backup

Book

  • 2:00 Meeting. Went well. I need to send some questions and answers that work in the context of the book – easy, medium, and hard

Phil 3.15.2023

What is Rumble? The streaming platform building an alternative internet

GPT Agents

  • I realized that with embeddings I can now revisit the “list of nearby xxx” approach to maps (a quick sketch follows this list)
  • 4:00 or 5:00 meeting. Time zone issues
  • Ping Antonio – done
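
The “list of nearby xxx” idea, sketched with numpy and hypothetical names: rank everything by cosine similarity to a query embedding and take the top k.

```python
import numpy as np

def nearest(query_vec: np.ndarray, embeddings: np.ndarray, labels: list, k: int = 5):
    # Normalize so the dot product is cosine similarity
    q = query_vec / np.linalg.norm(query_vec)
    m = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = m @ q
    top = np.argsort(-sims)[:k]
    return [(labels[i], float(sims[i])) for i in top]
```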

SBIRs

  • Run 100 stories each, cluster and label – done
  • Write the code that will connect the sentences for each of the stories and see if they pass through clusters in meaningful ways. I still wonder if just looking for distances between sentence (or summary) vectors would make more sense. Something to evaluate – done
    • Another thought is to average the narrative trajectory using an adjustable window. Something to try (see the sketch after this list)
  • If things look reasonable, write code that creates a network graph out of the connected clusters, lay them out in Gephi, and render them. This all needs to be done before the 20th!
  • Looking at the plots, things look very promising. I’ll try to generate networks tomorrow
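
A sketch of the adjustable-window averaging idea from above, plus the nearest-centroid step that turns a smoothed trajectory into a cluster sequence. All names are assumptions about the pipeline, not the actual code:

```python
import numpy as np

def smooth_trajectory(sentence_vecs: np.ndarray, window: int = 3) -> np.ndarray:
    """Moving average over consecutive sentence embeddings (n x d in, shorter n x d out)."""
    kernel = np.ones(window) / window
    cols = [np.convolve(sentence_vecs[:, d], kernel, mode="valid")
            for d in range(sentence_vecs.shape[1])]
    return np.vstack(cols).T

def cluster_sequence(traj: np.ndarray, centroids: np.ndarray) -> list:
    """Nearest cluster centroid for each point; consecutive pairs become graph edges."""
    dists = np.linalg.norm(traj[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1).tolist()
```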

Phil 3.14.23

Happy Pi Day, for all those irrational folks

AC service

Modern language models refute Chomsky’s approach to language

  • The rise and success of large language models undermines virtually every strong claim for the innateness of language that has been proposed by generative linguistics. Modern machine learning has subverted and bypassed the entire theoretical framework of Chomsky’s approach, including its core claims to particular insights, principles, structures, and processes. I describe the sense in which modern language models implement genuine theories of language, including representations of syntactic and semantic structure. I highlight the relationship between contemporary models and prior approaches in linguistics, namely those based on gradient computations and memorized constructions. I also respond to several critiques of large language models, including claims that they can’t answer “why” questions, and skepticism that they are informative about real life acquisition. Most notably, large language models have attained remarkable success at discovering grammar without using any of the methods that some in linguistics insisted were necessary for a science of language to progress.

Alpaca: A Strong Open-Source Instruction-Following Model

  • We introduce Alpaca 7B, a model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations. Alpaca behaves similarly to OpenAI’s text-davinci-003, while being surprisingly small and easy/cheap to reproduce (<$600).

SBIRs

  • Sprint planning – done
    • Generate a set of prompts for stories using Sun Tzu and Clausewitz and save them out in prompts – done
    • Test the prompts in Narrative generator. Don’t forget about the later Babbage models
      • Ran prompt1 using the Clausewitz context
    • Run 100 stories each, cluster and label
  • Write the code that will connect the sentences for each of the stories and see if they pass through clusters in meaningful ways. I still wonder if just looking for distances between sentence (or summary) vectors would make more sense. Something to evaluate.
    • Another thought is to average the narrative trajectory using an adjustable window. Something to try
  • If things look reasonable, write code that creates a network graph out of the connected clusters, lay them out in Gephi, and render them. This all needs to be done before the 20th!
  • GPT BD tagup

GPT Agents

  • I’d like to include the tweet text in the Excel export without plotting it in the boxplot. This seems to be the answer (sketched below)?
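
One way to do it with pandas, as a sketch with hypothetical column names: write the full frame to Excel but plot only the numeric columns.

```python
import pandas as pd

df = pd.DataFrame({
    "text": ["tweet one", "tweet two", "tweet three"],
    "hate": [0.01, 0.40, 0.02],
    "violence": [0.05, 0.10, 0.30],
})
df.to_excel("tweets.xlsx", index=False)   # tweet text included in the export
df.select_dtypes("number").boxplot()      # tweet text excluded from the plot
```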

Phil 3.13.2023

Pick up filters

GPT Agents

  • Get speech stats and charts working in TweetEmbedExplorer – done! Interesting too. Here’s moderated speech for paxlovid vs ivermectin tweets
  • Also fixed the embedding param storing code

SBIRs

  • Abstract was accepted at MORS!
  • 9:00 Sprint demos – done. Need to put together stories
  • 10:30 USMC prep meeting – nope
  • 2:00 MDA meeting – done
  • 3:00 JSC meeting – done

Phil 3.12.2023

Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU

  • We are excited to officially release the integration of trl with peft to make Large Language Model (LLM) fine-tuning with Reinforcement Learning more accessible to anyone! In this post, we explain why this is a competitive alternative to existing fine-tuning approaches.
  • Note peft is a general tool that can be applied to many ML use-cases but it’s particularly interesting for RLHF as this method is especially memory-hungry!
  • If you want to directly deep dive into the code, check out the example scripts directly on the documentation page of TRL.
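
A condensed sketch of the recipe the post describes: load the base model in 8-bit, wrap it with LoRA adapters via peft, then hand it to trl’s PPO trainer. Treat the checkpoint name and exact argument names as assumptions; they vary across library versions.

```python
from peft import LoraConfig
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model_id = "EleutherAI/gpt-neox-20b"  # illustrative 20B checkpoint

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    bias="none", task_type="CAUSAL_LM",
)
model = AutoModelForCausalLMWithValueHead.from_pretrained(
    model_id,
    load_in_8bit=True,        # quantized base weights fit on a 24GB GPU
    peft_config=lora_config,  # only the small LoRA matrices are trained
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
ppo_trainer = PPOTrainer(config=PPOConfig(batch_size=16), model=model, tokenizer=tokenizer)
```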

Phil 3.10.2023

Tasks

  • Chores
  • Groceries
  • Yard – done

Book

  • 2:00 Meeting – introduced some things but the software people didn’t show

SBIRs

  • 10:00 GPT BD Meeting – demo went well

GPT Agents

  • Get the “*” working
  • Try the MCC memo to see how small texts work – pretty good!
  • Add “clear” button – done. Helpful

Phil 3.9.2023

ChatAug: Leveraging ChatGPT for Text Data Augmentation

  • Text data augmentation is an effective strategy for overcoming the challenge of limited sample sizes in many natural language processing (NLP) tasks. This challenge is especially prominent in the few-shot learning scenario, where the data in the target domain is generally much scarcer and of lowered quality. A natural and widely-used strategy to mitigate such challenges is to perform data augmentation on the training data to better capture the data invariance and increase the sample size. However, current text data augmentation methods either can not ensure the correct labeling of the generated data (lacking faithfulness) or can not ensure sufficient diversity in the generated data (lacking completeness), or both. Inspired by the recent success of large language models, especially the development of ChatGPT, which demonstrated improved language comprehension abilities, in this work, we propose a text data augmentation approach based on ChatGPT (named ChatAug). ChatGPT is trained on data with unparalleled linguistic richness and employs a reinforcement training process with large-scale human feedback, which endows the model with affinity to the naturalness of human language. Our text data augmentation approach ChatAug rephrases each sentence in the training samples into multiple conceptually similar but semantically different samples. The augmented samples can then be used in downstream model training. Experiment results on few-shot learning text classification tasks show the superior performance of the proposed ChatAug approach over state-of-the-art text data augmentation methods in terms of testing accuracy and distribution of the augmented samples.

GPT Agents

  • Add a “no context” checkbox – done. Super interesting how things change (a sketch of the toggle follows this list). The question is “what is the best way to hunt whales?”:

——————————— Without prompt tuning

The best way to hunt whales is to use a harpoon. Harpoons are designed to penetrate the thick skin of whales and are the most effective way to hunt them. Additionally, it is important to use a boat that is large enough to handle the size of the whale and to have a crew that is experienced in whale hunting.

——————————— With prompt tuning

The best way to hunt whales is to use two harpoons connected to the same line and throw them into the water, with the spare coils of box line making it possible for the harpooneer to pitch the second iron even if the whale runs away after receiving the first. Additionally, the whaleman must use the manoeuver of pitchpoling with a lance to accurately dart it from a violently rocking boat.

  • Make it so that the active tab in GPTContextFrame is switched to gen_tab when any of the “actions” buttons are pressed – done
  • Set the summary engine to chatGPT and evaluate
  • Add in charting of speech categories (and saving to spreadsheet)
  • Add moderation json field to narrative maps – done
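
For reference, a sketch of what the “no context” checkbox above toggles, with hypothetical names: with context on, retrieved source passages are prepended to the question; with it off, the question goes to the model alone.

```python
def build_prompt(question: str, passages: list, use_context: bool) -> str:
    """Assemble the prompt, optionally prepending retrieved source passages."""
    if not use_context:
        return question
    context = "\n".join(passages)
    return f"Answer using the following context:\n{context}\n\nQuestion: {question}"
```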

SBIRs

  • Submitted Q4 report to Lauren. It looks good!
  • 9:15 standup. Need to close tasks
  • 9:30 GA discussion with Rukan
  • 10:00 GPT for BD
  • More UMAP with Aaron
  • Create a “military” group and add Clausewitz and Sun Tzu to begin. This means I need to add the * for multiple texts in one group (sketched after this list)
    • Downloaded, trimmed, and loaded
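
A sketch of the “*” option, assuming a hypothetical source_texts table keyed by group_name and title:

```python
import sqlite3

def load_group_texts(conn: sqlite3.Connection, group_name: str, title: str = "*"):
    """With title == "*", return every text in the group; otherwise just one."""
    if title == "*":
        rows = conn.execute(
            "SELECT title, content FROM source_texts WHERE group_name = ?",
            (group_name,))
    else:
        rows = conn.execute(
            "SELECT title, content FROM source_texts WHERE group_name = ? AND title = ?",
            (group_name, title))
    return rows.fetchall()
```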

Phil 3.8.2023

Human heuristics for AI-generated language are flawed

  • Human communication is increasingly intermixed with language generated by AI. Across chat, email, and social media, AI systems suggest words, complete sentences, or produce entire conversations. AI-generated language is often not identified as such but presented as language written by humans, raising concerns about novel forms of deception and manipulation. Here, we study how humans discern whether verbal self-presentations, one of the most personal and consequential forms of language, were generated by AI. In six experiments, participants (N = 4,600) were unable to detect self-presentations generated by state-of-the-art AI language models in professional, hospitality, and dating contexts. A computational analysis of language features shows that human judgments of AI-generated language are hindered by intuitive but flawed heuristics such as associating first-person pronouns, use of contractions, or family topics with human-written language. We experimentally demonstrate that these heuristics make human judgment of AI-generated language predictable and manipulable, allowing AI systems to produce text perceived as “more human than human.” We discuss solutions, such as AI accents, to reduce the deceptive potential of language generated by AI, limiting the subversion of human intuition.

GPT Agents

SBIRs

  • Review and submit the Q4 report

Phil 3.7.2023

Open science involves sharing of code, and Python is a popular language for that code. Scientists may be reluctant, though, to try shared Python code when doing so involves many installation steps, like installing Conda, installing packages, installing other packages with Pip, possibly resolving package conflicts, etc.

An appealing alternative is to “bundle” the Python code and its dependencies into a single executable that can be downloaded from the “Releases” section of a GitHub site. This project is a test bed for working out the details of such an approach. It is called a “demo” rather than a “test” just in case any of the tools involved implicitly assume that items with names including “test” are parts of an internal test suite.
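
The quoted project description doesn’t name the bundling tool; PyInstaller is one common choice, sketched here from a build script (the entry-point name is made up):

```python
import PyInstaller.__main__

PyInstaller.__main__.run([
    "app.py",      # hypothetical entry-point script
    "--onefile",   # emit a single self-contained executable
    "--name", "demo",
])
```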

SBIRs

  • 9:15 standup
  • 9:30 USNA meeting
  • 1:00 BMD bi-weekly
  • More document loading, embedding, and storing to db
  • I also need a “*” option to load all groups added to the list when appropriate
  • There is a “moderation” endpoint on the OpenAI API. Add that to twitter_v2.table_tweet. Probably just the category_scores json object (sketched below)
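
The moderation call, sketched with the openai library as it existed at the time (the tweet text is illustrative):

```python
import json
import openai

response = openai.Moderation.create(input="some tweet text here")
category_scores = response["results"][0]["category_scores"]
scores_json = json.dumps(dict(category_scores))  # store in twitter_v2.table_tweet
```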

GPT Agents

  • Read in the King James Bible
  • Got sources working!

Phil 3.6.2023

Back from GSAW. It was nice to be at a conference physically again

10:00 Dentist

SBIRs

  • Working on the quarterly report. Also need to set up the Q5 files and folders on Overleaf – done
  • 2:00 MDA Meeting – done
  • Talk to Aaron about paper? Also trip to VA? Done

GPT Agents

  • Fixed a bunch of small bugs
  • Need to get the loading of data, summaries, and embeddings working – progress
  • Fix TweetEmbedExplorer to use BLOBs. Then re-embed and cluster
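
A sketch of the BLOB approach, assuming a hypothetical sqlite table: store each embedding’s raw bytes and rebuild the numpy array on read.

```python
import sqlite3
import numpy as np

conn = sqlite3.connect("tweets.db")
conn.execute("CREATE TABLE IF NOT EXISTS embeddings (tweet_id INTEGER, vec BLOB)")

vec = np.random.rand(1536).astype(np.float32)  # e.g. an ada-002-sized embedding
conn.execute("INSERT INTO embeddings VALUES (?, ?)", (1, vec.tobytes()))

row = conn.execute("SELECT vec FROM embeddings WHERE tweet_id = 1").fetchone()
restored = np.frombuffer(row[0], dtype=np.float32)  # identical to vec
```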

Phil 3.2.2023

Flying home from GSAW

Evidence of a predictive coding hierarchy in the human brain listening to speech

  • Considerable progress has recently been made in natural language processing: deep learning algorithms are increasingly able to generate, summarize, translate and classify texts. Yet, these language models still fail to match the language abilities of humans. Predictive coding theory offers a tentative explanation to this discrepancy: while language models are optimized to predict nearby words, the human brain would continuously predict a hierarchy of representations that spans multiple timescales. To test this hypothesis, we analysed the functional magnetic resonance imaging brain signals of 304 participants listening to short stories. First, we confirmed that the activations of modern language models linearly map onto the brain responses to speech. Second, we showed that enhancing these algorithms with predictions that span multiple timescales improves this brain mapping. Finally, we showed that these predictions are organized hierarchically: frontoparietal cortices predict higher-level, longer-range and more contextual representations than temporal cortices. Overall, these results strengthen the role of hierarchical predictive coding in language processing and illustrate how the synergy between neuroscience and artificial intelligence can unravel the computational bases of human cognition.

Phil 3.1.2023

Mapping people and tags on Mastodon

Introducing ChatGPT and Whisper APIs

Using the OpenAI API, you can build your own applications with gpt-3.5-turbo to do things like:

  • Draft an email or other piece of writing
  • Write Python code
  • Answer questions about a set of documents
  • Create conversational agents
  • Give your software a natural language interface
  • Tutor in a range of subjects
  • Translate languages
  • Simulate characters for video games and much more

Got the ChatGPT API working!
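
The minimal call that works, using the pre-1.0 openai library’s ChatCompletion interface (the prompt is just an example):

```python
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Draft a short email declining a meeting."},
    ],
)
print(response["choices"][0]["message"]["content"])
```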

At GSAW

Phil 2.28.2023

GSAW

  • Panel is done, so now we coast for two more days
  • I think a good working definition is that AI is ML that interacts with people directly, while plain machine learning has no direct human interaction.

Phil 2.27.2023

Stanford Human Preferences Dataset (SHP) is a dataset of 385K collective human preferences over responses to questions/instructions in 18 different subject areas, from cooking to legal advice. The preferences are meant to reflect the helpfulness of one response over another, and are intended to be used for training RLHF reward models and NLG evaluation models.

Book

  • Checked to see that all the other pictures are a good size. Yes! And the art row is intact, so phew.

SBIRs

  • At GSAW. The conference is not at the hotel. Fortunately, I brought a bike!
  • Panel is today after lunch

GPT Agents

  • Added summary and narrative to generation options
  • Stubbed out the saving of the relevant project data to the narrative maps input json file

GSAW

  • “NOVA – I’m not sure what that means – we have some great acronym guys. Anyway, we have SUPERNOVA connecting the NOVAs”
  • Really thinking about the humans as the only attack surface that matters. Social hacking for everything, as long as you’re patient. This is going to be the real power of AI “social weapons.” They take advantage of intrinsic human bias and use it to shovel sand into high-tech adversaries. So how do you detect that? Is “death by PowerPoint” an example of a successful attack?