Phil 8.10.2023

SBIRs

  • Day trip to NJ for interns. It’s going to be a looooong day
  • Tweaked the code example a bit to include the Obfuscated C Code Contest, since that always seems to come up
  • More editing of scenario three

GPT Agents

  • A lot of progress yesterday. I showed Jimmy the current state of things and he suggested making the error counting show just one item at a time. That should work nicely because the low-token prompt could go out first, and we could work on its results while waiting for the context prompt to finish (rough sketch below).
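A minimal sketch of that flow, assuming the pre-1.0 OpenAI Python client (which exposes ChatCompletion.acreate) and placeholder prompts; the real app's prompts, model settings, and handlers would look different:

```python
import asyncio
import openai  # pre-1.0 client; reads OPENAI_API_KEY from the environment

# Placeholder prompts; the app's real prompts would go here.
LOW_TOKEN_PROMPT = "Count the errors in this short list: ..."
CONTEXT_PROMPT = "Here is the full context for the session: ..."

async def ask(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Send one chat prompt and return the text of the first choice."""
    response = await openai.ChatCompletion.acreate(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

async def main():
    # Start the big context prompt in the background...
    context_task = asyncio.create_task(ask(CONTEXT_PROMPT))
    # ...then fire the cheap, low-token prompt and work with its result
    # (e.g., showing error counts one item at a time) while we wait.
    print(await ask(LOW_TOKEN_PROMPT))
    # By the time the UI needs it, the context answer should be ready.
    print(await context_task)

asyncio.run(main())
```

Starting the context prompt first and awaiting it last keeps both requests in flight at once without needing threads.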

Phil 8.9.2023

Got an invite to be on the IUI 2024 program committee. I think I have to accept.

Order batteries!

From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models

  • Language models (LMs) are pretrained on diverse data sources, including news, discussion forums, books, and online encyclopedias. A significant portion of this data includes opinions and perspectives which, on one hand, celebrate democracy and diversity of ideas, and on the other hand are inherently socially biased. Our work develops new methods to (1) measure political biases in LMs trained on such corpora, along social and economic axes, and (2) measure the fairness of downstream NLP models trained on top of politically biased LMs. We focus on hate speech and misinformation detection, aiming to empirically quantify the effects of political (social, economic) biases in pretraining data on the fairness of high-stakes social-oriented tasks. Our findings reveal that pretrained LMs do have political leanings that reinforce the polarization present in pretraining corpora, propagating social biases into hate speech predictions and misinformation detectors. We discuss the implications of our findings for NLP research and propose future directions to mitigate unfairness.

SBIRs

  • 2:00 BMD status
  • Sent a bunch of papers over to the interns for the background section
  • Started on the Q6 report

GPT Agents

  • 8:30 – 9:30 more app development. And have the email domains rippled out yet?
    • Great progress!
  • 3:00 – 4:00 more app development. Need to get the public version running before the meeting.
  • 2:30 Alden meeting?
  • 4:00 LLM meeting

Phil 8.8.2023

Love this:

Looks like ASRC Federal is going to create a technical fellows program. Need to schedule some time to fill out the application

SBIRs

  • 9:00 Standup
  • 3:00(?) MDA meeting

GPT Agents

  • More dev. Next is to isolate the UUID and get the LangChain calls working. Nope, worked on getting the UUID checked and placing all the experiment data in a class. Not sexy, but very cool. More work tomorrow
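A rough idea of what "placing all the experiment data in a class" with a UUID check might look like; this is a hypothetical sketch, and the real class and field names are almost certainly different:

```python
import uuid
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ExperimentData:
    """Hypothetical container for one participant's run through the app."""
    experiment_id: str
    prompts: List[str] = field(default_factory=list)
    responses: List[Dict] = field(default_factory=list)
    email: str = ""

    @classmethod
    def from_uuid_string(cls, raw: str) -> "ExperimentData":
        # uuid.UUID() raises ValueError on a bad string, which is one simple
        # way to check the UUID before using it as an experiment key.
        return cls(experiment_id=str(uuid.UUID(raw)))
```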

Phil 8.7.2023

SBIRs

  • Lots of meetings today. Like, 5 of them
  • Working on the paper on the gaps – good progress!
  • Some back and forth with Bob S. on generating data

GPT Agents

  • More work on the app. Got the email sending properly, which turned out to be MUCH more complicated than we thought. You need to have a domain that the email can be sent from (a rough sketch of the sending side is after this list). Anyway, got that set up but waiting a day for the domain to ripple
  • Got the context root working so the app is live, if not actually working. You can see the current state here
  • Next is to isolate the UUID and get the LangChain calls working
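For reference, a bare-bones sketch of sending from an address on a configured sending domain. The app's actual mail provider isn't shown here, so the host, port, credentials, and addresses below are all placeholders:

```python
import smtplib
from email.message import EmailMessage

def send_invite(to_addr: str, link: str) -> None:
    """Send a sign-up link from an address on the verified sending domain."""
    msg = EmailMessage()
    msg["Subject"] = "Your experiment link"
    msg["From"] = "noreply@example-app-domain.com"  # must be on the verified domain
    msg["To"] = to_addr
    msg.set_content(f"Here is your link: {link}")

    # Placeholder SMTP relay; the real provider's host/port/credentials go here.
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("smtp_user", "smtp_password")
        server.send_message(msg)
```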

Phil 8.4.2023

It looks like COVID might be coming back this fall. Wastewater levels are rising in Europe and the US. Mostly Delaware at the moment

SBIRs

  • Creating appendices
  • Starting on email section
  • Somehow lost the simple_sabotage entry in the database – re-sourcing. Done. Also committed the db. I think the repo was having problems the last time I tried this so that may be the source of my woes.
  • Need to list all the * entries first?

Phil 8.3.2023

Large Language Models as Corporate Lobbyists

  • We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities. An autoregressive large language model (OpenAI’s text-davinci-003) determines if proposed U.S. Congressional bills are relevant to specific public companies and provides explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of novel ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model. It outperforms the baseline of predicting the most common outcome of irrelevance. We also benchmark the performance of the previous OpenAI GPT-3 model (text-davinci-002), which was the state-of-the-art model on many academic natural language tasks until text-davinci-003 was recently released. The performance of text-davinci-002 is worse than the simple baseline. Longer-term, if AI begins to influence law in a manner that is not a direct extension of human intentions, this threatens the critical role that law as information could play in aligning AI with humans. Initially, AI is being used to simply augment human lobbyists for a small portion of their daily tasks. However, firms have an incentive to use less and less human oversight over automated assessments of policy ideas and the written communication to regulatory agencies and Congressional staffers. The core question raised is where to draw the line between human-driven and AI-driven policy influence.

Can Large Language Models Change User Preference Adversarially?

  • Pretrained large language models (LLMs) are becoming increasingly powerful and ubiquitous in mainstream applications such as being a personal assistant, a dialogue model, etc. As these models become proficient in deducing user preferences and offering tailored assistance, there is an increasing concern about the ability of these models to influence, modify and in the extreme case manipulate user preference adversarially. The issue of lack of interpretability in these models in adversarial settings remains largely unsolved. This work tries to study adversarial behavior in user preferences from the lens of attention probing, red teaming and white-box analysis. Specifically, it provides a bird’s eye view of existing literature, offers red teaming samples for dialogue models like ChatGPT and GODEL and probes the attention mechanism in the latter for non-adversarial and adversarial settings.

SBIRs

  • 9:00 Standup
  • More paper. Start on the Vignette 2 analysis. Move the extensive examples to the appendix
  • Create some concept art for the SGPT screens – done!

GPT Agents

  • Work with Zach to connect the back end? Good progress. Stored data to the db and managed to send an email!

Phil 8.2.2023

Funeral for Mike yesterday. Sigh

Research.com seems kinda useful, actually. It looks like a good place to find upcoming conferences and venues

Anatomy of an AI-powered malicious social botnet

  • Large language models (LLMs) exhibit impressive capabilities in generating realistic text across diverse subjects. Concerns have been raised that they could be utilized to produce fake content with a deceptive intention, although evidence thus far remains anecdotal. This paper presents a case study about a Twitter botnet that appears to employ ChatGPT to generate human-like content. Through heuristics, we identify 1,140 accounts and validate them via manual annotation. These accounts form a dense cluster of fake personas that exhibit similar behaviors, including posting machine-generated content and stolen images, and engage with each other through replies and retweets. ChatGPT-generated content promotes suspicious websites and spreads harmful comments. While the accounts in the AI botnet can be detected through their coordination patterns, current state-of-the-art LLM content classifiers fail to discriminate between them and human accounts in the wild. These findings highlight the threats posed by AI-enabled social bots.

SBIRs

  • Talk to Zach about SGPT BD case?
  • Work on the paper. Finished a second pass on the “Gumming up the Works” Vignette. Fixed a bunch of bad writing and generally cleaned things up.
  • Fill out forms! Done! Aaron’s too!

GPT Agents

  • See if I can get some DB and OpenAI calls set up – a rough sketch is after this list
  • IRB
  • 4:00 Meeting
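A minimal sketch of what "DB and OpenAI calls" might look like wired together, assuming SQLite and the pre-1.0 OpenAI client; the table and column names are made up and the real app's storage is likely different:

```python
import sqlite3
import openai  # pre-1.0 client; reads OPENAI_API_KEY from the environment

def ask_and_store(prompt: str, db_path: str = "experiments.db") -> str:
    """Send one prompt to the model and save the prompt/response pair."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content

    # Placeholder schema: one row per prompt/response pair.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS results (prompt TEXT, response TEXT)")
    con.execute("INSERT INTO results VALUES (?, ?)", (prompt, answer))
    con.commit()
    con.close()
    return answer
```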

Phil 8.1.2023

SBIRs

  • Went into the office yesterday, which was fun
  • Got my new laptop with card reader. Nice little box! I was just expecting to re-image my old one
  • Got more account stuff set up. Forgot about GitLab
  • Weekly MDA meeting. Pete finished his first pass at the white paper, which needs to be fleshed out. We agreed that SEG would make changes this week and then we would take over next week when Aaron gets back
  • Made a presentation to the interns and talked about goals, human nature, LLMs, and technology
  • 9:00 Sprint planning
  • Expenses!
  • More paper!
  • Leaving early for Mike’s funeral

GPT Agents

  • Pinged Zach about getting back on the App
  • Start IRB submission