Phil 8.4.2023

It looks like COVID might be coming back this fall. Wastewater levels are rising in Europe and the US. Mostly in Delaware at the moment

SBIRs

  • Creating appendices
  • Starting on email section
  • Somehow lost the simple_sabotage entry in the database – re-sourcing. Done. Also committed the db. I think the repo was having problems the last time I tried this so that may be the source of my woes.
  • Need to list all the * entries first?

Phil 8.3.2023

Large Language Models as Corporate Lobbyists

  • We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities. An autoregressive large language model (OpenAI’s text-davinci-003) determines if proposed U.S. Congressional bills are relevant to specific public companies and provides explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of novel ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model. It outperforms the baseline of predicting the most common outcome of irrelevance. We also benchmark the performance of the previous OpenAI GPT-3 model (text-davinci-002), which was the state-of-the-art model on many academic natural language tasks until text-davinci-003 was recently released. The performance of text-davinci-002 is worse than the simple baseline. Longer-term, if AI begins to influence law in a manner that is not a direct extension of human intentions, this threatens the critical role that law as information could play in aligning AI with humans. Initially, AI is being used to simply augment human lobbyists for a small portion of their daily tasks. However, firms have an incentive to use less and less human oversight over automated assessments of policy ideas and the written communication to regulatory agencies and Congressional staffers. The core question raised is where to draw the line between human-driven and AI-driven policy influence.

Can Large Language Models Change User Preference Adversarially?

  • Pretrained large language models (LLMs) are becoming increasingly powerful and ubiquitous in mainstream applications such as being a personal assistant, a dialogue model, etc. As these models become proficient in deducing user preferences and offering tailored assistance, there is an increasing concern about the ability of these models to influence, modify and in the extreme case manipulate user preference adversarially. The issue of lack of interpretability in these models in adversarial settings remains largely unsolved. This work tries to study adversarial behavior in user preferences from the lens of attention probing, red teaming and white-box analysis. Specifically, it provides a bird’s eye view of existing literature, offers red teaming samples for dialogue models like ChatGPT and GODEL and probes the attention mechanism in the latter for non-adversarial and adversarial settings.

SBIRs

  • 9:00 Standup
  • More paper. Start on the Vignette 2 analysis. Move the extensive examples to the appendix
  • Create some concept art for the SGPT screens – done!

SBIRs

  • Work with Zach to connect the back end? Good progress. Stored data to the db and managed to send an email!

Phil 8.2.2023

Funeral for Mike yesterday. Sigh

Research.com seems kinda useful, actually. It looks like a good place to find good upcoming conferences and venues

Anatomy of an AI-powered malicious social botnet

  • Large language models (LLMs) exhibit impressive capabilities in generating realistic text across diverse subjects. Concerns have been raised that they could be utilized to produce fake content with a deceptive intention, although evidence thus far remains anecdotal. This paper presents a case study about a Twitter botnet that appears to employ ChatGPT to generate human-like content. Through heuristics, we identify 1,140 accounts and validate them via manual annotation. These accounts form a dense cluster of fake personas that exhibit similar behaviors, including posting machine-generated content and stolen images, and engage with each other through replies and retweets. ChatGPT-generated content promotes suspicious websites and spreads harmful comments. While the accounts in the AI botnet can be detected through their coordination patterns, current state-of-the-art LLM content classifiers fail to discriminate between them and human accounts in the wild. These findings highlight the threats posed by AI-enabled social bots.

SBIRs

  • Talk to Zach about SGPT BD case?
  • Work on the paper. Finished a second pass on the “Gumming up the Works” Vignette. Fixed a bunch of bad writing and generally cleaned things up.
  • Fill out forms! Done! Aaron’s too!

GPT Agents

  • See if I can get some DB and OpenAI calls set up
  • IRB
  • 4:00 Meeting

Phil 8.1.2023

SBIRs

  • Went into the office yesterday, which was fun
  • Got my new laptop with card reader. Nice little box! I was just expecting to re-image my old one
  • Got more account stuff set up. Forgot about GitLab
  • Weekly MDA meeting. Pete finished his first pass at the white paper, which needs to be fleshed out. We agreed that SEG would make changes this week and then we would take over next week when Aaron gets back
  • Made a presentation to the interns and talked about goals, human nature, LLMs, and technology
  • 9:00 Sprint planning
  • Expenses!
  • More paper!
  • Leaving early for Mike’s funeral

GPT Agents

  • Pinged Zach about getting back on the App
  • Start IRB submission

Phil 7.27.2023

This quote comes from a Washington Post article on how the Ukraine war is affecting development of AI-powered drones. I think it generalizes more broadly to how disadvantaged groups are driven to embrace alternatives that are outside conventional norms.

Ukraine doesn’t have the ability to fight the much larger Russia. Russia may have issues with corruption and the quality of its weapons, but it has a lot of them. And from the perspective of Ukraine, Russia has an infinite number of soldiers. So many that they can be squandered.

The West is providing Ukraine with enough weapons to survive, but not enough to attack and win decisively. I’ve read analysis where experts say that weapons systems are arriving just about as fast as Ukraine can incorporate them, but the order of delivery is from less-capable to more capable. They have artillery, but no F-16s, for example.

As a result, Ukraine is having to improvise and adapt. Since it is facing an existential risk, it’s not going to be too picky about the ethics of smart weapons. If AI helps in targeting, great. If Russia is jamming the control signals to drones, then AI can take over. There is a coevolution between the two forces, and the result may very well be cheap, effective AI combat drones that are largely autonomous in the right conditions.

Such technology is cheap and adaptable. Others will use it, and it will slowly trickle down to the level that a lone wolf in a small town can order the parts that can inflict carnage on the local school. Or something else. The problem is that the diffusion of technology and its associated risks are difficult to predict and manage. But the line that leads to this kind of tragedy will have its roots in our decision to starve Ukraine of the weapons that it needed to win quickly.

Of course, Ukraine isn’t the only smaller country facing an existential risk. Many low-lying countries, particularly those nearer the equator, are facing similar risks from climate change – both from killing heat and sea level rise. Technology – as unproven as combat AI – exists for that too. It’s called geoengineering.

We’ve been doing geoengineering for decades, of course. By dumping megatons of carbon dioxide and other compounds into the atmosphere, we are heating our planet and are now arriving at a tipping point where potential risks are going to become very real and immediate for certain countries. If I were facing the destruction of my country by flooding and heat, I’d be looking at geoengineering very seriously. Particularly since the major economies are not doing much to stop it.

Which means that I expect that we will see efforts like the injection of sulfate aerosols into the upper atmosphere, or cloud brightening, or the spreading of iron or other nutrients to the oceans to increase the amount of phytoplankton to consume CO2. Or something else even more radical. Like Ukraine, these countries have limited budgets and limited options. They will be creative, and not worry too much about the side effects.

It’s a 24/7 technology race without a finish line. The racers are just trying to outrun disaster. And no one knows where that may lead.

SBIRs

  • 9:00 Standup
  • Finish slide deck
  • Server stuff
  • More paper

GPT Agents

  • Add a “withdraw” page and move the about page to home, then informed consent
  • Work on IRB
  • Ping Zach
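The page reshuffle in the first bullet (about page becomes home, then informed consent, plus a new withdraw page) could be captured as a simple route table. This is just a sketch; the path names and the `RouteEntry` shape are assumptions about the app, not its actual routes:

```typescript
// Hypothetical route table for the experiment site after the reshuffle:
// the old "about" content becomes the home page, informed consent follows,
// and a new withdraw page lets participants leave the study.
interface RouteEntry {
  path: string;
  title: string;
}

const routes: RouteEntry[] = [
  { path: "/", title: "About" },            // about content moved to home
  { path: "/consent", title: "Informed Consent" },
  { path: "/withdraw", title: "Withdraw" }, // new page
];
```

In solid.js this table would feed whatever router the app uses, with each entry mapped to a page component.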

Phil 7.26.2023

Visuospatial information foraging describes search behavior in learning latent environmental features

  • In the real world, making sequences of decisions to achieve goals often depends upon the ability to learn aspects of the environment that are not directly perceptible. Learning these so-called latent features requires seeking information about them. Prior efforts to study latent feature learning often used single decisions, used few features, and failed to distinguish between reward-seeking and information-seeking. To overcome this, we designed a task in which humans and monkeys made a series of choices to search for shapes hidden on a grid. On our task, the effects of reward and information outcomes from uncovering parts of shapes could be disentangled. Members of both species adeptly learned the shapes and preferred to select tiles expected to be informative earlier in trials than previously rewarding ones, searching a part of the grid until their outcomes dropped below the average information outcome—a pattern consistent with foraging behavior. In addition, how quickly humans learned the shapes was predicted by how well their choice sequences matched the foraging pattern, revealing an unexpected connection between foraging and learning. This adaptive search for information may underlie the ability in humans and monkeys to learn latent features to support goal-directed behavior in the long run.

SBIRs

  • Morning meeting with Ron about the interns’ paper writing. Got a charge number! I also need to update the lit review slide deck and do a version for the Adobe paper writing
  • Ethics course
  • Ongoing IT fire drill
  • More paper?

Phil 7.25.2023

SBIRs

  • Lots of Sturm und Drang about getting the server set up. Updated the Overleaf with our list of needs
  • Nice progress with the interns. Need to give a talk about lit reviews and writing a scientific paper

GPT agents

  • Good progress on the experiment UI:

Phil 7.24.2023

9:10 – Dentist!

SBIRs

  • Intern stuff? Some, mostly just getting accounts set up
  • MDA meeting. Peter has written a white paper, hope to have time to read it by next time
  • Register for SMD?
  • More paper. Need to chase down some sources. Progress. More rewriting than I expected

GPT Agents

  • See if I can rough out a second page(s) for the user study. Never got around to it.

Phil 7.21.2023

GPT Agents

  • Started putting together the ContextTest website
  • We’re using solid.js for the frame. Kind of figuring it out
  • Also some good discussion on using an in-house Llama model
  • Progress! More on Tuesday

SBIRs

  • USNA stuff with Ron

Phil 7.20.2023

GPT Agents

  • Looking for a straightforward way to build a webapp that has a simple front end (submit email page, then page protected by a GUID that has the IRB statement and the experiment(s)). The last time I did this was in 2015 with Angular. The code still works, amazingly enough, so I could just try reusing that. It looks like I have all the books still, so maybe that’s not the worst answer.
  • Got ahold of Zach, and we’ll put together something more modern using Supabase and solid.js. This should be fun!
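As a sketch of the flow above (submit an email, then gate the IRB statement and experiment pages behind a GUID), a minimal helper might look like this. The `makeInvite` name, the `Invite` shape, and the `/experiment/` path are assumptions for illustration, not the real app:

```typescript
import { randomUUID } from "crypto";

// Hypothetical shape of an invite record: the participant's email plus
// the GUID that protects the consent/experiment pages.
interface Invite {
  email: string;
  guid: string;
  url: string;
}

// Sketch: generate a GUID for a submitted email and build the protected
// link. In the real app this record would be stored (e.g. via Supabase)
// and the link emailed to the participant.
function makeInvite(email: string, baseUrl: string): Invite {
  const guid = randomUUID();
  return { email, guid, url: `${baseUrl}/experiment/${guid}` };
}
```

A page request would then look up the GUID before rendering the informed consent and experiment pages.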

SBIRs

  • 9:00 standup
  • 11:30 touchpoint
  • More paper. Good, albeit halting progress. I should be able to finish the analysis section for vignette 1 tomorrow

Phil 7.19.2023

How is ChatGPT’s behavior changing over time?

  • GPT-3.5 and GPT-4 are the two most widely used large language model (LLM) services. However, when and how these models are updated over time is opaque. Here, we evaluate the March 2023 and June 2023 versions of GPT-3.5 and GPT-4 on four diverse tasks: 1) solving math problems, 2) answering sensitive/dangerous questions, 3) generating code and 4) visual reasoning. We find that the performance and behavior of both GPT-3.5 and GPT-4 can vary greatly over time. For example, GPT-4 (March 2023) was very good at identifying prime numbers (accuracy 97.6%) but GPT-4 (June 2023) was very poor on these same questions (accuracy 2.4%). Interestingly, GPT-3.5 (June 2023) was much better than GPT-3.5 (March 2023) in this task. GPT-4 was less willing to answer sensitive questions in June than in March, and both GPT-4 and GPT-3.5 had more formatting mistakes in code generation in June than in March. Overall, our findings show that the behavior of the same LLM service can change substantially in a relatively short amount of time, highlighting the need for continuous monitoring of LLM quality.
  • https://huggingface.co/chat/

Tag up with Adti to discuss UMBC HCC PhD program

SBIRs

  • Went up to NJ yesterday for a meeting and IdenTrust stuff. Got everything done!
  • Also had a good discussion with Aaron about the Scale paper and how to tie it into a wargame demo. Found this Wikipedia entry and this pdf as well.
  • Need to prep for ML capabilities meeting
  • USNA Intern prep

GPT Agents

  • 4:00 UMBC meeting

Phil 7.17.2023

Tasks

  • Check with Rheena on guarantor stuff
  • Did what I could about car rental

SBIRs

  • Demo slides
  • Weekly meeting. Check overleaf before
  • Work on Scale paper

GPT Agents

  • Sent out experiment email
  • Ping Zach about test website?

Phil 7.14.2023

This is wild:

Severe Depressive Symptoms Exacerbate the Relationship Between Conspiracy Beliefs and Voting for Election Doubters

  • Two of the most significant concerns about the contemporary United States are the erosion of democratic institutions and the increase in rates of depression. The researchers provide evidence linking these phenomena. They use a survey (N=11,517) to show a relationship between COVID-19 conspiracy beliefs and the endorsement of the 2020 election fraud claim as well as voting, in 2022, for gubernatorial candidates who cast doubt on the 2020 election results. The authors further predict and find that the presence of severe depressive symptoms exacerbates these relationships. An increase in depression among COVID-19 conspiracy believers is positively associated with voters casting their ballots for candidates who question the foundation of democratic legitimacy. The results highlight how interventions to address mental health can improve the country’s political health.

SBIRs

  • JSC meeting at 10:00

GPT Agents

  • Write up experiment email
  • Add human-readable text to sources
-- View that joins each parsed-text row to the human-readable name of its source
create or replace view parsed_text_view as
    select t.id, t.source, s.text_name, t.parsed_text
    from table_parsed_text t
        inner join table_source s on t.source = s.id;

Phil 7.13.2023

Vacation is over. Back to the mines.

SBIRs

  • 9:15 standup
  • Need to look at what I have as deliverables – MDA only
  • Start writing abstract for Emerging Techniques forum – got permission
  • Reply to Chris K – done
  • IdenTrust? Yup. Form is done and now needs to be notarized
  • MDA subject meeting

GPT Agents

  • Write up resume experiment thoughts
  • Need to get back to mapmaking