A lot of progress yesterday. I showed Jimmy the current state of things and he suggested making the error counting show just one item at a time. That should work nicely: the low-token prompt could go out first, and we could work on its result while waiting for the context prompt to finish.
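A minimal sketch of that overlap, assuming Python asyncio and a stand-in run_prompt coroutine in place of the real LLM calls (names and delays are illustrative):

```python
# Sketch: send the cheap low-token prompt first, then overlap work on its
# result with the slower context prompt. run_prompt is a stand-in for the
# real LLM call; the sleep just simulates token-count latency.
import asyncio

async def run_prompt(name: str, text: str, delay: float) -> str:
    await asyncio.sleep(delay)
    return f"{name} result for: {text}"

async def main() -> None:
    # Kick off the expensive context prompt in the background...
    context_task = asyncio.create_task(
        run_prompt("context", "full context prompt", 3.0))
    # ...while the low-token prompt goes out and comes back first.
    first = await run_prompt("low-token", "error-count prompt", 0.5)
    print("working on:", first)          # e.g., show one error item at a time
    print("then:", await context_task)   # pick up the context result when ready

asyncio.run(main())
```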
Language models (LMs) are pretrained on diverse data sources, including news, discussion forums, books, and online encyclopedias. A significant portion of this data includes opinions and perspectives which, on one hand, celebrate democracy and diversity of ideas, and on the other hand are inherently socially biased. Our work develops new methods to (1) measure political biases in LMs trained on such corpora, along social and economic axes, and (2) measure the fairness of downstream NLP models trained on top of politically biased LMs. We focus on hate speech and misinformation detection, aiming to empirically quantify the effects of political (social, economic) biases in pretraining data on the fairness of high-stakes social-oriented tasks. Our findings reveal that pretrained LMs do have political leanings that reinforce the polarization present in pretraining corpora, propagating social biases into hate speech predictions and misinformation detectors. We discuss the implications of our findings for NLP research and propose future directions to mitigate unfairness.
SBIRs
2:00 BMD status
Sent a bunch of papers over to the interns for the background section
Started on the Q6 report
GPT Agents
8:30 – 9:30 more app development. And have the email domains rippled out yet?
Great progress!
3:00 – 4:00 more app development. Need to get the public version running before the meeting.
Looks like ASRC Federal is going to create a technical fellows program. Need to schedule some time to fill out the application
SBIRs
9:00 Standup
3:00(?) MDA meeting
GPT Agents
More dev. Next is to isolate the UUID and get the LangChain calls working. Nope, worked on getting the UUID checked and placing all the experiment data in a class. Not sexy, but very cool. More work tomorrow
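Roughly what the UUID check and the experiment-data class might look like; a sketch only, since the real class and field names aren't captured in these notes:

```python
# Illustrative only: field names and validation rules are assumptions.
import uuid
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ExperimentData:
    participant_id: str                 # UUID string handed out by the app
    prompt: str = ""
    response: str = ""
    created: datetime = field(default_factory=datetime.now)

    def uuid_ok(self) -> bool:
        """True only if participant_id parses as a valid UUID."""
        try:
            uuid.UUID(self.participant_id)
            return True
        except ValueError:
            return False

exp = ExperimentData(participant_id=str(uuid.uuid4()), prompt="hello")
print(exp.uuid_ok())  # True
```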
Some back and forth with Bob S. on generating data
GPT Agents.
More work on the app. Got the email sending properly, which turned out to be MUCH more complicated than we thought. You need to have a domain that the email can be sent from. Anyway, got that set up but waiting a day for the domain to ripple.
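For reference, the sort of thing involved once the sending domain is live; a sketch using Python's standard smtplib, where the host, addresses, and credentials are placeholders and the actual mail service behind the app isn't recorded here:

```python
# Placeholders throughout: the key point is that the From: address must be on
# a domain the mail service is authorized to send for (hence waiting for DNS
# changes to ripple out).
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "study@example-app-domain.org"   # must live on the verified domain
msg["To"] = "participant@example.com"
msg["Subject"] = "Your study link"
msg.set_content("Here is your link and participant UUID.")

with smtplib.SMTP("smtp.example-app-domain.org", 587) as smtp:
    smtp.starttls()
    smtp.login("study@example-app-domain.org", "app-password")  # placeholder creds
    smtp.send_message(msg)
```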
Got the context root working so the app is live, if not actually working. You can see the current state here
Next is to isolate the UUID and get the LangChain calls working
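A bare-bones version of the kind of LangChain call in question, assuming the 2023-era API (langchain.llms.OpenAI wrapped in an LLMChain); the prompt, model settings, and the way the UUID gets attached are placeholders:

```python
# Requires OPENAI_API_KEY in the environment. Prompt text, model settings,
# and the UUID handling are illustrative, not the app's actual values.
from langchain import LLMChain, PromptTemplate
from langchain.llms import OpenAI

template = PromptTemplate(
    input_variables=["question"],
    template="Answer briefly: {question}",
)
chain = LLMChain(llm=OpenAI(temperature=0), prompt=template)

participant_uuid = "123e4567-e89b-12d3-a456-426614174000"  # supplied by the app
answer = chain.run(question="What is a language model?")
print(participant_uuid, answer)   # stored together as experiment data
```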
Though she didn’t know it at the time, Ms. Kolsky had fallen victim to a new form of travel scam: shoddy guidebooks that appear to be compiled with the help of generative artificial intelligence, self-published and bolstered by sham reviews, that have proliferated in recent months on Amazon.
It looks like COVID might be coming back this fall. Wastewater levels are rising in Europe and the US. Mostly Delaware at the moment
SBIRs
Creating appendices
Starting on email section
Somehow lost the simple_sabotage entry in the database – re-sourcing. Done. Also committed the db. I think the repo was having problems the last time I tried this so that may be the source of my woes.
We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities. An autoregressive large language model (OpenAI’s text-davinci-003) determines if proposed U.S. Congressional bills are relevant to specific public companies and provides explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of novel ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model. It outperforms the baseline of predicting the most common outcome of irrelevance. We also benchmark the performance of the previous OpenAI GPT-3 model (text-davinci-002), which was the state-of-the-art model on many academic natural language tasks until text-davinci-003 was recently released. The performance of text-davinci-002 is worse than the simple baseline. Longer-term, if AI begins to influence law in a manner that is not a direct extension of human intentions, this threatens the critical role that law as information could play in aligning AI with humans. Initially, AI is being used to simply augment human lobbyists for a small portion of their daily tasks. However, firms have an incentive to use less and less human oversight over automated assessments of policy ideas and the written communication to regulatory agencies and Congressional staffers. The core question raised is where to draw the line between human-driven and AI-driven policy influence.
Pretrained large language models (LLMs) are becoming increasingly powerful and ubiquitous in mainstream applications such as being a personal assistant, a dialogue model, etc. As these models become proficient in deducing user preferences and offering tailored assistance, there is an increasing concern about the ability of these models to influence, modify and in the extreme case manipulate user preference adversarially. The issue of lack of interpretability in these models in adversarial settings remains largely unsolved. This work tries to study adversarial behavior in user preferences from the lens of attention probing, red teaming and white-box analysis. Specifically, it provides a bird’s eye view of existing literature, offers red teaming samples for dialogue models like ChatGPT and GODEL and probes the attention mechanism in the latter for non-adversarial and adversarial settings.
SBIRs
9:00 Standup
More paper. Start on the Vignette 2 analysis. Move the extensive examples to the appendix
Create some concept art for the SGPT screens – done!
SBIRs
Work with Zach to connect the back end? Good progress. Stored data to the db and managed to send an email!
Large language models (LLMs) exhibit impressive capabilities in generating realistic text across diverse subjects. Concerns have been raised that they could be utilized to produce fake content with a deceptive intention, although evidence thus far remains anecdotal. This paper presents a case study about a Twitter botnet that appears to employ ChatGPT to generate human-like content. Through heuristics, we identify 1,140 accounts and validate them via manual annotation. These accounts form a dense cluster of fake personas that exhibit similar behaviors, including posting machine-generated content and stolen images, and engage with each other through replies and retweets. ChatGPT-generated content promotes suspicious websites and spreads harmful comments. While the accounts in the AI botnet can be detected through their coordination patterns, current state-of-the-art LLM content classifiers fail to discriminate between them and human accounts in the wild. These findings highlight the threats posed by AI-enabled social bots.
SBIRs
Talk to Zach about SGPT BD case?
Work on the paper. Finished a second pass on the “Gumming up the Works” Vignette. Fixed a bunch of mad writing and generally cleaned things up.
Got my new laptop with card reader. Nice little box! I was just expecting to re-image my old one
Got more account stuff set up. Forgot about GitLab
Weekly MDA meeting. Pete finished his first pass at the white paper, which needs to be fleshed out. We agreed that SEG would make changes this week and then we would take over next week when Aaron gets back
Made a presentation to the interns and talked about goals, human nature, LLMs, and technology
The Biden administration is hunting for malicious computer code it believes China has hidden deep inside the networks controlling power grids, communications systems and water supplies that feed military bases in the United States and around the world, according to American military, intelligence and national security officials.
This quote comes from a Washington Post article on how the Ukraine war is affecting development of AI-powered drones. I think it generalizes more broadly to how disadvantaged groups are driven to embrace alternatives that are outside conventional norms.
Ukraine doesn’t have the ability to fight the much larger Russia head-on. Russia may have issues with corruption and the quality of its weapons, but it has a lot of them. And from Ukraine’s perspective, Russia has an effectively infinite supply of soldiers. So many that they can be squandered.
The West is providing Ukraine with enough weapons to survive, but not enough to attack and win decisively. I’ve read analyses in which experts say that weapons systems are arriving about as fast as Ukraine can incorporate them, but the order of delivery runs from less capable to more capable. They have artillery, but no F-16s, for example.
As a result, Ukraine is having to improvise and adapt. Since it is facing an existential risk, it’s not going to be too picky about the ethics of smart weapons. If AI helps in targeting, great. If Russia is jamming the control signals to drones, then AI can take over. There is a coevolution between the two forces, and the result may very well be cheap, effective AI combat drones that are largely autonomous in the right conditions.
Such technology is cheap and adaptable. Others will use it, and it will slowly trickle down to the level that a lone wolf in a small town can order the parts that can inflict carnage on the local school. Or something else. The problem is that the diffusion of technology and its associated risks are difficult to predict and manage. But the line that leads to this kind of tragedy will have its roots in our decision to starve Ukraine of the weapons that it needed to win quickly.
Of course, Ukraine isn’t the only smaller country facing an existential risk. Many low-lying countries, particularly those nearer the equator, are facing similar risks from climate change – both from killing heat and sea level rise. Technology – as unproven as combat AI – exists for that too. It’s called geoengineering.
We’ve been doing geoengineering for decades, of course. By dumping megatons of carbon dioxide and other compounds into the atmosphere, we are heating our planet and are now arriving at a tipping point where the potential risks are going to become very real and immediate for certain countries. If I were facing the destruction of my country by flooding and heat, I’d be looking at geoengineering very seriously. Particularly since the major economies are not doing much to stop it.
Which means that I expect that we will see efforts like the injection of sulfate aerosols into the upper atmosphere, or cloud brightening, or the spreading of iron or other nutrients to the oceans to increase the amount of phytoplankton to consume CO2. Or something else even more radical. Like Ukraine, these countries have limited budgets and limited options. They will be creative, and not worry too much about the side effects.
It’s a 24/7 technology race without a finish line. The racers are just trying to outrun disaster. And no one knows where that may lead.
SBIRs
9:00 Standup
Finish slide deck
Server stuff
More paper
GPT Agents
Add a “withdraw” page and move the about page to home, then informed consent
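For illustration only, a Flask-style sketch of that page flow; the app's actual framework, route names, and templates aren't specified in these notes:

```python
# Assumed framework (Flask) and template names, just to show the page order:
# about content becomes home, then informed consent, plus a new withdraw page.
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")            # the old "about" content now serves as the home page
def home():
    return render_template("about.html")

@app.route("/consent")     # informed consent follows home
def consent():
    return render_template("consent.html")

@app.route("/withdraw")    # new page letting participants withdraw from the study
def withdraw():
    return render_template("withdraw.html")

if __name__ == "__main__":
    app.run(debug=True)
```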
In the real world, making sequences of decisions to achieve goals often depends upon the ability to learn aspects of the environment that are not directly perceptible. Learning these so-called latent features requires seeking information about them. Prior efforts to study latent feature learning often used single decisions, used few features, and failed to distinguish between reward-seeking and information-seeking. To overcome this, we designed a task in which humans and monkeys made a series of choices to search for shapes hidden on a grid. On our task, the effects of reward and information outcomes from uncovering parts of shapes could be disentangled. Members of both species adeptly learned the shapes and preferred to select tiles expected to be informative earlier in trials than previously rewarding ones, searching a part of the grid until their outcomes dropped below the average information outcome—a pattern consistent with foraging behavior. In addition, how quickly humans learned the shapes was predicted by how well their choice sequences matched the foraging pattern, revealing an unexpected connection between foraging and learning. This adaptive search for information may underlie the ability in humans and monkeys to learn latent features to support goal-directed behavior in the long run.
SBIRs
Morning meeting with Ron about the interns’ paper writing. Got a charge number! I also need to update the lit review slide deck and do a version for the Adobe paper writing