Monthly Archives: April 2021

Phil 4.8.21

Print and mail taxes today

How many data points is a prompt worth?

  • Prompts are interesting because they allow a practitioner to give information to the model, although in a very different fashion from standard ML supervision. In our NAACL 2021 paper with Sasha Rush, we investigate prompt-based fine-tuning, a promising alternative fine-tuning approach, and find that prompts often yield an edge over the standard approach. As we interpret a prompt as additional human-crafted information for the model, we measure that edge in terms of data points and quantify: how many data points is a prompt worth?

SBIR

  • 9:15 IRAD standup
  • 11:00 Meeting with Orest
  • Make slide for Aaron
  • More work with Rukan

Book

GPT Agents

  • More writing

Phil 4.7.21

Two perspectives on large language model (LLM) ethics

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜

  • The past 3 years of work in NLP have been characterized by the development and deployment of ever larger language models, especially for English. BERT, its variants, GPT-2/3, and others, most recently Switch-C, have pushed the boundaries of the possible both through architectural innovations and through sheer size. Using these pretrained models and the methodology of fine-tuning them for specific tasks, researchers have extended the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks for English. In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.

Alignment of Language Agents

  • For artificial intelligence to be beneficial to humans the behaviour of AI agents needs to be aligned with what humans want. In this paper we discuss some behavioural issues for language agents, arising from accidental misspecification by the system designer. We highlight some ways that misspecification can occur and discuss some behavioural issues that could arise from misspecification, including deceptive or manipulative language, and review some approaches for avoiding these issues.

Book

GPT Agents

  • Move token workbooks into the right place – done. Recalculated a few. Also created a folder for modified spreadsheets so that I can find them later!
  • Write! Did some, but mostly made charts.

SBIR

  • 10:00 Meeting
  • More model tuning with Rukan. Much better luck with MLPs! Going to rethink how an attention head should be attached to a linear layer; a rough sketch of one option is below.
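A minimal sketch of one way to wire a single attention head in front of a linear layer, in PyTorch. The dimensions (sequence of 5, embedding of 256) and the mean-pooling step are illustrative assumptions, not the actual model:

import torch
from torch import nn

class AttentionToLinear(nn.Module):
    # One self-attention head feeding a linear output layer
    def __init__(self, embed_dim=256, out_dim=256):
        super().__init__()
        # nn.MultiheadAttention expects (seq_len, batch, embed_dim) by default
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=1)
        self.linear = nn.Linear(embed_dim, out_dim)

    def forward(self, x):
        # x: (seq_len, batch, embed_dim)
        attended, _ = self.attn(x, x, x)   # same shape as x
        pooled = attended.mean(dim=0)      # (batch, embed_dim), averaged over the sequence
        return self.linear(pooled)         # (batch, out_dim)

model = AttentionToLinear()
print(model(torch.randn(5, 8, 256)).shape)  # torch.Size([8, 256])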

JuryRoom

  • 7:00 Meeting

Phil 4.6.21

Need to agree to re-review

GPT Agents

  • Continued to adjust the schema. The probe now stores the full raw JSON response as a string.
  • Added logprob storage to the raw values: exp(0) = 1.0, or 100%, and any negative logprob maps to a lower probability. A sketch of both follows this list.
  • Continuing to work on the paper
  • 3:00 Meeting
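A sketch of the raw-JSON storage and the logprob-to-probability conversion, assuming an OpenAI-style completion response, MySQL-style placeholders, and an invented raw_json column on table_output:

import json
import math

def store_raw_response(cursor, probe, response):
    # Keep the full raw JSON response as a string alongside the probe
    cursor.execute(
        "insert into table_output (probe, tag, raw_json) values (%s, 'raw', %s)",
        (probe, json.dumps(response)))

    # Token logprobs are log probabilities: exp(0) = 1.0 (100%),
    # and any negative logprob maps to a probability below 1.0
    logprobs = response["choices"][0]["logprobs"]["token_logprobs"]
    return [math.exp(lp) for lp in logprobs if lp is not None]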

SBIR

  • More Transformer work. Need to save out some screenshots for slides this time!
  • 9:15 Standup

Phil 4.5.21

GPT Agents

  • Made some more progress on the mapping framework. Stubbed out some tables for storing the node and edge information (a schema sketch follows this list), and started to look at probes that can create long jumps to other sections of the space, e.g.:
"There are also some countries that are very far away from the United States. Here’s a short list, starting with the most distant, separated by commas:"
  • More working on the paper
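For reference, a sketch of what the stubbed node and edge tables might look like. The table and column names are invented for illustration and the DDL is MySQL-flavored, wrapped in Python strings:

# Hypothetical DDL for the mapping framework's node and edge tables
NODE_TABLE = """
create table if not exists map_node (
    node_id int auto_increment primary key,
    experiment_id int,
    name varchar(255),   -- e.g. 'Canada'
    probe text           -- the prompt that produced this node
);"""

EDGE_TABLE = """
create table if not exists map_edge (
    edge_id int auto_increment primary key,
    source_id int,       -- map_node.node_id
    target_id int,       -- map_node.node_id
    weight float         -- e.g. token probability or visit count
);"""

def create_tables(cursor):
    cursor.execute(NODE_TABLE)
    cursor.execute(EDGE_TABLE)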

SBIR

  • Got the Transformer doing its thing. It looks like it might work!
  • Having some difficulty getting it to behave with batches, though

Phil 4.4.21

Happy end-of-Passover, Easter!

Playing with the GPT mapping, and I’ve gotten queries running with POS processing. Here’s the prompt:

"A list of the countries that are nearest the United States, separated by comma:"

Here’s the response:

Canada, Mexico, Bahamas, Dominican Republic, Haiti, Jamaica, Cuba, Trinidad and Tobago, Puerto Rico, Barbados, Antigua and Barbuda, Saint Lucia, Saint Vincent and the Grenadines, Grenada, Domin

And here it is processed by Flair:

{'text': 'Canada', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Mexico', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Bahamas', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Dominican', 'tag': 'NNP'}
{'text': 'Republic', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Haiti', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Jamaica', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Cuba', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Trinidad', 'tag': 'NNP'}
{'text': 'and', 'tag': 'CC'}
{'text': 'Tobago', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Puerto', 'tag': 'NNP'}
{'text': 'Rico', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Barbados', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Antigua', 'tag': 'NNP'}
{'text': 'and', 'tag': 'CC'}
{'text': 'Barbuda', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Saint', 'tag': 'NNP'}
{'text': 'Lucia', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Saint', 'tag': 'NNP'}
{'text': 'Vincent', 'tag': 'NNP'}
{'text': 'and', 'tag': 'CC'}
{'text': 'the', 'tag': 'DT'}
{'text': 'Grenadines', 'tag': 'NNPS'}
{'text': ',', 'tag': ','}
{'text': 'Grenada', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Domin', 'tag': 'NNP'}
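For reference, output like the above can be produced with something like the following. It assumes the 2021-era Flair API (the "pos" model alias and get_tag access), so treat the details as approximate:

from flair.data import Sentence
from flair.models import SequenceTagger

# Load the English part-of-speech tagger
tagger = SequenceTagger.load("pos")

response = "Canada, Mexico, Bahamas, Dominican Republic, Haiti, Jamaica, Cuba"
sentence = Sentence(response)
tagger.predict(sentence)

for token in sentence:
    print({"text": token.text, "tag": token.get_tag("pos").value})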

I am very excited!

Phil 4.2.21

I think milling = fashion

GPT Agents

  • Extract the sentiment into a workbook. It looks like it should be pretty easy: run a query like the one below, then dump the result to a spreadsheet (sketch after this list):
select count(*) as count, probe from table_output where experiment_id = 89 and tag = 'raw' and sent_label = 'NEGATIVE' group by probe order by probe;
  • Continue on paper, upload to Overleaf, too
  • Meeting at 5:00
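A sketch of one way to pull those counts into a workbook, assuming pandas, an existing database connection, and the table/column names from the query above:

import pandas as pd

QUERY = """
select count(*) as count, probe from table_output
where experiment_id = 89 and tag = 'raw' and sent_label = 'NEGATIVE'
group by probe order by probe;
"""

def sentiment_to_workbook(connection, path="negative_counts.xlsx"):
    # Run the aggregation on the database and dump the result to a spreadsheet
    df = pd.read_sql(QUERY, connection)
    df.to_excel(path, index=False)
    return df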

SBIR

  • More work with Rukan. Need to figure out why a 5×256 is going in but a 256×256 is coming out; a shape-debugging sketch follows this list. We could try an attention layer first. Let’s see how things go.
  • Set up a time to discuss research with Orest
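One generic way to chase the 5×256 in / 256×256 out surprise is to print the shape at every layer with forward hooks. The model below is just a stand-in:

import torch
from torch import nn

def add_shape_hooks(model):
    # Print input/output shapes for every leaf module as data flows through
    def hook(module, inputs, output):
        in_shapes = [tuple(t.shape) for t in inputs if torch.is_tensor(t)]
        out_shape = tuple(output.shape) if torch.is_tensor(output) else type(output)
        print(f"{module.__class__.__name__}: in {in_shapes} -> out {out_shape}")
    for m in model.modules():
        if len(list(m.children())) == 0:   # leaf modules only
            m.register_forward_hook(hook)

# Stand-in model to show the idea
model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))
add_shape_hooks(model)
model(torch.randn(5, 256))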

Book

  • 2:00 Meeting with Michelle

Phil 4.1.21

Exploring the effects of algorithm-driven news sources on political behavior and polarization

  • Do algorithm-driven news sources have different effects on political behavior when compared to non-algorithmic news sources? Media companies compete for our scarce time and attention; one way they do this is by leveraging algorithms to select the most appealing content for each user. While algorithm-driven sites are increasingly popular sources of information, we know very little about the effects of algorithmically determined news at the individual level. The objective of this paper is to define and measure the effects of algorithmically generated news. We begin by developing a taxonomy of news delivery by distinguishing between two types of algorithmically generated news, socially driven and user-driven, and contrasting these with non-algorithmic news. We follow with an exploratory analysis of the effects of these news delivery modes on political behavior, specifically political participation and polarization. Using two nationally representative surveys, one of young adults and one of the general population, we find that getting news from sites that use socially driven or user-driven algorithms to generate content corresponds with higher levels of political participation, but that getting news from non-algorithmic sources does not. We also find that neither non-algorithmic nor algorithmically determined news contribute to higher levels of partisan polarization. This research helps identify important variation in the consequences of news consumption contingent on the mode of delivery.

GPT Agents

  • Finished POS tokenizing terms
  • Started sentiment (POSITIVE/NEGATIVE) on terms – done; a Flair sentiment sketch follows this list
  • Stubbed out the POS and sentiment for the token full_string – done
  • Working on the paper – progress
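A sketch of the POSITIVE/NEGATIVE sentiment pass with Flair; the "en-sentiment" model name matches the labels stored in the database, but treat the API details as approximate:

from flair.data import Sentence
from flair.models import TextClassifier

# Load Flair's English sentiment classifier (labels: POSITIVE / NEGATIVE)
classifier = TextClassifier.load("en-sentiment")

def sentiment_of(text):
    sentence = Sentence(text)
    classifier.predict(sentence)
    label = sentence.labels[0]
    return label.value, label.score   # e.g. ('NEGATIVE', 0.98)

print(sentiment_of("This is terrible news"))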

SBIR

  • 2:00 Standup
  • 2:00 VDI Ubuntu