Phil 4.21.21

Here are some silly coding quirks that I had to dig around to find the answers to.

import requests
from datetime import datetime, timedelta

headers = {"User-Agent": "someone@someplace.com"}
page_title = "Exergaming"
yesterday = datetime.today() - timedelta(days=1)
last_week = yesterday - timedelta(days=7)
yester_s = yesterday.strftime("%Y%m%d")
lastw_s = last_week.strftime("%Y%m%d")
s = "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/en.wikipedia/all-access/user/{}/daily/{}/{}".format(page_title, lastw_s, yester_s)
print(s)
r = requests.get(s, headers=headers)
  • First, without that ‘headers’ element you get a 404. Note that you do not need to spoof a browser header; a simple contact string like the one above is all you need.
  • Second, when storing values with pymysql that involve strings that need to be escaped, you can now use parameter binding, which is very cool. BUT! Even though the placeholder is ‘%s’, that doesn’t mean you also use %d and %f; every value is bound with %s, regardless of type. Here’s an example that uses strings, floats, and ints:
sql = "insert into gpt_maps.table_experiment (date, description, engine, max_tokens, temperature, top_p, logprobs, num_responses, presence_penalty, frequency_penalty)" \
      " values(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)"
values = (date_str, description, self.engine, self.max_tokens, self.temperature, self.top_p, self.logprobs, self.num_responses, self.presence_penalty, self.frequency_penalty)
msi.write_sql_values_get_row(sql, values)

And here’s the call that does the actual writing to the db:

def write_sql_values_get_row(self, sql:str, values:Tuple):
    # self.connection is an open pymysql connection
    try:
        with self.connection.cursor() as cursor:
            cursor.execute(sql, values)  # pymysql escapes each bound value
            id = cursor.lastrowid
            print("row id = {}".format(id))
            return id
    except pymysql.err.InternalError as e:
        print("{}:\n\t{}".format(e, sql))
        return -1

The Power of Scale for Parameter-Efficient Prompt Tuning

  • In this work, we explore “prompt tuning”, a simple yet effective mechanism for learning “soft prompts” to condition frozen language models to perform specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft prompts are learned through backpropagation and can be tuned to incorporate signal from any number of labeled examples. Our end-to-end learned approach outperforms GPT-3’s “few-shot” learning by a large margin. More remarkably, through ablations on model size using T5, we show that prompt tuning becomes more competitive with scale: as models exceed billions of parameters, our method “closes the gap” and matches the strong performance of model tuning (where all model weights are tuned). This finding is especially relevant in that large models are costly to share and serve, and the ability to reuse one frozen model for multiple downstream tasks can ease this burden. Our method can be seen as a simplification of the recently proposed “prefix tuning” of Li and Liang (2021), and we provide a comparison to this and other similar approaches. Finally, we show that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer, as compared to full model tuning.

GPT-Agents

  • Start building out GraphToDB.
    • Use Wikipedia to verify that a node name exists before adding it
    • Check that a (directed) edge exists before adding it; if it does, increment the weight instead (a sketch of this logic follows this list)
  • Digging into what metaphors are:
    • Understanding Figurative Language: From Metaphor to Idioms
      • This book examines how people understand utterances that are intended figuratively. Traditionally, figurative language such as metaphors and idioms has been considered derivative from, and more complex than, ostensibly straightforward literal language. Glucksberg argues that figurative language involves the same kinds of linguistic and pragmatic operations that are used for ordinary, literal language. Glucksberg’s research in this book is concerned with ordinary language: expressions that are used in daily life, including conversations about everyday matters, newspaper and magazine articles, and the media. Metaphor is the major focus of the book. Idioms, however, are also treated comprehensively, as is the theory of conceptual metaphor in the context of how people understand both conventional and novel figurative expressions. A new theory of metaphor comprehension is put forward, and evaluated with respect to competing theories in linguistics and in psychology. The central tenet of the theory is that ordinary conversational metaphors are used to create new concepts and categories. This process is spontaneous and automatic. Metaphor is special only in the sense that these categories get their names from the best examples of the things they represent. Thus, the literal “shark” can be a metaphor for any vicious and predatory being, from unscrupulous salespeople to a murderous character in The Threepenny Opera. Because the same term, e.g., “shark,” is used both for its literal referent and for the metaphorical category, as in “My lawyer is a shark,” we call it the dual-reference theory. The theory is then extended to two other domains: idioms and conceptual metaphors. The book presents the first comprehensive account of how people use and understand metaphors in everyday life.
    • The contemporary theory of metaphor — now new and improved!
      • This paper outlines a multi-dimensional/multi-disciplinary framework for the study of metaphor. It expands on the cognitive linguistic approach to metaphor in language and thought by adding the dimension of communication, and it expands on the predominantly linguistic and psychological approaches by adding the discipline of social science. This creates a map of the field in which nine main areas of research can be distinguished and connected to each other in precise ways. It allows for renewed attention to the deliberate use of metaphor in communication, in contrast with non-deliberate use, and asks the question whether the interaction between deliberate and non-deliberate use of metaphor in specific social domains can contribute to an explanation of the discourse career of metaphor. The suggestion is made that metaphorical models in language, thought, and communication can be classified as official, contested, implicit, and emerging, which may offer new perspectives on the interaction between social, psychological, and linguistic properties and functions of metaphor in discourse.
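
A minimal sketch of the GraphToDB add-edge logic described above (networkx and Wikipedia's REST summary endpoint are stand-ins I chose; the helper names are hypothetical, not the actual class):

import networkx as nx
import requests

def page_exists(title: str) -> bool:
    # Wikipedia's REST summary endpoint returns 404 for pages that don't exist
    url = "https://en.wikipedia.org/api/rest_v1/page/summary/{}".format(title)
    return requests.get(url, headers={"User-Agent": "someone@someplace.com"}).status_code == 200

def add_weighted_edge(g: nx.DiGraph, source: str, target: str):
    # verify each node name against Wikipedia before adding it
    for name in (source, target):
        if name not in g and page_exists(name):
            g.add_node(name)
    # if the directed edge is already there, bump its weight instead of re-adding
    if g.has_edge(source, target):
        g[source][target]["weight"] += 1
    elif source in g and target in g:
        g.add_edge(source, target, weight=1)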

SBIR

Phil 4.20.21

Big news this afternoon:

Had an interesting talk with Aaron last night about using FB microtargeting as a mechanism to provide “deprogramming” content to folks who are going down conspiracy rabbit holes. We could also use the GPT-3

Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2

  • Thinking aloud is an effective meta-cognitive strategy human reasoners apply to solve difficult problems. We suggest to improve the reasoning ability of pre-trained neural language models in a similar way, namely by expanding a task’s context with problem elaborations that are dynamically generated by the language model itself. Our main result is that dynamic problem elaboration significantly improves the zero-shot performance of GPT-2 in a deductive reasoning and natural language inference task: While the model uses a syntactic heuristic for predicting an answer, it is capable (to some degree) of generating reasoned additional context which facilitates the successful application of its heuristic. We explore different ways of generating elaborations, including few-shot learning, and find that their relative performance varies with the specific problem characteristics (such as problem difficulty). Moreover, the effectiveness of an elaboration can be explained in terms of the degree to which the elaboration semantically coheres with the corresponding problem. In particular, elaborations that are most faithful to the original problem description may boost accuracy by up to 24%.
  • OCTIS (Optimizing and Comparing Topic models Is Simple) aims at training, analyzing and comparing Topic Models, whose optimal hyper-parameters are estimated by means of a Bayesian Optimization approach.

GPT-Agents

  • Putting together a scratch file that gets page view data from Wikimedia. My plan is to use that value to determine the weight of the node
    • Stackoverflow post
    • This page documents the Pageview API (v1), a public API developed and maintained by the Wikimedia Foundation that serves analytical data about article pageviews of Wikipedia and its sister projects. With it, you can get pageview trends on specific articles or projects; filter by agent type or access method, and choose different time ranges and granularities; you can also get the most viewed articles of a certain project and timespan, and even check out the countries that visit a project the most. Have fun!
    • Wikimedia REST API: This API provides cacheable and straightforward access to Wikimedia content and data, in machine-readable formats.
  • 3:00 Meeting
  • Paper is pending on ArXiv!

SBIR

  • Asked Rukan to save off some well-trained models to play with

Phil 4.19.21

Performance Trailer Sales

Today I learned about the AoE (Anywhere on Earth, UTC-12) “timezone”. The latest possible midnight is always at Baker Island, US Minor Outlying Islands.

GPT-Agents

  • Workshop paper is done!
  • Starting to work seriously on mapping

SBIR

  • 2:00 meeting to figure out what to do for phase 2?
  • Sync up with Rukan and see how the loss function is going

Phil 4.15.21

https://twitter.com/jure/status/1382743017283493889

GPT-Agents

Dr Fauci:
	[0]: record shows that statistics shows that 340 million Americans died due to It!#COVID19 is @LamestreamMedia!REALLY @JoeBiden called the #coronavirus the #ChinaVirus created by Dr Fauci
	[1]: @WHO]]][[[https://t This is the #CommunistParty #ChinaVirus ;&amp continue to lie about part of their creation of a COVID-19 Covid19,vaccine was developed by Dr Fauci
	[2]: ,the study on the #Coronavirus response to the #COVID19 response to #ChinaVirus.#China Dr]]][[[https://t.co/sCiJ5h0mRz Fauci
	[3]: The #FakeNews #WHOLiedPeopleDied is #FakeNews ,calling the #CCPVirus source of #ChinaVirus it quickly spread and spread this propaganda and panic spread by the Dr?#FakeNewsMedia Fauci
	[4]: can� work for the #ChinaVirus.#ChineseVirus So]]][[[#WuhanVirus the virus was made from in,China in,China China :Coffee Fauci,President Dr Fauci
	[5]: This]]][[[https://t.co/ is what that if China is calling it the #WuhanFlu or #ChineseVirus The?#WuhanCoronavirus people need to know that Dr Fauci
	[6]: Coronavirus-China-China-Coronavirus-China !Joe!?!Virus-China Joe!Joe Joe!Joe!Joe!?!Fauci Dr!Fauci!Sleeping Fauci
	[7]: Covid-19]] Vaccine is cured for China?????Coronavirus #ChinaVirus?China??Virus @WHO Covid-19 Vaccine is one out of @WHO for just giving a vaccine by Dr Fauci
	[8]: https://t.co/rZ 14 Mar 2020 https://t.co/c4vWxnQw0a 13 Mar 2020 https://t.co/0dx0Rp7tCe Dr Fauci
	[9]: #ChinaVirus #ChineseVirus @JRubin]]][[[#WuhanVirus @BorisJohnson @TuckerCarlson @ChrisCuomo.Dr @POTUS @JoeBiden Dr.Dr Fauci

Donald Trump:
	[0]: qt- by #BorderObserver @JackPosobiec]]][[[https://t.co/v2G8m1sE2o @marklevinshow #ChinaVirus #coronavirus Donald Trump
	[1]: #CO #ChinaVirus qt-covid19-news-056 by This]]][[[#BorderObserver China.time pandemics,lockdowns,hiding,lying,lied ;&amp Donald's Trump
	[2]: can’t the spread of #coronavirus so they can spread this Thanks.pandemic for will?this take out #COVID?this #COVID19Pandemic #ChinaVirus #covid19 Donald Trump
	[3]: #China #coronavirus @POTUS]]][[[#COVID19 this is all of the #CoronaVirus #China that #ThousandsDied thousands could die from #ChinaVirus #Trump’s Donald Trump
	[4]: #LamestreamMedia!DISAPPE says #ChinaVirus If.spiking #FakeNewsMedia continue these claims ;&amp states use corrupt @POTUS,#MailinBallots to delay Donald.#Election2020 Trump
	[5]: More]]][[[https://t.co/JnUZQgL than more dead from the #China's response to #ChinaVirus.this trying to tell that more Americans died from Trump,Covid19 Donald Trump
	[6]: @YouTube There was proof that the world created the outbreak of a outbreak in #Coronavirus.America #WHOLiedPeopleDied]]][[[https://t.co/2eHj7tBqE Donald Trump
	[7]: #ChinaVirus for President but,Trump I am standing against #COVID19 in the The.U.S response to the He.@realDonaldTrump called the #WuhanVirus #ChinaVirus for the President Trump J Donald Trump
	[8]: How]]][[[https://t you will get from #ChinaVirus #Coronavirus who everyone wants to call it a #ChineseVirus #CCPVirus that #Chinese will pay for the #ChinaVirus Donald Trump
	[9]: #ChinaV @SenSchumer]]][[[https://t.co/uOc1PtLp2Z #DemocratsHateAmerica #CoronaVirusUpdates #ChinaVirus #CCPVirus Donald Trump
  • The Titan RTX box is still working on this dataset, while my GTX1070 box finished in an hour? Not sure what is going on
  • It looks like I have some mismatched versions of CUDA drivers/TF/Torch installed. Need to do a cleanup tomorrow.
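
A quick way to see the mismatch (the version attributes below are standard PyTorch/TensorFlow calls; interpreting the output is left to the cleanup):

import torch
import tensorflow as tf

# What PyTorch was built against, and whether it can see the GPU
print("torch:", torch.__version__, "built for CUDA:", torch.version.cuda,
      "GPU available:", torch.cuda.is_available())
# What TensorFlow reports
print("tensorflow:", tf.__version__, "GPUs:", tf.config.list_physical_devices("GPU"))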

SBIR

  • 9:15 Standup
  • 1:30 GPT Meeting

Phil 4.14.21

GPT Agents

  • Generated a reversed version of the chinavirus corpora and am currently training a model (a sketch of the reversal is below this list). The Huggingface API has changed some, and it seems very slow?
  • Lit review
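
For reference, a minimal sketch of the word-order reversal (my reconstruction of the idea, not the actual corpus-generation code):

def reverse_words(line: str) -> str:
    # reverse token order; applying it twice restores the original line
    return " ".join(reversed(line.split()))

print(reverse_words("a list of prompts that should generate this answer"))
# -> "answer this generate should that prompts of list a"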

SBIR

  • Assisting Rukan
  • 10:00 Meeting

Book

  • 5:30 Editing with Michelle

JuryRoom

  • 7:00 Meeting

Phil 4.13.21

GPT Agents

  • Working on paper – barring the lit review, I’m at a first draft, I think
    • Still need to do the abstract! Done!
  • 3:00 Meeting today
    • Banged away on a lot of issues. I need to put together a lit review by tomorrow COB. The due date is the 19th, though!
  • I have a crazy idea for prompt generation. I think I’m going to train a model on text with the word order reversed. Then an ‘answer’ fed into the reversed system should generate a set of prompts that have a high chance of producing that answer once re-reversed.
  • Fixed all the weird parsing issues for POS strings

Book

  • Need to set up meeting for April 30th or May 7th at 1:15 pm PT (4:15 pm ET)

SBIR

  • 9:15 Sprint scheduling

Phil 4.8.21

Print and mail taxes today

How many data points is a prompt worth?

  • Prompts are interesting because they allow a practitioner to give information to the model, although in a very different fashion from standard ML supervision. In our NAACL 2021 paper with Sasha Rush, we investigate prompt-based fine-tuning, a promising alternative fine-tuning approach, and find that prompts often yield an edge over the standard approach. As we interpret a prompt as additional human-crafted information for the model, we measure that edge in terms of data points and quantify: how many data points is a prompt worth?

SBIR

  • 9:15 IRAD standup
  • 11:00 Meeting with Orest
  • Make slide for Aaron
  • More work with Rukan

Book

GPT Agents

  • More writing

Phil 4.7.21

Two perspectives on large language model (LLM) ethics

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜

  • The past 3 years of work in NLP have been characterized by the development and deployment of ever larger language models, especially for English. BERT, its variants, GPT-2/3, and others, most recently Switch-C, have pushed the boundaries of the possible both through architectural innovations and through sheer size. Using these pretrained models and the methodology of fine-tuning them for specific tasks, researchers have extended the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks for English. In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.

Alignment of Language Agents

  • For artificial intelligence to be beneficial to humans the behaviour of AI agents needs to be aligned with what humans want. In this paper we discuss some behavioural issues for language agents, arising from accidental misspecification by the system designer. We highlight some ways that misspecification can occur and discuss some behavioural issues that could arise from misspecification, including deceptive or manipulative language, and review some approaches for avoiding these issues.

Book

GPT Agents

  • Move token workbooks into the right place – done. Recalculated a few. Also created a folder for modified spreadsheets so that I can find them later!
  • Write! Did some, but mostly made charts.

SBIR

  • 10:00 Meeting
  • More model tuning with Rukan. Much better luck with MLPs! Going to rethink how an attention head should be attached to a linear layer; one possible shape is sketched below
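
Purely as a sketch of the design question (this is not the project's actual architecture, and the dimensions are made up), one way to hang a linear readout off an attention head in PyTorch:

import torch
from torch import nn

class AttnThenLinear(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4, d_out: int = 1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads)
        self.linear = nn.Linear(d_model, d_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x is (seq, batch, d_model): self-attend, mean-pool over the
        # sequence, then project through the linear layer
        attended, _ = self.attn(x, x, x)
        return self.linear(attended.mean(dim=0))

model = AttnThenLinear()
print(model(torch.randn(10, 8, 64)).shape)  # torch.Size([8, 1])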

JuryRoom

  • 7:00 Meeting

Phil 4.6.21

Need to agree to re-review

GPT Agents

  • Continued to adjust the schema. Probe now stores the full raw json response as a string.
  • Added logit (log probability) storage to the raw values. A logprob of 0 means exp(0) = 1.0, or 100%; anything less than zero is a lower probability (see the snippet after this list)
  • Continuing to work on the paper
  • 3:00 Meeting
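
For reference, the logprob-to-probability conversion (the example values are made up):

import math

# exp() of a log probability recovers the probability:
# exp(0) = 1.0 (100%), and more negative logprobs mean less likely tokens
for logprob in (0.0, -0.105, -2.3):
    print("logprob {:+.3f} -> p = {:.3f}".format(logprob, math.exp(logprob)))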

SBIR

  • More Transformer work. Need to save out some screenshots for slides this time!
  • 9:15 Standup

Phil 4.5.21

GPT Agents

  • Made some more progress on the mapping framework. Stubbed out some tables for storing the node and edge information, and started to look at probes that can create long jumps to other sections of the space, e.g.
There are also some countries that are very far away from the United States. Here's a short list, starting with the most distant, separated by commas:
  • More working on the paper

SBIR

  • Got the Transformer doing its thing. It looks like it might work!
  • Having some difficulty getting it to behave with batches, though

Phil 4.4.21

Happy end-of-Passover, Easter!

Playing with the GPT mapping, and I’ve gotten queries running with POS processing. Here’s the prompt:

"A list of the countries that are nearest the United States, separated by comma:"

Here’s the response:

Canada, Mexico, Bahamas, Dominican Republic, Haiti, Jamaica, Cuba, Trinidad and Tobago, Puerto Rico, Barbados, Antigua and Barbuda, Saint Lucia, Saint Vincent and the Grenadines, Grenada, Domin

And here it is processed by Flair (a sketch of the tagging call follows the output):

{'text': 'Canada', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Mexico', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Bahamas', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Dominican', 'tag': 'NNP'}
{'text': 'Republic', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Haiti', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Jamaica', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Cuba', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Trinidad', 'tag': 'NNP'}
{'text': 'and', 'tag': 'CC'}
{'text': 'Tobago', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Puerto', 'tag': 'NNP'}
{'text': 'Rico', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Barbados', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Antigua', 'tag': 'NNP'}
{'text': 'and', 'tag': 'CC'}
{'text': 'Barbuda', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Saint', 'tag': 'NNP'}
{'text': 'Lucia', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Saint', 'tag': 'NNP'}
{'text': 'Vincent', 'tag': 'NNP'}
{'text': 'and', 'tag': 'CC'}
{'text': 'the', 'tag': 'DT'}
{'text': 'Grenadines', 'tag': 'NNPS'}
{'text': ',', 'tag': ','}
{'text': 'Grenada', 'tag': 'NNP'}
{'text': ',', 'tag': ','}
{'text': 'Domin', 'tag': 'NNP'}
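
A minimal sketch of the kind of Flair call that produces tags in this shape (this assumes the flair 0.8 API; response_text stands in for the full GPT response above):

from flair.data import Sentence
from flair.models import SequenceTagger

response_text = "Canada, Mexico, Bahamas"  # stand-in for the full response
tagger = SequenceTagger.load("pos")  # Flair's pretrained English POS tagger
sentence = Sentence(response_text)
tagger.predict(sentence)
for token in sentence:
    print({'text': token.text, 'tag': token.get_tag('pos').value})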

I am very excited!