Phil 2.21.2022

Book

  • Had a nice chat with Roger on Friday. We’ll see if that goes anywhere. Also, look at the various academic presses to find one that is aligned with the type of book I’m writing. Lastly, when publishers are at a conference, they aren’t only selling books; they also bring an editor you can talk with.
  • Continuing with the Deep Bias chapter. Mention that not only do we have social dominance biases, we also have story biases, and we anthropomorphize like crazy.

GPT Agents

  • Going to add some hyperparameter adjustments (tokens, Twitter sample times, etc.)
    • Tokens – done

SBIRs

  • More work on the RCSNN/GPT proposal

Phil 2.17.2022

Book

  • Continue on Deep Bias chapter
  • Ping Roger

GPT Agents

  • Make sure phrases work! They do!
  • Automate keyword evaluation for a list of items – done
  • Add a regex field. Parse should produce a keyword list split on the regex – done (see the sketch after this list)
Today’s progress and the resulting plot (screenshots omitted); note that “Guinea Pigs” is handled correctly.
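
A minimal sketch of the regex-split parse mentioned above (the names are illustrative; the actual KeywordExplorer code may differ):

import re

def parse_keywords(text: str, split_regex: str = r"[,\n]") -> list:
    # split on the user-supplied regex, then strip whitespace and drop empties
    parts = re.split(split_regex, text)
    return [p.strip() for p in parts if p.strip()]

print(parse_keywords("guinea pigs, hamsters\nrabbits"))
# ['guinea pigs', 'hamsters', 'rabbits'] -- multi-word phrases survive intact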

SBIRs

  • Respond to Dave’s email. I think setting up a pipeline is actually a great idea, as long as it starts with mocks
  • The combinatorial explosion used to reside in the decision process. That process is now a trained NN, which inherently performs dimension reduction. My intuition is that this is what keeps the combinatorial explosion under control
  • Need to do a two page summary on our approach

Phil 2.16.2022

Book

  • Got started on the Deep Bias chapter. Seems to be coming together pretty well

GPT Agents

  • Started the KeywordExplorer class. Looking good!

SBIRs

  • Sent Jon a brief bio
  • 1:00 Sprint planning

Phil 2.15.2022

Here we are, one more trip around the sun

How Do Vision Transformers Work?

  • The success of multi-head self-attentions (MSAs) for computer vision is now indisputable. However, little is known about how MSAs work. We present fundamental explanations to help better understand the nature of MSAs. In particular, we demonstrate the following properties of MSAs and Vision Transformers (ViTs): (1) MSAs improve not only accuracy but also generalization by flattening the loss landscapes. Such improvement is primarily attributable to their data specificity, not long-range dependency. On the other hand, ViTs suffer from non-convex losses. Large datasets and loss landscape smoothing methods alleviate this problem; (2) MSAs and Convs exhibit opposite behaviors. For example, MSAs are low-pass filters, but Convs are high-pass filters. Therefore, MSAs and Convs are complementary; (3) Multi-stage neural networks behave like a series connection of small individual models. In addition, MSAs at the end of a stage play a key role in prediction. Based on these insights, we propose AlterNet, a model in which Conv blocks at the end of a stage are replaced with MSA blocks. AlterNet outperforms CNNs not only in large data regimes but also in small data regimes. The code is available at this https URL.

SBIRs

  • It was a very busy day yesterday. Early morning meeting before the actual meeting, then lots of discussion on how (basically) to fit a simulation into a TLM. Then a long discussion with Dave. Then a short lunch break where I got to go for a walk in the February cold. Then demos, then another meeting with Dave.

Then about 45 minutes to spin down before

Waikato

  • Where we went over Tamahau’s progress, which is good.

Ended the day watching Mythbusters encasing Adam Savage in Bubble Wrap.

So, for today…

SBIRs

  • Responded to Dave’s email about tokenization and the overall project approach. Talked about PGN as an example of tokenizing a simulator (toy sketch after this list). No meetings on the calendar, so I’m not sure what happens next.
  • Put together possible stories for next sprint
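
A toy sketch of the PGN idea (illustrative, not project code): a chess game in PGN is already a linear token stream, so a simulator log can be handled the way a language model handles text:

# PGN moves split cleanly into tokens, just like words in a sentence
pgn = "1. e4 e5 2. Nf3 Nc6 3. Bb5 a6"
tokens = pgn.split()  # ['1.', 'e4', 'e5', '2.', 'Nf3', ...]
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
ids = [vocab[t] for t in tokens]  # integer ids a GPT-style model can train on
print(ids)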

Book

  • If today turns out to be a light day, I’m going to start roughing out the social dominance chapter

GPT-Agents

  • Need to put together a landscape for today’s meeting. Actually got caught up in just getting the results from one prompt: “Here’s a short list of racist terms in wide use today. Some may surprise you:”. Not really even close to saturation, and I already have pages
  • 3:30 Meeting. Fun! I think we’re going to look at food keyword generation because it’s less horrible than all the racist terms the GPT can come up with

Phil 2.11.2022

Newest open source TLM. Paper here: http://eaidata.bmk.sh/data/GPT_NeoX_20B.pdf

SBIRs

  • 12:00 FA2 meeting
  • 3:30 Present the AI RoE paper to the data science tagup
  • 4:30 LAIC meeting

Book

Phil 2.10.2022

SBIRs

  • Cleaning up minGPT for comprehensibility
  • Meeting with Rukan and Aaron. Great progress!
  • Working on slide deck for presentation tomorrow

Phil 2.9.2022

Book

  • Finished a pass of some kind and sent off to Wajanat and Aaron
  • Fixed the chapter headings
  • Reworked the proposal so that it has a new intro and the chapters are in the new order

SBIRs

  • 10:00 Meeting with Rukan and Aaron
  • Need to download the MinGPT project and see if I can build it. It works! Now I need to load and save the model (sketch below), then start playing around with the mask
    • Save and load the model
    • Create a reverse model
A working, from-scratch GPT (screenshot omitted)
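
The save/load step is the standard PyTorch state_dict pattern. A minimal sketch against the minGPT API as I remember it (hyperparameters and file name are placeholders):

import torch
from mingpt.model import GPT, GPTConfig  # karpathy/minGPT

# build a small model; these hyperparameters are placeholders
conf = GPTConfig(vocab_size=256, block_size=128, n_layer=4, n_head=4, n_embd=128)
model = GPT(conf)

# save weights only (not the optimizer state)
torch.save(model.state_dict(), "mingpt_checkpoint.pt")

# to restore, rebuild with the same config and load the weights
model2 = GPT(conf)
model2.load_state_dict(torch.load("mingpt_checkpoint.pt"))
model2.eval()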

JuryRoom

  • Working with Zach a bit on framing out the concept and how much it might cost
  • Meeting with Jarod

Phil 2.8.2022

SBIRs

  • 9:10 Standup
  • Set up a meeting with Rukan and Aaron to discuss RCSNN
  • Continuing Transformers book

JuryRoom

  • Talked to Zach about costing out an MCC-style version

GPT-Agents

  • Tweaked things for multiple plots (plot image omitted)
  • 3:30 Meeting

Phil 2.7.2022

We are releasing PromptSource, a toolkit for creating, sharing, and using natural language prompts.

SBIRs

  • Continuing with Transformers book
  • Quick meeting with Aaron and Rukan to deal with his training problems
  • Grabbed some papers for Steve

Book

  • Fixing more chapters that I didn’t realize still sounded like a dissertation

Phil 2.4.2022

Downloading the svn backup – done! Going to try to install following these directions: www.if-not-true-then-false.com/2012/svn-subversion-backup-and-restore

SBIRs

  • 10:00 Meeting with Rukan
  • More Transformers book. Need to look more deeply at MinGPT

GPT Agents

  • Now that I have the counts working, need to tie that back into the GPT output. I think I need some Parts-of-speech analysis to figure out what to count. The other part is to use the feedback to determine important points in the GPT response
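
One way the parts-of-speech step could work (a sketch using spaCy; no library is settled yet): keep the nouns and proper nouns as the countable terms.

import spacy

nlp = spacy.load("en_core_web_sm")  # small English model, installed separately
doc = nlp("Guinea pigs make surprisingly vocal pets.")
terms = [tok.text for tok in doc if tok.pos_ in ("NOUN", "PROPN")]
print(terms)  # roughly ['Guinea', 'pigs', 'pets']; tags vary by model version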

Phil 2.3.2022

Data Stuff – It’s stuff I made with data! (@erindataviz)

Tasks

  • The Planets
  • Spanish – done
  • Find out what’s going on with SVN
  • JCS – done

SBIRs

  • 9:15 standup
  • Meeting with Aaron
  • More Transformers book
    • Chapter 3
    • BertViz: Visualize Attention in Transformer Models (BERT, GPT2, T5, etc.)
    • Found (I think) what I’m looking for: MinGPT: “A PyTorch re-implementation of GPT training. minGPT tries to be small, clean, interpretable and educational, as most of the currently available ones are a bit sprawling. GPT is not a complicated model and this implementation is appropriately about 300 lines of code, including boilerplate and a totally unnecessary custom causal self-attention module.”

GPT Agents

  • Continue with the TwitterV2 count class. Good progress. I have basic functionality (plot of tweet counts for “Chinese New Year” omitted)

Need to work on the queries a bit to get phrases. It’s actually not hard; you just have to use escaped quotes, '\"happy new year\"' (plot of tweet counts for “Happy New Year” omitted).
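
For example, reusing the create_counts_url helper from the 2.2.22 entry below (dates illustrative):

# the escaped quotes make the API treat the words as one phrase
query = "\"happy new year\""
url = create_counts_url(query, "2022-01-25T00:00:00Z", "2022-02-02T00:00:00Z")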

Phil 2.2.22

Looking forward to 2.22.22. Almost as exciting as 11.11.11

This IS VERY COOL!! It’s an entire book written using Jupyter Notebooks that you can read on GitHub: fastai/fastbook – The fastai book, published as Jupyter Notebooks

GPT Agents

  • Got the counts query working with only a small amount of googling. The cool thing is that the items come back with a granularity, so this call, which has a default granularity of “day” (helper functions sketched at the end of this list):
query = "from:twitterdev"            # all tweets from the @twitterdev account
start_time = "2021-05-01T00:00:00Z"  # ISO 8601, UTC
end_time = "2021-06-01T00:00:00Z"
url = create_counts_url(query, start_time, end_time)  # build the counts-endpoint URL
json_response = connect_to_endpoint(url)              # authenticated GET
print_response("Get counts", json_response)
  • returns a json object with the daily volume of tweets from @twitterdev (there were 24 time periods in all, and the total tweet count was 22; entries abridged here):
response:
{
    "data": [
        {
            "end": "2021-05-02T00:00:00.000Z",
            "start": "2021-05-01T00:00:00.000Z",
            "tweet_count": 0
        },
        ...
        {
            "end": "2021-05-13T00:00:00.000Z",
            "start": "2021-05-12T00:00:00.000Z",
            "tweet_count": 6
        },
        {
            "end": "2021-05-14T00:00:00.000Z",
            "start": "2021-05-13T00:00:00.000Z",
            "tweet_count": 1
        },
        {
            "end": "2021-05-15T00:00:00.000Z",
            "start": "2021-05-14T00:00:00.000Z",
            "tweet_count": ...
        },
        ...
        {
            "end": "2021-05-21T00:00:00.000Z",
            "start": "2021-05-20T00:00:00.000Z",
            "tweet_count": 8
        },
        {
            "end": "2021-05-29T00:00:00.000Z",
            "start": "2021-05-28T00:00:00.000Z",
            "tweet_count": 2
        },
        {
            "end": "2021-06-01T00:00:00.000Z",
            "start": "2021-05-31T00:00:00.000Z",
            "tweet_count": 0
        }
    ],
    "meta": {
        "total_tweet_count": 22
    }
}
  • This is very nice! I’m looking forward to doing some interesting things with the GPT. We can scan through responses to prompts and look at word-by-word Twitter frequencies after stop-word removal, and then use those sentences for further prompting. We can also compare embeddings, cluster, and try other interesting things
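
For reference, the helpers used above follow the pattern in Twitter’s v2 sample code; a minimal sketch (my version, using the full-archive counts endpoint, which requires Academic Research access):

import os
import requests
from urllib.parse import urlencode

BEARER_TOKEN = os.environ["BEARER_TOKEN"]  # kept in the environment, not the code

def create_counts_url(query: str, start_time: str, end_time: str) -> str:
    # urlencode also handles quoted-phrase queries correctly
    params = urlencode({"query": query, "start_time": start_time, "end_time": end_time})
    return f"https://api.twitter.com/2/tweets/counts/all?{params}"

def connect_to_endpoint(url: str) -> dict:
    headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    return response.json()

def print_response(label: str, js: dict) -> None:
    print(f"{label}: {js}")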

SBIRs

Pretty much any data you want for general training, at any scale:
  • Datasets simplifies this process by providing a standard interface for thousands of datasets that can be found on the Hub. It also provides smart caching (so you don’t have to redo your preprocessing each time you run your code) and avoids RAM limitations by leveraging a special mechanism called memory mapping, which stores the contents of a file in virtual memory and enables multiple processes to modify a file more efficiently (quick sketch after this list).
  • Imbalanced-learn (imported as imblearn) is an open source, MIT-licensed library relying on scikit-learn (imported as sklearn) and provides tools when dealing with classification with imbalanced classes.
  • Nice NW job fair
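
Quick sketches of both libraries (the dataset choice and toy arrays are mine):

from datasets import load_dataset
from imblearn.over_sampling import RandomOverSampler

# Datasets: the first call downloads and caches; rows are memory-mapped from disk
ds = load_dataset("imdb", split="train")
print(ds[0]["text"][:80])

# imbalanced-learn: rebalance a skewed label distribution before training
X = [[0], [1], [2], [3], [4], [5]]
y = [0, 0, 0, 0, 0, 1]
X_res, y_res = RandomOverSampler(random_state=0).fit_resample(X, y)
print(sorted(y_res))  # now balanced: [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]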

Ack! Dreamhost has deleted my SVN repo. Very bad. Working on getting it back. Other options include RiouxSVN, but it may be moribund. Assembla hosts for $19/month with 500 GB, which is good because I store models. Alternatively, stand up my own svn server with a fixed IP address and keep the repo on Google Drive, OneDrive, or Dropbox.