Monthly Archives: February 2022

Phil 2.25.2022

Book

  • More Deep Bias. It’s starting to come together!

GPT Agents

  • Tweaking the UI. Still need to do sampling – done! Here’s every 30 days for a year:
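The every-30-days sampling above could be sketched like this. A minimal version, assuming a simple date-stepping helper (the real UI code surely differs):

```python
from datetime import date, timedelta

def sample_dates(start: date, days: int = 365, step: int = 30):
    """Return one query date every `step` days across a `days`-long window."""
    return [start + timedelta(days=d) for d in range(0, days, step)]

# One sample point every 30 days for a year
dates = sample_dates(date(2021, 1, 1))
print(len(dates))  # 13 points: day 0 through day 360
```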

SBIRs

  • Lots of meetings. Putting together the SOW and the text for the UI. Done and done

Phil 2.24.2022

Book

  • More Deep Bias
  • Some really interesting reporting on how Fox was able to create a Covid social reality:

GPT Agents

SBIRs

  • Work on timings with Rukan and UI with John
  • 9:15 standup

Phil 2.23.2022

Book

  • More Deep Bias
    • The perfect age for a man is between 45 and 50.
    • The perfect age for a man is between 45 and 50
    • The perfect age for a man is between forty-five and sixty
    • The perfect age for a man is between forty-five and sixty-five
    • The perfect age for a man is between forty and seventy
    • The perfect age for a woman is between 35 and 37. If she is older than this, it’s not too bad, as long as she maintains her figure.
    • The perfect age for a woman is between 16 and 30
    • The perfect age for a woman is between 19 and 25
    • The perfect age for a woman is between 20 and 35 years old
    • The perfect age for a woman is between 19 and 20

GPT Agents

  • Add model selection – done, but not integrated
  • Add dialogs if environment variables aren’t found – done
  • Work on subsampled sequences
  • Work on packaging and deploying app (youtube.com/watch?v=QWqxRchawZY)
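The environment-variable check behind those dialogs is roughly this shape. Variable names here are hypothetical, and the real app would pop a tkinter messagebox rather than print:

```python
import os

# Hypothetical names; the app's actual variable names may differ
REQUIRED_VARS = ["TWITTER_BEARER_TOKEN", "OPENAI_API_KEY"]

def missing_env_vars(required=REQUIRED_VARS):
    """Return the required environment variables that are not set."""
    return [v for v in required if not os.environ.get(v)]

missing = missing_env_vars()
if missing:
    # In the app this would raise a dialog instead of printing
    print("Missing environment variables:", ", ".join(missing))
```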

SBIRs

  • 11:30 CSC Overview meeting – it’s a reorg! Whee.
  • 3:30 CSC Followup
  • 4:00 RFQ review

Phil 2.22.2022

Gotta do something important at 10:22 tonight

Book

  • More deep bias. I could also mention our deep and abiding bias towards stories. Also, appeals to authority might be an example of age dominance?

GPT Agents

  • 3:30 Meeting. Fun. I need to fix the regex to catch periods. Also get the sampling working and add a Datafield for the minimum text length.
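A sketch of the period-catching regex plus the minimum-length filter. The pattern and the `MIN_TEXT_LENGTH` value are assumptions standing in for whatever the DataField supplies:

```python
import re

MIN_TEXT_LENGTH = 20  # would come from the UI's minimum-length DataField

def split_sentences(text: str, min_len: int = MIN_TEXT_LENGTH):
    """Split on sentence-ending punctuation and drop fragments shorter than min_len."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if len(p) >= min_len]

print(split_sentences("This is a long enough sentence. Too short. Here is another usable sentence!"))
```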

SBIRs

  • I think I’m going to pitch the analytics as a variation on prompting, using the chess model as an example. Done! That went well.
  • Wrote up a first pass of the UI UX

Phil 2.21.2022

Book

  • Had a nice chat with Roger on Friday. We’ll see if that goes anywhere. Also, look at the various academic presses to find one that is aligned with the type of book I’m writing. Lastly, when publishers are at a conference, they aren’t only selling books; they also bring an editor you can talk with.
  • Continuing with the Deep Bias chapter. Mention that not only do we have social dominance biases, we have story biases and we anthropomorphize like crazy

GPT Agents

  • Going to add some hyperparameter adjustments (tokens, twitter sample times, etc)
    • Tokens – done

SBIRs

  • More work on the RCSNN/GPT proposal

Phil 2.17.2022

Book

  • Continue on Deep Bias chapter
  • Ping Roger

GPT Agents

  • Make sure phrases work! They do!
  • Automate keyword evaluation for a list of items – done
  • Add a regex field. Parse should produce a keyword list split on the regex – done
  • Today’s progress:
  • And the resulting plot. Note that “Guinea Pigs” are handled correctly.
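The regex-field parsing above might look something like this. A minimal sketch with an assumed default pattern; splitting only on the user-supplied regex is what keeps multiword phrases like “Guinea Pigs” intact:

```python
import re

def parse_keywords(text: str, pattern: str = r"\s*,\s*"):
    """Split a keyword string on a user-supplied regex, keeping phrases whole."""
    return [k for k in re.split(pattern, text.strip()) if k]

print(parse_keywords("chinchillas, guinea pigs, hamsters"))
```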

SBIRs

  • Respond to Dave’s email. I think setting up a pipeline is a great idea actually, as long as it starts with mocks
  • Combinatorial explosion used to reside in the decision process. Now that’s a trained NN that inherently dimension reduces. My intuition is that this controls the combinatorial explosion
  • Need to do a two page summary on our approach

Phil 2.16.2022

Book

  • Got started on the Deep Bias chapter. Seems to be coming together pretty well

GPT Agents

  • Started the KeywordExplorer class. Looking good!

SBIRs

  • Sent Jon a brief bio
  • 1:00 Sprint planning

Phil 2.15.2022

Here we are, one more trip around the sun

How Do Vision Transformers Work?

  • The success of multi-head self-attentions (MSAs) for computer vision is now indisputable. However, little is known about how MSAs work. We present fundamental explanations to help better understand the nature of MSAs. In particular, we demonstrate the following properties of MSAs and Vision Transformers (ViTs): (1) MSAs improve not only accuracy but also generalization by flattening the loss landscapes. Such improvement is primarily attributable to their data specificity, not long-range dependency. On the other hand, ViTs suffer from non-convex losses. Large datasets and loss landscape smoothing methods alleviate this problem; (2) MSAs and Convs exhibit opposite behaviors. For example, MSAs are low-pass filters, but Convs are high-pass filters. Therefore, MSAs and Convs are complementary; (3) Multi-stage neural networks behave like a series connection of small individual models. In addition, MSAs at the end of a stage play a key role in prediction. Based on these insights, we propose AlterNet, a model in which Conv blocks at the end of a stage are replaced with MSA blocks. AlterNet outperforms CNNs not only in large data regimes but also in small data regimes. The code is available at this https URL.

SBIRs

  • It was a very busy day yesterday. Early morning meeting before the actual meeting, then lots of discussion on how (basically) to fit a simulation into a TLM. Then a long discussion with Dave. Then a short lunch break where I got to go for a walk in the February cold. Then demos, then another meeting with Dave.

Then about 45 minutes to spin down before

Waikato

  • Where we went over Tamahau’s progress, which is good.

Ended the day watching Mythbusters encasing Adam Savage in Bubble Wrap.

So, for today…

SBIRs

  • Responded to Dave’s email about tokenization and overall project approach. Talked about PGN as an example of simulator tokenizing. No meetings on the calendar, so I’m not sure what happens next.
  • Put together possible stories for next sprint
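The PGN-as-tokenizer idea above can be sketched quickly. This is a toy version that just strips move numbers from movetext, not whatever we actually settle on:

```python
import re

def tokenize_pgn(movetext: str):
    """Tokenize PGN movetext: drop move-number markers, keep SAN moves as tokens."""
    return [t for t in movetext.split() if not re.fullmatch(r"\d+\.", t)]

print(tokenize_pgn("1. e4 e5 2. Nf3 Nc6 3. Bb5 a6"))
```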

Book

  • If today turns out to be a light day, I’m going to start roughing out the social dominance chapter

GPT-Agents

  • Need to put together a landscape for today’s meeting. Actually got caught up in just getting the results from one prompt: “Here’s a short list of racist terms in wide use today. Some may surprise you:”. Not really even close to saturation and I already have pages
  • 3:30 Meeting. Fun! I think we’re going to look at food keyword generation because it’s less horrible than all the racist terms the GPT can come up with

Phil 2.11.2022

Newest open source TLM. Paper here: http://eaidata.bmk.sh/data/GPT_NeoX_20B.pdf

SBIRs

  • 12:00 FA2 meeting
  • 3:30 Present the AI RoE paper to the data science tagup
  • 4:30 LAIC meeting

Book

Phil 2.10.2022

SBIRs

  • Cleaning up minGPT for comprehensibility
  • Meeting with Rukan and Aaron. Great progress!
  • Working on slide deck for presentation tomorrow

Phil 2.9.2022

Book

  • Finished a pass of some kind and sent off to Wajanat and Aaron
  • Fixed the chapter headings
  • Reworked the proposal so that it has a new intro and the chapters are in the new order

SBIRs

  • 10:00 Meeting with Rukan and Aaron
  • Need to download the minGPT project and see if I can build it. It works! Now I need to load and save the model, then start playing around with the mask
    • Save and load the model
    • Create a reverse model
A working, from scratch, GPT
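The “reverse model” idea above is just training on flipped sequences so the model predicts earlier tokens from later ones. A data-side sketch (the torch save/load and training loop are omitted):

```python
def reverse_training_sequences(sequences):
    """Build 'reverse model' training data by flipping each token sequence,
    so the model learns to predict predecessors instead of successors."""
    return [list(reversed(seq)) for seq in sequences]

forward = [["e4", "e5", "Nf3"], ["d4", "d5"]]
print(reverse_training_sequences(forward))
```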

JuryRoom

  • Working with Zach a bit on framing out the concept and how much it might cost
  • Meeting with Jarod

Phil 2.8.2022

SBIRs

  • 9:10 Standup
  • Set up a meeting with Rukan and Aaron to discuss RCSNN
  • Continuing Transformers book

JuryRoom

  • Talked to Zach about costing out an MCC-style version

GPT-Agents

  • Tweaked things for multiple plots:
  • 3:30 Meeting
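The multiple-plots tweak above is essentially one subplot per keyword. A minimal sketch, assuming the data arrives as a dict of count series:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

def plot_counts(series_by_keyword):
    """Draw one subplot per keyword's count series on a shared x-axis."""
    fig, axes = plt.subplots(len(series_by_keyword), 1, sharex=True, squeeze=False)
    for ax, (keyword, counts) in zip(axes.flat, series_by_keyword.items()):
        ax.plot(counts)
        ax.set_title(keyword)
    fig.tight_layout()
    return fig

fig = plot_counts({"chinchilla": [3, 5, 2], "hamster": [1, 4, 4]})
```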