Category Archives: Phil

Phil 1.6.2022

One year ago things were pretty crazy here

GPT Agents

  • From On the Reliability and Validity of Detecting Approval of Political Actors in Tweets (Section 4.2) as an example of keyword SOTA:
    • We evaluate OTS and custom methods on the following datasets. While some of these datasets have common targets, for example, Trump is present in four of them, they are all collected in different periods of time, with different keywords (c.f Appendix B). All datasets have stance labels of ‘favor’, ‘against’, and ‘none’ towards the targets. (EMNLP)
  • Finished with generating the new data, now we get to see if it works!
  • It’s pretty good. Here are the two GPT models: one trained on the first 50k reviews of the American dataset (iso), and the other trained on the first 50k reviews of the American dataset that do not contain the string “vegetarian options”. The probes are:
    • no vegetarian options
    • some vegetarian options
    • several vegetarian options
    • many vegetarian options
  • Basically identical
  • Now I need to compare the response vs the ground truth for each of the probes
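The probe comparison above can be sketched in a few lines. This is a minimal, hypothetical sketch (the function name, toy responses, and toy ground-truth reviews are all assumptions, not the actual pipeline): count how often each probe phrase appears in model-generated text versus the ground-truth reviews, then look at the differences.

```python
# Hypothetical sketch: compare how often each probe phrase appears in
# model-generated responses vs. the ground-truth reviews.
PROBES = [
    "no vegetarian options",
    "some vegetarian options",
    "several vegetarian options",
    "many vegetarian options",
]

def phrase_rates(texts: list, probes: list) -> dict:
    """Fraction of texts containing each probe phrase (case-insensitive)."""
    lowered = [t.lower() for t in texts]
    return {p: sum(p in t for t in lowered) / len(lowered) for p in probes}

# Toy data standing in for generated responses and ground-truth reviews
responses = ["Great food, many vegetarian options!", "Sadly, no vegetarian options."]
ground_truth = ["There were many vegetarian options.", "Service was slow."]

model_rates = phrase_rates(responses, PROBES)
truth_rates = phrase_rates(ground_truth, PROBES)

# Per-probe difference between the model's rate and the ground-truth rate
diffs = {p: model_rates[p] - truth_rates[p] for p in PROBES}
```

With real data, `responses` would be the generated text from each trained model and `ground_truth` the held-out reviews; the `diffs` dict is what would go into the comparison table.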

Phil 1.5.2022

Jamie Raskin just released a book that apparently has some overlap with my work? Trying to track it down. Here’s something from CBS

GPT Agents

  • Creating unistar models from the corpora that have ‘vegetarian options’ removed. As they are trained, I’m also generating responses to the vegetarian prompts for the star and unigram comparisons. Then I’ll put that in a table and write the paper around it. Also, add the Floober part or something fanciful.
  • Models are all created. Finished running the first two and am now adding sentiment to them

SBIRs

  • Continue code cleanup and documenting. I managed to remove a good deal of code that had to do with handling raw text selection of topics, since that seems to be broken in tk
    • Finished commenting QueryFrame. Now I need to fix that listing problem in on_link_existing_clicked()
  • Set up meeting to discuss LAIC dev plans – done

Phil 1.4.2022

It got really cold last night and I had forgotten to turn the water off to the outside and lost the faucet on the deck. Could have been worse. At least the pipes didn’t burst

Thinking about submitting a writeup on Sanhedrin 17a (mostly Section 10.4 of the dissertation) for the We Robot conference

  • Abstracts due: March 7
  • Decisions: May 9
  • Final papers due: August 8

Book

  • Playing around with negative scalars to see how that works. This resulted in some code cleanup and a better color gradient. Not sure if it looks better though:

Still like this better:

SBIRs

  • Sprint planning
  • Working on code cleanup for MabBuilder. First, adding comments!
  • Fixed the exit condition that happened when clicking the ‘X’ close icon in the text compare popup
  • Next, check through all the button behavior in QueryFrame
    • Set Group
    • Add Topic/Seed
    • Add Topic
    • Add Seed
    • Find Closest (and dialog)
    • Add Group
    • Next Seed
    • Rerun Seed
    • Get Topic Details
    • Direct Prompt
    • Wikipedia
    • Link Existing (make this work with descending length topics)

GPT Agents

  • 3:30 Meeting. Going to make some models that explicitly are missing the phrase ‘vegetarian options’ from the training corpora. I’ll then run those to compare against ‘vegetarian options’ in the ground truth, by star rating, and against the other GPT models
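Building the two training corpora could look something like the sketch below. This is a hypothetical illustration (the function name, the 50k cutoff applied here to toy data, and the sample reviews are assumptions): take the first n reviews as-is for one model, and the first n reviews that never mention the target phrase for the other.

```python
# Hypothetical sketch of building the two training corpora: the first n
# reviews unchanged (iso), and the first n that do NOT contain the phrase.
PHRASE = "vegetarian options"

def split_corpora(reviews: list, phrase: str = PHRASE, n: int = 50_000):
    iso = reviews[:n]
    # Keep only reviews that never mention the phrase, then truncate to n
    filtered = [r for r in reviews if phrase not in r.lower()][:n]
    return iso, filtered

# Toy corpus standing in for the American review dataset
reviews = [
    "Plenty of vegetarian options and great service.",
    "The steak was excellent.",
    "Cozy spot, fast service.",
]
iso, filtered = split_corpora(reviews, n=2)
# iso keeps the first two reviews; filtered drops the one mentioning the phrase
```

Each list would then be written out as a fine-tuning corpus for its own model, so that any ‘vegetarian options’ signal in the filtered model must come from elsewhere in the training data.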

Phil 1.3.2022

This looks interesting: www.oreilly.com/library/view/natural-language-processing/9781098103231/

Book

  • After a few false starts, I have the terrain extended:
  • I still need to:
    • add a ‘lit’ and ‘unlit’ node for terrain and labels – done
    • add a height scalar – done
    • toggle grids and axis – done
    • Shift keys to move the lights the other direction, plus lambda functions for the parameters – done
    • Maybe add fog? docs.panda3d.org/1.10/python/programming/render-attributes/fog – nope, can’t get the fog to be relative to the terrain center

Today’s progress:

GPT Agents

  • Get the number of POSITIVE and NEGATIVE sentiment for each isolated model and compare to ground truth. Make a chart and add to the draft. This is the part that shows that creating models for a population captures that population’s patterns, and that this method is more accurate and reliable than assuming that one general model has all the information needed in an accessible way. Done
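The sentiment tally described above reduces to a small counting step. A minimal sketch, assuming the sentiment classifier has already produced POSITIVE/NEGATIVE labels (the label lists here are toy data, not the real model output):

```python
from collections import Counter

# Hypothetical sketch: tally POSITIVE/NEGATIVE labels for an isolated
# model's generated text and compare against the ground-truth distribution.
def sentiment_counts(labels: list) -> dict:
    c = Counter(labels)
    return {"POSITIVE": c["POSITIVE"], "NEGATIVE": c["NEGATIVE"]}

# Toy labels standing in for classifier output on generated vs. real reviews
model_labels = ["POSITIVE", "POSITIVE", "NEGATIVE", "POSITIVE"]
truth_labels = ["POSITIVE", "NEGATIVE", "NEGATIVE", "POSITIVE"]

model = sentiment_counts(model_labels)
truth = sentiment_counts(truth_labels)

# Proportion positive in each, which is what goes on the chart
model_pos = model["POSITIVE"] / sum(model.values())
truth_pos = truth["POSITIVE"] / sum(truth.values())
```

Repeating this per isolated model gives the per-population proportions to chart against the ground truth.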

Phil 1.2.2022

Happy New Year everyone! It’s been warm here in the Baltimore region. Working on terrain visualization.

Got lighting working. You attach the lighting node to the node you want it to move with and then set it to the node you want to light. Here’s the code:

def add_directional_light(self, name: str, root_node: NodePath,
                          target_node: NodePath, color: Tuple = (1, 1, 1, 1)) -> NodePath:
    # Create the light and attach it under root_node so it moves with that node
    dlight = DirectionalLight(name)
    dlight.setColor(color)
    dlnp = root_node.attachNewNode(dlight)
    # Tell target_node (and everything below it) to be lit by this light
    target_node.setLight(dlnp)
    self.light_dict[name] = dlnp
    return dlnp

I also added grid lines to emphasize the contours of the terrain. I’m liking the overall look:

The last thing I want to do is extend the terrain beyond the nodes so that everything rises from a flat surface

Phil 12.31.2021

Got reading of GML files working. Now I need to handle coordinates and elevation:

And it works!

Need to add some lighting: docs.panda3d.org/1.10/python/programming/render-attributes/lighting

Need to add gridlines (Add method to ShapePrimitives):

prim = GeomLines(Geom.UHStatic)

# GeomLines consumes vertices in pairs; each addVertices() call
# below adds one line segment
prim.addVertices(0, 1)
prim.addVertices(1, 2)

# close the loop back to the first vertex
prim.addVertices(2, 0)

Phil 12.30.2021

Getting the surface creation out of the initial creation and into the load. Also, more colors and rotating text!

Working on the results section of the paper

Phil 12.29.2021

Good progress on the technique paper. Procrastinating about the PNAS paper. I need to think about that one some more

Also good progress on the terrain viewer. I have most of the plumbing done and am almost ready to read in the GML:

Phil 12.28.21

Tasks

  • Text Outlaw for emails

GPT Agents

Phil 12.27.21

Tasks

  • Text Outlaw for emails

GPT Agents

Phil 12.23.21

Outlaw again, dammit

Online gifts/contributions

This is pretty cool, and might be a very nice way of showing word choice visualization by the GPT – PyUpSet

https://github.com/ImSoErgodic/py-upset

SBIRs

  • Submit LaTeX!!!! (email)

GPT Agents

  • Create outlines for the approach paper (train small models on specific populations) and the application paper (behaviors in covid models)

Phil 12/22/21

Call outlaw

Check into agent Jim Levine: https://lgrliterary.com/. He represents these guys

GPT Agents

  • Realized this morning that there is another way of doing prompts now that I have the tool running. Something like “Once you accept the idea that {}, the next likely step is”. After playing around a bit, I got this prompt working: once we accept the “conspiracy theory” that “the moon landing was faked”, then next theory we deed to verify is that “the moon” is actually just a big-ass TV, and the moon-landing was just an elaborate hoax. This looks like a promising direction!
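The templated-prompt idea above is just string formatting over a fixed frame. A minimal sketch (the belief list is toy data; in practice each filled prompt would be handed to the model for completion):

```python
# Hypothetical sketch of the templated prompt: fill a fixed frame with
# different beliefs, producing one prompt per belief for the model.
TEMPLATE = "Once you accept the idea that {}, the next likely step is"

beliefs = [
    "the moon landing was faked",
    "the earth is flat",
]

prompts = [TEMPLATE.format(b) for b in beliefs]
# prompts[0] == "Once you accept the idea that the moon landing was faked, the next likely step is"
```

Swapping in different frames (e.g. the “conspiracy theory” phrasing) only means changing `TEMPLATE`, which makes it easy to sweep prompt variants against the same belief list.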

SBIRs

  • Finish paper and get off to MARCOM for checking – done
  • Got the template paper running in its own folder – done

JuryRoom

  • Read Tamahau’s paper before meeting – no meeting, but done anyway

GPT Agents

  • There was a meeting! Next task is to put together the outline for the two papers and distribute the link

Phil 12/20/2021

SBIRs

  • Sprint Review
  • Sprint planning
  • Work on paper! Based on the reviewer’s comments, I’m thinking of changing the name of the maps from Belief maps to “Jukurrpa Charts”, which references the Australian Aboriginal Dreamtime. I think that’s a nice analogy to what language models really are: a system of fixed relationships between latent concepts and language that can be traversed using narrative forms, or “dreamtracks”. The other option would be to use the term Neural Language Maps. The only problem is that NLM also means ‘Neural Language Models’. Maybe Neural Semantic Maps (NSMs)?
  • Wound up going with Neural Narrative Mapping