Category Archives: Phil

Phil 5.12.2022

New UL2 model

https://twitter.com/huggingface/status/1524783489593360385

https://twitter.com/arankomatsuzaki/status/1524560903374393344

Book

  • Maps and Loss of Self
  • Thinking about adding something about music, but that may be a separate thing
  • Asymmetrical perceptions of partisan political bots
    • Political bots are social media algorithms that impersonate political actors and interact with other users, aiming to influence public opinion. This study investigates the ability to differentiate bots with partisan personas from humans on Twitter. Our online experiment (N = 656) explores how various characteristics of the participants and of the stimulus profiles bias recognition accuracy. The analysis reveals asymmetrical partisan-motivated reasoning, in that conservative profiles appear to be more confusing and Republican participants perform less well in the recognition task. Moreover, Republican users are more likely to confuse conservative bots with humans, whereas Democratic users are more likely to confuse conservative human users with bots. We discuss implications for how partisan identities affect motivated reasoning and how political bots exacerbate political polarization.

SBIRs

  • 9:15 standup
  • Make RCSNode

Phil 5.11.2022

Book

  • Added more on Lists

SBIRs

  • Add markings to report
  • Work on tree graph recursion. Getting there
  • Need to try different hierarchies and make sure the spacing still works. A win for recursion!
  • Need to add text for the command, cur_state, and response. That means making an RCSNode that inherits from MovableNode and has more text handles; a rough sketch follows this list
  • We’re having funding problems for IRAD, so is it back to the unofficial hidden agenda for important things?
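
Roughly what I have in mind for RCSNode, as a sketch only. MovableNode’s actual constructor and attributes aren’t written down here, so the signature and the canvas reference are assumptions:

```python
# Hypothetical sketch: MovableNode's real API isn't in these notes.
# Assumes it takes (canvas, x, y, label) and keeps a reference to the canvas.
class RCSNode(MovableNode):
    """Movable canvas node that also shows command, cur_state, and response."""

    def __init__(self, canvas, x, y, label):
        super().__init__(canvas, x, y, label)  # assumed MovableNode signature
        # one extra text handle per field, stacked under the label
        self.command_id = canvas.create_text(x, y + 15, text="command: -")
        self.cur_state_id = canvas.create_text(x, y + 30, text="cur_state: -")
        self.response_id = canvas.create_text(x, y + 45, text="response: -")

    def update_fields(self, command: str, cur_state: str, response: str):
        """Push the latest values into the extra text handles."""
        self.canvas.itemconfigure(self.command_id, text=f"command: {command}")
        self.canvas.itemconfigure(self.cur_state_id, text=f"cur_state: {cur_state}")
        self.canvas.itemconfigure(self.response_id, text=f"response: {response}")
```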

Phil 5.8.2022

Inferring strategies from observations in long iterated Prisoner’s dilemma experiments

  • While many theoretical studies have revealed the strategies that could lead to and maintain cooperation in the Iterated Prisoner’s dilemma, less is known about what human participants actually do in this game and how strategies change when participants are confronted with anonymous partners in each round. Previous attempts used short experiments, made different assumptions about possible strategies, and led to very different conclusions. We present here two long treatments that differ in the partner matching strategy used, i.e. fixed or shuffled partners. We use unsupervised methods to cluster the players based on their actions and then a Hidden Markov Model to infer the memory-one strategies in each cluster. Analysis of the inferred strategies reveals that fixed partner interaction leads to behavioral self-organization. Shuffled partners generate subgroups of memory-one strategies that remain entangled, apparently blocking the self-selection process that leads to fully cooperating participants in the fixed partner treatment. Analyzing the latter in more detail shows that AllC, AllD, TFT- and WSLS-like behavior can be observed. This study also reveals that long treatments are needed, as experiments with fewer than 25 rounds capture mostly the learning phase participants go through in these kinds of experiments.
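
The memory-one strategies named at the end of that abstract are simple to write down. A minimal illustration of my own (not the paper’s code), where each strategy maps the previous round to the next move:

```python
# Memory-one strategies from the iterated Prisoner's dilemma literature
# (my illustration, not the paper's code). Each maps the previous round
# (my_last, their_last) to the next move; None means the opening round.

def all_c(my_last, their_last):
    return "C"  # always cooperate

def all_d(my_last, their_last):
    return "D"  # always defect

def tit_for_tat(my_last, their_last):
    return their_last or "C"  # open with C, then copy the partner's last move

def win_stay_lose_shift(my_last, their_last):
    if my_last is None:
        return "C"  # open with C
    if their_last == "C":  # payoff was R or T: a "win", so stay
        return my_last
    return "D" if my_last == "C" else "C"  # payoff was S or P: shift
```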

Book

  • Finished up alignment in belief space, started on lists, stories, games and maps. I also downloaded the whole project and stuck it in Subversion. Don’t want to lose it

SBIRs

  • Monday
    • Sprint demos
    • MDA meeting
    • Discussion with Ron about Stories
  • Stories!
  • Meeting with Rukan
  • 9:15 Sprint planning
  • 11:00 Meeting with Dr. Edwards

GPT Agents

  • 3:30 meeting

Phil 5.6.2022

Found this today: transdiffusion.org: Founded in 1964, Transdiffusion’s huge archive of television and radio material is provided free to people wishing to learn more about the history of broadcasting in the UK

https://twitter.com/LiamFedus/status/1522605777961119745

Book

  • Working on the Humans and Information chapter. Needs a LOT of work

SBIRs

  • Yet more timesheet crap
  • Got the graphics loading from the config file. Need to arrange in a hierarchy and draw a module that gets its information from the data dictionary
  • Chat with Rukan
  • Slides

Phil 5.5.2022

Went to the USNA Capstone day yesterday, which was quite cool

Book

  • Working my way through Deep Bias. There is definitely more to tweak in the later text

GPT Agents

  • Reach out to Drew Alfgren – done
  • Reach out to April Edwards – done

SBIRs

  • GPT chat with Ron
  • More work on hierarchy drawing – nope, working on the simaccel doc instead
  • Good RCSNN discussion with Aaron

Phil 5.3.2022

Just discovered AI21 and got an account

Book

  • Going through Deep Bias, which seems to be pretty good!

SBIRs

  • Adding hierarchy to the canvas
  • Need to add zoom and pan to the base class; a rough sketch follows this list
  • Show state/execution/etc. in visualization
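
A minimal sketch of what that zoom/pan base class could look like in tkinter, using the standard Canvas scan_mark/scan_dragto and scale calls. The class name and bindings are my guesses, not project code:

```python
import tkinter as tk

class ZoomPanCanvas(tk.Canvas):
    """Guessed base class: pan with a left-button drag, zoom with the wheel."""

    def __init__(self, master=None, **kwargs):
        super().__init__(master, **kwargs)
        self.bind("<ButtonPress-1>", lambda e: self.scan_mark(e.x, e.y))
        self.bind("<B1-Motion>", lambda e: self.scan_dragto(e.x, e.y, gain=1))
        self.bind("<MouseWheel>", self._on_wheel)  # Windows/macOS; Linux sends <Button-4/5>

    def _on_wheel(self, event):
        factor = 1.1 if event.delta > 0 else 1 / 1.1
        # scale every item about the cursor so the zoom feels anchored
        self.scale("all", self.canvasx(event.x), self.canvasy(event.y), factor, factor)

if __name__ == "__main__":
    root = tk.Tk()
    cv = ZoomPanCanvas(root, width=400, height=300, background="white")
    cv.create_rectangle(50, 50, 150, 100, fill="lightblue")
    cv.pack(fill="both", expand=True)
    root.mainloop()
```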

GPT Agents

  • 3:30 Meeting
  • Get preliminary study design, email, and flyer done before meeting!

Phil 4.29.2022

Book

  • Finished Belief is a Place. Currently at 76k words
  • Cornell Press has a list of editors and how to contact them???

SBIRs

  • JSFC Meetings. First one seemed to go well. Second one was less focused
  • Would this work for entropy measures? Richardson–Lucy deconvolution (a generic sketch follows this list)
  • More code execution. Should be able to get init, step, and terminate working today and print the live DataDictionary – DONE!
  • Need to connect to the SimAccel repo – Done! Changed the src directory name to simaccel and pushed. Verified the change in the browser
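
For reference, the textbook Richardson–Lucy update is only a few lines. A generic illustration, not project code, and whether it maps onto the entropy question is still open:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Textbook Richardson-Lucy deconvolution; generic illustration only."""
    estimate = np.full(observed.shape, 0.5)
    psf_mirror = psf[::-1, ::-1]  # flipped PSF for the correlation step
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)  # guard against divide-by-zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```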

Phil 4.28.2022

Had some fun looking at trends based on the Twitter buyout. “free speech” has an interesting trajectory over the last year (trend charts omitted).

Book

  • Cleaning up chapters

SBIRs

  • 9:15 standup
  • SimAccel meeting with Rukan – got things set up in the right way, I think
  • SimAccel API project. Ron has set it up
  • RCSNN, hopefully. Yes! Set up calling methods in a class, which meant refactoring a bit so that there is now a BaseBoardMonitor class. More tomorrow.

GPT Agents

  • Still didn’t get to the IRB stuff

Phil 4.27.2022

https://faculty.cc.gatech.edu/~srijan/pubs/conflict-paper-www18.pdf

Get new UMBC ID! It’s ready!

SBIRs

  • 9:00 LM meeting
  • 1:00 DSCC Kickoff
  • Start on RCSNN story for this sprint
  • Some cool GPT debugging(?): https://twitter.com/megamor2/status/1519291039479214080

Antigenic cartography has its roots in a mathematical technique called “multidimensional scaling,” which has been around since the 1960s. The algorithm uses data about the distances between pairs of objects to reconstruct a map of the objects’ relative locations. For example, if you had a table that lists the distances between a bunch of U.S. cities—like you might find in a road atlas—you could use a multidimensional scaling algorithm to reconstruct a map of those cities based solely on the distances between them. (IEEE Spectrum – The Algorithm that Mapped Omicron Shows a Path Forward)
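
That road-atlas example is nearly a one-liner with scikit-learn’s MDS. A small sketch with approximate, made-up mileages:

```python
import numpy as np
from sklearn.manifold import MDS

# Road-atlas-style table of pairwise driving distances in miles
# (approximate values, for illustration only)
cities = ["Baltimore", "Pittsburgh", "Richmond"]
distances = np.array([
    [0,   248, 153],
    [248,   0, 342],
    [153, 342,   0],
])

# Recover 2D coordinates whose pairwise distances approximate the table.
# The layout is only defined up to rotation, reflection, and translation.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(distances)
for city, (x, y) in zip(cities, coords):
    print(f"{city}: ({x:.1f}, {y:.1f})")
```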

Book

  • Read the epilogue to Aaron last night and made some tweaks. I need to work on the suggestions
  • Got a firm “no” and no leads from Kendall Hunt. Sigh

GPT Agents

  • Need to do informed consent, recruiting flyers and emails

Phil 4.26.2022

Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer

  • Hyperparameter (HP) tuning in deep learning is an expensive process, prohibitively so for neural networks (NNs) with billions of parameters. We show that, in the recently discovered Maximal Update Parametrization (muP), many optimal HPs remain stable even as model size changes. This leads to a new HP tuning paradigm we call muTransfer: parametrize the target model in muP, tune the HP indirectly on a smaller model, and zero-shot transfer them to the full-sized model, i.e., without directly tuning the latter at all. We verify muTransfer on Transformer and ResNet. For example, 1) by transferring pretraining HPs from a model of 13M parameters, we outperform published numbers of BERT-large (350M parameters), with a total tuning cost equivalent to pretraining BERT-large once; 2) by transferring from 40M parameters, we outperform published numbers of the 6.7B GPT-3 model, with tuning cost only 7% of total pretraining cost. A Pytorch implementation of our technique can be found at this http URL and installable via `pip install mup`.
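
As far as I can tell from the mup README, usage boils down to swapping the output layer for MuReadout, declaring base shapes, and using the muP optimizer. Treat the exact signatures here as assumptions from memory, not gospel:

```python
import torch.nn as nn
import torch.nn.functional as F
from mup import MuReadout, set_base_shapes, MuAdam  # pip install mup

class MyModel(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.fc = nn.Linear(32, width)
        self.readout = MuReadout(width, 10)  # muP-aware output layer

    def forward(self, x):
        return self.readout(F.relu(self.fc(x)))

# Declare how layers scale with width, then reuse the small model's HPs.
base, delta, big = MyModel(64), MyModel(128), MyModel(4096)
set_base_shapes(big, base, delta=delta)
opt = MuAdam(big.parameters(), lr=1e-3)  # lr tuned on the small proxy model
```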

sktime

sktime features a unified interface for multiple time series learning tasks. Currently, we support forecasting, time series classification, and time series regression. We have experimental support for time series clustering and time series annotation.

Features:

  • API for machine learning with time series, for the purpose of specifying, fitting, applying and validating machine learning models
  • Interactive user experience with scikit-learn like syntax conventions
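
The forecasting quickstart is about five lines. Something like this, per sktime’s README (version details may vary):

```python
from sktime.datasets import load_airline
from sktime.forecasting.naive import NaiveForecaster

y = load_airline()  # monthly airline passenger counts
forecaster = NaiveForecaster(strategy="last", sp=12)  # seasonal-naive baseline
forecaster.fit(y)
y_pred = forecaster.predict(fh=[1, 2, 3])  # forecast 1-3 months ahead
print(y_pred)
```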

Book

  • More epilogue
  • Chase TODOs?

SBIRs

  • 10:00 proposal meeting. Make slides on background and concept
  • Add story for SimAccel library productization
  • 2:00 Sprint planning

GPT Agents

  • Put together study documents
  • 3:30 Meeting

Phil 4.25.2022

I helped build ByteDance’s vast censorship machine

  • “It was certainly not a job I’d tell my friends and family about with pride. When they asked what I did at ByteDance, I usually told them I deleted posts (删帖). Some of my friends would say, “Now I know who gutted my account.” The tools I helped create can also help fight dangers like fake news. But in China, one primary function of these technologies is to censor speech and erase collective memories of major events, however infrequently this function gets used.”

SBIRs

  • 9:00 Sprint Demos
  • Put together stories for next sprint
  • 2:00 MDA meeting
  • 4:00 OSS Meeting

Book

  • Reworked the epilogue a lot

Phil 4.22.2022

POT: Python Optimal Transport
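
A minimal example of the kind of problem POT handles, using its documented ot.dist and ot.emd2 entry points on toy data:

```python
import numpy as np
import ot  # pip install POT

# Two empirical point clouds with uniform weights
rng = np.random.default_rng(0)
xs, xt = rng.random((50, 2)), rng.random((60, 2))
a, b = np.full(50, 1 / 50), np.full(60, 1 / 60)

M = ot.dist(xs, xt)      # pairwise cost matrix (squared Euclidean by default)
cost = ot.emd2(a, b, M)  # exact optimal transport (Wasserstein) cost
print(cost)
```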

SBIRs

  • Prep for 2:00 meeting

Phil 4.21.2022

Book

  • Finished first pass at GPT-3 interview
  • Work on Epilogue. Done? At least the first pass
  • Start hunting down TODOs

SBIRs

  • 9:15 Standup
  • Write up the current RCSNN results and create slides for tomorrow’s presentation with Rukan
  • I think that the error calculation should be a statistical measure of the divergence over a given window, which could include the entire prediction but would otherwise be a trailing window of n samples:
    • The average error
    • The variance (std dev, etc.)
    • Outliers (e.g. rare, BIG errors)
    • A linear regression over the error and variance, to catch trends
  • The reason I think this matters is that a prediction could have occasional large errors but otherwise be good, and we need a way to detect and characterize that (a rough sketch follows this list)
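
A sketch of the windowed error summary described above; the names and the outlier threshold are placeholders:

```python
import numpy as np

def error_stats(predicted, actual, tail=None):
    """Summarize prediction divergence over the full run or a trailing window."""
    err = np.asarray(predicted, dtype=float) - np.asarray(actual, dtype=float)
    if tail is not None:
        err = err[-tail:]  # only the last n samples
    mean, std = err.mean(), err.std()
    outliers = err[np.abs(err - mean) > 3 * std]  # the rare, BIG errors
    slope = np.polyfit(np.arange(len(err)), err, 1)[0]  # is the error drifting?
    return {"mean": mean, "std": std, "outliers": outliers, "trend_slope": slope}
```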

GPT Agents

  • Getting back up to speed on Jarod’s work, which looks amazing.
  • Need to find some synonyms for “slur”