The thing is, contemporary Hasidic sects are designed for authoritarian control. Each Hasidic sect, from Bobov to Viznitz to Satmar to Skver, is run by what is called a “grand rabbi.” These rabbis are demanding patriarchs. They expect women to wear particular shades of stockings, men to dress identically, congregants to receive their blessings before making any personal life decisions, and they believe in a world where Hasids are the only Jews worth mentioning. Most importantly, Hasidic grand rabbis center their congregants’ worlds around themselves. They are populist leaders of miniature nations. Congregants have paintings and photographs of grand rabbis around their homes, sacrifice family time for tisches (Friday night gatherings) with their leaders, and would do anything to protect the power of their particular grand rabbi.
Working on Making better Human-Computer Interfaces for Populations. Finished my first pass at the “Signature of dangerous misinformation” section
2:00 Meeting with Michelle
Decided to build out a sandbox ScriptReaderScratch RCS controller to work out the file loading and playback. Rather than AngleController, I’ll have a method that interpolates to the newest target. That should be enough to let me work out the details without breaking anything
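A rough sketch of what that sandbox might look like; the class and method names here are placeholders I'm using to think it through, not the real RCS code:

import json

class ScriptReaderScratch:
    """Sandbox sketch: load a move file and ease toward the newest target."""

    def __init__(self, rate: float = 0.1):
        self.rate = rate                  # fraction of remaining distance covered per step
        self.current = [0.0, 0.0, 0.0]    # current HPR (or XYZ) state
        self.targets = []                 # list of (timestamp, [h, p, r]) tuples

    def load(self, filename: str):
        with open(filename) as f:
            script = json.load(f)
        self.targets = [(step["timestamp"], step["target"]) for step in script["moves"]]

    def step(self, now: float):
        # pick the newest target whose timestamp has passed, then interpolate toward it
        due = [t for ts, t in self.targets if ts <= now]
        if not due:
            return self.current
        target = due[-1]
        self.current = [c + self.rate * (t - c) for c, t in zip(self.current, target)]
        return self.current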
Speaking of Twitter, this is a good thread on how to write and contest ML conference papers
ParametricUMAP allows users to train a neural network to optimize the embedding, resulting in a direct neural net based mapping from source data to embedding. This allows for extremely fast inference (embedding of new data points), orders of magnitude faster than standard UMAP. It also provides facilities for an inverse transform, mapping from the embedding space to the original data space that is both far faster and more robust than that provided by standard UMAP. Since network architectures can be user provided, this also allows for CNN and RNN based UMAP embeddings for images or sequences.
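A minimal usage sketch with the ParametricUMAP class from the umap-learn package (the digits dataset is just a stand-in for real data):

from sklearn.datasets import load_digits
from umap.parametric_umap import ParametricUMAP

digits = load_digits()

# train a neural-network embedder in place of the usual UMAP optimization
embedder = ParametricUMAP(n_components=2)
embedding = embedder.fit_transform(digits.data)

# fast inference on new points: just a forward pass through the trained network
new_embedding = embedder.transform(digits.data[:10])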
Continue Making better Human-Computer Interfaces for Populations
Add in mapping to script reader, verify by adding legends
Status meeting. Maybe produce a spreadsheet to walk through that shows a time series of inputs and a calculation for each set? I think the inputs can be a column of six (for now?) variables as a set of rows, with the prediction calculations shown below that. Make a DataFrame and see what that looks like.
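A rough sketch of what that DataFrame could look like; the column and variable names (and values) are made up:

import pandas as pd

# hypothetical layout: one column per timestep, six input variables as rows,
# with the prediction calculations appended below them
inputs = pd.DataFrame(
    {"t0": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6],
     "t1": [0.2, 0.3, 0.4, 0.5, 0.6, 0.7]},
    index=[f"var_{i}" for i in range(6)])

predictions = pd.DataFrame(
    {"t0": [0.42], "t1": [0.47]}, index=["prediction"])

sheet = pd.concat([inputs, predictions])
sheet.to_excel("status_walkthrough.xlsx")  # spreadsheet to walk through in the meeting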
The playbook for the MAGA invasion of the nation’s Capitol building on Wednesday has been developing for years in plain sight, at far-right rallies in cities like Charlottesville, Berkeley and Portland, and then, in the past year, at state capitols across the country, where heavily armed white protesters have forced their way into legislative chambers to accuse politicians of tyranny and treason.
Here’s what seems to have happened with the Parler hack. The data may be available for research
Nice paper on training a model to generate synthetic data for better classification training: Reducing AI bias with Synthetic data. It uses Gretel’s gretel-synthetics library. It’s free to use during the beta period; not sure about after, or what the pricing will be. They are hiring, with about seven openings at the moment, so they are burning through someone’s money.
Finish abstract submission – done
Make an Overleaf project for qualitative paper?
Finish up the ManeuverReader – done! Here’s the original, with some large number of points that is subsampled to 100 points and stored as a JSON file
Here’s a reconstructed version that uses 1/3 (33) of the steps through the file. You can see a little roughness, but with more points it’s indistinguishable from the original pulled off InfluxDB:
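Roughly what the subsample/reconstruct round trip looks like; this is a simplified stand-in for the actual ManeuverReader code, with made-up function names:

import json
import numpy as np

def subsample(points, n=100):
    """Keep n evenly spaced samples from a long maneuver trace."""
    idx = np.linspace(0, len(points) - 1, n).astype(int)
    return np.asarray(points)[idx]

def save_maneuver(points, filename):
    with open(filename, "w") as f:
        json.dump({"points": subsample(points, 100).tolist()}, f)

def reconstruct(filename, steps=33):
    """Rebuild the maneuver from a fraction of the stored points by interpolation."""
    with open(filename) as f:
        stored = np.asarray(json.load(f)["points"])
    coarse = subsample(stored, steps)
    t_coarse = np.linspace(0, 1, len(coarse))
    t_fine = np.linspace(0, 1, len(stored))
    return np.column_stack(
        [np.interp(t_fine, t_coarse, coarse[:, d]) for d in range(coarse.shape[1])])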
Working on Hierarchies, Networks, and Technology. New technologies may have the same arc as writing and printing: an initial hierarchy that produces influence networks, which counter (to a degree) the more aggressive aspects of a dominance hierarchy
Voting in Georgia today. I am pessimistic but hopeful about the outcome
I’m not sure if the meeting is today at 3:30 or Friday at 4:00?
It was today. Continuing on trying to figure out the best way to understand the behavior of the model. One of the interesting findings for today was that if the data isn’t in the dataset, then the model will start generating tokens at the meta wrapper.
Working on what’s become Hierarchies, Networks, and Technology, and I think I’m now happy with where it’s going. It makes sense to use it as the end of the chapter as well
Made a cool figure:
The Lambda box was cancelled. Sigh
11:00 Meeting with Vadim
I’m going to start on a script-reading capability for TopController. I’m thinking of a JSON or XML file that contains the following elements:
Absolute or relative move
Target (HPR or XYZ)
So a move could be a series of HPR coordinates that ‘play’. The first step is a MOVE command which includes the filename. The TopController opens the file (or fails and reports it), loads the move into memory and begins to step through it based on the timestamp. On reaching the end of the file and when the AngleController reports success/failure, the TopController reports DONE and is ready for the next MOVE
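A guess at what one of these move files could look like; the field names are placeholders, nothing is settled yet:

import json

# hypothetical shape of a move file for TopController
move = {
    "mode": "relative",              # absolute or relative move
    "target_type": "HPR",            # HPR or XYZ
    "steps": [
        {"timestamp": 0.0, "target": [0.0, 0.0, 0.0]},
        {"timestamp": 0.5, "target": [10.0, 0.0, 0.0]},
        {"timestamp": 1.0, "target": [20.0, 5.0, 0.0]},
    ],
}

with open("move_script.json", "w") as f:
    json.dump(move, f, indent=2)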
Working on the section about displaying. I found Mike, the chimp that used the kerosene cans. There’s apparently a paper as well, so I put in a request
Loading data about democracies from here (ourworldindata.org/democracy) into my db for better queries and charts. I want to look at recent changes in authoritarian systems as social technologies have changed in the last couple of decades
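The load is basically a pandas-to-SQL round trip; the filename, table name, and connection string below are placeholders, not the real setup:

import pandas as pd
from sqlalchemy import create_engine

# hypothetical file and table names; the CSV comes from ourworldindata.org/democracy
df = pd.read_csv("political-regime.csv")
engine = create_engine("mysql+pymysql://user:password@localhost/research")
df.to_sql("democracy_index", engine, if_exists="replace", index=False)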
11:00 Meeting with Vadim
More sparring with Biruh?
Need some kind of kickoff with the technical folks?
Still looking at COVID deaths. Here’s what’s going on in a sample of countries as of today
And here are the worst performing states over the duration of the epidemic. Georgia continues to be a mess. Those states at the bottom are coming up fast…
Working on importing and transcribing the debate. Since the original won’t upload, I pulled the video into Adobe Premiere and cut off the head and tail, then exported as an AVI. We’ll see how that works. Nope – it’s ENORMOUS! Trying other formats and getting progressively more annoyed. Aaaaand never got it to work. At least not today.
I did start editing the whole video down to just the displays
Need to start coding. Going to talk to Stacey about that before I start.
Got some good advice and started.
As I’m coding, it looks like I’m making a nice set of tags for a training set. I wonder how small a set could be used to train something like BERT. Here’s an article:
In this tutorial, we will take you through an example of fine-tuning BERT (as well as other transformer models) for text classification using the Huggingface Transformers library on the dataset of your choice.
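The gist of the tutorial boiled down to a minimal sketch; the model choice, the label scheme, and the tiny in-line dataset are placeholders standing in for my hand-coded tweets:

import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

texts = ["first hand-coded tweet", "second hand-coded tweet"]   # placeholder examples
labels = [0, 1]                                                  # coding tags as integer labels

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

class TagDataset(torch.utils.data.Dataset):
    """Wrap the tokenized examples and labels for the Trainer."""
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        return {**{k: v[i] for k, v in enc.items()}, "labels": torch.tensor(labels[i])}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert_tags", num_train_epochs=3),
    train_dataset=TagDataset())
trainer.train()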
Other work on interpreting transformer internals has focused mostly on what the attention is looking at. The logit lens focuses on what GPT “believes” after each step of processing, rather than how it updates that belief inside the step.
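A quick sketch of the logit-lens idea using stock GPT-2 from Huggingface: project each layer's hidden state through the final layer norm and the unembedding matrix to read out the model's running guess at the next token. The prompt is just an example:

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("Dr. Fauci is like a", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# at each layer, decode what the residual stream "believes" the next token is so far
for layer, hidden in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(hidden[:, -1]))
    print(layer, tokenizer.decode(logits.argmax(-1)))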
Sent a note to Biruh asking how the servers will handle interactive video. He said that I could keep the server at home. So he just hates workstations? Anyway, lots of back and forth. Not sure where it’s going.
Although pretrained language models can be fine-tuned to produce state-of-the-art results for a very wide range of language understanding tasks, the dynamics of this process are not well understood, especially in the low data regime. Why can we use relatively vanilla gradient descent algorithms (e.g., without strong regularization) to tune a model with hundreds of millions of parameters on datasets with only hundreds or thousands of labeled examples? In this paper, we argue that analyzing fine-tuning through the lens of intrinsic dimension provides us with empirical and theoretical intuitions to explain this remarkable phenomenon. We empirically show that common pre-trained models have a very low intrinsic dimension; in other words, there exists a low dimension reparameterization that is as effective for fine-tuning as the full parameter space. For example, by optimizing only 200 trainable parameters randomly projected back into the full space, we can tune a RoBERTa model to achieve 90% of the full parameter performance levels on MRPC. Furthermore, we empirically show that pre-training implicitly minimizes intrinsic dimension and, perhaps surprisingly, larger models tend to have lower intrinsic dimension after a fixed number of pre-training updates, at least in part explaining their extreme effectiveness. Lastly, we connect intrinsic dimensionality with low dimensional task representations and compression based generalization bounds to provide intrinsic-dimension-based generalization bounds that are independent of the full parameter count.
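The core trick, as I understand it, reduced to a toy: train only a small d-dimensional vector and map it into the full parameter space through a fixed random projection. The dimensions and the loss below are placeholders, not a real task:

import torch

full_dim, d = 10_000, 200
theta_0 = torch.randn(full_dim)               # frozen pretrained parameters (stand-in)
P = torch.randn(full_dim, d) / d ** 0.5       # fixed random projection
theta_d = torch.zeros(d, requires_grad=True)  # the only trainable parameters

def full_params():
    # reparameterized model weights: pretrained point plus a low-dimensional update
    return theta_0 + P @ theta_d

opt = torch.optim.SGD([theta_d], lr=0.1)
for step in range(100):
    loss = ((full_params() - 1.0) ** 2).mean()   # placeholder task loss
    opt.zero_grad()
    loss.backward()
    opt.step()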
Working on getting the data out of the database in a useful way, so I learned how to create a view that combines multiple rows:
create or replace view combined as
select distinct
    t_1.root_id,
    t_1.experiment_id,
    t_1.probe as 'probe',
    DATE_FORMAT(t_1.content, "%M, %Y") as 'date',
    t_2.content as 'text'
from table_output as t_1
inner join table_output as t_2
    on t_1.root_id = t_2.root_id
    and t_1.tag = 'date'
    and t_2.tag = 'trimmed';
What’s nice about this is that I can now order results by date which gives a better way of looking through the data
Imported the query output spreadsheet into NVivo and flailed with the importer a bit. I think I need to create a script that iterates over all the probes and creates a spreadsheet for each. It also needs to split off the probe from the content. Maybe remove the links as well? I’m conflicted about that because linking is an important thing. Maybe produce two files?
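Sketch of that per-probe export script; the filenames and column names are guesses at what the query output actually contains:

import pandas as pd

df = pd.read_excel("query_output.xlsx")

for probe, group in df.groupby("probe"):
    out = group.copy()
    # split the probe prefix off the generated content so NVivo sees only the continuation
    out["text"] = out["text"].str.replace(probe, "", n=1, regex=False)
    safe_name = probe.strip().replace(" ", "_")[:40]
    out.to_excel(f"probe_{safe_name}.xlsx", index=False)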
Working on coding the Biden-Trump debate in NVivo. Had to buy a transcription license. Can’t upload the video???
Working on getting extended “trimmed” data out of the model
Had an extensive set of talks with Stacey about using the twitter dataset to support a qualitative study of the trained model of COVID data. The thing that finally clicked was my description of the model as analogous to someone who has read every tweet in the data set. Such a person could more-or-less repeat actual tweets in a way that would reflect the underlying frequency, but they could also synthesize knowledge. For example, we were using probes like “Dr. Fauci is ”, which can also be found in the database. But the phrase “Dr. Fauci is like a ” does not appear anywhere, and the model has no problem with it. Two of the responses in the first test of 15 results say Dr. Fauci is like a “president”, which makes a lot of sense, actually
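The probing loop is basically text generation with sampling; a sketch with stock GPT-2 standing in for the model fine-tuned on the COVID tweets:

from transformers import pipeline

# placeholder model; the real run uses the fine-tuned COVID-tweet model
generator = pipeline("text-generation", model="gpt2")
results = generator("Dr. Fauci is like a", max_length=40,
                    num_return_sequences=15, do_sample=True)
for r in results:
    print(r["generated_text"])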
Working on getting the date info out. Everything works, but it doesn’t really make more text. The system seems to have a sense of how long a tweet should be and how tweets end.