There have been countless fact-checking and other efforts designed to rid social media of misinformation. They’re not going to work until the party and the major ideological amplifiers start explicitly renouncing these points of view. The signs are not good – while Fox News was willing to declare that Joe Biden had won the election, it is still providing platforms for people denying the facts of the victory. And a majority of Republican representatives voted to overturn a democratic election. Until there are consequences for perpetuating those falsehoods, don’t count on changes to the media to solve this problem.
Twitter’s January 8 decision to permanently suspend Trump’s account closed a rare window into a president’s mindset and policymaking that we are unlikely to ever see again. For the past four years, I documented the sources of the president’s grievances and obsessions, matching Trump’s tweets to the television segments he was watching. The president’s TV addiction inspired at least 1,375 tweets dating back to September 1, 2018. The vast majority came in response to his favorite programs on the pro-Trump Fox News and Fox Business networks.
But if there ever was a coda for the Trump years, this has got to be it:
In this article, we will focus on the hidden state as it evolves from one model layer to the next. By looking at the hidden states produced by every transformer decoder block, we aim to glean information about how a language model arrived at a specific output token. This method is explored by Voita et al. Nostalgebraist presents compelling visual treatments showcasing the evolution of token rankings, logit scores, and softmax probabilities for the evolving hidden state through the various layers of the model.
The 2020 U.S. Presidential Election saw an unprecedented number of false claims alleging election fraud and arguing that Donald Trump was the actual winner of the election. Here we report a survey exploring belief in these false claims that was conducted three days after Biden was declared the winner. We find that a majority of Trump voters in our sample – particularly those who were more politically knowledgeable and more closely following election news – falsely believed that election fraud was widespread, and that Trump won the election. Thus, false beliefs about the election are not merely a fringe phenomenon. We also find that Trump conceding or losing his legal challenges would likely lead a majority of Trump voters to accept Biden’s victory as legitimate, although 40% said they would continue to view Biden as illegitimate regardless. Finally, we found that levels of partisan spite and endorsement of violence were equivalent between Trump and Biden voters.
Meeting with Aaron today to discuss next steps and how to combine with his project?
Still need to be able to access the VPN – more paperwork. Wheee!
Continue with the new TopController
Reading in and stepping through the script. Now I need to slew through the points and return a done when the l2 dist is within a threshold
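A sketch of that slew-and-check loop (all the names, the rate, and the threshold here are hypothetical; the real controller steps through scripted points rather than one fixed target):

```python
import numpy as np

DONE_THRESHOLD = 0.01  # hypothetical l2 distance at which we report done

def slew_step(current, target, rate=0.1):
    """Move current toward target by a fixed fraction; return (new, done)."""
    current = np.asarray(current, dtype=float)
    target = np.asarray(target, dtype=float)
    new = current + rate * (target - current)
    done = np.linalg.norm(target - new) < DONE_THRESHOLD
    return new, done

pos = np.zeros(3)
target = np.array([1.0, 2.0, 3.0])
for _ in range(200):
    pos, done = slew_step(pos, target)
    if done:
        break
```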
If you use the Hugging Face Trainer, as of transformers v4.2.0 you have experimental support for DeepSpeed’s and FairScale’s ZeRO features. The new --sharded_ddp and --deepspeed command line Trainer arguments provide FairScale and DeepSpeed integration respectively. Here is the full documentation.
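As a rough sketch of the command-line usage (the script name, paths, and arguments here are illustrative, not exact; see the linked documentation for the real invocations):

```shell
# DeepSpeed integration: launch a Trainer example script with a ZeRO config file
deepspeed run_glue.py --deepspeed ds_config.json \
    --model_name_or_path bert-base-cased --do_train --output_dir ./out

# FairScale integration: enable sharded DDP under torch.distributed.launch
python -m torch.distributed.launch --nproc_per_node=2 run_glue.py \
    --sharded_ddp --model_name_or_path bert-base-cased --do_train --output_dir ./out
```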
The thing is, contemporary Hasidic sects are designed for authoritarian control. Each Hasidic sect, from Bobov to Viznitz to Satmar to Skver, is run by what is called a “grand rabbi.” These rabbis are demanding patriarchs. They expect women to wear particular shades of stockings, men to dress identically, and congregants to receive their blessings before making any personal life decisions, and they believe in a world where Hasids are the only Jews worth mentioning. Most importantly, Hasidic grand rabbis center their congregants’ worlds around themselves. They are populist leaders of miniature nations. Congregants have paintings and photographs of grand rabbis around their homes, sacrifice family time for tisches (Friday night gatherings) with their leaders, and would do anything to protect the power of their particular grand rabbi.
Working on Making better Human-Computer Interfaces for Populations. Finished my first pass at The signature of dangerous misinformation section
2:00 Meeting with Michelle
Decided to build out a sandbox ScriptReaderScratch RCS controller to work out the file loading and playback. Rather than AngleController, I’ll have a method that interpolates to the newest target. That should be enough to let me work out the details without breaking anything
Speaking of Twitter, this is a good thread on how to write and contest ML conference papers
ParametricUMAP allows users to train a neural network to optimize the embedding, resulting in a direct neural net based mapping from source data to embedding. This allows for extremely fast inference (embedding of new data points), orders of magnitude faster than standard UMAP. It also provides facilities for an inverse transform, mapping from the embedding space to the original data space, that is both far faster and more robust than that provided by standard UMAP. Since network architectures can be user provided, this also allows for CNN and RNN based UMAP embeddings for images or sequences.
Continue Making better Human-Computer Interfaces for Populations
Add in mapping to script reader, verify by adding legends
Status meeting. Maybe produce a spreadsheet to walk through that shows a time series of inputs and a calculation for each set? I think the inputs can be a column of six (for now?) variables as a set of rows, with the prediction calculations shown below that. Make a DataFrame and see what that looks like.
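A quick sketch of that layout in pandas (variable names and the "calculation" are placeholders; the point is just inputs-as-rows with predictions beneath):

```python
import pandas as pd

# six input variables per timestep, one column per timestep in the series
inputs = pd.DataFrame(
    {"t0": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6],
     "t1": [0.2, 0.3, 0.4, 0.5, 0.6, 0.7]},
    index=[f"var_{i}" for i in range(6)],
)

# one row per prediction calculation, shown below the inputs (sum is a stand-in)
predictions = pd.DataFrame(
    {"t0": [inputs["t0"].sum()], "t1": [inputs["t1"].sum()]},
    index=["pred_sum"],
)

sheet = pd.concat([inputs, predictions])
print(sheet)
```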
The playbook for the Maga invasion of the nation’s Capitol building on Wednesday has been developing for years in plain sight, at far-right rallies in cities like Charlottesville, Berkeley and Portland, and then, in the past year, at state capitols across the country, where heavily armed white protesters have forced their way into legislative chambers to accuse politicians of tyranny and treason.
Here’s what seems to have happened with the Parler hack. The data may be available for research
Nice paper on training a model to generate synthetic data for better classification training: Reducing AI bias with Synthetic data. It uses Gretel’s gretel-synthetics library. It’s free to use during the beta period; I’m not sure about after, or what the pricing will be. They are hiring, with about seven openings at the moment, so they are burning through someone’s money.
Finish abstract submission – done
Make an Overleaf project for qualitative paper?
Finish up the ManeuverReader – done! Here’s the original, with some large number of points that is subsampled to 100 points and stored as a json file
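A sketch of the even-subsampling step (names are hypothetical; the real points come off InfluxDB):

```python
import json
import numpy as np

def subsample_to_json(points, n=100, filename="maneuver.json"):
    """Evenly subsample an (N, 3) point series down to n points and save as JSON."""
    points = np.asarray(points)
    # n evenly spaced indices, always including the first and last point
    idx = np.linspace(0, len(points) - 1, n).round().astype(int)
    sampled = points[idx]
    with open(filename, "w") as f:
        json.dump(sampled.tolist(), f)
    return sampled

raw = np.cumsum(np.random.normal(size=(5000, 3)), axis=0)  # stand-in trajectory
sampled = subsample_to_json(raw)

with open("maneuver.json") as f:
    loaded = json.load(f)
```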
Here’s a reconstructed version that uses 1/3 (33) steps through the file. You can see a little roughness, but with more points it’s indistinguishable from the original pulled off influxDB:
Working on Hierarchies, Networks, and Technology. New technologies may have the same arc as writing and printing, which is initial hierarchy that produces influence networks that counter (to a degree), the more aggressive aspects of a dominance hierarchy
Voting in Georgia today. I am pessimistic but hopeful about the outcome
I’m not sure if the meeting is today at 3:30 or Friday at 4:00?
It was today. Continuing on trying to figure out the best way to understand the behavior of the model. One of the interesting findings for today was that if the data isn’t in the dataset, then the model will start generating tokens at the meta wrapper.
Working on what’s become Hierarchies, Networks, and Technology, and I think I’m now happy with where it’s going. It makes sense to use as the end of the chapter as well
Made a cool figure:
The Lambda box was cancelled. Sigh
11:00 Meeting with Vadim
I’m going to start on a script-reading capability for TopController. I’m thinking a JSON or XML file that contains the following elements:
Absolute or relative move
Target (HPR or XYZ)
So a move could be a series of HPR coordinates that ‘play’. The first step is a MOVE command which includes the filename. The TopController opens the file (or fails and reports it), loads the move into memory and begins to step through it based on the timestamp. On reaching the end of the file and when the AngleController reports success/failure, the TopController reports DONE and is ready for the next MOVE
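A hypothetical sketch of what such a script file and its loader might look like (the field names and format are my guesses, not a spec):

```python
import json

# hypothetical script format: a list of timestamped absolute/relative moves
script = {
    "moves": [
        {"type": "absolute", "frame": "HPR", "target": [0.0, 0.0, 0.0], "t": 0.0},
        {"type": "relative", "frame": "HPR", "target": [5.0, 0.0, 0.0], "t": 1.0},
        {"type": "relative", "frame": "HPR", "target": [5.0, 0.0, 0.0], "t": 2.0},
    ]
}

with open("move_script.json", "w") as f:
    json.dump(script, f)

def load_moves(filename):
    """Open the script file (or fail and report it) and return moves by timestamp."""
    try:
        with open(filename) as f:
            data = json.load(f)
    except (OSError, json.JSONDecodeError) as e:
        print(f"MOVE failed: {e}")
        return []
    return sorted(data["moves"], key=lambda m: m["t"])

moves = load_moves("move_script.json")
```

Stepping through `moves` on the timestamps, then reporting DONE when the list is exhausted and the AngleController reports success/failure, matches the flow described above.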
Working on the section about displaying. I found Mike, the chimp that used the Kerosene cans. There’s apparently a paper as well, so I put in a request
Loading data about democracies from here (ourworldindata.org/democracy) into my db for better queries and charts. I want to look at recent changes in authoritarian systems as social technologies have changed in the last couple of decades
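A sketch of the load step (sqlite stands in for whatever db is actually in use, and the rows and columns are invented; in practice the DataFrame would come from pd.read_csv on the downloaded export):

```python
import sqlite3

import pandas as pd

# invented rows standing in for the Our World in Data democracy export
df = pd.DataFrame({
    "country": ["Hungary", "Poland", "Norway"],
    "year": [1998, 2019, 2019],
    "regime_score": [8, 6, 10],
})

con = sqlite3.connect("research.db")
df.to_sql("democracy", con, if_exists="replace", index=False)

# now queries and charts are easy, e.g. restricting to recent changes
recent = pd.read_sql("select * from democracy where year >= 2000", con)
```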
11:00 Meeting with Vadim
More sparring with Biruh?
Need some kind of kickoff with the technical folks?
Still looking at COVID deaths. Here’s what’s going on in a sample of countries as of today
And here are the worst performing states over the duration of the epidemic. Georgia continues to be a mess. Those states at the bottom are coming up fast…
Working on importing and transcribing the debate. Since the original won’t upload, I pulled the video into Adobe Premiere and cut off the head and tail, then exported as an AVI. We’ll see how that works. Nope – it’s ENORMOUS! Trying other formats and getting progressively more annoyed. Aaaaand never got it to work. At least not today.
I did start editing the whole video down to just the displays
Need to start coding. Going to talk to Stacey about that before I start.
Got some good advice and started.
As I’m coding, it looks like I’m making a nice set of tags for a training set. I wonder how small a set could be used to train something like BERT. Here’s an article:
In this tutorial, we will take you through an example of fine tuning BERT (as well as other transformer models) for text classification using Huggingface Transformers library on the dataset of your choice.
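A stripped-down sketch of that fine-tuning loop (tiny invented dataset, DistilBERT instead of full BERT to keep it light, and only a few optimizer steps; nothing here reflects the tutorial's actual dataset or settings):

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# toy labeled examples standing in for a real tagged training set
texts = ["this claim is false", "this claim checks out"] * 8
labels = [0, 1] * 8
enc = tok(texts, truncation=True, padding=True, return_tensors="pt")

class TinyDataset(torch.utils.data.Dataset):
    def __init__(self, enc, labels):
        self.enc, self.labels = enc, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert_out", max_steps=4,
                           per_device_train_batch_size=4, report_to=[]),
    train_dataset=TinyDataset(enc, labels),
)
trainer.train()
preds = trainer.predict(TinyDataset(enc, labels))
```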
Other work on interpreting transformer internals has focused mostly on what the attention is looking at. The logit lens focuses on what GPT “believes” after each step of processing, rather than how it updates that belief inside the step.
Sent a note to Biruh asking how the servers will handle interactive video. He said that I could keep the server at home. So he just hates workstations? Anyway, lots of back and forth. Not sure where it’s going.
Although pretrained language models can be fine-tuned to produce state-of-the-art results for a very wide range of language understanding tasks, the dynamics of this process are not well understood, especially in the low data regime. Why can we use relatively vanilla gradient descent algorithms (e.g., without strong regularization) to tune a model with hundreds of millions of parameters on datasets with only hundreds or thousands of labeled examples? In this paper, we argue that analyzing fine-tuning through the lens of intrinsic dimension provides us with empirical and theoretical intuitions to explain this remarkable phenomenon. We empirically show that common pre-trained models have a very low intrinsic dimension; in other words, there exists a low dimension reparameterization that is as effective for fine-tuning as the full parameter space. For example, by optimizing only 200 trainable parameters randomly projected back into the full space, we can tune a RoBERTa model to achieve 90% of the full parameter performance levels on MRPC. Furthermore, we empirically show that pre-training implicitly minimizes intrinsic dimension and, perhaps surprisingly, larger models tend to have lower intrinsic dimension after a fixed number of pre-training updates, at least in part explaining their extreme effectiveness. Lastly, we connect intrinsic dimensionality with low dimensional task representations and compression based generalization bounds to provide intrinsic-dimension-based generalization bounds that are independent of the full parameter count.
Working on getting the data out of the database in a useful way, so I learned how to create a view that combines multiple rows:
create or replace view combined as
select distinct
    t_1.root_id,
    t_1.experiment_id,
    t_1.probe as `probe`,
    DATE_FORMAT(t_1.content, '%M, %Y') as `date`,
    t_2.content as `text`
from table_output as t_1
inner join table_output as t_2
    on t_1.root_id = t_2.root_id
    and t_1.tag = 'date'
    and t_2.tag = 'trimmed';
What’s nice about this is that I can now order results by date which gives a better way of looking through the data
Imported the query output spreadsheet into NVivo and flailed with the importer a bit. I think I need to create a script that iterates over all the probes and creates a spreadsheet for each. It also needs to split off the probe from the content. Maybe remove the links as well? I’m conflicted about that because linking is an important thing. Maybe produce two files?
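A sketch of that per-probe export script (column names and the probe-prefix convention are my guesses from the description; one CSV per probe, with the probe split off the content):

```python
import pandas as pd

# stand-in rows shaped like the query output: probe text prefixed onto content
df = pd.DataFrame({
    "probe": ["history", "history", "covid"],
    "text": ["history: first answer", "history: second answer",
             "covid: third answer"],
})

for probe, group in df.groupby("probe"):
    out = group.copy()
    # split the probe prefix off the content
    out["content"] = out["text"].str.split(": ", n=1).str[1]
    out.drop(columns="text").to_csv(f"{probe}.csv", index=False)

check = pd.read_csv("history.csv")
```

A second pass could write the link-stripped and link-preserving versions as two files, which sidesteps the conflict about removing links.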
Working on coding the Biden-Trump debate in NVivo. Had to buy a transcription license. Can’t upload the video???