Got a response back and template from Stripe! (marketing plan)
If that doesn’t work, I think it’s time to hire an editor
GPT Agents
Still working on collecting balanced data. I think the trick will be to find the lowest number of tweets per day across keywords, starting at the first day of collection, then work forward collecting that many tweets from each keyword, and repeat.
For unbalanced, just make the one request and go forward in time until the corpus size is reached? A sketch of both approaches is after these notes.
3:30 Meeting
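A minimal sketch of the balanced/unbalanced collection strategies above. This assumes the per-day counts are already available (e.g., from a counts endpoint) and that pull_tweets() stands in for the actual API call; the names and signatures are placeholders, not the real project code.

```python
from datetime import timedelta

def collect_balanced(keywords, counts_per_day, pull_tweets, start_day, num_days):
    """Balanced: for each day, find the smallest per-keyword count and pull
    that many tweets for *every* keyword, so each keyword contributes equally."""
    corpus = []
    for d in range(num_days):
        day = start_day + timedelta(days=d)
        # lowest number of tweets any keyword produced on this day
        floor = min(counts_per_day[kw][day] for kw in keywords)
        for kw in keywords:
            corpus.extend(pull_tweets(keyword=kw, day=day, limit=floor))
    return corpus

def collect_unbalanced(keyword, pull_tweets, start_day, target_size):
    """Unbalanced: one keyword, walking forward in time until the corpus
    reaches the target size."""
    corpus = []
    day = start_day
    while len(corpus) < target_size:
        corpus.extend(pull_tweets(keyword=keyword, day=day, limit=None))
        day += timedelta(days=1)
    return corpus
```

The balanced version trades corpus size for equal keyword representation; the unbalanced version is just a time-ordered crawl for a single keyword.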
SBIRs
9:15 standup. Need to get reacquainted with the RCSNN codebase and tool
Six years into the grass-roots movement unleashed by Donald Trump in his first presidential campaign, Angela Rubino is a case study in what that movement is becoming. Suspicious of almost everything, trusting of almost nothing, believing in almost no one other than those who share her unease, she has in many ways become a citizen of a parallel America — not just red America, but another America entirely, one she believes to be awash in domestic enemies, stolen elections, immigrant invaders, sexual predators, the machinations of a global elite and other fresh nightmares revealed by the minute on her social media scrolls. She is known online as “Burnitdown.”
The video of the FBI’s March 31 interview of Rodriguez, released to members of the media by federal prosecutors this week after an order from U.S. District Judge Amy Berman Jackson, is a remarkable look at how a radical Trump supporter came to engage in an act of domestic terrorism in hopes of keeping the former president in office for a second term. An emotional Rodriguez explains how he actually believed Trump’s dangerous lies about the 2020 election, referring to himself as “fucking piece of shit,” “so stupid,” “an asshole,” and “not smart” as he confesses his crimes.
Really good example of stampede behavior
Book
Reworking the proposal
SBIRs
Fix broken things? Not sure what will be needed today
Call about bike! Does it pass inspection? Nope. Selling it to Bobs for very little
Book
Finished up the last edits and tweaks for this version and sent out some copies to readers.
I need to look at what is needed to submit to Cambridge and Stripe. Might as well try Oxford again. I will need to update the proposal
SBIRs
More FMDS
GPT Agents
While waiting for Aaron, start getting the pulls working.
If NOT randomized, then pull the tweets in sequential order. I think this can use the query's pagination token and just stop when the end is reached.
If it is randomized, then randomly select within the span and then pull in sequential order.
Make sure that the beginning of the NEXT sequence does not re-use rollover tweets from the previous sequence. If it does, then throw a warning and use the timestamp of the last tweet in the previous pull.
Set up the schema and tables. We can start with tweet_table and add a user table later. A sketch covering both the schema and the pull logic is below.
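A minimal sketch of the items above, assuming a Twitter-API-v2-style search that returns a page of tweets plus a pagination token, and sqlite3 for storage. search_fn, the column set, and the rollover handling are placeholders/assumptions, not the actual project code.

```python
import random
import sqlite3
from datetime import timedelta

def create_tables(db_path: str = "twitter.db") -> sqlite3.Connection:
    """Starter schema: tweet_table only; a user table can be added later."""
    con = sqlite3.connect(db_path)
    con.execute("""
        CREATE TABLE IF NOT EXISTS tweet_table (
            tweet_id   TEXT PRIMARY KEY,
            keyword    TEXT,
            created_at TEXT,
            text       TEXT
        )""")
    con.commit()
    return con

def store_tweets(con, keyword, tweets):
    """Insert a pull into tweet_table, ignoring duplicates."""
    con.executemany(
        "INSERT OR IGNORE INTO tweet_table VALUES (?, ?, ?, ?)",
        [(t["id"], keyword, t["created_at"], t["text"]) for t in tweets])
    con.commit()

def pull_span(search_fn, query, start, end, randomized=False,
              span=timedelta(days=7), last_pull_end=None):
    """Pull one span of tweets: start at `start` (sequential) or at a random
    point inside [start, end] (randomized), then walk forward page by page
    using the pagination token until the end of results is reached."""
    if randomized:
        # assumes end - start is larger than the pull span
        offset = random.random() * (end - start - span).total_seconds()
        start = start + timedelta(seconds=offset)
        end = start + span
    tweets, next_token = [], None
    while True:
        page, next_token = search_fn(query=query, start_time=start,
                                     end_time=end, next_token=next_token)
        tweets.extend(page)
        if next_token is None:      # end of results for this span
            break
    # Guard against re-using rollover tweets from the previous pull.
    # created_at is assumed ISO-8601, so string comparison is chronological.
    if last_pull_end is not None and tweets and tweets[0]["created_at"] <= last_pull_end:
        print("WARNING: overlap with previous pull; clamping to its last timestamp")
        tweets = [t for t in tweets if t["created_at"] > last_pull_end]
    return tweets
```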
Meta-learning hyperparameter optimization (HPO) algorithms from prior experiments is a promising approach to improve optimization efficiency over objective functions from a similar distribution. However, existing methods are restricted to learning from experiments sharing the same set of hyperparameters. In this paper, we introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction when trained on vast tuning data from the wild. Our extensive experiments demonstrate that the OptFormer can imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates. Compared to a Gaussian Process, the OptFormer also learns a robust prior distribution for hyperparameter response functions, and can thereby provide more accurate and better calibrated predictions. This work paves the path to future extensions for training a Transformer-based model as a general HPO optimizer.
Work on getting keyword tweets in fine-grained samples (e.g., 5 minutes, four times a day, at semi-random intervals); a scheduling sketch is below this list. Mostly done, though there are all kinds of odd behaviors that involve making too many requests. Working through the options.
Compare proportions of samples to full counts for the same time periods
Train a model!
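A quick sketch of the sampling schedule mentioned in the first item: split the day into equal slots and place one short window in each slot at a semi-random offset, so the samples are spread across the day but not at predictable times. The parameters and names are assumptions for illustration.

```python
import random
from datetime import datetime, timedelta

def daily_sample_windows(day, samples_per_day=4, window_min=5):
    """Return (start, end) pairs for short sample windows, one per slot,
    each starting at a random point inside its slot."""
    slot_sec = 24 * 3600 / samples_per_day
    windows = []
    for i in range(samples_per_day):
        start_off = i * slot_sec + random.uniform(0, slot_sec - window_min * 60)
        start = day + timedelta(seconds=start_off)
        windows.append((start, start + timedelta(minutes=window_min)))
    return windows

# Example: four 5-minute windows for one day
for s, e in daily_sample_windows(datetime(2022, 8, 1)):
    print(s.isoformat(), "->", e.isoformat())
```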
SBIRs
Sprint review – done
Write up stories. Probably go back to RCSNN – done
This paper surveys five human societal types – mobile foragers, horticulturalists, pre-state agriculturalists, state-based agriculturalists and liberal democracies – from the perspective of three core social problems faced by interacting individuals: coordination problems, social dilemmas and contest problems. We characterise the occurrence of these problems in the different societal types and enquire into the main force keeping societies together given the prevalence of these. To address this, we consider the social problems in light of the theory of repeated games, and delineate the role of intertemporal incentives in sustaining cooperative behaviour through the reciprocity principle. We analyse the population, economic and political structural features of the five societal types, and show that intertemporal incentives have been adapted to the changes in scope and scale of the core social problems as societies have grown in size. In all societies, reciprocity mechanisms appear to solve the social problems by enabling lifetime direct benefits to individuals for cooperation. Our analysis leads us to predict that as societies increase in complexity, they need more of the following four features to enable the scalability and adaptability of the reciprocity principle: nested grouping, decentralised enforcement and local information, centralised enforcement and coercive power, and formal rules.
There is something really deep in that kind of thinking. It would be a micro stampede for sure. Could the AI herd the person into a harmless area? Would that be ethical?
In social networks, users often engage with like-minded peers. This selective exposure to opinions might result in echo chambers, i.e., political fragmentation and social polarization of user interactions. When echo chambers form, opinions have a bimodal distribution with two peaks on opposite sides. In certain issues, where either extreme positions contain a degree of misinformation, neutral consensus is preferable for promoting discourse. In this paper, we use an opinion dynamics model that naturally forms echo chambers in order to find a feedback mechanism that bridges these communities and leads to a neutral consensus. We introduce the random dynamical nudge (RDN), which presents each agent with input from a random selection of other agents’ opinions and does not require surveillance of every person’s opinions. Our computational results in two different models suggest that the RDN leads to a unimodal distribution of opinions centered around the neutral consensus. Furthermore, the RDN is effective both for preventing the formation of echo chambers and also for depolarizing existing echo chambers. Due to the simple and robust nature of the RDN, social media networks might be able to implement a version of this self-feedback mechanism, when appropriate, to prevent the segregation of online communities on complex social issues.
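The RDN idea is simple enough to sketch: at each step, an agent nudges its opinion toward the mean of a random selection of other agents' opinions, with no need to track everyone. This is a toy version of that nudge term alone, not the paper's actual models; the update rule, sample size, and parameters are my assumptions.

```python
import numpy as np

def rdn_step(opinions, nudge_strength=0.1, sample_size=10, rng=None):
    """One random-dynamical-nudge update: each agent moves a small step toward
    the mean opinion of a random selection of other agents."""
    rng = rng or np.random.default_rng()
    n = len(opinions)
    new = opinions.copy()
    for i in range(n):
        others = rng.choice(np.delete(np.arange(n), i), size=sample_size, replace=False)
        new[i] += nudge_strength * (opinions[others].mean() - opinions[i])
    return new

# Toy demo: two polarized clusters drift toward a single peak near neutral
rng = np.random.default_rng(0)
ops = np.concatenate([rng.normal(-1, 0.1, 200), rng.normal(1, 0.1, 200)])
for _ in range(100):
    ops = rdn_step(ops, rng=rng)
print(f"mean={ops.mean():.3f}, std={ops.std():.3f}")  # expect mean near 0, spread shrinking
```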
Chat with Mike today? Good discussion. Most important is to put a summary at the end of each chapter
SBIRs
Multiple meetings on getting the server up and running
Getting content from Rukan and Loren
Some discussion with Aaron on the JSC. Need to go over the COPERNICUS paper tomorrow morning
GPT Agents
Decided to shelve keywords for a while and get back to pulling tweets. Need to get that part of the API working (keyword list, location, start/stop times). I think it should be possible to get a valid sample by just limiting the duration of the sample, so something like 24 5-minute samples per day? Need to see if that is possible.
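A rough sketch of that API piece, assuming the v2 full-archive search endpoint and its query operators (keyword list OR'd together, a country restriction, start/stop times). The endpoint, operators, and max_results bounds are to the best of my knowledge but should be checked against the current docs; the keywords and token are placeholders.

```python
import requests
from datetime import datetime, timedelta

SEARCH_URL = "https://api.twitter.com/2/tweets/search/all"  # v2 full-archive search

def build_query(keywords, place_country=None, lang="en"):
    """Assemble a search query: OR the keyword list, optionally restrict by
    country, drop retweets."""
    q = "(" + " OR ".join(keywords) + ")"
    if place_country:
        q += f" place_country:{place_country}"
    return f"{q} lang:{lang} -is:retweet"

def pull_window(bearer_token, query, start, end, max_results=100):
    """Pull one short (e.g., 5-minute) window of tweets."""
    headers = {"Authorization": f"Bearer {bearer_token}"}
    params = {
        "query": query,
        "start_time": start.isoformat() + "Z",
        "end_time": end.isoformat() + "Z",
        "max_results": max_results,
    }
    resp = requests.get(SEARCH_URL, headers=headers, params=params)
    resp.raise_for_status()
    return resp.json().get("data", [])

# Example: one of the 24 five-minute samples for a day
start = datetime(2022, 8, 1, 13, 0)
tweets = pull_window("BEARER_TOKEN", build_query(["keyword_1", "keyword_2"]),
                     start, start + timedelta(minutes=5))
```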