Done with vacation. Which was very wet. The long weekend was a nice consolation prize.
Hmmm. Subversion repo can’t be reached. Give it an hour and try again at 8:00, otherwise open a ticket. Turned out to be the VPN. Switching locations fixed it.
- “using data from the Centers for Disease Control and Prevention (CDC) and a synthetic control approach, we show that … following the Sturgis event, counties that contributed the highest inflows of rally attendees experienced a 7.0 to 12.5 percent increase in COVID-19 cases relative to counties that did not contribute inflows. … We conclude that the Sturgis Motorcycle Rally generated public health costs of approximately $12.2 billion.”

- In this paper, we show that the performance of a learnt generative model is closely related to the model’s ability to accurately represent the inferred **latent data distribution**, i.e. its topology and structural properties. We propose LaDDer to achieve accurate modelling of the latent data distribution in a variational autoencoder framework and to facilitate better representation learning. The central idea of LaDDer is a meta-embedding concept, which uses multiple VAE models to learn an embedding of the embeddings, forming a ladder of encodings. We use a non-parametric mixture as the hyper prior for the innermost VAE and learn all the parameters in a unified variational framework. From extensive experiments, we show that our LaDDer model is able to accurately estimate complex latent distributions and results in improvement in the representation quality. We also propose a novel latent space interpolation method that utilises the derived data distribution.
- Although a great deal of attention has been paid to how conspiracy theories circulate on social media, and the deleterious effect that they, and their factual counterpart conspiracies, have on political institutions, there has been little computational work done on describing their narrative structures. Predicating our work on narrative theory, we present an automated pipeline for the discovery and description of the generative narrative frameworks of conspiracy theories that circulate on social media, and actual conspiracies reported in the news media. We base this work on two separate comprehensive repositories of blog posts and news articles describing the well-known conspiracy theory Pizzagate from 2016, and the New Jersey political conspiracy Bridgegate from 2013. Inspired by the qualitative narrative theory of Greimas, we formulate a graphical generative machine learning model where nodes represent actors/actants, and multi-edges and self-loops among nodes capture context-specific relationships. Posts and news items are viewed as samples of subgraphs of the hidden narrative framework network. The problem of reconstructing the underlying narrative structure is then posed as a latent model estimation problem. To derive the narrative frameworks in our target corpora, we automatically extract and aggregate the actants (people, places, objects) and their relationships from the posts and articles. We capture context specific actants and interactant relationships by developing a system of supernodes and subnodes. We use these to construct an actant-relationship network, which constitutes the underlying generative narrative framework for each of the corpora. We show how the Pizzagate framework relies on the conspiracy theorists’ interpretation of “hidden knowledge” to link otherwise unlinked domains of human interaction, and hypothesize that this multi-domain focus is an important feature of conspiracy theories. 
We contrast this to the single domain focus of an actual conspiracy. While Pizzagate relies on the alignment of multiple domains, Bridgegate remains firmly rooted in the single domain of New Jersey politics. We hypothesize that the narrative framework of a conspiracy theory might stabilize quickly in contrast to the narrative framework of an actual conspiracy, which might develop more slowly as revelations come to light. By highlighting the structural differences between the two narrative frameworks, our approach could be used by private and public analysts to help distinguish between conspiracy theories and conspiracies.
#COVID
- Translate the annotated tweets and use them as a base for finding similar ones in the DB
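A minimal stdlib sketch of the matching step, assuming the translated annotated tweets are plain strings (the translation step and the DB query itself are out of scope here; a production version would likely use embeddings, but the retrieval shape is the same):

```python
# Hypothetical sketch: rank candidate tweets from the DB by similarity to
# a translated, annotated seed tweet using stdlib difflib.
from difflib import SequenceMatcher

def most_similar(seed, candidates, top_n=3):
    # Score each candidate against the seed, case-insensitively,
    # and return the top_n closest matches.
    scored = [(SequenceMatcher(None, seed.lower(), c.lower()).ratio(), c)
              for c in candidates]
    return [c for score, c in sorted(scored, reverse=True)[:top_n]]

hits = most_similar("covid cases rising in county",
                    ["COVID cases rising in the county",
                     "great weather today",
                     "cases of covid are rising"])
```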
GPT-2 Agents
- Working on the schemas now
- It looks like it may be possible to generate a full table representation using two packages. The first is genson, which generates the schema. That schema is then used by jsonschema2db, which should produce the tables (this only works for Postgres, so I’ll have to install that). The last step is to insert the data, which is also handled by jsonschema2db.
- Schema generation worked like a charm
- Got Postgres hooked up to IntelliJ, and working in Python too. It looks like jsonschema2db requires psycopg2-2.7.2, so I need to be careful
- Created a table, put some data in it, and got the data out with Python. Basically, I copied the MySqlInterface over to PostgresInterface, made a few small changes and everything appears to be working.

- Created the table, but it’s pretty bad. I think I need to write a recursive dict reader that either creates tables or inserts linked data into a table
- All tables are created from objects, and have a row_id that is the key value
- disregard the “$schema” field
- Any item that is not an object is a field in the table
- Any item that is an object gets its own table, with an additional parent_row_id that points to the immediate parent’s row_id
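The rules above can be sketched as a recursive reader, stdlib only. Table and field names here are hypothetical; each dict becomes a row keyed by row_id, scalars become fields, "$schema" is skipped, and nested dicts become rows in a child table with a parent_row_id back-pointer:

```python
# Minimal sketch of the recursive dict reader described above.
from itertools import count

def flatten(doc, table="root", parent_row_id=None, tables=None, ids=None):
    tables = {} if tables is None else tables
    ids = ids if ids is not None else count(1)   # shared row_id counter
    row = {"row_id": next(ids)}
    if parent_row_id is not None:
        row["parent_row_id"] = parent_row_id     # link back to parent row
    for key, value in doc.items():
        if key == "$schema":                     # disregard the "$schema" field
            continue
        if isinstance(value, dict):              # nested object -> child table
            flatten(value, table=key, parent_row_id=row["row_id"],
                    tables=tables, ids=ids)
        else:                                    # anything else -> plain field
            row[key] = value
    tables.setdefault(table, []).append(row)
    return tables

tables = flatten({"$schema": "x", "game": "1",
                  "header": {"White": "Carlsen"}})
```

In a real insert pass the same recursion would emit INSERT statements instead of accumulating dicts, but the parent/child bookkeeping is identical.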
GOES
- 2:00 meeting with Vadim