Monthly Archives: July 2022

Phil 7.28.2022

Relapsed a bit. Taking it easy today

When Maps Become the World

  • When Maps Become the World shows us how the scientific theories, models, and concepts we use to intervene in the world function as maps, and explores the consequences of this, both good and bad. We increasingly understand the world around us in terms of models, to the extent that we often take the models for reality. Winther explains how in time, our historical representations in science, in cartography, and in our stories about ourselves replace individual memories and become dominant social narratives—they become reality, and they can remake the world.


  • Working on the UCP proposal – done!
  • Finding the next possible publisher

GPT Agents

  • Top2Vec is an algorithm for topic modeling and semantic search. It automatically detects topics present in text and generates jointly embedded topic, document and word vectors. Once you train the Top2Vec model you can:
    • Get number of detected topics.
    • Get topics.
    • Get topic sizes.
    • Get hierarchical topics.
    • Search topics by keywords.
    • Search documents by topic.
    • Search documents by keywords.
    • Find similar words.
    • Find similar documents.
    • Expose model with RESTful-Top2Vec
  • See the paper for more details on how it works.
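The key trick behind that list of capabilities is that Top2Vec embeds topics, documents, and words in one shared vector space, so every "search X by Y" operation is just a nearest-neighbor lookup. A minimal stdlib sketch of that idea (toy hand-made vectors, not the real learned embeddings or the actual Top2Vec API):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Toy joint embedding space: words, topics, and documents all live in the
# same 3-d space (Top2Vec learns these jointly, e.g. via doc2vec).
words  = {"bike": (0.9, 0.1, 0.0), "fever": (0.0, 0.9, 0.1), "gpu": (0.1, 0.0, 0.9)}
topics = {"cycling": (1.0, 0.0, 0.0), "illness": (0.0, 1.0, 0.0), "compute": (0.0, 0.0, 1.0)}
docs   = {"ride log": (0.8, 0.2, 0.0), "paxlovid note": (0.1, 0.9, 0.0), "raytune setup": (0.0, 0.1, 0.9)}

def nearest(query_vec, candidates):
    """Return candidate names sorted by cosine similarity to the query."""
    return sorted(candidates, key=lambda k: cosine(query_vec, candidates[k]), reverse=True)

# "Search topics by keywords": embed the keyword, rank topic vectors.
print(nearest(words["bike"], topics))    # 'cycling' ranks first
# "Search documents by topic": same search, different candidate set.
print(nearest(topics["compute"], docs))  # 'raytune setup' ranks first
```

Because everything shares one space, "find similar words", "search documents by keywords", and so on are all the same `nearest` call with different query and candidate sets.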

Phil 7.27.2022

Feeling MUCH better. I took my first dose of Paxlovid around 5:00pm yesterday with a fever of over 100°F. Around 3:00am this morning the fever broke, and I feel pretty much back to normal apart from a lingering cough. Might even go for an easy bike ride today!


  • Figure out where I am with the deliverables and start going through the matrix to send out packages


  • Finish training
  • Meet with Aaron to get caught up
    • JSC – paper was good. More progress to come?
    • MDA – Lambda box is running – RayTune. Manage running and distribution across multiple GPUs
      • ATO? We need to get the files now?
      • Chat with James? He does want a presentation. Touch base with Clay for schedule
      • Is the DB running on the Lambda box?
      • ONE GUI is running
      • Local jobs run remotely using the IDE
      • SEG – a little behind on trajectory and FOM data. Still need to look at distinctly different holdout data
        • Need to introduce regularization
        • Attention with deep networks?
        • Encoder-decoder?
    • RCSNN – Aaron is getting MiniAlphaStar working.
      • Starcraft II AI community is enormous.
    • Tech conference – register today (done). Check email. Hotel?
      • Write abstract for teleoperation?
    • MDBE – Working on new scenario for land, air and sea
    • Reference implementation for SimAccel. COTS is not set up for batch. Steve is writing his own version

Phil 7.26.2022

Vacation was nice, but the end kinda sucked. My flight was delayed 14 hours and I appear to have picked up a small viral parting gift from waiting in all those lines trying to get a new flight. So far I’ve got a cough, a very slight headache, and a low fever.

  • Contact doctor and see if I should get a prescription? Got Paxlovid


  • 9:15 standup
  • See what’s been going on
  • Talk to Aaron about using SimAccel to model the real world enough to offset communication lag in remote systems – done, though I had to cut it short
  • It is a perfect day to do training
    • Acceptable Use Policy – done
    • Understanding and Protecting PII – done
    • 2022 Kevin Mitnick Security Awareness Training – done
    • Ethics – done

GPT Agents

  • Cancel for today – done

Phil 7.23.2022

Training Generalist Agents with Multi-Game Decision Transformers

  • Current deep reinforcement learning (RL) methods can train specialist artificial agents that excel at decision-making on various individual tasks in specific environments, such as Go or StarCraft. However, little progress has been made to extend these results to generalist agents that would not only be capable of performing many different tasks, but also of doing so across a variety of environments with potentially distinct embodiments.
  • Looking across recent progress in the fields of natural language processing, vision, and generative models (such as PaLM, Imagen, and Flamingo), we see that breakthroughs in making general-purpose models are often achieved by scaling up Transformer-based models and training them on large and semantically diverse datasets. It is natural to wonder, can a similar strategy be used in building generalist agents for sequential decision making? Can such models also enable fast adaptation to new tasks, similar to PaLM and Flamingo?

Phil 7.21.2022

Clearly I’m getting ready to get back to the “real” world:

The Big Truth

  • University of Chicago professor Robert Pape has spent the past year and a half examining the January 6 insurrectionists — and sounding the alarm about the future of democracy. Is America listening?

A multilevel social neuroscience perspective on radicalization and terrorism

  • Why are some people capable of sympathizing with and/or committing acts of political violence, such as attacks aimed at innocent targets? Attempts to construct terrorist profiles based on individual and situational factors, such as clinical, psychological, ethnic, and socio-demographic variables, have largely failed. Although individual and situational factors must be at work, it is clear that they alone cannot explain how certain individuals are radicalized. In this paper, we propose that a comprehensive understanding of radicalization and of how it may lead to political violence requires the integration of information across multiple levels of analysis and interdisciplinary perspectives from evolutionary theory, social, personality and cognitive psychology, political science and neuroscience. Characterization of the structural-functional relationships between neural mechanisms and the cognitive and affective psychological processes that underpin group dynamics, interpersonal processes, values and narratives, as well as micro-sociological processes may reveal latent drivers of radicalization and explain why some people turn to extreme political violence. These drivers may not be observable within a single individual level of scientific enquiry. The integrative, multilevel approach that characterizes social neuroscience has the potential to provide theoretical and empirical clarity regarding the antecedents of radicalization and support for extreme violence.

Phil 7.20.2022

Still on vacation, but found this, which is nicely written and framed well

The year of garbage internet trends

  • Sea shanties are the framework with which I view a great many things that happened in 2021, because so many of them were entirely meaningless fads: blips on the radar lasting only for a moment but just long enough to obscure some larger, more important picture. It is fascinating to trace the origins of these glitches of nothingness: inconsequential tweets that turned into inconsequential TikToks that turned into inconsequential news articles that somehow, suddenly seemed more consequential than anything else that day.
  • Virality treats humans like fast fashion: algorithmically generated products to shove onto all of our screens at the same time, on which we then spend enormous sums of money and attention before ending up in the literal and/or figurative landfill. It isn’t just TikTok; as Shira Ovide points out in the New York Times, “Netflix, YouTube, Spotify, Facebook and many other popular sites operate on similar feedback loops that push more of whatever is being noticed,” which is how you get phenomena like sales of chess sets rising 125% after the release of “The Queen’s Gambit” before interest almost immediately plummeted back down to normal levels. We already live in a world where trends are determined by algorithms, and we will soon live in a world where even the content is created — literally — by them.

Phil 7.6.2022

Christian Nationalists and the Holy Gun Crusade

  • The use of Biblical passages to sell firearms in an explicitly Christian context is widespread in the United States. And this shouldn’t be surprising—as Brad Stoddard writes here on RD, “AR-15s are also increasingly the firearm of choice for Christian gun owners who arm themselves—in their minds, at least—in defense against both tyranny and evil.” And from there, that love of the AR-15 goes all kinds of places.


  • Select menu options, check in, etc
  • Water timer – done
  • Pack!


  • Stories (Set up meeting with Bob for ATO info, layout supervised SCII) – done
  • Meeting with Bob?


  • Start list of targets
  • Start letter
  • Update repo
  • Add Transcendence, The Human Network, and Ways of Being to the comparables section. Maybe write a new version

GPT Agents

  • Put README sections together for:
    • Common parts (config file, environment variables, link to XAMPP)
    • KeywordExplorer
    • TweetCountExplorer
    • WikiPageviewExplorer
    • GoogleExplorer

Phil 7.5.2022

Vaccine QR code – done

Adding Europe maps – done

This is part of the new “active measures” book

Disinformation and Echo Chambers: How Disinformation Circulates in Social Media Through Identity-Driven Controversies

  • This paper contributes to disinformation research by showing how identity-driven controversies are prime vehicles for circulating disinformation. We theorize disinformation as an engagement-driving process that encourages participation in culture wars through any argumentative means—including not only falsehoods but also truths, half-truths, and value-laden judgments—exploiting them rhetorically to contradict perceived opponents. Empirically, the study reports on the flat Earth echo chamber on YouTube, a controversial group arguing that the Earth is not round but flat. By analyzing their rhetorical strategies, this study shows how flat earthers animate and stoke identity-based grievances. As grudges intensify, back-and-forth argumentation becomes a form of ‘knowing’ in the world, which the echo chamber weaponizes rhetorically. The resulting argument becomes impervious to fact-checking because it is not about facts (logos) but grievances (pathos) and group identification (ethos). Hence, this investigation conceptualizes disinformation as rhetorical acts that persuade in and through the contradictions of identity work, thus animating and co-creating culture wars. The paper proposes a two-phase framework conceptualizing how disinformation disseminates in social media through echo chambers. In the “seeding” phase, malicious actors strategically insert deceptions, masquerading their legitimacy (e.g., fake news). In the “echoing” phase, participants co-create a confrontational fantasy that disseminates disinformation argumentatively.


  • Sprint review. So no ambitious Starcraft II strawman. We will try to get the RCSNN working better (faster?) than the Simple64 supervised version. This will help get funding, but no publications
  • Walked Steve through attention again. Maybe it stuck this time?

GPT Agents

  • Finished proportional, clamped downloads and verified that they go into the DB correctly
  • Added a view to the DB that connects all the tables

Phil 7.1.2022

Le Tour de France starts today!

I heard about Futureshape this morning. Might ping them about Belief Maps and TACJ

Beyond neural scaling laws: beating power law scaling via data pruning

  • Widely observed neural scaling laws, in which error falls off as a power of the training set size, model size, or both, have driven substantial performance improvements in deep learning. However, these improvements through scaling alone require considerable costs in compute and energy. Here we focus on the scaling of error with dataset size and show how both in theory and practice we can break beyond power law scaling and reduce it to exponential scaling instead if we have access to a high-quality data pruning metric that ranks the order in which training examples should be discarded to achieve any pruned dataset size. We then test this new exponential scaling prediction with pruned dataset size empirically, and indeed observe better than power law scaling performance on ResNets trained on CIFAR-10, SVHN, and ImageNet. Given the importance of finding high-quality pruning metrics, we perform the first large-scale benchmarking study of ten different data pruning metrics on ImageNet. We find most existing high performing metrics scale poorly to ImageNet, while the best are computationally intensive and require labels for every image. We therefore developed a new simple, cheap and scalable self-supervised pruning metric that demonstrates comparable performance to the best supervised metrics. Overall, our work suggests that the discovery of good data-pruning metrics may provide a viable path forward to substantially improved neural scaling laws, thereby reducing the resource costs of modern deep learning.
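The mechanism at the heart of the abstract — a pruning metric that ranks training examples so the least informative can be discarded first — is simple to sketch. Below, hypothetical per-example `margins` stand in for a real metric (the paper benchmarks ten, e.g. margin- and loss-based scores); this is an illustration of the ranking step, not the paper's actual pipeline:

```python
def prune_dataset(examples, scores, keep_frac):
    """Keep the hardest keep_frac of examples, where a LOWER score
    (e.g. classifier margin) marks a harder, more informative example."""
    ranked = sorted(zip(scores, examples))      # ascending: hardest examples first
    n_keep = max(1, int(len(examples) * keep_frac))
    return [ex for _, ex in ranked[:n_keep]]    # discard the easy tail

# Toy data: six examples with made-up margin scores.
examples = ["a", "b", "c", "d", "e", "f"]
margins  = [0.9, 0.2, 0.7, 0.1, 0.5, 0.8]

# Keep the hardest 50% — the examples the model finds least trivial.
print(prune_dataset(examples, margins, 0.5))   # ['d', 'b', 'e']
```

The paper's claim is that with a good enough score, error falls off exponentially in the pruned dataset size rather than as a power law; the interesting (and expensive) part is computing the score, not the pruning itself.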


  • International driver’s license – done
  • Delta and Iberia apps – done
  • Synchronize laptop – done
  • International calling plan – done
  • Financial notifications (Visa and BoA?) – done


  • Write cover letter
  • Write audience section