Monthly Archives: September 2022

Phil 9.8.2022

TOLKIEN’S ILLUSTRATORS: SERGEY YUKHIMOV

https://img0.liveinternet.ru/images/attach/c/0//51/804/51804945_37315702.jpg

SBIRs

  • Need to start the MORS slide deck
  • Need to start demo slides
  • 9:15 Standup
  • Sent a note to Erika Mackin – done
  • Tweaked MapDisplay1 so that it works again. Need to port it to the laptop
  • Looks like the presentation is next Friday at 1:00

GPT Agents

  • Got a good run of threads, but I need to verify a few things:
    • There is an imbalance. Is this because the run ended improperly? No, it all checks out. There are simply more threads with “ivermectin” based on the sample
    • The view is broken for threads. Need to fix it or make a new one. The experiment_id was being set to -1 because I wasn’t passing the value into run_thread_query(). Fixed (see the sketch after this list)
    • Back up the DB – done
    • Specify the current experiment somewhere, and indicate if there are threads (in label) – done
  • Continue with EmbeddingExplorer
Imbalanced threads because people like to talk about ivermectin
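
For reference, a minimal sketch of what that fix amounts to. Everything except the run_thread_query() name is a guess (SQLite here; the table and column names are assumptions):

```python
import sqlite3

# Hypothetical reconstruction of the fix; table and column names are
# guesses. The bug: experiment_id was never passed in, so the query ran
# with the default of -1 and the thread view came up empty.
def run_thread_query(conn: sqlite3.Connection, experiment_id: int = -1) -> list:
    sql = "SELECT * FROM table_thread WHERE experiment_id = ?"
    return conn.execute(sql, (experiment_id,)).fetchall()

# Before: run_thread_query(conn)  -> always queries for experiment_id = -1
# After:  run_thread_query(conn, experiment_id=current_experiment_id)
```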

Phil 9.7.2022

Yay! Got a prescription!

My guess is that Trump made a deal with the Saudis for nuclear information on Israel and Iran

Book

  • Need to respond to Brenda’s email; set up a meeting on Friday?

SBIRs

  • Really good conversation with Aaron about CWoC. The idea that lower parts of the hierarchy could simulate higher levels is very cool. It could even be a separate model for each layer, trained on what that part of the hierarchy can be aware of and on the commands it receives. That way it could “interpolate” across times when communication fails. A toy sketch of the mechanics is below, after this list.
  • Need to set up a separate root document for research that has tasks broken down by people with a small introduction and then room for their documentation. Include a ToC. – done
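
Just to pin down the mechanics, a toy sketch of the idea. Everything here is hypothetical; the surrogate would really be a trained per-layer model, not a lambda:

```python
from typing import Callable, Optional

# Toy sketch only: each node keeps a surrogate model trained on what that
# layer can observe and the commands it normally receives, and falls back
# to it when the link to the level above drops.
class HierarchyNode:
    def __init__(self, surrogate: Callable[[dict], str]):
        self.surrogate = surrogate  # per-layer model; a plain callable here

    def step(self, observation: dict, command: Optional[str] = None) -> str:
        if command is not None:
            return command  # comms are up: obey the real command
        return self.surrogate(observation)  # comms are down: interpolate

# The surrogate here just maps a local observation to a guessed command.
node = HierarchyNode(lambda obs: "hold" if obs.get("contact") else "advance")
print(node.step({"contact": False}, command="advance"))  # real command
print(node.step({"contact": True}))                      # simulated command
```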

GPT Agents

Phil 9.6.2022

Set up a monthly contribution to the UNHCR

Book

  • Adding a bit on beauty for diversity injection

GPT Agents

  • Start on the GPT and Embedding interfaces. Prompt the GPT with something like “Once upon a time there was” and set the number of times to run and the number of tokens. Split on sentences (r"\.|!|\?" – note that the “?” has to be escaped) and get the embeddings for each. Then cluster and extract topics (using EmbeddingExplorer pointing at a different db). Build maps! (See the first sketch after this list.)
  • Continue fleshing out the Twitter embedding app
  • Ok, what I really wound up doing was getting threading to work on TweetDownloader and fixing an interesting bug in the sampled-day method. When I wrote it, I assumed that the number of tweets per day is reasonably constant. Not true. So as a bit of a hack, I moved the endpoint of the query to include the entire day and use REPLACE INTO rather than INSERT (see the second sketch after this list). Much better results so far. Will work on the other stuff tomorrow.
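
A rough sketch of the split/embed/cluster pipeline, under stated assumptions: embed() is a stand-in for whatever embedding service EmbeddingExplorer ends up pointing at (pseudo-random vectors here so the sketch runs standalone), and KMeans is just one plausible clustering choice:

```python
import re

import numpy as np
from sklearn.cluster import KMeans

# Stand-in for the real embedding call (whatever EmbeddingExplorer points
# at). Pseudo-random vectors so the sketch runs without any service.
def embed(sentence: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(sentence)) % (2 ** 32))
    return rng.normal(size=64)

def generations_to_clusters(generations: list, num_clusters: int = 5):
    """Split GPT generations into sentences, embed, and cluster them."""
    sentences = []
    for text in generations:  # one string per GPT run
        # Note the escaped '?': r"\.|!|?" is not a valid pattern.
        parts = [p.strip() for p in re.split(r"\.|!|\?", text)]
        sentences.extend(p for p in parts if p)
    vectors = np.array([embed(s) for s in sentences])
    labels = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(vectors)
    return list(zip(sentences, labels))  # sentence -> cluster id, for maps
```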
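And a minimal illustration of the sampled-day hack (SQLite here; the table and column names are made up). The point is that widening the query window to the whole day means some tweets come back twice, and REPLACE INTO makes the re-insert harmless where a plain INSERT would fail on the duplicate key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tweets (id INTEGER PRIMARY KEY, text TEXT)")

def store(tweet_id: int, text: str) -> None:
    # REPLACE INTO overwrites the row with the same primary key, so pulling
    # the whole day and seeing a tweet again is harmless; a plain INSERT
    # would raise an IntegrityError on the duplicate id.
    conn.execute("REPLACE INTO tweets (id, text) VALUES (?, ?)",
                 (tweet_id, text))

store(1, "tweet from the first pull")
store(1, "same tweet, picked up again by the full-day query")  # no error
print(conn.execute("SELECT COUNT(*) FROM tweets").fetchone()[0])  # -> 1
```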

SBIRs

  • Need to read this carefully. I like the fact that it uses MinGPT: Transformers are Sample Efficient World Models
    • Deep reinforcement learning agents are notoriously sample inefficient, which considerably limits their application to real-world problems. Recently, many model-based methods have been designed to address this issue, with learning in the imagination of a world model being one of the most prominent approaches. However, while virtually unlimited interaction with a simulated environment sounds appealing, the world model has to be accurate over extended periods of time. Motivated by the success of Transformers in sequence modeling tasks, we introduce IRIS, a data-efficient agent that learns in a world model composed of a discrete autoencoder and an autoregressive Transformer. With the equivalent of only two hours of gameplay in the Atari 100k benchmark, IRIS achieves a mean human normalized score of 1.046, and outperforms humans on 10 out of 26 games. Our approach sets a new state of the art for methods without lookahead search, and even surpasses MuZero. To foster future research on Transformers and world models for sample-efficient reinforcement learning, we release our codebase at this https URL.
  • Delivered the quarterly report.
https://twitter.com/jerclifton/status/1565397169623797760

Book

  • Brenda has started a readthrough and will get back to me with comments
  • Need to add a reference to “beauty” in the diversity injection chapter and reference Transcendence

SBIRs

  • Finish first pass of quarterly report
  • MinGPT!!!!
https://twitter.com/hardmaru/status/1565569808548376576

GPT Agents

  • Added parens to the Twitter query. You can now do foo OR bar OR (Frodo AND Gandalf) OR (Sauron AND Saruman); see the sketch after this list
  • Thinking about combining the GPT and Embedding interfaces. Prompt the GPT with something like “Once upon a time there was” and set the number of times to run and the number of tokens. Split on sentences (r"\.|!|\?") and get the embeddings for each. Then cluster and extract topics (using EmbeddingExplorer pointing at a different db). Build maps! (There’s a sketch of this pipeline under the 9.6 entry above.)
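
A tiny, hypothetical sketch of what the paren support amounts to, assuming the notation above uses an explicit AND while the Twitter v2 search grammar treats a space as an implicit AND:

```python
# Everything here is hypothetical. The notation above uses an explicit AND;
# the Twitter v2 search grammar treats a space as an implicit AND, so the
# translation just preserves the parentheses and drops the keyword.
def to_v2_query(ui_query: str) -> str:
    return ui_query.replace(" AND ", " ")

print(to_v2_query("foo OR bar OR (Frodo AND Gandalf) OR (Sauron AND Saruman)"))
# -> foo OR bar OR (Frodo Gandalf) OR (Sauron Saruman)
```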