Phil 5.4.20

It is a Chopin sort of morning


  • Zach got maps and lists working over the weekend. Still a lot more to do though
  • Need to revisit the math worked through over the past few days

GPT-2 Agents

  • Working on PGN to English.
    • Added a game class that holds all the information for a game and reads it in. Games are created and managed by the PGNtoEnglish class
  • Rebased the transformers project. It updates fast
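As a rough illustration of the game-class idea (the class name, fields, and parsing details here are hypothetical stand-ins, not the actual PGNtoEnglish code), a minimal PGN reader might look like:

```python
import re

class Game:
    """Holds the header tags and move list for one chess game
    (a hypothetical sketch of the game class described above)."""

    def __init__(self, pgn_text: str):
        # PGN header tags look like: [Event "F/S Return Match"]
        self.tags = dict(re.findall(r'\[(\w+)\s+"([^"]*)"\]', pgn_text))
        # Strip headers and comments, then pull out the SAN move tokens
        movetext = re.sub(r'\[.*?\]|\{.*?\}', '', pgn_text)
        tokens = movetext.replace("\n", " ").split()
        self.moves = [t for t in tokens
                      if not re.match(r'^\d+\.$|^(1-0|0-1|1/2-1/2|\*)$', t)]

pgn = '[Event "Test"]\n[White "A"]\n[Black "B"]\n\n1. e4 e5 2. Nf3 Nc6 1-0'
game = Game(pgn)
# game.tags["White"] is "A"; game.moves is ["e4", "e5", "Nf3", "Nc6"]
```

Turning each SAN move into an English sentence would then be a matter of walking `game.moves` with a board state.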


  • Figure out how to save and load models. I’m really not sure what to save, since you need access to the latent space and the discriminator? So far, it’s:
    def save_models(self, directory:str, prefix:str):
        p = os.path.join(directory, prefix)
        self.d_model.save("{}_d_model".format(p))
        self.g_model.save("{}_g_model".format(p))
        self.gan_model.save("{}_gan_model".format(p))

    def load_models(self, directory:str, prefix:str):
        p = os.path.join(directory, prefix)
        self.d_model = tf.keras.models.load_model("{}_d_model".format(p))
        self.g_model = tf.keras.models.load_model("{}_g_model".format(p))
        self.gan_model = tf.keras.models.load_model("{}_gan_model".format(p))
    • Here’s the initial run. Very nice for 10,000 epochs!


    • And here’s the results from the loaded model:


    • The discriminator works as well:
      real accuracy = 100.00%, fake accuracy = 100.00%
      real loss = 0.0154, fake loss = 0.0947
    • An odd thing is that I can save the GAN model but can’t load it:
      ValueError: An empty Model cannot be used as a Layer.

      I can rebuild it from the loaded generator and discriminator models though
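A minimal sketch of that rebuild, assuming the usual Keras stacked-GAN pattern (the function name, stacking, and compile settings are assumptions, not the project code):

```python
import tensorflow as tf

def rebuild_gan(g_model: tf.keras.Model, d_model: tf.keras.Model) -> tf.keras.Model:
    # Freeze the discriminator so GAN training only updates the generator
    d_model.trainable = False
    # Stack generator -> discriminator into the combined model
    gan = tf.keras.Sequential([g_model, d_model])
    gan.compile(loss="binary_crossentropy", optimizer="adam")
    return gan
```

After `load_models()` runs, something like `rebuild_gan(self.g_model, self.d_model)` would stand in for the saved GAN that refuses to load.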

  • Set up an MLP to convert low-fidelity sin waves to high-fidelity ones
    • Get the training and test data from InfluxDB
      • the input is a square wave, the output is a sin wave, and the GAN should learn noisy_sin minus sin. Randomly shift the sample window through the domain
    • Got the queries working:
    • Train and save a 2-layer, 400-neuron MLP. No ensembles for now
  • Set up GAN to add noise
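The square-in / sin-out / noise-residual setup above can be sketched with NumPy (a hypothetical sample generator, not the actual InfluxDB query results):

```python
import numpy as np

def make_sample(n_points: int = 100, rng=None):
    """Generate one low-fidelity/high-fidelity training pair.

    input    : square wave (low fidelity)
    target   : sin wave (high fidelity)
    residual : noisy_sin - sin, the noise the GAN should learn
    (names and shapes are assumptions, not the InfluxDB schema)
    """
    rng = rng if rng is not None else np.random.default_rng()
    # Randomly shift the sample window through the domain
    offset = rng.uniform(0, 2 * np.pi)
    t = np.linspace(0, 2 * np.pi, n_points) + offset
    sin_wave = np.sin(t)
    square_wave = np.sign(sin_wave)                     # low-fidelity input
    noisy_sin = sin_wave + rng.normal(0, 0.1, n_points)
    residual = noisy_sin - sin_wave                     # GAN noise target
    return square_wave, sin_wave, residual
```

The MLP would train on `(square_wave, sin_wave)` pairs, and the GAN on the `residual`.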


  • Ask what the ACM and CHI are doing, beyond providing publication venues, to fight the misinformation that lets millions of people find fabricated evidence supporting dangerous behavior.
  • Effects of Credibility Indicators on Social Media News Sharing Intent
    • In recent years, social media services have been leveraged to spread fake news stories. Helping people spot fake stories by marking them with credibility indicators could dissuade them from sharing such stories, thus reducing their amplification. We carried out an online study (N = 1,512) to explore the impact of four types of credibility indicators on people’s intent to share news headlines with their friends on social media. We confirmed that credibility indicators can indeed decrease the propensity to share fake news. However, the impact of the indicators varied, with fact checking services being the most effective. We further found notable differences in responses to the indicators based on demographic and personal characteristics and social media usage frequency. Our findings have important implications for curbing the spread of misinformation via social media platforms.