Phil 7.9.20

NVAE: A Deep Hierarchical Variational Autoencoder

  • Normalizing flows, autoregressive models, variational autoencoders (VAEs), and deep energy-based models are among competing likelihood-based frameworks for deep generative learning. Among them, VAEs have the advantage of fast and tractable sampling and easy-to-access encoding networks. However, they are currently outperformed by other models such as normalizing flows and autoregressive models. While the majority of the research in VAEs is focused on the statistical challenges, we explore the orthogonal direction of carefully designing neural architectures for hierarchical VAEs. We propose Nouveau VAE (NVAE), a deep hierarchical VAE built for image generation using depth-wise separable convolutions and batch normalization. NVAE is equipped with a residual parameterization of Normal distributions and its training is stabilized by spectral regularization. We show that NVAE achieves state-of-the-art results among non-autoregressive likelihood-based models on the MNIST, CIFAR-10, and CelebA HQ datasets and it provides a strong baseline on FFHQ. For example, on CIFAR-10, NVAE pushes the state-of-the-art from 2.98 to 2.91 bits per dimension, and it produces high-quality images on CelebA HQ as shown in Fig. 1. To the best of our knowledge, NVAE is the first successful VAE applied to natural images as large as 256×256 pixels.
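  • The “residual parameterization of Normal distributions” mentioned above has a simple closed form; a minimal numpy sketch of the idea as I read it (the posterior is defined by residuals Δμ, Δσ relative to the prior, so the KL term depends only on those residuals; the function name is mine):

    import numpy as np

    def residual_normal_kl(sigma_p, delta_mu, delta_sigma):
        # q = N(mu_p + delta_mu, (sigma_p * delta_sigma)^2) is defined
        # relative to the prior p = N(mu_p, sigma_p^2), so KL(q || p)
        # reduces to a function of the residuals and sigma_p alone
        return 0.5 * (delta_mu**2 / sigma_p**2 + delta_sigma**2
                      - 2.0 * np.log(delta_sigma) - 1.0)

    # Zero residuals (delta_mu=0, delta_sigma=1) recover the prior: KL = 0
    assert np.isclose(residual_normal_kl(1.0, 0.0, 1.0), 0.0)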

VAEsNotGANs

Like Two Pis in a Pod: Author Similarity Across Time in the Ancient Greek Corpus

  • One commonly recognized feature of the Ancient Greek corpus is that later texts frequently imitate and allude to model texts from earlier time periods, but analysis of this phenomenon is mostly done for specific author pairs based on close reading and highly visible instances of imitation. In this work, we use computational techniques to examine the similarity of a wide range of Ancient Greek authors, with a focus on similarity between authors writing many centuries apart. We represent texts and authors based on their usage of high-frequency words to capture author signatures rather than document topics and measure similarity using Jensen-Shannon Divergence. We then analyze author similarity across centuries, finding high similarity between specific authors and across the corpus that is not common to all languages.
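  • A quick numpy sketch of that similarity measure (profiles are distributions over a shared high-frequency vocabulary; JSD is symmetric and bounded by log 2, which makes scores comparable across author pairs; the names here are mine, not the paper’s):

    import numpy as np
    from collections import Counter

    def author_profile(tokens, vocab):
        # Relative frequencies over a fixed high-frequency vocabulary,
        # capturing an author signature rather than document topic
        counts = Counter(t for t in tokens if t in vocab)
        total = sum(counts.values()) or 1
        return np.array([counts[w] / total for w in vocab])

    def jensen_shannon(p, q, eps=1e-12):
        # JSD(p, q) in nats: symmetric and bounded above by log(2)
        p, q = p + eps, q + eps
        p, q = p / p.sum(), q / q.sum()
        m = 0.5 * (p + q)
        kl = lambda a, b: np.sum(a * np.log(a / b))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)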

GPT-2 Agents

  • Setting up some experiments, for real and synthetic games, black and white pieces. All values should have raw numbers and percentages (see the sketch after this list):
    • Moves from each square by piece+color / total number of moves from square
    • Moves to each square by piece+color / total number of moves to square
    • Squares by piece+color / total number of pieces
    • Sequences? I’d have to add back in castling and re-run. Maybe later
    • Squares used over time (first 10 moves, second 10, etc.)
    • Pieces used over time
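  • The sketch mentioned above, a minimal pandas version of the per-square tallies (one row per move; the column names ‘piece’, ‘color’, ‘from’, ‘to’ are stand-ins, not the actual schema):

    import pandas as pd

    def square_percentages(moves: pd.DataFrame, square_col: str) -> pd.DataFrame:
        # Raw counts and percentages of moves at each square,
        # broken out by piece+color
        counts = (moves.groupby([square_col, "piece", "color"])
                       .size().rename("count").reset_index())
        totals = moves.groupby(square_col).size().rename("square_total")
        counts = counts.join(totals, on=square_col)
        counts["percent"] = 100.0 * counts["count"] / counts["square_total"]
        return counts

    # e.g. square_percentages(df, "from") for moves from each square,
    # and square_percentages(df, "to") for moves to each square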
  • Create new directory called results that will contain the spreadsheets
  • Running the first queries. It’s going to take about an hour by my estimation, but nothing is exploding as far as the queries go
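  • For reference, roughly how the setup looks (the sqlite file, table, and column names below are stand-ins; the real store and queries may differ):

    import os
    import sqlite3
    import pandas as pd

    os.makedirs("../results", exist_ok=True)  # spreadsheets land here

    # Pull the moves once, then do the tallies in pandas
    con = sqlite3.connect("../data/moves.db")
    moves = pd.read_sql_query(
        "SELECT piece, color, from_square, to_square FROM moves", con)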
  • Add a spreadsheet for illegal moves. Done! Here are the results. The GPT agents make 3 illegal moves out of 1,565 (a sketch of the kind of geometry check involved follows the tables):
    illegal bishop move: {'from': 'e7', 'to': 'c6'}
    illegal knight move: {'from': 'c5', 'to': 'a8'}
    illegal queen move: {'from': 'f8', 'to': 'h4'}
    Dataframe: ../results/legal_1.xlsx/legal-table_moves
             illegal  legal
    pawns          0    446
    rooks          0    270
    bishops        1    193
    knights        1    266
    queen          1    175
    king           0    212
    totals         3   1562
    Dataframe: ../results/legal_1.xlsx/legal-table_actual
             illegal   legal
    pawns          0   49386
    rooks          0   31507
    bishops        0   28263
    knights        0   31493
    queen          0   22818
    king           0   23608
    totals         0  188324
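  • The legality test here comes down to a geometry check on the from→to displacement; a rough sketch of the idea (my reconstruction, not the actual test code: it ignores board occupancy, check, castling, and pawn direction):

    FILES = "abcdefgh"

    def pattern_legal(piece: str, frm: str, to: str) -> bool:
        # Does the displacement match the piece's movement pattern?
        df = abs(FILES.index(to[0]) - FILES.index(frm[0]))
        dr = abs(int(to[1]) - int(frm[1]))
        if piece == "knight":
            return (df, dr) in {(1, 2), (2, 1)}
        if piece == "bishop":
            return df == dr and df > 0
        if piece == "rook":
            return (df == 0) != (dr == 0)
        if piece == "queen":
            return pattern_legal("rook", frm, to) or pattern_legal("bishop", frm, to)
        if piece == "king":
            return max(df, dr) == 1
        if piece == "pawn":  # push (two-step start rank not enforced) or capture
            return (df == 0 and 1 <= dr <= 2) or (df == 1 and dr == 1)
        return False

    # All three flagged moves fail the pattern check, so no board
    # state is needed to catch them:
    assert not pattern_legal("bishop", "e7", "c6")
    assert not pattern_legal("knight", "c5", "a8")
    assert not pattern_legal("queen", "f8", "h4")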


move_percentage

GOES

  • Waiting on Vadim
  • 2:00 AIMS-Core v3.0 Overview
  • Ping MARCOM

Waikato

  • 6:00 Meeting