Phil 2.14.19

7:00 – 7:00 ASRC

  • Worked on the white paper. Going down the chain of consequences of adding AI to military systems, in light of the StarCraft II research.
  • Maps of Meaning: The Architecture of Belief
    • A 1999 book by Canadian clinical psychologist and psychology professor Jordan Peterson. The book describes a comprehensive theory for how people construct meaning, in a way that is compatible with the modern scientific understanding of how the brain functions.[1] It examines the “structure of systems of belief and the role those systems play in the regulation of emotion”,[2] using “multiple academic fields to show that connecting myths and beliefs with science is essential to fully understand how people make meaning”.[3] (Wikipedia)
  • Continuing with the Clockwork Muse review. Finished the overview and theoretical takes. Continuing on the notes, which is going slowly because of the bad text scanning.
  • JAX is Autograd and XLA, brought together for high-performance machine learning research. With its updated version of Autograd, JAX can automatically differentiate native Python and NumPy functions. It can differentiate through loops, branches, recursion, and closures, and it can take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation) via grad as well as forward-mode differentiation, and the two can be composed arbitrarily to any order. What’s new is that JAX uses XLA to compile and run your NumPy programs on GPUs and TPUs. Compilation happens under the hood by default, with library calls getting just-in-time compiled and executed. But JAX also lets you just-in-time compile your own Python functions into XLA-optimized kernels using a one-function API, jit. Compilation and automatic differentiation can be composed arbitrarily, so you can express sophisticated algorithms and get maximal performance without leaving Python. (A small grad/jit sketch appears after this list.)
  • Working on white paper lit review
    • An Evolutionary Algorithm that Constructs Recurrent Neural Networks
      • Standard methods for simultaneously inducing the structure and weights of recurrent neural networks limit every task to an assumed class of architectures. Such a simplification is necessary since the interactions between network structure and function are not well understood. Evolutionary computations, which include genetic algorithms and evolutionary programming, are population-based search methods that have shown promise in many similarly complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. GNARL’s empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods. (A toy structure-plus-weights sketch appears after this list.)
    • Added Evolutionary Deep Learning and Deep RTS to the references
  • Better Language Models and Their Implications
    • We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training.
  • Shimei seminar – 4:30 – 7:00
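
A minimal sketch of the grad/jit composition described in the JAX item above. The tanh-layer loss, the data, and all the variable names are my own toy example; only jax.grad, jax.jit, jax.jacfwd, and jax.numpy are the real API:

# Toy illustration of composing jax.grad and jax.jit. The loss function and
# data here are made up for the example.
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # A single tanh "layer" followed by mean squared error.
    pred = jnp.tanh(x @ w)
    return jnp.mean((pred - y) ** 2)

# Reverse-mode derivative of the loss with respect to w (backpropagation)...
grad_loss = jax.grad(loss)

# ...jit-compiled through XLA. grad and jit compose in either order.
fast_grad = jax.jit(grad_loss)

key = jax.random.PRNGKey(0)
kw, kx = jax.random.split(key)
w = jax.random.normal(kw, (3,))
x = jax.random.normal(kx, (10, 3))
y = jnp.zeros(10)

print(fast_grad(w, x, y))               # compiled reverse-mode gradient
print(jax.jacfwd(grad_loss)(w, x, y))   # forward-over-reverse: a 3x3 Hessian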
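
And for the GNARL item: a toy evolutionary-programming loop in plain NumPy that mutates both the weights and the hidden-unit count of a tiny network, just to make the "structure and weights evolved together" idea concrete. It is feed-forward rather than recurrent, and nothing here (mutation rates, population size, fitness) comes from the paper; it is all illustrative assumption:

# Toy (mu + lambda)-style evolutionary loop: NOT the GNARL algorithm, just an
# illustration of mutating structure and weights in the same search.
import numpy as np

rng = np.random.default_rng(0)

def random_net(n_hidden):
    """A 1 -> n_hidden -> 1 feed-forward net as a dict of weight arrays."""
    return {"w1": rng.normal(size=(1, n_hidden)),
            "w2": rng.normal(size=(n_hidden, 1))}

def forward(net, x):
    return np.tanh(x @ net["w1"]) @ net["w2"]

def fitness(net, x, y):
    return -np.mean((forward(net, x) - y) ** 2)     # higher is better

def mutate(net):
    """Perturb all weights; occasionally add or drop a hidden unit."""
    child = {k: v + 0.1 * rng.normal(size=v.shape) for k, v in net.items()}
    if rng.random() < 0.2:                          # structural mutation
        n_hidden = child["w1"].shape[1]
        if rng.random() < 0.5 or n_hidden == 1:     # grow by one unit
            child["w1"] = np.hstack([child["w1"], rng.normal(size=(1, 1))])
            child["w2"] = np.vstack([child["w2"], rng.normal(size=(1, 1))])
        else:                                       # shrink by one unit
            child["w1"] = child["w1"][:, :-1]
            child["w2"] = child["w2"][:-1, :]
    return child

# Fit y = sin(x): keep the best half each generation, refill with mutated children.
x = np.linspace(-3, 3, 50).reshape(-1, 1)
y = np.sin(x)
population = [random_net(int(rng.integers(1, 5))) for _ in range(20)]

for generation in range(200):
    population.sort(key=lambda net: fitness(net, x, y), reverse=True)
    parents = population[:10]
    population = parents + [mutate(p) for p in parents]

population.sort(key=lambda net: fitness(net, x, y), reverse=True)
best = population[0]
print("hidden units:", best["w1"].shape[1], "fitness:", fitness(best, x, y))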
