Capacity, Bandwidth, and Compositionality in Emergent Language Learning
- Many recent works have discussed the propensity, or lack thereof, for emergent languages to exhibit properties of natural languages. A favorite in the literature is learning compositionality. We note that most of those works have focused on communicative bandwidth as being of primary importance. While important, it is not the only contributing factor. In this paper, we investigate the learning biases that affect the efficacy and compositionality of emergent languages. Our foremost contribution is to explore how capacity of a neural network impacts its ability to learn a compositional language. We additionally introduce a set of evaluation metrics with which we analyze the learned languages. Our hypothesis is that there should be a specific range of model capacity and channel bandwidth that induces compositional structure in the resulting language and consequently encourages systematic generalization. While we empirically see evidence for the bottom of this range, we curiously do not find evidence for the top part of the range and believe that this is an open question for the community.
Radiolab: Tit for Tat
- In the early 60s, Robert Axelrod was a math major messing around with refrigerator-sized computers. Then a dramatic global crisis made him wonder about the space between a rock and a hard place, and whether being good may be a good strategy. With help from Andrew Zolli and Steve Strogatz, we tackle the prisoner’s dilemma, a classic thought experiment, and learn about a simple strategy to navigate the waters of cooperation and betrayal. Then Axelrod, along with Stanley Weintraub, takes us back to the trenches of World War I, to the winter of 1914, and an unlikely Christmas party along the Western Front.
- Need to send a note for them to look into Axelrod’s “bully” saddle point
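The tit-for-tat strategy from the episode can be sketched as a minimal iterated prisoner's dilemma, assuming the standard Axelrod tournament payoffs (T=5, R=3, P=1, S=0); the function names and round count here are my own illustration:

```python
# Minimal iterated prisoner's dilemma sketch (standard Axelrod payoffs:
# T=5, R=3, P=1, S=0). 'C' = cooperate, 'D' = defect.
PAYOFF = {  # (my move, their move) -> my score
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(my_history, their_history):
    # Cooperate on the first move, then mirror the opponent's last move.
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

Over ten rounds, tit for tat loses only the first round to an always-defector and then locks into mutual defection, while two tit-for-tat players cooperate throughout.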
7:00 – ASRC GOES
- Dissertation – Nearly done with the agent cartography section?
- CTO Rehearsal – 10:30 – 12:00 done
- ML Dinner – 4:30 fun!
- Meeting with Aaron M
- More thinking about what to do with the paper. We decided to try for the CHI4EVIL workshop, and then try something like IEEE Spectrum. I think I’d like to reframe it around the concept of Expensive Information and Automation. Try to tie together AI weapons, spam filters, and deepfakes
- Automation makes negotiation more difficult, locks in trajectories
- Handing off responsibility to automation amplifies opportunities and destructive potential
- OODA loop could be generalized if you look at it from the perspective of attention.
7:00 – 3:00 ASRC PM Summit
- 75th anniversary of D-day
- Research talk today at the conference. Much networking yesterday.
- The talk went well. More opportunities for networking. Maybe some ML for 3D printing?
- Copied the CHIPLAY paper to a new GROUP 2020 folder and changed it to the ACM small article format
- Simplicial models of social contagion
- Complex networks have been successfully used to describe the spread of diseases in populations of interacting individuals. Conversely, pairwise interactions are often not enough to characterize social contagion processes such as opinion formation or the adoption of novelties, where complex mechanisms of influence and reinforcement are at work. Here we introduce a higher-order model of social contagion in which a social system is represented by a simplicial complex and contagion can occur through interactions in groups of different sizes. Numerical simulations of the model on both empirical and synthetic simplicial complexes highlight the emergence of novel phenomena such as a discontinuous transition induced by higher-order interactions. We show analytically that the transition is discontinuous and that a bistable region appears where healthy and endemic states co-exist. Our results help explain why critical masses are required to initiate social changes and contribute to the understanding of higher-order interactions in complex systems.
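The higher-order SIS update the abstract describes can be sketched as follows; the parameter names (pairwise rate `beta1`, triangle rate `beta2`, recovery rate `mu`) and the synchronous update are my reading of the model, not code from the paper:

```python
import random

def step(infected, edges, triangles, beta1, beta2, mu, rng=random):
    """One synchronous update of a simplicial SIS contagion sketch.

    infected  : set of infected node ids
    edges     : list of (u, v) pairs (1-simplices)
    triangles : list of (u, v, w) triples (2-simplices)
    beta1     : infection probability per infected edge neighbor
    beta2     : extra infection probability when BOTH other members of a
                triangle are infected (the higher-order channel)
    mu        : recovery probability per infected node
    """
    newly_infected = set()
    # Pairwise (edge) transmission.
    for u, v in edges:
        if (u in infected) != (v in infected):
            target = v if u in infected else u
            if rng.random() < beta1:
                newly_infected.add(target)
    # Higher-order (triangle) transmission: requires two infected partners,
    # which is what produces the reinforcement / critical-mass effect.
    for u, v, w in triangles:
        for target, a, b in ((u, v, w), (v, u, w), (w, u, v)):
            if target not in infected and a in infected and b in infected:
                if rng.random() < beta2:
                    newly_infected.add(target)
    # Recovery.
    recovered = {n for n in infected if rng.random() < mu}
    return (infected - recovered) | newly_infected
```

Iterating `step` on a simplicial complex is the Monte Carlo simulation the paper runs at scale; the discontinuous transition shows up when `beta2` is large relative to `beta1`.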
- This is wild: Randomly wired neural networks and state-of-the-art accuracy? Yes it works.
- This is sad: Training a single AI model can emit as much carbon as five cars in their lifetimes
- Came home and slept 2 1/2 hours. Very cooked.
7:00 – 9:00 ASRC GEOS/AIMES
- Worked on the slides a bit
- Adding changes to the JASSS paper
- Waiting for meeting
- Meeting went well, I think? Funding appears to be solid, and I’m now a “Futurist”
- Meeting with Shimei’s group. Fatima might be interested in ML summer work
- Meeting with Aaron. Fleshed out the Sanhedrin-17a concept
7:00 – 8:00 ASRC NASA GEOS-R
- More Dissertation
- Break out the network slides into “island” (initial state), “star” (radio), “cyclic star” (talk radio), and “dense” (social media)
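The four topologies in that slide progression can be sketched as adjacency sets over `n` nodes; the constructions below are my illustration of the labels, not content from the slides:

```python
def island(n):
    """No edges: every node is its own island (initial state)."""
    return {i: set() for i in range(n)}

def star(n):
    """Node 0 is the hub broadcasting to all others (one-to-many radio)."""
    adj = {i: set() for i in range(n)}
    for i in range(1, n):
        adj[0].add(i)
        adj[i].add(0)
    return adj

def cyclic_star(n):
    """Star plus a ring among the leaves (talk radio: hub plus callers)."""
    adj = star(n)
    leaves = list(range(1, n))
    for i, u in enumerate(leaves):
        v = leaves[(i + 1) % len(leaves)]
        adj[u].add(v)
        adj[v].add(u)
    return adj

def dense(n):
    """Everyone connected to everyone (social media)."""
    return {i: {j for j in range(n) if j != i} for i in range(n)}
```

Each step in the sequence strictly adds edges to the previous one, which is what makes it work as a progression in the slides.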
- 7:30 Waikato meeting.
- Walked through today’s version, which is looking very nice
- Went over tasking spreadsheets
7:00 – 1:30 ASRC PhD 1:30 – 6:00 NASA AIMES
- Downloaded an amoeba eating a bacterium. Need to edit
- Fixed some bugs in the analytic code and downloaded some images and text. There is banding in the space name embeddings, which is pretty interesting
- Worked on the paper. I think I have an entry point now. It seems to be flowing nicely
- Dungeons & Dragons Single Volume Edition By Gary Gygax & Dave Arneson
- GOES-R AI Kickoff meeting
- More work on slides. Incorporated Wayne’s comments (I think?). Also got the video edited