I really need to get back to the book:
Also, this weekend: make the D2Z trend based on the last 14 days. Done!
The initial version of DaysToZero is up! Working on adding states now
Got USA data working. New York looks very bad:
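The actual DaysToZero code isn't shown here, but the 14-day trend idea can be sketched as a least-squares line fit over the last 14 daily counts, extrapolated to its zero crossing. Everything below (function name, approach) is an assumption, not the real implementation:

```python
# Hypothetical sketch of a 14-day "days to zero" trend: fit a least-squares
# line to the last 14 daily counts and extrapolate to where it crosses zero.
# This is NOT the actual DaysToZero code; names and method are assumptions.
from statistics import mean

def days_to_zero(daily_counts, window=14):
    """Estimate days until the trend line reaches zero, or None if flat/rising."""
    ys = daily_counts[-window:]
    xs = list(range(len(ys)))
    x_bar, y_bar = mean(xs), mean(ys)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
            sum((x - x_bar) ** 2 for x in xs)
    if slope >= 0:
        return None  # counts not trending down; no zero crossing ahead
    intercept = y_bar - slope * x_bar
    # Solve slope * x + intercept = 0, measured from the last day in the window
    return (-intercept / slope) - (len(ys) - 1)

print(days_to_zero([140, 130, 120, 110, 100, 90, 80, 70, 60, 50, 40, 30, 20, 10]))
# 1.0 (a steady -10/day trend from 10 hits zero one day out)
```

A negative-slope check matters in practice: a place like New York with rising counts has no meaningful days-to-zero estimate, so the sketch returns None rather than a negative number.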
Evaluating the fake news problem at the scale of the information ecosystem
- “Fake news,” broadly defined as false or misleading information masquerading as legitimate news, is frequently asserted to be pervasive online with serious consequences for democracy. Using a unique multimode dataset that comprises a nationally representative sample of mobile, desktop, and television consumption, we refute this conventional wisdom on three levels. First, news consumption of any sort is heavily outweighed by other forms of media consumption, comprising at most 14.2% of Americans’ daily media diets. Second, to the extent that Americans do consume news, it is overwhelmingly from television, which accounts for roughly five times as much news consumption as online. Third, fake news comprises only 0.15% of Americans’ daily media diet. Our results suggest that the origins of public misinformedness and polarization are more likely to lie in the content of ordinary news or the avoidance of news altogether than in overt fakery.
7:00 – 9:00 ASRC GOES
The brains of birds synchronize when they sing duets
- When a male or female white-browed sparrow-weaver begins its song, its partner joins in at a certain time. They duet with each other by singing in turn and precisely in tune. A team led by researchers from the Max Planck Institute for Ornithology in Seewiesen used mobile transmitters to simultaneously record neural and acoustic signals from pairs of birds singing duets in their natural habitat. They found that the nerve cell activity in the brain of the singing bird changes and synchronizes with its partner when the partner begins to sing. The brains of both animals then essentially function as one, which leads to the perfect duet. (original article: Duets recorded in the wild reveal that interindividually coordinated motor control enables cooperative behavior)
Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering
- Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.
- Need to think about how to discuss maps like the T-O and belief space maps (flocking and stampeding projections?) are attention maps as well. Emphasizing well-triangulated but less-attended areas is a potential good. Compare to how maps opened up areas for exploration and exploitation, but this is constructive and not extractive
- Admin – done
- Walkthrough of Aaron’s slides
- Showed him how to outline boxes and reduce the filesize
- Shimei’s group
- Walkthrough of the slides
- Strengthen the connection between the sim and the human study
7:00 – 3:00 ASRC
3rd Annual DoD AI Industry Day
From Stuart Russell, via BBC Business Daily and the AI Alignment podcast:
Although people have argued that this creates a filter bubble, or a little echo chamber where you only see stuff that you like and nothing outside your comfort zone, that’s true, and it might tend to cause your interests to become narrower, but actually that isn’t really what happened and that’s not what the algorithms are doing. The algorithms are not trying to show you the stuff you like. They’re trying to turn you into predictable clickers. They seem to have figured out that they can do that by gradually modifying your preferences, and they can do that by feeding you material. Basically, if you think of a spectrum of preferences, the material is to one side or the other, because they want to drive you to an extreme. At the extremes of the political spectrum, or the ecological spectrum, or whatever axis you want to look at, you’re apparently a more predictable clicker, and so they can monetize you more effectively.
So this is just a consequence of reinforcement learning algorithms that optimize click-through. And in retrospect, we now understand that optimizing click-through was a mistake. That was the wrong objective. But you know, it’s kind of too late, and in fact it’s still going on, and we can’t undo it. We can’t switch off these systems because they’re so tied into our everyday lives and there’s so much economic incentive to keep them going.
So I want people in general to understand the effect of operating these narrow optimizing systems that pursue fixed and incorrect objectives. The effect of those on our world is already pretty big. Some people argue that corporations pursuing the maximization of profit have the same property. They’re kind of like AI systems. They’re kind of superintelligent, because they think over long time scales, they have massive information, resources, and so on. They happen to have human components, but when you put a couple of hundred thousand humans together into one of these corporations, they kind of have this superintelligent understanding, manipulation capabilities, and so on.
- Predicting human decisions with behavioral theories and machine learning
- Behavioral decision theories aim to explain human behavior. Can they help predict it? An open tournament for prediction of human choices in fundamental economic decision tasks is presented. The results suggest that integration of certain behavioral theories as features in machine learning systems provides the best predictions. Surprisingly, the most useful theories for prediction build on basic properties of human and animal learning and are very different from mainstream decision theories that focus on deviations from rational choice. Moreover, we find that theoretical features should be based not only on qualitative behavioral insights (e.g. loss aversion), but also on quantitative behavioral foresights generated by functional descriptive models (e.g. Prospect Theory). Our analysis prescribes a recipe for derivation of explainable, useful predictions of human decisions.
- Adversarial Policies: Attacking Deep Reinforcement Learning
- Deep reinforcement learning (RL) policies are known to be vulnerable to adversarial perturbations to their observations, similar to adversarial examples for classifiers. However, an attacker is not usually able to directly modify another agent’s observations. This might lead one to wonder: is it possible to attack an RL agent simply by choosing an adversarial policy acting in a multi-agent environment so as to create natural observations that are adversarial? We demonstrate the existence of adversarial policies in zero-sum games between simulated humanoid robots with proprioceptive observations, against state-of-the-art victims trained via self-play to be robust to opponents. The adversarial policies reliably win against the victims but generate seemingly random and uncoordinated behavior. We find that these policies are more successful in high-dimensional environments, and induce substantially different activations in the victim policy network than when the victim plays against a normal opponent. Videos are available at this http URL.
Capacity, Bandwidth, and Compositionality in Emergent Language Learning
- Many recent works have discussed the propensity, or lack thereof, for emergent languages to exhibit properties of natural languages. A favorite in the literature is learning compositionality. We note that most of those works have focused on communicative bandwidth as being of primary importance. While important, it is not the only contributing factor. In this paper, we investigate the learning biases that affect the efficacy and compositionality of emergent languages. Our foremost contribution is to explore how capacity of a neural network impacts its ability to learn a compositional language. We additionally introduce a set of evaluation metrics with which we analyze the learned languages. Our hypothesis is that there should be a specific range of model capacity and channel bandwidth that induces compositional structure in the resulting language and consequently encourages systematic generalization. While we empirically see evidence for the bottom of this range, we curiously do not find evidence for the top part of the range and believe that this is an open question for the community.
Radiolab: Tit for Tat
- In the early 60s, Robert Axelrod was a math major messing around with refrigerator-sized computers. Then a dramatic global crisis made him wonder about the space between a rock and a hard place, and whether being good may be a good strategy. With help from Andrew Zolli and Steve Strogatz, we tackle the prisoner’s dilemma, a classic thought experiment, and learn about a simple strategy to navigate the waters of cooperation and betrayal. Then Axelrod, along with Stanley Weintraub, takes us back to the trenches of World War I, to the winter of 1914, and an unlikely Christmas party along the Western Front.
- Need to send a note for them to look into Axelrod’s “bully” saddle point
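The strategy from the episode is small enough to sketch. Below is a minimal iterated prisoner's dilemma using the standard Axelrod-tournament payoffs (temptation 5, reward 3, punishment 1, sucker 0); the strategy and helper names are mine, not from any particular library:

```python
# Minimal iterated prisoner's dilemma with the standard Axelrod payoffs:
# mutual cooperation 3/3, mutual defection 1/1, defector gets 5 vs sucker's 0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []  # each side records the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): TFT loses only the first round
```

The second result is the interesting one for the "bully" question: tit-for-tat never beats its opponent head-to-head, it only refuses to keep being exploited, which is why the population-level dynamics (and saddle points) matter more than any single matchup.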
7:00 – ASRC GOES
- Dissertation – Nearly done with the agent cartography section?
- CTO Rehearsal – 10:30 – 12:00 done
- ML Dinner – 4:30 fun!
- Meeting With Aaron M
- More thinking about what to do with the paper. We decided to try for the CHI4EVIL workshop, and then try something like IEEE Spectrum. I think I’d like to reframe it around the concept of Expensive Information and Automation. Try to tie together AI weapons, spam filters, and deepfakes
- Automation makes negotiation more difficult, locks in trajectories
- Handing off responsibility to automation amplifies opportunities and destructive potential
- OODA loop could be generalized if you look at it from the perspective of attention.
The dynamics of norm change in the cultural evolution of language
- What happens when a new social convention replaces an old one? While the possible forces favoring norm change—such as institutions or committed activists—have been identified for a long time, little is known about how a population adopts a new convention, due to the difficulties of finding representative data. Here, we address this issue by looking at changes that occurred to 2,541 orthographic and lexical norms in English and Spanish through the analysis of a large corpus of books published between the years 1800 and 2008. We detect three markedly distinct patterns in the data, depending on whether the behavioral change results from the action of a formal institution, an informal authority, or a spontaneous process of unregulated evolution. We propose a simple evolutionary model able to capture all of the observed behaviors, and we show that it reproduces quantitatively the empirical data. This work identifies general mechanisms of norm change, and we anticipate that it will be of interest to researchers investigating the cultural evolution of language and, more broadly, human collective behavior.
When Hillclimbers Beat Genetic Algorithms in Multimodal Optimization
- It has been shown in the past that a multistart hillclimbing strategy compares favourably to a standard genetic algorithm with respect to solving instances of the multimodal problem generator. We extend that work and verify if the utilization of diversity preservation techniques in the genetic algorithm changes the outcome of the comparison. We do so under two scenarios: (1) when the goal is to find the global optimum, (2) when the goal is to find all optima.
A mathematical analysis is performed for the multistart hillclimbing algorithm and a thorough empirical study is conducted for solving instances of the multimodal problem generator with an increasing number of optima, both with the hillclimbing strategy as well as with genetic algorithms with niching. Although niching improves the performance of the genetic algorithm, it is still inferior to the multistart hillclimbing strategy on this class of problems.
An idealized niching strategy is also presented and it is argued that its performance should be close to a lower bound of what any evolutionary algorithm can do on this class of problems.
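The baseline the paper compares against is easy to sketch. Here is a toy multistart hillclimber on a bitstring landscape in the spirit of the multimodal problem generator, where fitness is closeness (in Hamming distance) to the nearest of several randomly placed peaks; the specific landscape and function names are my assumptions, not the paper's code:

```python
import random

# Toy landscape in the spirit of the multimodal problem generator: fitness of
# a bitstring is n_bits minus the Hamming distance to the nearest random peak,
# so every peak is an optimum of equal height. Details are assumptions.
def make_landscape(n_bits, n_peaks, rng):
    peaks = [tuple(rng.randint(0, 1) for _ in range(n_bits)) for _ in range(n_peaks)]
    def fitness(x):
        return max(n_bits - sum(a != b for a, b in zip(x, p)) for p in peaks)
    return peaks, fitness

def hillclimb(fitness, n_bits, rng):
    """Steepest-ascent bit-flip hillclimbing from a random start point."""
    x = tuple(rng.randint(0, 1) for _ in range(n_bits))
    while True:
        neighbors = [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(n_bits)]
        best = max(neighbors, key=fitness)
        if fitness(best) <= fitness(x):
            return x  # no single bit flip improves: we are on a peak
        x = best

def multistart(fitness, n_bits, n_starts, rng):
    """Run independent climbs and keep the fittest result."""
    return max((hillclimb(fitness, n_bits, rng) for _ in range(n_starts)), key=fitness)

rng = random.Random(0)
peaks, fitness = make_landscape(n_bits=20, n_peaks=4, rng=rng)
best = multistart(fitness, n_bits=20, n_starts=10, rng=rng)
print(fitness(best))  # 20: each climb walks straight up the cone of its nearest peak
```

On this landscape every climb reaches some peak (flipping a bit toward the nearest peak always improves fitness), so the restarts are really sampling which basin you start in, which is exactly why the multistart strategy is hard to beat here: finding all optima reduces to covering all basins with starts.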
Phil 7:00 – 5:00 ASRC NASA GEOS
- Factors Motivating Customization and Echo Chamber Creation Within Digital News Environments
- With the influx of content being shared through social media, mobile apps, and other digital sources – including fake news and misinformation – most news consumers experience some degree of information overload. To combat these feelings of unease associated with the sheer volume of news content, some consumers tailor their news ecosystems and purposefully include or exclude content from specific sources or individuals. This study explores customization on social media and news platforms through a survey (N = 317) of adults regarding their digital news habits. Findings suggest that consumers who diversify their online news streams report lower levels of anxiety related to current events and highlight differences in reported anxiety levels and customization practices across the political spectrum. This study provides important insights into how perceived information overload, anxiety around current events, political affiliations and partisanship, and demographic characteristics may contribute to tailoring practices related to news consumption in social media environments. We discuss these findings in terms of their implications for industry, policy, and theory.
- More JASSS paper
- Installing new IntelliJ and re-indexing
- Discovered a few bugs with the JsonUtils.find. Fixed and submitted a version to StackOverflow. Eeeep!
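The actual JsonUtils.find is Java and isn't shown here, but the bug class that bites this kind of routine is worth recording: forgetting to descend into arrays, so matches nested inside lists are silently missed. A hypothetical Python sketch of the idea (all names are mine):

```python
# Hypothetical sketch of a recursive find over parsed JSON (the real
# JsonUtils.find is not shown). The easy-to-miss bug in routines like this is
# omitting the list branch, which silently skips matches nested inside arrays.
def find(node, key, path=""):
    """Yield (path, value) for every occurrence of `key` anywhere in `node`."""
    if isinstance(node, dict):
        for k, v in node.items():
            child = f"{path}/{k}"
            if k == key:
                yield child, v
            yield from find(v, key, child)
    elif isinstance(node, list):  # without this branch, array items are skipped
        for i, v in enumerate(node):
            yield from find(v, key, f"{path}[{i}]")

doc = {"a": 1, "items": [{"a": 2}, {"b": {"a": 3}}]}
print(list(find(doc, "a")))
# [('/a', 1), ('/items[0]/a', 2), ('/items[1]/b/a', 3)]
```

Returning the path alongside the value also makes a test for the list-recursion case trivial to write, since a missed array match shows up as a missing `/items[...]` path.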