Category Archives: Writing

Phil 3.11.19

7:00 – 10:00 ASRC PhD. Fun, long day.

Phil 3.10.19

Learning to Speak and Act in a Fantasy Text Adventure Game

  • We introduce a large scale crowdsourced text adventure game as a research platform for studying grounded dialogue. In it, agents can perceive, emote, and act whilst conducting dialogue with other agents. Models and humans can both act as characters within the game. We describe the results of training state-of-the-art generative and retrieval models in this setting. We show that in addition to using past dialogue, these models are able to effectively use the state of the underlying world to condition their predictions. In particular, we show that grounding on the details of the local environment, including location descriptions, and the objects (and their affordances) and characters (and their previous actions) present within it allows better predictions of agent behavior and dialogue. We analyze the ingredients necessary for successful grounding in this setting, and how each of these factors relate to agents that can talk and act successfully.

New run in the dungeon. Exciting!

Finished my pass through Antonio’s paper

Zoe Keating (May 1) or Imogen Heap (May 3)?

Phil 3.9.19

Understanding China’s AI Strategy

  • In my interactions with Chinese government officials, they demonstrated remarkably keen understanding of the issues surrounding AI and international security. It is clear that China’s government views AI as a high strategic priority and is devoting the required resources to cultivate AI expertise and strategic thinking among its national security community. This includes knowledge of U.S. AI policy discussions. I believe it is vital that the U.S. policymaking community similarly prioritize cultivating expertise and understanding of AI developments in China.

Russian Trolls Shift Strategy to Disrupt U.S. Election in 2020

  • Russian internet trolls appear to be shifting strategy in their efforts to disrupt the 2020 U.S. elections, promoting politically divisive messages through phony social media accounts instead of creating propaganda themselves, cybersecurity experts say.

Backup phone

Work on SASO paper – started

Rachel’s dungeon run is tomorrow! Maybe cross 10,000 posts?

Look at using BERT and the full Word2Vec model for analyzing posts

The Promise of Hierarchical Reinforcement Learning

  • To really understand the need for a hierarchical structure in the learning algorithm and in order to make the bridge between RL and HRL, we need to remember what we are trying to solve: MDPs. HRL methods learn a policy made up of multiple layers, each of which is responsible for control at a different level of temporal abstraction. Indeed, the key innovation of the HRL is to extend the set of available actions so that the agent can now choose to perform not only elementary actions, but also macro-actions, i.e. sequences of lower-level actions. Hence, with actions that are extended over time, we must take into account the time elapsed between decision-making moments. Luckily, MDP planning and learning algorithms can easily be extended to accommodate HRL.
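To make the macro-action idea above concrete, here is a minimal sketch (mine, not from the article) of how a temporally extended action changes the usual one-step update. It assumes a classic Gym-style env.step() returning (state, reward, done, info) and a tabular Q indexed by state and option id:

```python
import numpy as np

def run_macro_action(env, primitive_actions, gamma=0.99):
    """Execute a macro-action (a fixed list of primitive actions). Returns the
    resulting state, the discounted reward accumulated over its duration, the
    number of elapsed steps, and the done flag."""
    total_reward, discount, steps, done, state = 0.0, 1.0, 0, False, None
    for a in primitive_actions:
        state, reward, done, _ = env.step(a)  # classic Gym step API assumed
        total_reward += discount * reward
        discount *= gamma                     # time keeps passing inside the macro-action
        steps += 1
        if done:
            break
    return state, total_reward, steps, done

def smdp_q_update(Q, s, option_id, s_next, reward, steps, done, alpha=0.1, gamma=0.99):
    """Tabular Q-update for an extended action: identical to the one-step MDP
    update except that the bootstrap term is discounted by gamma ** steps,
    the time that elapsed while the macro-action ran."""
    bootstrap = 0.0 if done else np.max(Q[s_next])
    target = reward + (gamma ** steps) * bootstrap
    Q[s, option_id] += alpha * (target - Q[s, option_id])
```

The only real change from the flat-MDP case is that gamma ** steps replaces gamma, which is the sense in which MDP planning and learning algorithms extend to the semi-MDP setting.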

Phil 3.6.19

5:00 – ASRC TL

  • Got a lot done on the BAA on the flight yesterday
  • Wrote up a description of LMN and CM for Eric V.
  • Reading more of the Handbook of Latent Semantic Analysis. It’s giving me some good ideas for calculating similarities of posts using Word2Vec and comparing the average vector for each post (a sketch is below this list)
  • Antonio got an extension to the 12th. Need to see what he’s up to. Wow, there’s a lot there now. Made some comments about what I’d like to see. I’ll pull down the document to read later
  • Continued to tweak the slides
  • TF Dev conference main sessions today. Breakouts tomorrow.
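A minimal sketch of the average-vector post similarity idea from the LSA note above. The model path and the whitespace tokenization are placeholders, not the actual pipeline:

```python
import numpy as np
from gensim.models import KeyedVectors

# Placeholder path; any word2vec-format model loads the same way.
w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

def post_vector(post_text):
    """Average the vectors of all in-vocabulary tokens in a post."""
    tokens = [t for t in post_text.lower().split() if t in w2v]
    if not tokens:
        return np.zeros(w2v.vector_size)
    return np.mean([w2v[t] for t in tokens], axis=0)

def post_similarity(post_a, post_b):
    """Cosine similarity between the average vectors of two posts."""
    va, vb = post_vector(post_a), post_vector(post_b)
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(np.dot(va, vb) / denom) if denom else 0.0
```

Posts reduced to vectors this way could also feed the alignment experiments planned for the dungeon sequences.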

Phil 3.4.19

7:00 – 5:00 ASRC

  • Build an interactive SequenceAnalyzer. The adjustments are
    • Number of buckets
    • Percentages for each analytic (percentages to keep/discard)
    • Selectable skip words that can be added to a list (in the db?)
  • Algorithm
    1. Find the most common words across all groups; these are the skip_words
    2. Find the most common words along the entire series of posts per player and eliminate them
    3. Find the most common/central words across all sequences and keep those as belief places
    4. For each sequence by group, find the most common/central words after the belief places. These are the belief spaces.
    5. Build an adjacency matrix of players, groups, places and spaces
    6. Build submatrices for centrality calculations? This could be done instead of simply finding the most common words
    7. Possible word2vec variations?
      1. It seems to me that I might be able to use direction cosines and dynamic time warping to calculate the similarity of posts and align them better than the overall scaling that I’m doing now. DM posts introducing a room should align perfectly, and then other scaling could happen between those areas of greatest alignment (a sketch is at the end of this entry)
  • Display
    • Menu:
      • Save spreadsheet (includes config, included words, posts(?), trajectories)
      • load data
      • select database
      • select group within db
      • load/save config file
      • clear all
    • Fields
      • percent for A1, A2, A3, A4
      • Centrality/Sum switch
      • BOW/TF-IDF switch
      • Word2vec switch?
    • Textarea (areas? tabbed?)
      • Table with rows as sequence step. Columns are grouped by places, spaces, groups, and players
    • Work on Antonio’s paper – got a first draft of the introduction and motivation
    • BAA
      • Upload latex and references to laptop
    • Haircut! Pack!
    • Model-Based Reinforcement Learning for Atari
      • Model-free reinforcement learning (RL) can be used to learn effective policies for complex tasks, such as Atari games, even from image observations. However, this typically requires very large amounts of interaction — substantially more, in fact, than a human would need to learn the same games. How can people learn so quickly? Part of the answer may be that people can learn how the game works and predict which actions will lead to desirable outcomes. In this paper, we explore how video prediction models can similarly enable agents to solve Atari games with orders of magnitude fewer interactions than model-free methods. We describe Simulated Policy Learning (SimPLe), a complete model-based deep RL algorithm based on video prediction models and present a comparison of several model architectures, including a novel architecture that yields the best results in our setting. Our experiments evaluate SimPLe on a range of Atari games and achieve competitive results with only 100K interactions between the agent and the environment (400K frames), which corresponds to about two hours of real-time play.
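Going back to item 7.1 in the Algorithm list above, here is a minimal sketch of the direction-cosine plus dynamic-time-warping alignment idea (my own illustration, not project code). It assumes each post has already been reduced to a vector, e.g. an averaged Word2Vec embedding:

```python
import numpy as np

def cosine_distance(u, v):
    """1 minus the direction cosine between two post vectors."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return 1.0 - (np.dot(u, v) / denom if denom else 0.0)

def dtw_align(seq_a, seq_b):
    """Dynamic time warping over two sequences of post vectors. Returns the
    cost matrix and the warping path; near-identical posts (e.g. a DM's room
    introduction) should pull the path through near-zero-cost cells."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = cosine_distance(seq_a[i - 1], seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:  # backtrack the lowest-cost path
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[1:, 1:], path[::-1]
```

The warping path gives the between-anchor scaling directly, rather than the single overall scaling used now.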

 

Phil 3.1.19

7:00 – ASRC

  • Got accepted to the TF dev conference. The flight out is expensive… Sent Eric V. a note asking for permission to go, but bought tix anyway given the short fuse
  • Downloaded the full slack data
  • Working on white paper. The single file was getting unwieldy, so I broke it up
  • Found Speeding up Parliamentary Decision Making for Cyber Counter-Attack, which argues for the possibility of pre-authorizing automated response
  • Up to six pages. In the middle of the cyberdefense section

Phil 2.28.19

7:00 – very, very, late ASRC

  • Tomorrow is March! I need to write a few paragraphs for Antonio this weekend
  • YouTube stops recommending alt-right channels
    • For the first two weeks of February, YouTube was recommending videos from at least one of these major alt-right channels on more than one in every thirteen randomly selected videos (7.8%). From February 15th, this number has dropped to less than one in two hundred and fifty (0.4%).
  • Working on text splitting Group1 in the PHPBB database
    • Updated the view so the same queries work
    • Discovered that you can do this in SQL: …, “message” AS type, …. That gives you a column named type filled with the literal “message”. Via StackOverflow (a query sketch is at the end of this entry)
    • Mostly working, I’m missing the last bucket for some reason. But it’s good overlap with the Slack data.
    • Was debugging on my office box, and was wondering where all the data after the troll was! Ooops, not loaded
    • Changed the time tests to be > ts1 and <= ts2
  • Working on the white paper. Deep into strategy, Cyberdefense, and the evolution towards automatic active response in cyber.
  • Looooooooooooooooooooooooooong meeting of Shimei’s group. Interesting but difficult paper: Learning Dynamic Embeddings from Temporal Interaction Networks
  • Emily’s run in the dungeon finishes tonight!
  • Looks like I’m going to the TF Dev conference after all….
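A small sketch of the two database tricks from today's notes: the literal "message" AS type column and the > ts1 / <= ts2 time tests. The table and column names are made up for illustration, and sqlite3 stands in for the actual phpBB MySQL database:

```python
import sqlite3

# Hypothetical schema for illustration only.
QUERY = """
SELECT post_author, post_text, post_time, 'message' AS type
FROM posts
WHERE topic = ?
  AND post_time > ?    -- exclusive lower bound (ts1)
  AND post_time <= ?   -- inclusive upper bound (ts2), so buckets don't overlap
ORDER BY post_time
"""

def posts_in_bucket(conn: sqlite3.Connection, topic, ts1, ts2):
    """Return every row in the half-open window (ts1, ts2], with each row
    tagged by a literal 'type' column."""
    return conn.execute(QUERY, (topic, ts1, ts2)).fetchall()
```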

Phil 2.27.19

7:00 – 5:30 ASRC

  • Getting closer to the goal by being less capable
    • Understanding how systems with many semi-autonomous parts reach a desired target is a key question in biology (e.g., Drosophila larvae seeking food), engineering (e.g., driverless navigation), medicine (e.g., reliable movement for brain-damaged individuals), and socioeconomics (e.g., bottom-up goal-driven human organizations). Centralized systems perform better with better components. Here, we show, by contrast, that a decentralized entity is more efficient at reaching a target when its components are less capable. Our findings reproduce experimental results for a living organism, predict that autonomous vehicles may perform better with simpler components, offer a fresh explanation for why biological evolution jumped from decentralized to centralized design, suggest how efficient movement might be achieved despite damaged centralized function, and provide a formula predicting the optimum capability of a system’s components so that it comes as close as possible to its target or goal.
  • Nice chat with Greg last night. He likes the “Bones in a Hut” and “Stampede Theory” phrases. It turns out the domains are available…
    • Thinking that the title of the book could be “Stampede Theory: Why Groupthink Happens, and Why Diversity is the First, Best Answer”. Maybe structure the iConference talk around that as well.
  • Guidance from Antonio: In the meantime, if you have an idea on how to structure the Introduction, please go on considering that we want to put the decision logic inside each Autonomous Car that will be able to select passengers and help them in a self-organized manner.
  • Try out the splitter on the Tymora1 text.
    • Incorporate the ignore.xml when reading the text
    • If things look promising, then add changes to the phpbb code and try on that text as well.
    • At this point I’m just looking at overlapping lists of words that become something like a sand chart. I wonder if I can use the eigenvector centrality values as a percentage connectivity/weight (a sketch is at the end of this entry)? [image: Weights]
    • Ok – I have to say that I’m pretty happy with this. These are centralities using the top 25% BOW from the Slack text of Tymora1. I think that the way to use this is to have each group be an “agent” that has a cluster of words for each step: [image: Top 10]
    • Based on this, I’d say add an “Evolving Networks of Words” section to the dissertation. Have to find that WordRank paper
  • Working on white paper. Lit review today, plus fix anything that I might have broken…
    • Added section on cybersecurity that got lost in the update fiasco
    • Aaron found a good paper on the lack of advantage that the US has in AI, particularly wrt China
  • Avoiding working on white paper by writing a generator for Aaron. Done!
  • Cortex is an open-source platform for building, deploying, and managing machine learning applications in production. It is designed for any developer who wants to build machine learning powered services without having to worry about infrastructure challenges like configuring data pipelines, continuous deployment, and dependency management. Cortex is actively maintained by Cortex Labs. We’re a venture-backed team of infrastructure engineers and we’re hiring.
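Here is a rough sketch of the eigenvector-centrality-as-weights idea from the Tymora1 notes above (illustration only, not the actual analyzer code). It builds a word co-occurrence graph for one group's posts and normalizes the centralities so they can be stacked like a sand chart:

```python
import itertools
import networkx as nx

def word_weights(posts):
    """posts: a list of token lists for one group at one sequence step.
    Returns eigenvector centralities normalized to sum to 1.0 (assumes the
    co-occurrence graph is reasonably connected; power iteration can fail
    to converge otherwise)."""
    g = nx.Graph()
    for tokens in posts:
        for w1, w2 in itertools.combinations(sorted(set(tokens)), 2):
            if g.has_edge(w1, w2):
                g[w1][w2]["weight"] += 1
            else:
                g.add_edge(w1, w2, weight=1)
    if g.number_of_nodes() == 0:
        return {}
    centrality = nx.eigenvector_centrality(g, weight="weight", max_iter=1000)
    total = sum(centrality.values())
    return {w: c / total for w, c in centrality.items()}
```

Computing these per sequence step, one dictionary per group, is what would stack into the sand-chart view.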

Phil 2.26.19

7:00 – 3:00 ASRC

    • Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design. Built by experienced developers, it takes care of much of the hassle of Web development, so you can focus on writing your app without needing to reinvent the wheel. It’s free and open source.
    • More white paper. Add Flynn’s thoughts about cyber security – see notes from yesterday
    • Reconnected with Antonio. He’d like me to write the introduction and motivation for his SASO paper
    • Add time bucketing to postanalyzer. I’m really starting to want to add a UI
      • Looks done. Try it out next time
        Running query for Poe in subject peanutgallery between 23:56 and 00:45
        Running query for Dungeon Master in subject peanutgallery between 23:56 and 00:45
        Running query for Lord Javelin in subject peanutgallery between 23:56 and 00:45
        Running query for memoriesmaze in subject peanutgallery between 23:56 and 00:45
        Running query for Linda in subject peanutgallery between 23:56 and 00:45
        Running query for phil in subject peanutgallery between 23:56 and 00:45
        Running query for Lorelai in subject peanutgallery between 23:56 and 00:45
        Running query for Bren'Dralagon in subject peanutgallery between 23:56 and 00:45
        Running query for Shelton Herrington in subject peanutgallery between 23:56 and 00:45
        Running query for Keiri'to in subject peanutgallery between 23:56 and 00:45
    • More white paper. Got through the introduction and background. Hopefully I didn’t lose anything when I had to resynchronize with the repository that I hadn’t updated from

 

Phil 2.25.19

7:00 – 2:30 ASRC TL

2:30 – 4:30 PhD

  • Fix directory code of LMN so that it remembers the input and output directories – done
  • Add time bucketing capabilities. Do this by taking the complete conversation and splitting the results into N sublists. Take the beginning and ending time from each list and then use those to set the timestamp start and stop for each player’s posts (a sketch is at the end of this entry).
  • Thinking about a time-series LMN tool that can chart the relative occurrence of the sorted terms over time. I think this could be done with tkinter. I would need to create an executable as described here, though the easiest answer seems to be pyinstaller.
  • Here are two papers that show the advantages of herding over nomadic behavior:
    • Phagotrophy by a flagellate selects for colonial prey: A possible origin of multicellularity
      • Predation was a powerful selective force promoting increased morphological complexity in a unicellular prey held in constant environmental conditions. The green alga, Chlorella vulgaris, is a well-studied eukaryote, which has retained its normal unicellular form in cultures in our laboratories for thousands of generations. For the experiments reported here, steady-state unicellular C. vulgaris continuous cultures were inoculated with the predator Ochromonas vallescia, a phagotrophic flagellated protist (‘flagellate’). Within less than 100 generations of the prey, a multicellular Chlorella growth form became dominant in the culture (subsequently repeated in other cultures). The prey Chlorella first formed globose clusters of tens to hundreds of cells. After about 10–20 generations in the presence of the phagotroph, eight-celled colonies predominated. These colonies retained the eight-celled form indefinitely in continuous culture and when plated onto agar. These self-replicating, stable colonies were virtually immune to predation by the flagellate, but small enough that each Chlorella cell was exposed directly to the nutrient medium.
    • De novo origins of multicellularity in response to predation
      • The transition from unicellular to multicellular life was one of a few major events in the history of life that created new opportunities for more complex biological systems to evolve. Predation is hypothesized as one selective pressure that may have driven the evolution of multicellularity. Here we show that de novo origins of simple multicellularity can evolve in response to predation. We subjected outcrossed populations of the unicellular green alga Chlamydomonas reinhardtii to selection by the filter-feeding predator Paramecium tetraurelia. Two of five experimental populations evolved multicellular structures not observed in unselected control populations within ~750 asexual generations. Considerable variation exists in the evolved multicellular life cycles, with both cell number and propagule size varying among isolates. Survival assays show that evolved multicellular traits provide effective protection against predation. These results support the hypothesis that selection imposed by predators may have played a role in some origins of multicellularity. [image: SpontaniousClustering]
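A minimal sketch of the time-bucketing step described in the second bullet above (my own illustration; the (timestamp, player, text) tuple shape is an assumption). It splits the time-ordered conversation into N sublists and takes each sublist's first and last timestamps as the window for the per-player queries:

```python
import math

def time_buckets(posts, num_buckets):
    """posts: list of (timestamp, player, text) tuples, sorted by timestamp.
    Splits the conversation into num_buckets roughly equal sublists and
    returns one (ts_start, ts_end) window per sublist."""
    if not posts:
        return []
    size = math.ceil(len(posts) / num_buckets)
    windows = []
    for i in range(0, len(posts), size):
        chunk = posts[i:i + size]
        windows.append((chunk[0][0], chunk[-1][0]))
    return windows
```

Each window then drives the per-player queries with the > ts1 and <= ts2 boundary tests noted in the 2.28.19 entry, so adjacent buckets don't double-count posts.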

Phil 2.22.19

7:00 – 4:00 ASRC

  • Running Ellen’s dungeon tonight
  • Wondering what to do next. Look at text analytics? List is in this post.
  • But before we do that, I need to extract the posts from the DB as text. And now I have something to do!
    • Sheesh – tried to update the database and had all kinds of weird problems. I wound up re-ingesting everything from the Slack files. This seems to work fine, so I exported that to replace the .sql file that may have been causing all the trouble.
  • Here’s a thing using the JAX library, which I’m becoming interested in: Meta-Learning in 50 Lines of JAX
    • The focus of Machine Learning (ML) is to imbue computers with the ability to learn from data, so that they may accomplish tasks that humans have difficulty expressing in pure code. However, what most ML researchers call “learning” right now is but a very small subset of the vast range of behavioral adaptability encountered in biological life! Deep Learning models are powerful, but require a large amount of data and many iterations of stochastic gradient descent (SGD). This learning procedure is time-consuming and once a deep model is trained, its behavior is fairly rigid; at deployment time, one cannot really change the behavior of the system (e.g. correcting mistakes) without an expensive retraining process. Can we build systems that can learn faster, and with less data?
  • Meta-Learning: Learning to Learn Fast
    • A good machine learning model often requires training with a large number of samples. Humans, in contrast, learn new concepts and skills much faster and more efficiently. Kids who have seen cats and birds only a few times can quickly tell them apart. People who know how to ride a bike are likely to discover the way to ride a motorcycle fast with little or even no demonstration. Is it possible to design a machine learning model with similar properties — learning new concepts and skills fast with a few training examples? That’s essentially what meta-learning aims to solve.
  • Meta learning is everywhere! Learning to Generalize from Sparse and Underspecified Rewards
    • In “Learning to Generalize from Sparse and Underspecified Rewards“, we address the issue of underspecified rewards by developing Meta Reward Learning (MeRL), which provides more refined feedback to the agent by optimizing an auxiliary reward function. MeRL is combined with a memory buffer of successful trajectories collected using a novel exploration strategy to learn from sparse rewards.
  • Lingvo: A TensorFlow Framework for Sequence Modeling
    • While Lingvo started out with a focus on NLP, it is inherently very flexible, and models for tasks such as image segmentation and point cloud classification have been successfully implemented using the framework. Distillation, GANs, and multi-task models are also supported. At the same time, the framework does not compromise on speed, and features an optimized input pipeline and fast distributed training. Finally, Lingvo was put together with an eye towards easy productionisation, and there is even a well-defined path towards porting models for mobile inference.
  • Working on white paper. Still reading Command Dysfunction and making notes. I think I’ll use the idea of C&C combat as the framing device of the paper. Started to write more bits
  • What, if anything, can the Pentagon learn from this war simulator?
    • It is August 2010, and Operation Glacier Mantis is struggling in the fictional Saffron Valley. Coalition forces moved into the valley nine years ago, but peace negotiations are breaking down after a series of airstrikes result in civilian casualties. Within a few months, the Coalition abandons Saffron Valley. Corruption sapped the reputation of the operation. Troops are called away to a different war. Operation Glacier Mantis ends in total defeat.
  • Created a post for Command Dysfunction here. Finished.

Phil 2.21.19

7:00 – 4:00 ASRC

  • Working on white paper. Still reading Command Dysfunction and making notes. I think I’ll use the idea of C&C combat as the framing device of the paper

4:30 – 7:00 Seminar

  • Finishing slides – done, though it took all day. Charged it to PhD
  • Order food! – Done
  • Presentation and food went well. Kauffman’s “At Home in the Universe” next

Phil 2.20.19

7:00 – ASRC TL

  • Fast editor for very large files: EmEditor
  • Topic Modeling Systems and Interfaces
    • The 4Humanities “WhatEvery1Says” project conducted a comparative analysis in 2016 of the following topic modeling systems/interfaces. As a result, it chose to implement Andrew Goldstone’s DFR-browser for its own work.
  • Deep Learning for Video Game Playing
    • In this article, we review recent Deep Learning advances in the context of how they have been applied to play different types of video games such as first-person shooters, arcade games, and real-time strategy games. We analyze the unique requirements that different game genres pose to a deep learning system and highlight important open challenges in the context of applying these machine learning methods to video games, such as general game playing, dealing with extremely large decision spaces and sparse rewards.
    • [image: TimeCircleMap]
  • Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
    • A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. Furthermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized version even for finite practically-sized networks. This agreement is robust across different architectures, optimization methods, and loss functions.
  • AI Safety Needs Social Scientists
    • We believe the AI safety community needs to invest research effort in the human side of AI alignment. Many of the uncertainties involved are empirical, and can only be answered by experiment. They relate to the psychology of human rationality, emotion, and biases. Critically, we believe investigations into how people interact with AI alignment algorithms should not be held back by the limitations of existing machine learning. Current AI safety research is often limited to simple tasks in video games, robotics, or gridworlds, but problems on the human side may only appear in more realistic scenarios such as natural language discussion of value-laden questions. This is particularly important since many aspects of AI alignment change as ML systems increase in capability.
  • Started on slides for Thursday
  • Working on white paper. Adding in the paper above on Deep Learning for Video Game Playing.

Phil 2.19.19

7:00 – 6:00 ASRC TL IRAD

  • Something to listen to tomorrow morning? Tracing the Spread of Fake News
    • Two years after a presidential election that shocked so many, we are still trying to understand the role that fake news sources played, and how a swarm of propaganda clouded social media. Now a comprehensive study has looked carefully at the impact of untrustworthy online sources in the election, with some surprising results, and some suggestions for how to avoid problems in the future. In the studio for this episode is David Lazer, Professor of Political Science and Computer and Information Science at Northeastern University. He is one of the authors of Fake news on Twitter during the 2016 U.S. presidential election, which was just published in Science Magazine. 
  • Finished my writeup on Clockwork Muse. Now I need to make slides by Thursday.
  • Visual analytics for collaborative human-machine confidence in human-centric active learning tasks
    • Active machine learning is a human-centric paradigm that leverages a small labelled dataset to build an initial weak classifier, that can then be improved over time through human-machine collaboration. As new unlabelled samples are observed, the machine can either provide a prediction, or query a human ‘oracle’ when the machine is not confident in its prediction. Of course, just as the machine may lack confidence, the same can also be true of a human ‘oracle’: humans are not all-knowing, untiring oracles. A human’s ability to provide an accurate and confident response will often vary between queries, according to the duration of the current interaction, their level of engagement with the system, and the difficulty of the labelling task. This poses an important question of how uncertainty can be expressed and accounted for in a human-machine collaboration. In short, how can we facilitate a mutually-transparent collaboration between two uncertain actors—a person and a machine—that leads to an improved outcome? In this work, we demonstrate the benefit of human-machine collaboration within the process of active learning, where limited data samples are available or where labelling costs are high. To achieve this, we developed a visual analytics tool for active learning that promotes transparency, inspection, understanding and trust, of the learning process through human-machine collaboration. Fundamental to the notion of confidence, both parties can report their level of confidence during active learning tasks using the tool, such that this can be used to inform learning. Human confidence of labels can be accounted for by the machine, the machine can query for samples based on confidence measures, and the machine can report confidence of current predictions to the human, to further the trust and transparency between the collaborative parties. In particular, we find that this can improve the robustness of the classifier when incorrect sample labels are provided, due to unconfidence or fatigue. Reported confidences can also better inform human-machine sample selection in collaborative sampling. Our experimentation compares the impact of different selection strategies for acquiring samples: machine-driven, human-driven, and collaborative selection. We demonstrate how a collaborative approach can improve trust in the model robustness, achieving high accuracy and low user correction, with only limited data sample selections.
  • Look into the principle of least effort and game theory. See if there is anything there
    • Human Behaviour and the Principle of Least Effort. An Introduction to Human Ecology
      • Subtitled “An introduction to human ecology,” this work attempts systematically to treat “least effort” (and its derivatives) as the principle underlying a multiplicity of individual and collective behaviors, variously but regularly distributed. The general orientation is quantitative, and the principle is widely interpreted and applied. After a brief elaboration of principles and a brief summary of pertinent studies (mostly in psychology), Part One (Language and the structure of the personality) develops 8 chapters on its theme, ranging from regularities within language per se to material on individual psychology. Part Two (Human relations: a case of intraspecies balance) contains chapters on “The economy of geography,” “Intranational and international cooperation and conflict,” “The distribution of economic power and social status,” and “Prestige values and cultural vogues”—all developed in terms of the central theme. 20 pages of references with some annotation, keyed to the index. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
    • Decision Making and the Avoidance of Cognitive Demand
      • Behavioral and economic theories have long maintained that actions are chosen so as to minimize demands for exertion or work, a principle sometimes referred to as the “law of less work.” The data supporting this idea pertain almost entirely to demands for physical effort. However, the same minimization principle has often been assumed also to apply to cognitive demand. We set out to evaluate the validity of this assumption. In six behavioral experiments, participants chose freely between courses of action associated with different levels of demand for controlled information processing. Together, the results of these experiments revealed a bias in favor of the less demanding course of action. The bias was obtained across a range of choice settings and demand manipulations, and was not wholly attributable to strategic avoidance of errors, minimization of time on task, or maximization of the rate of goal achievement. Remarkably, the effect also did not depend on awareness of the demand manipulation. Consistent with a motivational account, avoidance of demand displayed sensitivity to task incentives and co-varied with individual differences in the efficacy of executive control. The findings reported, together with convergent neuroscientific evidence, lend support to the idea that anticipated cognitive demand plays a significant role in behavioral decision-making.
    • Intuition, deliberation, and the evolution of cooperation
      • Humans often cooperate with strangers, despite the costs involved. A long tradition of theoretical modeling has sought ultimate evolutionary explanations for this seemingly altruistic behavior. More recently, an entirely separate body of experimental work has begun to investigate cooperation’s proximate cognitive underpinnings using a dual process framework: Is deliberative self-control necessary to reign in selfish impulses, or does self-interested deliberation restrain an intuitive desire to cooperate? Integrating these ultimate and proximate approaches, we introduce dual-process cognition into a formal game theoretic model of the evolution of cooperation. Agents play prisoner’s dilemma games, some of which are one-shot and others of which involve reciprocity. They can either respond by using a generalized intuition, which is not sensitive to whether the game is oneshot or reciprocal, or pay a (stochastically varying) cost to deliberate and tailor their strategy to the type of game they are facing. We find that, depending on the level of reciprocity and assortment, selection favors one of two strategies: intuitive defectors who never deliberate, or dual-process agents who intuitively cooperate but sometimes use deliberation to defect in one-shot games. Critically, selection never favors agents who use deliberation to override selfish impulses: Deliberation only serves to undermine cooperation with strangers. Thus, by introducing a formal theoretical framework for exploring cooperation through a dual-process lens, we provide a clear answer regarding the role of deliberation in cooperation based on evolutionary modeling, help to organize a growing body of sometimes conflicting empirical results, and shed light on the nature of human cognition and social decision making.
    • Complexity Aversion: Influences of Cognitive Abilities, Culture and System of Thought
      • Complexity aversion describes the preference of decision makers for less complex options that cannot be explained by expected utility theory. While a number of research articles investigate the effects of complexity on choices, up to this point there exist only theoretical approaches aiming to explain the reasons behind complexity aversion. This paper presents two experimental studies that aim to fill this gap. The first study considers subjects’ cognitive abilities as a potential driver of complexity aversion. Cognitive skills are measured in a cognitive reflection test and, in addition, are approximated by subjects’ consistency of choices. In opposition to our hypothesis, subjects with higher cognitive skills display stronger complexity aversion compared to their peers. The second study deals with cultural background. The experiment was therefore conducted in Germany and in Japan. German subjects prefer less complex lotteries while Japanese are indifferent regarding choice complexity.
    • Space Time Dynamics of Insurgent Activity in Iraq
      • This paper describes analyses to determine whether there is a space-time dependency for insurgent activity. The data used for the research were 3 months of terrorist incidents attributed to the insurgency in Iraq during U.S. occupation and the methods used are based on a body of work well established using police recorded crime data. It was found that events clustered in space and time more than would be expected if the events were unrelated, suggesting communication of risk in space and time and potentially informing next event prediction. The analysis represents a first but important step and suggestions for further analysis addressing prevention or suppression of future incidents are briefly discussed.
  • Large teams develop and small teams disrupt science and technology
    • One of the most universal trends in science and technology today is the growth of large teams in all areas, as solitary researchers and small teams diminish in prevalence1,2,3. Increases in team size have been attributed to the specialization of scientific activities3, improvements in communication technology4,5, or the complexity of modern problems that require interdisciplinary solutions6,7,8. This shift in team size raises the question of whether and how the character of the science and technology produced by large teams differs from that of small teams. Here we analyse more than 65 million papers, patents and software products that span the period 1954–2014, and demonstrate that across this period smaller teams have tended to disrupt science and technology with new ideas and opportunities, whereas larger teams have tended to develop existing ones. Work from larger teams builds on more-recent and popular developments, and attention to their work comes immediately. By contrast, contributions by smaller teams search more deeply into the past, are viewed as disruptive to science and technology and succeed further into the future—if at all. Observed differences between small and large teams are magnified for higher-impact work, with small teams known for disruptive work and large teams for developing work. Differences in topic and research design account for a small part of the relationship between team size and disruption; most of the effect occurs at the level of the individual, as people move between smaller and larger teams. These results demonstrate that both small and large teams are essential to a flourishing ecology of science and technology, and suggest that, to achieve this, science policies should aim to support a diversity of team sizes.
  • Meeting with Panos about JuryRoom. Interesting! Tony Smith looks like someone to ping. Need to ask Panos