Monthly Archives: February 2019

Phil 2.28.19

7:00 – very, very, late ASRC

  • Tomorrow is March! I need to write a few paragraphs for Antonio this weekend
  • YouTube stops recommending alt-right channels
    • For the first two weeks of February, YouTube was recommending videos from at least one of these major alt-right channels on more than one in every thirteen randomly selected videos (7.8%). From February 15th, this number has dropped to less than one in two hundred and fifty (0.4%).
  • Working on text splitting Group1 in the PHPBB database
    • Updated the view so the same queries work
    • Discovered that you can do this: …, “message” AS type, …. That gives you a column named type filled with “message”. Via Stack Overflow.
    • Mostly working, though I’m missing the last bucket for some reason. The overlap with the Slack data is good.
    • Was debugging on my office box and wondering where all the data after the troll was. Oops – not loaded.
    • Changed the time tests to be > ts1 and <= ts2
  • Working on the white paper. Deep into strategy, cyberdefense, and the evolution towards automatic active response in cyber.
  • Looooooooooooooooooooooooooong meeting of Shimei’s group. Interesting but difficult paper: Learning Dynamic Embeddings from Temporal Interaction Networks
  • Emily’s run in the dungeon finishes tonight!
  • Looks like I’m going to the TF Dev conference after all….
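The two database tricks from today (the literal “message” AS type column, and the half-open > ts1 / <= ts2 time test) can be sketched together. This is a minimal sqlite3 stand-in with an invented schema; the real work was against the phpBB tables:

```python
import sqlite3

# In-memory stand-in for the phpBB post table (hypothetical schema).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE posts (author TEXT, ts INTEGER, body TEXT);
    INSERT INTO posts VALUES ('phil', 10, 'hello'), ('phil', 20, 'world');
""")

# The literal-column trick: selecting a quoted string as a column
# fills every row of that column with the constant.
rows = con.execute("SELECT author, 'message' AS type, body FROM posts").fetchall()
print(rows)  # every row carries type == 'message'

# Half-open time test (> ts1 AND <= ts2), so adjacent buckets never
# double-count a post that lands exactly on a boundary.
bucket = con.execute(
    "SELECT body FROM posts WHERE ts > ? AND ts <= ?", (0, 10)).fetchall()
print(bucket)
```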

Phil 2.27.19

7:00 – 5:30 ASRC

  • Getting closer to the goal by being less capable
    • Understanding how systems with many semi-autonomous parts reach a desired target is a key question in biology (e.g., Drosophila larvae seeking food), engineering (e.g., driverless navigation), medicine (e.g., reliable movement for brain-damaged individuals), and socioeconomics (e.g., bottom-up goal-driven human organizations). Centralized systems perform better with better components. Here, we show, by contrast, that a decentralized entity is more efficient at reaching a target when its components are less capable. Our findings reproduce experimental results for a living organism, predict that autonomous vehicles may perform better with simpler components, offer a fresh explanation for why biological evolution jumped from decentralized to centralized design, suggest how efficient movement might be achieved despite damaged centralized function, and provide a formula predicting the optimum capability of a system’s components so that it comes as close as possible to its target or goal.
  • Nice chat with Greg last night. He likes the “Bones in a Hut” and “Stampede Theory” phrases. It turns out the domains are available…
    • Thinking that the title of the book could be “Stampede Theory: Why Groupthink Happens, and Why Diversity Is the First, Best Answer”. Maybe structure the iConference talk around that as well.
  • Guidance from Antonio: In the meantime, if you have an idea on how to structure the Introduction, please go on considering that we want to put the decision logic inside each Autonomous Car that will be able to select passengers and help them in a self-organized manner.
  • Try out the splitter on the Tymora1 text.
    • Incorporate the ignore.xml when reading the text
    • If things look promising, then add changes to the phpbb code and try on that text as well.
    • At this point I’m just looking at overlapping lists of words that become something like a sand chart. I wonder if I can use the eigenvector values as a percentage connectivity/weight? Weights
    • Ok – I have to say that I’m pretty happy with this. These are centralities using the top 25% BOW from the Slack text of Tymora1. I think that the way to use this is to have each group be an “agent” that has a cluster of words for each step: Top 10
    • Based on this, I’d say add an “Evolving Networks of Words” section to the dissertation. Have to find that WordRank paper
  • Working on white paper. Lit review today, plus fix anything that I might have broken…
    • Added section on cybersecurity that got lost in the update fiasco
    • Aaron found a good paper on the lack of advantage that the US has in AI, particularly wrt China
  • Avoiding working on white paper by writing a generator for Aaron. Done!
  • Cortex is an open-source platform for building, deploying, and managing machine learning applications in production. It is designed for any developer who wants to build machine learning powered services without having to worry about infrastructure challenges like configuring data pipelines, continuous deployment, and dependency management. Cortex is actively maintained by Cortex Labs. We’re a venture-backed team of infrastructure engineers and we’re hiring.
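The eigenvector-weighting idea from the splitter notes above can be sketched in pure Python. The word list and co-occurrence counts here are invented, and the real code presumably runs on a much larger graph; L1-normalizing the principal eigenvector gives the “percentage connectivity/weight” reading:

```python
# Hypothetical word co-occurrence matrix for one time bucket:
# adjacency[i][j] = how often words i and j appear together.
words = ["orc", "attack", "stairs", "vines"]
adjacency = [
    [0, 3, 1, 0],
    [3, 0, 2, 1],
    [1, 2, 0, 1],
    [0, 1, 1, 0],
]

def eigenvector_weights(adj, iterations=100):
    """Power iteration toward the principal eigenvector, L1-normalized
    so the weights sum to 1.0 and read as percentages."""
    n = len(adj)
    v = [1.0 / n] * n
    for _ in range(iterations):
        nxt = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(nxt)
        v = [x / norm for x in nxt]
    return v

weights = eigenvector_weights(adjacency)
for w, c in sorted(zip(words, weights), key=lambda p: -p[1]):
    print(f"{w}: {c:.3f}")
```

The most-connected word dominates the weighting, which is the behavior you’d want if the weights are going to drive a sand chart.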

Phil 2.26.19

7:00 – 3:00 ASRC

    • Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design. Built by experienced developers, it takes care of much of the hassle of Web development, so you can focus on writing your app without needing to reinvent the wheel. It’s free and open source.
    • More white paper. Add Flynn’s thoughts about cyber security – see notes from yesterday
    • Reconnected with Antonio. He’d like me to write the introduction and motivation for his SASO paper
    • Add time bucketing to postanalyzer. I’m really starting to want to add a UI
      • Looks done. Try it out next time
        Running query for Poe in subject peanutgallery between 23:56 and 00:45
        Running query for Dungeon Master in subject peanutgallery between 23:56 and 00:45
        Running query for Lord Javelin in subject peanutgallery between 23:56 and 00:45
        Running query for memoriesmaze in subject peanutgallery between 23:56 and 00:45
        Running query for Linda in subject peanutgallery between 23:56 and 00:45
        Running query for phil in subject peanutgallery between 23:56 and 00:45
        Running query for Lorelai in subject peanutgallery between 23:56 and 00:45
        Running query for Bren'Dralagon in subject peanutgallery between 23:56 and 00:45
        Running query for Shelton Herrington in subject peanutgallery between 23:56 and 00:45
        Running query for Keiri'to in subject peanutgallery between 23:56 and 00:45
    • More white paper. Got through the introduction and background. Hopefully didn’t lose anything when I had to resynchronize with the repository that I hadn’t updated from
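The per-player bucket queries above follow a simple pattern; a sketch of how the postanalyzer loop might produce them (the player list, subject, and timestamps are taken from the log output above, everything else is hypothetical):

```python
from datetime import datetime

# Stand-ins for the postanalyzer inputs: a subject, the player list,
# and one time bucket's start/stop timestamps.
subject = "peanutgallery"
players = ["Poe", "Dungeon Master", "Lord Javelin"]
bucket_start = datetime(2019, 2, 23, 23, 56)
bucket_stop = datetime(2019, 2, 24, 0, 45)

def run_bucket_queries(players, subject, ts1, ts2):
    """Build one query description per player for a single time bucket."""
    return ["Running query for {} in subject {} between {} and {}".format(
                p, subject, ts1.strftime("%H:%M"), ts2.strftime("%H:%M"))
            for p in players]

lines = run_bucket_queries(players, subject, bucket_start, bucket_stop)
print("\n".join(lines))
```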

 

Phil 2.25.19

7:00 – 2:30 ASRC TL

2:30 – 4:30 PhD

  • Fix directory code of LMN so that it remembers the input and output directories – done
  • Add time bucketing capabilities. Do this by taking the complete conversation and splitting the results into N sublists. Take the beginning and ending time from each list and then use those to set the timestamp start and stop for each player’s posts.
  • Thinking about a time-series LMN tool that can chart the relative occurrence of the sorted terms over time. I think this could be done with tkinter. I would need to create an executable as described here, though the easiest answer seems to be pyinstaller.
  • Here are two papers that show the advantages of herding over nomadic behavior:
    • Phagotrophy by a flagellate selects for colonial prey: A possible origin of multicellularity
      • Predation was a powerful selective force promoting increased morphological complexity in a unicellular prey held in constant environmental conditions. The green alga, Chlorella vulgaris, is a well-studied eukaryote, which has retained its normal unicellular form in cultures in our laboratories for thousands of generations. For the experiments reported here, steady-state unicellular C. vulgaris continuous cultures were inoculated with the predator Ochromonas vallescia, a phagotrophic flagellated protist (‘flagellate’). Within less than 100 generations of the prey, a multicellular Chlorella growth form became dominant in the culture (subsequently repeated in other cultures). The prey Chlorella first formed globose clusters of tens to hundreds of cells. After about 10–20 generations in the presence of the phagotroph, eight-celled colonies predominated. These colonies retained the eight-celled form indefinitely in continuous culture and when plated onto agar. These self-replicating, stable colonies were virtually immune to predation by the flagellate, but small enough that each Chlorella cell was exposed directly to the nutrient medium.
    • De novo origins of multicellularity in response to predation
      • The transition from unicellular to multicellular life was one of a few major events in the history of life that created new opportunities for more complex biological systems to evolve. Predation is hypothesized as one selective pressure that may have driven the evolution of multicellularity. Here we show that de novo origins of simple multicellularity can evolve in response to predation. We subjected outcrossed populations of the unicellular green alga Chlamydomonas reinhardtii to selection by the filter-feeding predator Paramecium tetraurelia. Two of five experimental populations evolved multicellular structures not observed in unselected control populations within ~750 asexual generations. Considerable variation exists in the evolved multicellular life cycles, with both cell number and propagule size varying among isolates. Survival assays show that evolved multicellular traits provide effective protection against predation. These results support the hypothesis that selection imposed by predators may have played a role in some origins of multicellularity. SpontaniousClustering
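The time-bucketing step described at the top of this entry (split the complete conversation into N sublists, then read each bucket’s start/stop off its first and last posts) can be sketched as follows. The post tuples and counts are invented for illustration:

```python
# Split a time-ordered conversation into N roughly equal sublists, then
# take each sublist's first/last timestamps as the bucket's start/stop.
def bucket_conversation(posts, n_buckets):
    posts = sorted(posts, key=lambda p: p[0])  # order by timestamp
    size = -(-len(posts) // n_buckets)         # ceiling division
    buckets = []
    for i in range(0, len(posts), size):
        chunk = posts[i:i + size]
        buckets.append({"start": chunk[0][0], "stop": chunk[-1][0],
                        "posts": chunk})
    return buckets

# Ten fake (timestamp, author, text) posts split into three buckets.
posts = [(t, "player", "line %d" % t) for t in range(10)]
buckets = bucket_conversation(posts, 3)
for b in buckets:
    print(b["start"], b["stop"], len(b["posts"]))
```

Those start/stop pairs are then what each player’s per-bucket post query would be run against.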

Phil 2.24.19

It is a miserable, rainy morning, so I’m working on extracting text blocks for analytics. Once I try the various packages on those blocks, I’ll work on breaking them into time buckets.

Ok, that’s coming along well. Here’s an example:

Bren'Dralagon: Pushing through the vines, he steps out to meet the Orc..
(unknown distance clarity, if possible, rush down the stairs to the attack)

Bren'Dralagon: kk

Shelton Herrington: RIP

Keiri'to: first blood

Bren'Dralagon: *Hmm, my tailor will have questions on where that came from*

Shelton Herrington: how far across is the hazard? impossible to jump over?

Shelton Herrington: ok

Bren'Dralagon: close enough to attack?

Shelton Herrington: understood, just checking

Bren'Dralagon: if charging is allowed, since i just moved forward and would be turning i doubt it?, i'll charge

Lorelai: I thought the vines were (mostly) gone?

Shelton Herrington: *"this ingress is a formidable enemy"*

Bren'Dralagon: *Remind me to have those stairs cleaned. I know a guy*

Shelton Herrington: do i have a line of sight to either?

Now that I have some text, I’ll try the tools listed here: linguisticanalysistools.org. The whole suite is known as the Suite of Automatic Linguistic Analysis Tools (SALAT).

Which means… (bear with me here)

That these are tools for creating word salat!

I’ll be here all night folks. Be sure to try the fish…

Played with the tools, but I need a list of words to analyze the docs with respect to. LMN does a good job of this, so I tried it using the broken-out player and DM. It looks super interesting. This is BOW with the non-topic words “these, those, get, etc” ignored:

LMN-tymora1

Based on what I see here, I’m going to work on the bucketing and see if the top words change over time. If they do, then we can build a map in fewer steps
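The BOW-with-ignored-words step above can be sketched like this. The stoplist and the two sample buckets are made up; the real run uses the broken-out Tymora1 player and DM text:

```python
from collections import Counter

# Non-topic words to skip when counting (illustrative stoplist).
STOP = {"these", "those", "get", "the", "to", "a", "and", "so"}

def top_words(text, n=5):
    """BOW count for one time bucket, ignoring the stoplist."""
    tokens = [w.strip(".,!?*'\"").lower() for w in text.split()]
    counts = Counter(w for w in tokens if w and w not in STOP)
    return counts.most_common(n)

bucket_1 = "the orc charges the stairs and the party attacks the orc"
bucket_2 = "the vines block the stairs so the party burns the vines"
print(top_words(bucket_1))
print(top_words(bucket_2))
```

Comparing the top-n lists across buckets is exactly the “do the top words change over time” question.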

Command Dysfunction

Command Dysfunction: Minding the Cognitive War (1996)

Author: Arden B. Dahl

Institution: School of Advanced Airpower Studies Air University

Overall

  • An analysis of Command and Control Warfare (C2W), which aims to create command dysfunction in the adversary.
  • When viewed from an asymmetric warfare perspective, this closely resembles the Gerasimov Doctrine

Notes

  • Perception and cognition perform distinct roles in the formation of judgment. Perception answers the question: What do I see? Cognition answers the next question: How do I interpret it? However, general perceptual and cognitive biases cause decision makers to deviate from objectivity and make errors of judgment. Perceptual biases occur from the way the human mind senses the environment and tend to limit the accuracy of perception. Cognitive biases result from the way the mind works and tend to hinder accurate interpretation. These biases are general in that they are thought to be normally present in the general population of decision makers, regardless of their cultural background and organizational affiliations. (Page 14)
    • This is also true of machine (or any) intelligence that is not omniscient. There are corollaries for group decision processes
  • There are three perceptual biases that affect the accuracy of one’s view of the environment: the conditioning of expectations, the resistance to change and the impact of ambiguity. (Page 14)
  • There are three primary areas in which cognitive biases degrade the accuracy of judgment within a decision process: (Page 16)
    • the attribution of causality,
    • the evaluation of probability and
      • availability bias is a rule of thumb that works on the ease with which one can remember or recall other similar instances
      • anchoring bias is a phenomenon in which decision makers adjust too little from their initial judgments as additional evidence becomes available.
      • overconfidence bias is a tendency for individual decision makers to be subjectively overconfident about the extent and accuracy of their knowledge
      • Other typical problems in estimating probabilities derive from the misunderstanding of statistics.
    • the evaluation of evidence.
      • Decision makers tend to value consistent information from a small data set over more variable information from a larger sample.
      • The Absence of Evidence bias causes decision makers to miss data in complicated problems. Analysts often do not recognize that data is missing, and so do not adjust the certainty of their inferences accordingly.
      • The Persistence of Impressions bias follows a natural tendency to maintain first impressions concerning causality. It appears that the initial association of evidence to an outcome forms a strong cognitive linkage.
  • AI systems can help with these errors of judgment, though, since corrections can be explicitly programmed or placed in the training set.
  • These are all contributors to Normal Accidents
  • What about incorporating doctrine, rules of engagement and standard operating procedures? These can change dynamically and at different scales. (Allison Model II – Organizational Processes)
  • Also, it should be possible to infer the adversaries’ rules and then find areas in the latent space that they do not cover. They will be doing the same to us. How to guard against this? Diversity?
  • While the division of labor and SOP specialization is intended to make the organization efficient, the same division generates requirements to coordinate the intelligent collection and analysis of data.41 The failure to coordinate the varied perceptions and interests within the organization can lead to a number of uncoordinated rational decisions at the lower echelons, which in turn lead to an overall irrational outcome. (Page 20)
  • There are two common cultural biases that deserve mention for their role in forming erroneous perceptions: arrogance and projection. Arrogance is the attitude of superiority over others or the opposing side. It can manifest as a national or individual perception. In the extreme case, it forgoes any serious search of alternatives or decision analysis beyond what the decision maker has already decided. It can become highly irrational. The projection bias sees the rest of the world through one’s own values and beliefs, thus tending to estimate the opposition’s intentions, motivations and capabilities as one’s own. (Page 21)
    • Again, a good case for well-designed AI/ML. That being said, a commander’s misaligned biases may discount the AI system
  • The overconfidence or hubris bias tends toward an overreaching inflation of one’s abilities and strengths. In the extreme it promotes a prideful self-confidence that is self-intoxicating and oblivious to rational limits. A decision maker affected with hubris will in his utter aggressiveness invariably be led to surprise and eventual downfall. The Hubris-Nemesis Complex is a dangerous mindset that combines hubris (self-intoxicating “pretension to godliness”) and nemesis (“vengeful desire” to wreak havoc and destroy). Leaders possessing this bias combination are not easily deterred or compelled by normal or rational solutions. (Page 22)
  • Three major decision stress areas include the consequential weight of the decision, uncertainty and the pressure of time (Page 23)
    • Crisis settings complicate the use of rational and analytical decision processes in two ways. First, they add numerous unknowns, which in turn create many possible alternatives to the decision problem. Second, they reduce the time available to process and evaluate data, choose a course of action, and execute it.
    • As uncertainty becomes severe, decision makers begin resorting to maladaptive search and evaluation methods to reach conclusions. Part of this may stem from a desire to avoid the anxiety of being unsure, an intolerance of ambiguity. It may also be that analytical approaches are difficult when the link between the data and the outcomes is not predictable.
      • Still true for ML systems, even without stress. Being forced to make shorter searches of the solution space (not letting the results converge, etc. could be an issue)
  • The logic of dealing with the time pressure normally follows a somewhat standard pattern. Increasing time pressure first leads to an acceleration of information processing. Decision makers and their organizations will pick up the pace by expending additional resources to maintain existing decision strategies. As the pace begins to outrun in-place processing capabilities, decision makers reduce their data search and processing. In some cases this translates to increased selectivity, which the decision maker biases or weights toward details considered more important. In other cases, it does not change data collection but leads to a shallower data analysis. As the pace continues to increase, decision strategies begin to change. At this point major problems can creep into the process. The problems result from maladaptive strategies (satisficing, analogies, etc.) that save time but misrepresent data to produce inappropriate solutions. The lack of time also prevents critical introspection for perceptual and cognitive biases. In severe time pressure cases, the process may deteriorate to avoidance, denial or panic. (Page 26)
    • The goal is to create this in the adversary, but not us. Which makes this in many respects an algorithm-efficiency / processing-power arms race
  • In some decision situations, a timely, relatively correct response is better than an absolutely correct response that is made too late. In other words, the situation generates a tension between analysis and speed. (Page 30)
  • The Recognition-Primed Decision (RPD) process works in the following manner. First, an experienced decision maker recognizes a problem situation as familiar or prototypical. The recognition brings with it a solution. The recognition also evokes an appreciation for what additional information to monitor: plausible outcomes, typical reactions, timing cues and causal dynamics. Second, given time, the decision maker evaluates his solution for suitability by testing it through mental simulation for pitfalls and needed adjustments. Normally, the decision maker implements the first solution “on the run” and makes adjustments as required. The decision maker will not discard a solution unless it becomes plain that it is unworkable. If so, he will attempt a second option, if available. The RPD process is one of satisficing. It assumes that experienced decision makers identify a first solution that is “reasonably good” and are capable of mentally projecting its implementation. The RPD process also assumes that experienced decision makers are able to implement their one solution at any time during the process. (Page 31)
    • It should be possible to train systems to approximate this, possibly at different levels of abstraction
  • The RPD is a descriptive model that explains how experienced decision makers work problems in high stress decision situations (Page 31)
    • It is reflexive, and as such well suited to ML techniques, assuming there is enough data…
  • Situations that require the careful deployment of resources and analysis of abstract data, such as anticipating an enemy’s course of action, require an analytical approach. If there is time for analysis, a rational process normally provides a better solution for these kinds of problems (page 31)
    • This is not what AI/ML is good at. As reaction requirements become tighter, these actions will have to be run in “slow motion” offline and used to train the system.
  • The RPD model provides some insight as to how operational commanders survive in high-load, ambiguous and time pressured situations. The key seems to be experience. The experience serves as the base for what may be seen as an intuitive way to overcome stress. (Page 32)
    • This is why training with attribution may be the best way. “Ms. XXX trained this system and I trust her” may be the best option. We may want to build a “stable” of machine trainers.
  • Decision makers with more experience will tend to employ intuitive methods more often than analytical processes. This reliance on pattern recognition among experienced commanders may provide an opportunity for an adversary to manipulate the patterns to his advantage in deception operations. (Page 32)

Chapter 3: Considering a Cognitive Warfare Framework

  • an examination of John Boyd’s Observation-Orientation-Decision-Action (OODA) cycle to illustrate the different ways a C2W campaign may attack an adversary’s decision cycle. This sets the stage for analysis of the particular methods of such attacks.
    • From Wikipedia: One of John Boyd’s primary insights in fighter combat was that it is vital to change speed and direction faster than the opponent. This may interfere with an opponent’s OODA cycle. It is not necessarily a function of the plane’s ability to maneuver, but the pilot must think and act faster than the opponent can think and act. Getting “inside” the cycle, short-circuiting the opponent’s thinking processes, produces opportunities for the opponent to react inappropriately.
    • Once a group is adapting as fast as its arousal potential can tolerate, it will react in a linear way, since any deviation from the plan creates more arousal potential. Creating these stampedes, often simply through adversarial herding, can create an extremely brittle, vulnerable C2W cognitive framework.
  • …degrading the efficiency of the decision cycle by denying the “observation” function the ability to see and impeding the flow of accurate information through the physical links of the loop. Data denial is usually achieved by preventing the adversary’s observation function, or sensors, from operating effectively in one or more channels. (Page 36)
  • The second approach attempts to corrupt the adversary’s orientation. The focus is on the accuracy of the opponent’s perceptions and facts that inform his decisions, rather than their speed through the decision cycle. Operations security, deception and psychological operations (PSYOPS) are usually the primary C2W elements in the corruption effort.72 The corruption scheme’s relationship to decision speed is somewhat complicated. In fact, the corruption mechanism may work to vary the decision speed depending on the objective of the intended misperception. For example, the enemy might be induced to speedily make the wrong decision. (Page 37)

C2W

  • B. H. Liddell Hart wrote: “…it is usually necessary for the dislocating move to be preceded by a move, or moves, which can be best defined by the term ‘distract’ in its literal sense of ‘to draw asunder’. The purpose of this ‘distraction’ is to deprive the enemy of his freedom of action, and it should operate in the physical and psychological spheres.” (Page 42)
    • The issue here is that we are adding a new “psychological” sphere – the domain of the intelligent machine. Since we have limited insight into the high-dimensional function that is the trained network, we cannot know when it is being successfully manipulated until it makes an obvious blunder, at which point it may be too late. This is one of the reasons that diversity needs to be built into the system so that there is a lower chance of a majority of systems being compromised.
  • Fundamentally, all deception ploys are constructed in two parts: dissimulation and simulation. Dissimulation is covert, the act of hiding or obscuring the real; its companion, simulation, presents the false. Within this basic construct, deception programs are employed in two variants: A-type (ambiguity) and M-type (misdirection). The A-type deception seeks to increase ambiguity in the target’s mind. Its aim is to keep the adversary unsure of one’s true intentions, especially an adversary who has initially guessed right. A number of alternatives are developed for the target’s consumption, built on lies that are both plausible and sufficiently significant to cause the target to expend resources to cover them. The M-type deception is the more demanding variant. This deception misleads the adversary by reducing ambiguity, that is, attempting to convince him that the wrong solution is, in fact, “right.” In this case, the target positions most of his attention and resources in the wrong place. (Page 44)
  • Both Sun-Tzu and Liddell Hart highlighted the dilemma of alternative objectives upon an adversary’s mind made possible by movement. (Page 45)
  • Most important is the fact that the overall cognitive warfare approach is dependent upon the enemy’s command baseline–the decision making processes, command characteristics and expectations of the decision makers. The skillful employment of stress and deception against the command baseline may be a principal mechanism to bring about its cognitive dislocation. (Page 50)
    • As the baseline becomes automated, then cognitive warfare must factor these aspects in.

Phil 2.22.19

7:00 – 4:00 ASRC

  • Running Ellen’s dungeon tonight
  • Wondering what to do next. Look at text analytics? List is in this post.
  • But before we do that, I need to extract from the DB posts as text. And now I have something to do!
    • Sheesh – tried to update the database and had all kinds of weird problems. I wound up re-ingesting everything from the Slack files. This seems to work fine, so I exported that to replace the .sql file that may have been causing all the trouble.
  • Here’s a thing using the JAX library, which I’m becoming interested in: Meta-Learning in 50 Lines of JAX
    • The focus of Machine Learning (ML) is to imbue computers with the ability to learn from data, so that they may accomplish tasks that humans have difficulty expressing in pure code. However, what most ML researchers call “learning” right now is but a very small subset of the vast range of behavioral adaptability encountered in biological life! Deep Learning models are powerful, but require a large amount of data and many iterations of stochastic gradient descent (SGD). This learning procedure is time-consuming and once a deep model is trained, its behavior is fairly rigid; at deployment time, one cannot really change the behavior of the system (e.g. correcting mistakes) without an expensive retraining process. Can we build systems that can learn faster, and with less data?
  • Meta-Learning: Learning to Learn Fast
    • A good machine learning model often requires training with a large number of samples. Humans, in contrast, learn new concepts and skills much faster and more efficiently. Kids who have seen cats and birds only a few times can quickly tell them apart. People who know how to ride a bike are likely to discover the way to ride a motorcycle fast with little or even no demonstration. Is it possible to design a machine learning model with similar properties — learning new concepts and skills fast with a few training examples? That’s essentially what meta-learning aims to solve.
  • Meta learning is everywhere! Learning to Generalize from Sparse and Underspecified Rewards
    • In “Learning to Generalize from Sparse and Underspecified Rewards“, we address the issue of underspecified rewards by developing Meta Reward Learning (MeRL), which provides more refined feedback to the agent by optimizing an auxiliary reward function. MeRL is combined with a memory buffer of successful trajectories collected using a novel exploration strategy to learn from sparse rewards.
  • Lingvo: A TensorFlow Framework for Sequence Modeling
    • While Lingvo started out with a focus on NLP, it is inherently very flexible, and models for tasks such as image segmentation and point cloud classification have been successfully implemented using the framework. Distillation, GANs, and multi-task models are also supported. At the same time, the framework does not compromise on speed, and features an optimized input pipeline and fast distributed training. Finally, Lingvo was put together with an eye towards easy productionisation, and there is even a well-defined path towards porting models for mobile inference.
  • Working on white paper. Still reading Command Dysfunction and making notes. I think I’ll use the idea of C&C combat as the framing device of the paper. Started to write more bits
  • What, if anything, can the Pentagon learn from this war simulator?
    • It is August 2010, and Operation Glacier Mantis is struggling in the fictional Saffron Valley. Coalition forces moved into the valley nine years ago, but peace negotiations are breaking down after a series of airstrikes result in civilian casualties. Within a few months, the Coalition abandons Saffron Valley. Corruption sapped the reputation of the operation. Troops are called away to a different war. Operation Glacier Mantis ends in total defeat.
  • Created a post for Command Dysfunction here. Finished.

Phil 2.21.19

7:00 – 4:00 ASRC

  • Working on white paper. Still reading Command Dysfunction and making notes. I think I’ll use the idea of C&C combat as the framing device of the paper

4:30 – 7:00 Seminar

  • Finishing slides – done, though it took all day. Charged it to PhD
  • Order food! – Done
  • Presentation and food went well. Kaufman’s “At home in the Universe” next

Phil 2.20.19

7:00 – ASRC TL

  • Fast editor for very large files: EmEditor
  • Topic Modeling Systems and Interfaces
    • The 4Humanities “WhatEvery1Says” project conducted a comparative analysis in 2016 of the following topic modeling systems/interfaces. As a result, it chose to implement Andrew Goldstone’s DFR-browser for its own work.
  • Deep Learning for Video Game Playing
    • In this article, we review recent Deep Learning advances in the context of how they have been applied to play different types of video games such as first-person shooters, arcade games, and real-time strategy games. We analyze the unique requirements that different game genres pose to a deep learning system and highlight important open challenges in the context of applying these machine learning methods to video games, such as general game playing, dealing with extremely large decision spaces and sparse rewards.
    • TimeCircleMap
  • Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
    • A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. Furthermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized version even for finite practically-sized networks. This agreement is robust across different architectures, optimization methods, and loss functions.
  • AI Safety Needs Social Scientists
    • We believe the AI safety community needs to invest research effort in the human side of AI alignment. Many of the uncertainties involved are empirical, and can only be answered by experiment. They relate to the psychology of human rationality, emotion, and biases. Critically, we believe investigations into how people interact with AI alignment algorithms should not be held back by the limitations of existing machine learning. Current AI safety research is often limited to simple tasks in video games, robotics, or gridworlds, but problems on the human side may only appear in more realistic scenarios such as natural language discussion of value-laden questions. This is particularly important since many aspects of AI alignment change as ML systems increase in capability.
  • Started on slides for Thursday
  • Working on white paper. Adding in the paper above on Deep Learning for Video Game Playing.

Phil 2.19.19

7:00 – 6:00 ASRC TL IRAD

  • Something to listen to tomorrow morning? Tracing the Spread of Fake News
    • Two years after a presidential election that shocked so many, we are still trying to understand the role that fake news sources played, and how a swarm of propaganda clouded social media. Now a comprehensive study has looked carefully at the impact of untrustworthy online sources in the election, with some surprising results, and some suggestions for how to avoid problems in the future. In the studio for this episode is David Lazer, Professor of Political Science and Computer and Information Science at Northeastern University. He is one of the authors of Fake news on Twitter during the 2016 U.S. presidential election, which was just published in Science Magazine. 
  • Finished my writeup on Clockwork Muse. Now I need to make slides by Thursday.
  • Visual analytics for collaborative human-machine confidence in human-centric active learning tasks
    • Active machine learning is a human-centric paradigm that leverages a small labelled dataset to build an initial weak classifier, that can then be improved over time through human-machine collaboration. As new unlabelled samples are observed, the machine can either provide a prediction, or query a human ‘oracle’ when the machine is not confident in its prediction. Of course, just as the machine may lack confidence, the same can also be true of a human ‘oracle’: humans are not all-knowing, untiring oracles. A human’s ability to provide an accurate and confident response will often vary between queries, according to the duration of the current interaction, their level of engagement with the system, and the difficulty of the labelling task. This poses an important question of how uncertainty can be expressed and accounted for in a human-machine collaboration. In short, how can we facilitate a mutually-transparent collaboration between two uncertain actors—a person and a machine—that leads to an improved outcome? In this work, we demonstrate the benefit of human-machine collaboration within the process of active learning, where limited data samples are available or where labelling costs are high. To achieve this, we developed a visual analytics tool for active learning that promotes transparency, inspection, understanding and trust, of the learning process through human-machine collaboration. Fundamental to the notion of confidence, both parties can report their level of confidence during active learning tasks using the tool, such that this can be used to inform learning. Human confidence of labels can be accounted for by the machine, the machine can query for samples based on confidence measures, and the machine can report confidence of current predictions to the human, to further the trust and transparency between the collaborative parties. 
In particular, we find that this can improve the robustness of the classifier when incorrect sample labels are provided, due to unconfidence or fatigue. Reported confidences can also better inform human-machine sample selection in collaborative sampling. Our experimentation compares the impact of different selection strategies for acquiring samples: machine-driven, human-driven, and collaborative selection. We demonstrate how a collaborative approach can improve trust in the model robustness, achieving high accuracy and low user correction, with only limited data sample selections.
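The query-when-unconfident loop the abstract describes can be sketched in a few lines. All names below are hypothetical illustrations, not the paper's actual tool: the machine answers when its confidence clears a threshold and otherwise defers to the human oracle.

```python
def query_or_predict(classify, oracle, sample, threshold=0.75):
    """Predict with the machine, but defer to the human 'oracle'
    when the machine's confidence falls below the threshold."""
    label, confidence = classify(sample)
    if confidence >= threshold:
        return label, "machine"
    return oracle(sample), "human"

# Toy classifier: confident on short strings, unsure on long ones.
def classify(sample):
    return ("short" if len(sample) <= 5 else "long",
            1.0 if len(sample) <= 5 else 0.5)

oracle = lambda sample: "long"  # stand-in for the human labeller

print(query_or_predict(classify, oracle, "abc"))       # ('short', 'machine')
print(query_or_predict(classify, oracle, "abcdefgh"))  # ('long', 'human')
```

The paper's contribution is richer — the human also reports confidence back to the machine — but the asymmetric query step above is the core of the collaboration.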
  • Look into the principle of least effort and game theory. See if there is any relevant prior work.
    • Human Behaviour and the Principle of Least Effort. An Introduction to Human Ecology
      • Subtitled “An introduction to human ecology,” this work attempts systematically to treat “least effort” (and its derivatives) as the principle underlying a multiplicity of individual and collective behaviors, variously but regularly distributed. The general orientation is quantitative, and the principle is widely interpreted and applied. After a brief elaboration of principles and a brief summary of pertinent studies (mostly in psychology), Part One (Language and the structure of the personality) develops 8 chapters on its theme, ranging from regularities within language per se to material on individual psychology. Part Two (Human relations: a case of intraspecies balance) contains chapters on “The economy of geography,” “Intranational and international cooperation and conflict,” “The distribution of economic power and social status,” and “Prestige values and cultural vogues”—all developed in terms of the central theme. 20 pages of references with some annotation, keyed to the index. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
    • Decision Making and the Avoidance of Cognitive Demand
      • Behavioral and economic theories have long maintained that actions are chosen so as to minimize demands for exertion or work, a principle sometimes referred to as the “law of less work.” The data supporting this idea pertain almost entirely to demands for physical effort. However, the same minimization principle has often been assumed also to apply to cognitive demand. We set out to evaluate the validity of this assumption. In six behavioral experiments, participants chose freely between courses of action associated with different levels of demand for controlled information processing. Together, the results of these experiments revealed a bias in favor of the less demanding course of action. The bias was obtained across a range of choice settings and demand manipulations, and was not wholly attributable to strategic avoidance of errors, minimization of time on task, or maximization of the rate of goal achievement. Remarkably, the effect also did not depend on awareness of the demand manipulation. Consistent with a motivational account, avoidance of demand displayed sensitivity to task incentives and co-varied with individual differences in the efficacy of executive control. The findings reported, together with convergent neuroscientific evidence, lend support to the idea that anticipated cognitive demand plays a significant role in behavioral decision-making.
    • Intuition, deliberation, and the evolution of cooperation
      • Humans often cooperate with strangers, despite the costs involved. A long tradition of theoretical modeling has sought ultimate evolutionary explanations for this seemingly altruistic behavior. More recently, an entirely separate body of experimental work has begun to investigate cooperation’s proximate cognitive underpinnings using a dual process framework: Is deliberative self-control necessary to rein in selfish impulses, or does self-interested deliberation restrain an intuitive desire to cooperate? Integrating these ultimate and proximate approaches, we introduce dual-process cognition into a formal game theoretic model of the evolution of cooperation. Agents play prisoner’s dilemma games, some of which are one-shot and others of which involve reciprocity. They can either respond by using a generalized intuition, which is not sensitive to whether the game is one-shot or reciprocal, or pay a (stochastically varying) cost to deliberate and tailor their strategy to the type of game they are facing. We find that, depending on the level of reciprocity and assortment, selection favors one of two strategies: intuitive defectors who never deliberate, or dual-process agents who intuitively cooperate but sometimes use deliberation to defect in one-shot games. Critically, selection never favors agents who use deliberation to override selfish impulses: Deliberation only serves to undermine cooperation with strangers. Thus, by introducing a formal theoretical framework for exploring cooperation through a dual-process lens, we provide a clear answer regarding the role of deliberation in cooperation based on evolutionary modeling, help to organize a growing body of sometimes conflicting empirical results, and shed light on the nature of human cognition and social decision making.
    • Complexity Aversion: Influences of Cognitive Abilities, Culture and System of Thought
      • Complexity aversion describes the preference of decision makers for less complex options that cannot be explained by expected utility theory. While a number of research articles investigate the effects of complexity on choices, up to this point there exist only theoretical approaches aiming to explain the reasons behind complexity aversion. This paper presents two experimental studies that aim to fill this gap. The first study considers subjects’ cognitive abilities as a potential driver of complexity aversion. Cognitive skills are measured in a cognitive reflection test and, in addition, are approximated by subjects’ consistency of choices. In opposition to our hypothesis, subjects with higher cognitive skills display stronger complexity aversion compared to their peers. The second study deals with cultural background. The experiment was therefore conducted in Germany and in Japan. German subjects prefer less complex lotteries while Japanese are indifferent regarding choice complexity.
    • Space Time Dynamics of Insurgent Activity in Iraq
      • This paper describes analyses to determine whether there is a space-time dependency for insurgent activity. The data used for the research were 3 months of terrorist incidents attributed to the insurgency in Iraq during U.S. occupation and the methods used are based on a body of work well established using police recorded crime data. It was found that events clustered in space and time more than would be expected if the events were unrelated, suggesting communication of risk in space and time and potentially informing next event prediction. The analysis represents a first but important step and suggestions for further analysis addressing prevention or suppression of future incidents are briefly discussed.
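Zipf's rank-frequency law is the best-known consequence of the least-effort principle in the first reference above: the rank-r word occurs with frequency proportional to 1/r. A minimal sketch of the idealized distribution:

```python
def zipf_distribution(n, s=1.0):
    """Normalized Zipf distribution over ranks 1..n: p(r) proportional to 1/r**s."""
    weights = [1.0 / r ** s for r in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

p = zipf_distribution(1000)
print(p[0] / p[1])   # rank-1 word is twice as frequent as rank-2 -> 2.0
print(p[0] / p[9])   # ...and ten times as frequent as rank-10 -> 10.0
```

Checking empirical rank-frequency counts from the PHPBB/Slack corpora against this curve would be a quick way to see how "least-effort-like" the players' language is.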
  • Large teams develop and small teams disrupt science and technology
    • One of the most universal trends in science and technology today is the growth of large teams in all areas, as solitary researchers and small teams diminish in prevalence1,2,3. Increases in team size have been attributed to the specialization of scientific activities3, improvements in communication technology4,5, or the complexity of modern problems that require interdisciplinary solutions6,7,8. This shift in team size raises the question of whether and how the character of the science and technology produced by large teams differs from that of small teams. Here we analyse more than 65 million papers, patents and software products that span the period 1954–2014, and demonstrate that across this period smaller teams have tended to disrupt science and technology with new ideas and opportunities, whereas larger teams have tended to develop existing ones. Work from larger teams builds on more-recent and popular developments, and attention to their work comes immediately. By contrast, contributions by smaller teams search more deeply into the past, are viewed as disruptive to science and technology and succeed further into the future—if at all. Observed differences between small and large teams are magnified for higher-impact work, with small teams known for disruptive work and large teams for developing work. Differences in topic and research design account for a small part of the relationship between team size and disruption; most of the effect occurs at the level of the individual, as people move between smaller and larger teams. These results demonstrate that both small and large teams are essential to a flourishing ecology of science and technology, and suggest that, to achieve this, science policies should aim to support a diversity of team sizes.
  • Meeting with Panos about JuryRoom. Interesting! Tony Smith looks like someone to ping. Need to ask Panos.

Phil 2.16.19

Command Dysfunction: Minding the Cognitive War

  • This paper analyzes the factors and conditions of command dysfunction from the cognitive, or mental, perspective of command and control warfare (C2W). The author examines the limitations of rational decision making and the tension that exists between rational and intuitive processes. Next, the paper examines the vulnerabilities of rational and intuitive processes in order to build a cognitive warfare framework. The framework consists of three categories: the command baseline, stressors, and deception. The stressor and deception categories act on the command baseline. The analysis also suggests that there are a number of possible interactions that exist between the stressor and deception categories. The paper then uses the framework to analyze evidence of command dysfunction in three historical campaigns. The historical analyses study the German command during the Normandy invasion, the Allied command during the first week of the Battle of the Bulge, and the Israeli command during the first half of the Arab-Israeli October 1973 War. In addition to showing that there are interactions between stressors and deception, the analyses highlight the importance of understanding the adversary’s command baseline. The paper concludes that effective C2W is not so much what is done to an adversary’s command, but rather what he does to himself, perhaps with a little help.

2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Streaming Tutorials, June 2nd. The following tutorials have been accepted for NAACL 2019 and will be held on Sunday, June 2nd, 2019. Exact timings will be posted later as part of the official schedule.

  • T1: DEEP ADVERSARIAL LEARNING FOR NLP
  • T2: DEEP LEARNING FOR NATURAL LANGUAGE INFERENCE
  • T3: MEASURING AND MODELING LANGUAGE CHANGE
  • T4: TRANSFER LEARNING IN NATURAL LANGUAGE PROCESSING
  • T5: LANGUAGE LEARNING AND PROCESSING IN PEOPLE AND MACHINES
  • T6: APPLICATIONS OF NATURAL LANGUAGE PROCESSING IN CLINICAL RESEARCH AND PRACTICE

Fighting disinformation across Google products

  • Providing useful and trusted information at the scale that the Internet has reached is enormously complex and an important responsibility. Adding to that complexity, over the last several years we’ve seen organized campaigns use online platforms to deliberately spread false or misleading information. We have twenty years of experience in these information challenges and it’s what we strive to do better than anyone else. So while we have more work to do, we’ve been working hard to combat this challenge for many years.

Phil 2.14.19

7:00 – 7:00 ASRC

  • Worked on the whitepaper. Going down the chain of consequences with respect to adding AI to military systems in the light of the Starcraft2 research.
  • Maps of Meaning: The Architecture of Belief
    • A 1999 book by Canadian clinical psychologist and psychology professor Jordan Peterson. The book describes a comprehensive theory for how people construct meaning, in a way that is compatible with the modern scientific understanding of how the brain functions.[1] It examines the “structure of systems of belief and the role those systems play in the regulation of emotion”,[2] using “multiple academic fields to show that connecting myths and beliefs with science is essential to fully understand how people make meaning”.[3] Wikipedia
  • Continuing with Clockwork Muse review. Finished the overview and theoretical takes. Continuing on the notes, which is going slow because of bad text scanning
  • JAX is Autograd and XLA, brought together for high-performance machine learning research. With its updated version of Autograd, JAX can automatically differentiate native Python and NumPy functions. It can differentiate through loops, branches, recursion, and closures, and it can take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation) via grad as well as forward-mode differentiation, and the two can be composed arbitrarily to any order. What’s new is that JAX uses XLA to compile and run your NumPy programs on GPUs and TPUs. Compilation happens under the hood by default, with library calls getting just-in-time compiled and executed. But JAX also lets you just-in-time compile your own Python functions into XLA-optimized kernels using a one-function API, jit. Compilation and automatic differentiation can be composed arbitrarily, so you can express sophisticated algorithms and get maximal performance without leaving Python.
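Setting the JAX machinery itself aside, the core idea behind its grad transformation can be sketched in pure Python with dual numbers. This is a forward-mode-only illustration (JAX's grad is reverse-mode, and jit/XLA compilation is not modeled here at all):

```python
class Dual:
    """Dual number carrying (value, derivative) for forward-mode autodiff."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def grad(f):
    """Return a function computing df/dx at a scalar x."""
    return lambda x: f(Dual(x, 1.0)).dot

f = lambda x: x * x * x + 2 * x   # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2
print(grad(f)(3.0))               # 29.0
```

Because the derivative is propagated through ordinary arithmetic, any composition of +, * (and whatever other ops the Dual class is taught) differentiates automatically — the same tracing idea JAX applies to full NumPy programs.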
  • Working on white paper lit review
    • An Evolutionary Algorithm that Constructs Recurrent Neural Networks
      • Standard methods for simultaneously inducing the structure and weights of recurrent neural networks limit every task to an assumed class of architectures. Such a simplification is necessary since the interactions between network structure and function are not well understood. Evolutionary computations, which include genetic algorithms and evolutionary programming, are population-based search methods that have shown promise in many similarly complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. GNARL’s empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods.
    • Added Evolutionary Deep Learning and Deep RTS to the references
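The mutate-and-select loop at the heart of evolutionary programs like GNARL can be shown with a deliberately tiny sketch. This toy evolves a single scalar weight to fit a linear mapping; real GNARL also mutates network structure, which is not modeled here:

```python
import random

def evolve(fitness, pop_size=20, generations=100, sigma=0.5, seed=0):
    """Minimal elitist evolutionary loop: mutate the current best
    individual with Gaussian noise and keep the highest-scoring candidate."""
    rng = random.Random(seed)
    best = rng.uniform(-1, 1)
    for _ in range(generations):
        candidates = [best] + [best + rng.gauss(0, sigma)
                               for _ in range(pop_size - 1)]
        best = max(candidates, key=fitness)
    return best

# Fitness: how well weight w fits the mapping x -> 3x on a few samples
# (negated squared error, so higher is better).
data = [(1, 3), (2, 6), (-1, -3)]
fitness = lambda w: -sum((w * x - y) ** 2 for x, y in data)

w = evolve(fitness)
print(w)  # converges near 3.0
```

No gradients are used anywhere — selection pressure alone drives the weight toward the optimum, which is why the same loop applies to non-differentiable choices like network topology.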
  • Better Language Models and Their Implications
    • We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training.
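The unsupervised next-token objective behind such language models can be illustrated with a toy bigram counter — orders of magnitude simpler than the actual model, but the same generate-by-sampling loop:

```python
import random
from collections import defaultdict

def train_bigram(tokens):
    """Record, for each token, the tokens that followed it in the corpus."""
    model = defaultdict(list)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length, seed=0):
    """Sample a continuation token-by-token from the bigram counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the cave troll saw the party and the party fled".split()
model = train_bigram(corpus)
print(generate(model, "the", 6))
```

Everything the large model adds — subword tokens, attention over long contexts, billions of parameters — serves to sharpen exactly this conditional next-token distribution.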
  • Shimei seminar – 4:30 – 7:00

Phil 2.13.19

7:00 – 7:00 ASRC IRAD TL

  • The Digital Clockwork Muse: A Computational Model of Aesthetic Evolution
    • This paper presents a computational model of creativity that attempts to capture within a social context an important aspect of the art and design process: the search for novelty. The computational model consists of multiple novelty-seeking agents that can assess the interestingness of artworks. The agents can communicate particularly interesting artworks to others. Agents can also communicate rewards to other agents for finding interesting artworks. We present the results from running experiments to investigate the effects of searching for different degrees of novelty on the artworks produced and the social organisation of the agents.
  • Upload the rest of Slack Tymora.
  • Create some txt files and feed them into LMN. I’m thinking by player and then by slice. Do this for both the PHPBB and Slack data. Other ideas:
    • Look into coherence measures
    • Are there economic models of attention? (arXiv)
    • TAACO is an easy to use tool that calculates 150 indices of both local and global cohesion, including a number of type-token ratio indices (including specific parts of speech, lemmas, bigrams, trigrams and more), adjacent overlap indices (at both the sentence and paragraph level), and connectives indices.
    • CRAT is an easy to use tool that includes over 700 indices related to lexical sophistication, cohesion and source text/summary text overlap. CRAT is particularly well suited for the exploration of writing quality as it relates to summary writing.
    • TAALED is an analysis tool designed to calculate a wide variety of lexical diversity indices. Homographs are disambiguated using part of speech tags, and indices are calculated using lemma forms. Indices can also be calculated using all lemmas, content lemmas, or function lemmas. Also available is diagnostic output which allows the user to see how TAALED processed each word.
    • TAALES is a tool that measures over 400 classic and new indices of lexical sophistication, and includes indices related to a wide range of sub-constructs.  TAALES indices have been used to inform models of second language (L2) speaking proficiency, first language (L1) and L2 writing proficiency, spoken and written lexical proficiency, genre differences, and satirical language.
    • SEANCE is an easy to use tool that includes 254 core indices and 20 component indices based on recent advances in sentiment analysis. In addition to the core indices, SEANCE allows for a number of customized indices including filtering for particular parts of speech and controlling for instances of negation.
    • TAASSC is an advanced syntactic analysis tool. It measures a number of indices related to syntactic development. Included are classic indices of syntactic complexity (e.g., mean length of T-unit) and fine-grained indices of phrasal (e.g., number of adjectives per noun phrase) and clausal (e.g., number of adverbials per clause) complexity. Also included are indices that are grounded in usage-based perspectives to language acquisition that rely on frequency profiles of verb argument constructions.
    • GAMET is an easy to use tool that provides incidence counts for structural and mechanics errors in texts including grammar, spelling, punctuation, white space, and repetition errors. The tool also provides line output for the errors flagged in the text.
    • Comparison of Top 6 Python NLP Libraries
      • NLTK (Natural Language Toolkit) is used for such tasks as tokenization, lemmatization, stemming, parsing, POS tagging, etc. This library has tools for almost all NLP tasks.
      • spaCy is the main competitor to NLTK; the two libraries can be used for many of the same tasks.
      • Scikit-learn provides a large machine learning library and also includes tools for text preprocessing.
      • Gensim is a package for topic modeling, vector space modeling, and document similarity.
      • Pattern is primarily a web mining module, so it supports NLP only as a side task.
      • Polyglot is yet another Python package for NLP. It is less widely used, but it covers a wide range of NLP tasks.
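Many of the lexical indices above (e.g., the type-token ratios in TAACO and TAALED) reduce to simple counts once the text is tokenized. A minimal sketch, with a regex tokenizer standing in for a full NLTK/spaCy pipeline (the real tools add lemmatization, POS disambiguation, and so on):

```python
import re

def tokenize(text):
    """Lowercase word tokenizer (a crude stand-in for NLTK/spaCy)."""
    return re.findall(r"[a-z']+", text.lower())

def type_token_ratio(tokens):
    """Unique types divided by total tokens: a basic lexical-diversity index."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0

text = "the party met the troll and the troll attacked"
tokens = tokenize(text)
print(len(tokens), type_token_ratio(tokens))  # 9 tokens, 6 types -> 0.667
```

Running an index like this per player and per time slice would give a first, cheap comparison across the PHPBB and Slack datasets before bringing in the heavier tools.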
  • Continuing writing Clockwork Muse review
  • Reading Attachment 1 to BAA FA8750-18-S-7014. “While white papers will be considered if received prior to 6:00 PM Eastern Standard Time (EST) on 30 Sep 2022, the following submission dates are suggested to best align with projected funding:” 
    • FY20 – 15 April 2019
  • AIMS/ML Meeting. Not sure what the outcome was, other than folks are covered for this quarter?
  • Long, wide ranging meeting with Wayne at Frisco’s. Gave him an account on Antibubbles.com. And it seems like we won first place for Blue Sky papers?

Phil 2.12.19

7:00 – 4:30 ASRC IRAD

  • Talked with Eric yesterday. Going to write up a white paper about teachable AI; a two-to-three-week effort.
  • Speaking of which, The Evolved Transformer
    • Recent works have highlighted the strengths of the Transformer architecture for dealing with sequence tasks. At the same time, neural architecture search has advanced to the point where it can outperform human-designed models. The goal of this work is to use architecture search to find a better Transformer architecture. We first construct a large search space inspired by the recent advances in feed-forward sequential models and then run evolutionary architecture search, seeding our initial population with the Transformer. To effectively run this search on the computationally expensive WMT 2014 English-German translation task, we develop the progressive dynamic hurdles method, which allows us to dynamically allocate more resources to more promising candidate models. The architecture found in our experiments – the Evolved Transformer – demonstrates consistent improvement over the Transformer on four well-established language tasks: WMT 2014 English-German, WMT 2014 English-French, WMT 2014 English-Czech and LM1B. At big model size, the Evolved Transformer is twice as efficient as the Transformer in FLOPS without loss in quality. At a much smaller – mobile-friendly – model size of ~7M parameters, the Evolved Transformer outperforms the Transformer by 0.7 BLEU on WMT’14 English-German.
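The resource-allocation idea in progressive dynamic hurdles can be approximated in spirit by a successive-halving loop — a simplified sketch, not the paper's exact method: give every candidate a small training budget, then repeatedly double the budget for the better-scoring half.

```python
def hurdle_search(candidates, evaluate, rounds=3):
    """Successive-halving sketch: evaluate all candidates at the current
    budget, keep the top half, double the budget, repeat."""
    budget = 1
    pool = list(candidates)
    for _ in range(rounds):
        scored = sorted(pool, key=lambda c: evaluate(c, budget), reverse=True)
        pool = scored[:max(1, len(scored) // 2)]
        budget *= 2
    return pool[0]

# Toy evaluation: each candidate's true quality is revealed more
# accurately as the budget grows (noise term shrinks with budget).
qualities = {"a": 0.9, "b": 0.5, "c": 0.7, "d": 0.2}
evaluate = lambda cand, budget: qualities[cand] - 0.1 / budget

best = hurdle_search(qualities.keys(), evaluate)
print(best)  # "a"
```

The payoff is the same as in the paper: most of the compute is spent on the few candidates that survive the early, cheap hurdles.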
  • Finished running Tymora1 on Slack. Downloaded, though the download didn’t include research_notes. Hmmm. Looks like I can’t make it public, either.
  • Thinking about writing a tagging app, possibly with a centrality capability.
  • Started on the Teachable AI paper. The rough outline is there, and I have a good set of references.