
Phil 11.5.18

7:00- 4:30 ASRC PhD

  • Make an integer generator by scaling and shifting the floating-point generator to the desired values and then truncating. It would be fun to read in a token list and have the waveform be words (a rough sketch of both ideas follows the listings below)
    • Done with the int waveform. This is an integer waveform of the function
      math.sin(xx)*math.sin(xx/2.0)*math.cos(xx/4.0)

      set on a range from 0 – 100:

    •  IntWaves
    • And here’s the unmodified floating-point version of the same function:
    • FloatWaves
    • Here’s the same function as words:
      #confg: {"function":math.sin(xx)*math.sin(xx/2.0)*math.cos(xx/4.0), "rows":100, "sequence_length":20, "step":1, "delta":0.4, "type":"floating_point"}
      routed, traps, thrashing, fifteen, ultimately, dealt, anyway, apprehensions, boats, job, descended, tongue, dripping, adoration, boats, routed, routed, strokes, cheerful, charleses, 
      traps, thrashing, fifteen, ultimately, dealt, anyway, apprehensions, boats, job, descended, tongue, dripping, adoration, boats, routed, routed, strokes, cheerful, charleses, travellers, 
      thrashing, fifteen, ultimately, dealt, anyway, apprehensions, boats, job, descended, tongue, dripping, adoration, boats, routed, routed, strokes, cheerful, charleses, travellers, unsuspected, 
      fifteen, ultimately, dealt, anyway, apprehensions, boats, job, descended, tongue, dripping, adoration, boats, routed, routed, strokes, cheerful, charleses, travellers, unsuspected, malingerer, 
      ultimately, dealt, anyway, apprehensions, boats, job, descended, tongue, dripping, adoration, boats, routed, routed, strokes, cheerful, charleses, travellers, unsuspected, malingerer, respect, 
      dealt, anyway, apprehensions, boats, job, descended, tongue, dripping, adoration, boats, routed, routed, strokes, cheerful, charleses, travellers, unsuspected, malingerer, respect, aback, 
      anyway, apprehensions, boats, job, descended, tongue, dripping, adoration, boats, routed, routed, strokes, cheerful, charleses, travellers, unsuspected, malingerer, respect, aback, vair', 
      apprehensions, boats, job, descended, tongue, dripping, adoration, boats, routed, routed, strokes, cheerful, charleses, travellers, unsuspected, malingerer, respect, aback, vair', wraith, 
      boats, job, descended, tongue, dripping, adoration, boats, routed, routed, strokes, cheerful, charleses, travellers, unsuspected, malingerer, respect, aback, vair', wraith, bare, 
      job, descended, tongue, dripping, adoration, boats, routed, routed, strokes, cheerful, charleses, travellers, unsuspected, malingerer, respect, aback, vair', wraith, bare, creek, 
      descended, tongue, dripping, adoration, boats, routed, routed, strokes, cheerful, charleses, travellers, unsuspected, malingerer, respect, aback, vair', wraith, bare, creek, descended, 
      tongue, dripping, adoration, boats, routed, routed, strokes, cheerful, charleses, travellers, unsuspected, malingerer, respect, aback, vair', wraith, bare, creek, descended, assortment, 
      dripping, adoration, boats, routed, routed, strokes, cheerful, charleses, travellers, unsuspected, malingerer, respect, aback, vair', wraith, bare, creek, descended, assortment, flashed, 
      adoration, boats, routed, routed, strokes, cheerful, charleses, travellers, unsuspected, malingerer, respect, aback, vair', wraith, bare, creek, descended, assortment, flashed, reputation, 
      boats, routed, routed, strokes, cheerful, charleses, travellers, unsuspected, malingerer, respect, aback, vair', wraith, bare, creek, descended, assortment, flashed, reputation, guarded, 
      routed, routed, strokes, cheerful, charleses, travellers, unsuspected, malingerer, respect, aback, vair', wraith, bare, creek, descended, assortment, flashed, reputation, guarded, tempers, 
      routed, strokes, cheerful, charleses, travellers, unsuspected, malingerer, respect, aback, vair', wraith, bare, creek, descended, assortment, flashed, reputation, guarded, tempers, partnership, 
      strokes, cheerful, charleses, travellers, unsuspected, malingerer, respect, aback, vair', wraith, bare, creek, descended, assortment, flashed, reputation, guarded, tempers, partnership, bare, 
      cheerful, charleses, travellers, unsuspected, malingerer, respect, aback, vair', wraith, bare, creek, descended, assortment, flashed, reputation, guarded, tempers, partnership, bare, count, 
      charleses, travellers, unsuspected, malingerer, respect, aback, vair', wraith, bare, creek, descended, assortment, flashed, reputation, guarded, tempers, partnership, bare, count, descended, 
      travellers, unsuspected, malingerer, respect, aback, vair', wraith, bare, creek, descended, assortment, flashed, reputation, guarded, tempers, partnership, bare, count, descended, dashed, 
      unsuspected, malingerer, respect, aback, vair', wraith, bare, creek, descended, assortment, flashed, reputation, guarded, tempers, partnership, bare, count, descended, dashed, ears, 
      malingerer, respect, aback, vair', wraith, bare, creek, descended, assortment, flashed, reputation, guarded, tempers, partnership, bare, count, descended, dashed, ears, q, 
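    • A rough sketch of how this works (the 0-100 rescaling, names, and token mapping here are my assumptions, not the actual generator code): sample the float function, rescale and truncate it to ints, and reuse the int as an index into a token list so the waveform "reads" as words.
      import math

      def float_waveform(xx: float) -> float:
          return math.sin(xx) * math.sin(xx / 2.0) * math.cos(xx / 4.0)

      def int_waveform(xx: float, lo: int = 0, hi: int = 100) -> int:
          # the raw function lies in roughly [-1, 1]; rescale to [lo, hi] and truncate
          return int((float_waveform(xx) + 1.0) / 2.0 * (hi - lo) + lo)

      def word_waveform(xx: float, tokens: list) -> str:
          # reuse the integer value as an index into the token list
          return tokens[int_waveform(xx, 0, len(tokens) - 1)]

      if __name__ == "__main__":
          tokens = ["quivering", "scraped", "introspective", "confines", "restlessness"]
          delta = 0.4
          for row in range(5):
              print(", ".join(word_waveform((row + i) * delta, tokens) for i in range(20)))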
      

       

  • Started LSTMs again, using this example based on Alice in Wonderland
  • Aaron and T were in all-day discussions with Kevin about NASA/NOAA. Dropped in a few times. NASA is airgapped, but you can bring code in and out. Bringing code in requires a review.
  • Call the Army BAA people. We need white paper templates and a response for Dr. Palazzolo.
  • Finish and submit 810 reviews tonight. Done.
  • This is important for the DARPA and Army BAAs: The geographic embedding of online echo chambers: Evidence from the Brexit campaign
    • This study explores the geographic dependencies of echo-chamber communication on Twitter during the Brexit campaign. We review the evidence positing that online interactions lead to filter bubbles to test whether echo chambers are restricted to online patterns of interaction or are associated with physical, in-person interaction. We identify the location of users, estimate their partisan affiliation, and finally calculate the distance between sender and receiver of @-mentions and retweets. We show that polarized online echo-chambers map onto geographically situated social networks. More specifically, our results reveal that echo chambers in the Leave campaign are associated with geographic proximity and that the reverse relationship holds true for the Remain campaign. The study concludes with a discussion of primary and secondary effects arising from the interaction between existing physical ties and online interactions and argues that the collapsing of distances brought by internet technologies may foreground the role of geography within one’s social network.
  • Also important:
    • How to Write a Successful Level I DHAG Proposal
      • The idea behind a Level I project is that it can be “high risk/high reward.” Put another way, we are looking for interesting, innovative, experimental, new ideas, even if they have a high potential to fail. It’s an opportunity to figure things out so you are better prepared to tackle a big project. Because of the relatively low dollar amount (no more than $50K), we are willing to take on more risk for an idea with lots of potential. By contrast, at the Level II and especially at the Level III, there is a much lower risk tolerance; the peer reviewers expect that you’ve already completed an earlier start-up or prototyping phase and will want you to convince them your project is ready to succeed.
  • Tracing a Meme From the Internet’s Fringe to a Republican Slogan
    • This feedback loop is how #JobsNotMobs came to be. In less than two weeks, the three-word phrase expanded from corners of the right-wing internet onto some of the most prominent political stages in the country, days before the midterm elections.
  • Effectiveness of gaming for communicating and teaching climate change
    • Games are increasingly proposed as an innovative way to convey scientific insights on the climate-economic system to students, non-experts, and the wider public. Yet, it is not clear if games can meet such expectations. We present quantitative evidence on the effectiveness of a simulation game for communicating and teaching international climate politics. We use a sample of over 200 students from Germany playing the simulation game KEEP COOL. We combine pre- and postgame surveys on climate politics with data on individual in-game decisions. Our key findings are that gaming increases the sense of personal responsibility, the confidence in politics for climate change mitigation, and makes more optimistic about international cooperation in climate politics. Furthermore, players that do cooperate less in the game become more optimistic about international cooperation but less confident about politics. These results are relevant for the design of future games, showing that effective climate games do not require climate-friendly in-game behavior as a winning condition. We conclude that simulation games can facilitate experiential learning about the difficulties of international climate politics and thereby complement both conventional communication and teaching methods.
    • This reinforces my recent thinking that games may be a fourth, distinct form of human sociocultural communication

Phil 11.4.18

The Center for Midnight

  • Inspiration came from his most recent experiments on human/computer collaborative writing. Sloan is developing a sort of cyborg text editor, an algorithmic cure for writer’s block, a machine that reads what you’ve written so far and offers a few words that might come next. It does so by reaching into its model of language, a recurrent neural network trained on whatever collection of text seems appropriate, and trying to find sensible endings to the sentence you began.
  • rnn-writer
    • This is a package for the Atom text editor that works with torch-rnn-server to provide responsive, inline “autocomplete” powered by a recurrent neural network trained on a corpus of sci-fi stories, or another corpus of your choosing.
    • Writing with the machine
      • If I had to offer an extravagant analogy (and I do) I’d say it’s like writing with a deranged but very well-read parrot on your shoulder. Anytime you feel brave enough to ask for a suggestion, you press tab, and…

Phil 11.2.18

7:00 – 2:30 ASRC PhD (feeling burned out – went home early for a nap)

  • Continuing with my 810 assignment. Just found out about finite semiotics, which could be useful for trustworthiness detection (variance in terms and speed of adoption)
  • I like this! Creating a Perceptron From Scratch (a toy sketch follows the quote below)
    • In order to gain more insight as to how Neural Networks (NNs) are created and used, we must first understand how they work. It is important to always create a solid foundation as to why you are doing something, instead of navigating blindly. With the ubiquity of Tensorflow or Keras, sometimes it is easy to forget what you are actually building and how to best develop your NN. For this project I will be using Python to create a simple Perceptron that will implement the basics of Back-Propagation to Optimize our Synapse Weighting. I’ll be sure to explain everything along the way and always encourage you to reach out if you have any questions! I will assume no prior knowledge in NNs, but you will instead need to know some fundamentals of Python programming, low-level calculus, and a bit of linear algebra. If you aren’t quite sure what a NN is and how they are used in the field of AI, I encourage you to first read my article covering that topic before tackling this project. So let’s get to it!
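    • Not the article’s code, but a minimal single-neuron sketch of the same idea (sigmoid activation, gradient-descent updates on the synapse weights, toy truth-table data):
      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
      y = np.array([[0, 1, 1, 0]], dtype=float).T  # the label follows the first input column

      rng = np.random.default_rng(0)
      weights = rng.normal(size=(3, 1))

      for _ in range(10000):
          output = sigmoid(X @ weights)              # forward pass
          error = y - output
          gradient = error * output * (1 - output)   # back-propagate through the sigmoid
          weights += X.T @ gradient                  # adjust the synapse weights

      print(output.round(3))  # should approach [0, 1, 1, 0]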
  • And this is very interesting:
    • SHAP (SHapley Additive exPlanations) is a unified approach to explain the output of any machine learning model. SHAP connects game theory with local explanations, uniting several previous methods [1-7] and representing the only possible consistent and locally accurate additive feature attribution method based on expectations (see the SHAP NIPS paper for details).
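    • A small usage sketch (this follows SHAP’s documented TreeExplainer pattern; the xgboost model and diabetes dataset are just stand-ins, not anything from the post):
      import shap
      import xgboost
      from sklearn.datasets import load_diabetes

      X, y = load_diabetes(return_X_y=True, as_frame=True)
      model = xgboost.XGBRegressor().fit(X, y)

      explainer = shap.TreeExplainer(model)   # game-theoretic (Shapley) attribution
      shap_values = explainer.shap_values(X)  # one attribution per feature per row
      shap.summary_plot(shap_values, X)       # global view of feature contributions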
  • Ok, back to generators. Here are several versions of Call of the Wild (a sliding-window sketch follows these listings)
    • Tokens
      index, token
      0, quivering
      1, scraped
      2, introspective
      3, confines
      4, restlessness
      5, pug
      6, mandate
      7, twisted
      8, part
      9, error
      10, thong
      11, resolved
      12, daunted
      13, spray
      14, trees
      15, caught
      16, fearlessly
      17, quite
      18, soft
      19, sounds
      20, slaying
    • Text sequences
      #confg: {"sequence_length":10, "step":1, "type":"words"}
      buck, did, not, read, the, newspapers, or, he, would, have
      did, not, read, the, newspapers, or, he, would, have, known
      not, read, the, newspapers, or, he, would, have, known, that
      read, the, newspapers, or, he, would, have, known, that, trouble
      the, newspapers, or, he, would, have, known, that, trouble, was
      newspapers, or, he, would, have, known, that, trouble, was, brewing
      or, he, would, have, known, that, trouble, was, brewing, not
      he, would, have, known, that, trouble, was, brewing, not, alone
      would, have, known, that, trouble, was, brewing, not, alone, for
      have, known, that, trouble, was, brewing, not, alone, for, himself
      known, that, trouble, was, brewing, not, alone, for, himself, but
      that, trouble, was, brewing, not, alone, for, himself, but, for
      trouble, was, brewing, not, alone, for, himself, but, for, every
      was, brewing, not, alone, for, himself, but, for, every, tidewater
      brewing, not, alone, for, himself, but, for, every, tidewater, dog
      not, alone, for, himself, but, for, every, tidewater, dog, strong
      alone, for, himself, but, for, every, tidewater, dog, strong, of
      for, himself, but, for, every, tidewater, dog, strong, of, muscle
      himself, but, for, every, tidewater, dog, strong, of, muscle, and

       

    • Index sequences
      #confg: {"sequence_length":10, "step":1, "type":"integer"}
      4686, 1720, 283, 1432, 1828, 1112, 4859, 3409, 3396, 379
      1720, 283, 1432, 1828, 1112, 4859, 3409, 3396, 379, 4004
      283, 1432, 1828, 1112, 4859, 3409, 3396, 379, 4004, 3954
      1432, 1828, 1112, 4859, 3409, 3396, 379, 4004, 3954, 4572
      1828, 1112, 4859, 3409, 3396, 379, 4004, 3954, 4572, 4083
      1112, 4859, 3409, 3396, 379, 4004, 3954, 4572, 4083, 3287
      4859, 3409, 3396, 379, 4004, 3954, 4572, 4083, 3287, 283
      3409, 3396, 379, 4004, 3954, 4572, 4083, 3287, 283, 1808
      3396, 379, 4004, 3954, 4572, 4083, 3287, 283, 1808, 975
      379, 4004, 3954, 4572, 4083, 3287, 283, 1808, 975, 532
      4004, 3954, 4572, 4083, 3287, 283, 1808, 975, 532, 973
      3954, 4572, 4083, 3287, 283, 1808, 975, 532, 973, 975
      4572, 4083, 3287, 283, 1808, 975, 532, 973, 975, 4678
      4083, 3287, 283, 1808, 975, 532, 973, 975, 4678, 3017
      3287, 283, 1808, 975, 532, 973, 975, 4678, 3017, 2108
      283, 1808, 975, 532, 973, 975, 4678, 3017, 2108, 984
      1808, 975, 532, 973, 975, 4678, 3017, 2108, 984, 1868
      975, 532, 973, 975, 4678, 3017, 2108, 984, 1868, 3407
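    • A minimal sketch (an assumed reconstruction, not the actual generator) of the sliding window behind both listings: tokenize the text, build a token-to-index vocabulary, then emit overlapping windows of sequence_length tokens, advancing by step each row.
      import re

      def make_sequences(text: str, sequence_length: int = 10, step: int = 1):
          tokens = re.findall(r"[a-z']+", text.lower())
          vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
          word_rows, index_rows = [], []
          for start in range(0, len(tokens) - sequence_length + 1, step):
              window = tokens[start:start + sequence_length]
              word_rows.append(window)
              index_rows.append([vocab[w] for w in window])
          return word_rows, index_rows

      if __name__ == "__main__":
          text = "Buck did not read the newspapers, or he would have known that trouble was brewing"
          words, indices = make_sequences(text)
          for w, i in zip(words[:3], indices[:3]):
              print(", ".join(w))
              print(", ".join(str(x) for x in i))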

Phil 11.1.18

7:00 – 4:30 ASRC PhD

  • Quick thought. Stampedes may be recognized not just from low variance (density of connections), but also from the speed at which a new term moves into the lexicon (stiffness)
  • The Junk News Aggregator, the Visual Junk News Aggregator and the Top 10 Junk News Aggregator are research projects of the Computational Propaganda group (COMPROP) of the Oxford Internet Institute (OII) at the University of Oxford. These aggregators are intended as tools to help researchers, journalists, and the public see what English language junk news stories are being shared and engaged with on Facebook, ahead of the 2018 US midterm elections on November 6, 2018. The aggregators show junk news posts along with how many reactions they received, for all eight types of post reactions available on Facebook, namely: Likes, Comments, Shares, and the five emoji reactions: Love, Haha, Wow, Angry, and Sad.
  • Reading Charles Perrow’s Normal Accidents. Riveting. All about dense, tightly connected networks with hidden information
    • From The Montreal Review
      • Normal Accident drew attention to two different forms of organizational structure that Herbert Simon had pointed to years before, vertical integration, and what we now call modularity. Examining risky systems in the Accident book, I focused upon the unexpected interactions of different parts of the system that no designer could have expected and no operator comprehend or be able to interdict.
  • Building generators.
    • Need to change the “stepsize” in the Torrance generator to be variable – done. Here’s my little ode to The Shining:
      #confg: {"rows":100, "sequence_length":26, "step":26, "type":"words"}
      all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes 
      jack a dull boy all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work 
      and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes jack a 
      dull boy all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no 
      play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes jack a dull boy 
      all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes 
      jack a dull boy all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work 
      and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes jack a 
      dull boy all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no 
      play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes jack a dull boy 
      all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes 
      jack a dull boy all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work 
      and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes jack a 
      
    • Need to be able to turn out a numeric equivalent. Done with floating point (a reconstruction sketch follows this list). This:
      #confg: {"function":math.sin(xx)*math.sin(xx/2.0)*math.cos(xx/4.0), "rows":100, "sequence_length":20, "step":1, "delta":0.4, "type":"floating_point"}
      0.0,0.07697897630719268,0.27378318599563484,0.5027638400821064,0.6604469814238397,0.6714800165989514,0.519596709539434,0.2524851001382131,-0.04065231596017931,-0.2678812526747579,-0.37181365763470914,-0.34898182120310306,-0.24382057359778858,-0.12182487479311599,-0.035942415169752356,-0.0027892469005274916,0.00019865778200507415,0.016268713740310237,0.07979661440830532,0.19146155036709192,
      0.07697897630719312,0.2737831859956355,0.5027638400821071,0.6604469814238401,0.6714800165989512,0.5195967095394334,0.2524851001382121,-0.04065231596018022,-0.26788125267475843,-0.37181365763470925,-0.3489818212031028,-0.24382057359778805,-0.12182487479311552,-0.0359424151697521,-0.0027892469005274395,0.0001986577820050832,0.016268713740310397,0.07979661440830574,0.19146155036709248,0.31158944024296154,
      0.2737831859956368,0.502763840082108,0.6604469814238405,0.6714800165989508,0.5195967095394324,0.25248510013821085,-0.04065231596018143,-0.2678812526747592,-0.37181365763470936,-0.34898182120310245,-0.24382057359778747,-0.12182487479311502,-0.03594241516975184,-0.002789246900527388,0.00019865778200509222,0.01626871374031056,0.07979661440830614,0.191461550367093,0.311589440242962,0.3760334615921674,
      0.5027638400821092,0.6604469814238411,0.6714800165989505,0.5195967095394312,0.25248510013820913,-0.040652315960182955,-0.26788125267476015,-0.37181365763470964,-0.348981821203102,-0.24382057359778667,-0.12182487479311428,-0.03594241516975145,-0.0027892469005273107,0.00019865778200510578,0.016268713740310803,0.07979661440830675,0.1914615503670939,0.3115894402429629,0.3760334615921675,0.3275646734005755,
      0.660446981423842,0.6714800165989498,0.5195967095394289,0.2524851001382062,-0.04065231596018568,-0.2678812526747618,-0.37181365763471,-0.34898182120310123,-0.24382057359778553,-0.1218248747931133,-0.03594241516975093,-0.0027892469005272066,0.00019865778200512388,0.016268713740311122,0.07979661440830756,0.19146155036709495,0.31158944024296387,0.3760334615921676,0.3275646734005745,0.1475692800414062,
      0.671480016598949,0.5195967095394267,0.25248510013820324,-0.04065231596018842,-0.2678812526747636,-0.3718136576347104,-0.34898182120310045,-0.24382057359778414,-0.12182487479311209,-0.03594241516975028,-0.002789246900527077,0.0001986577820051465,0.016268713740311528,0.07979661440830856,0.19146155036709636,0.3115894402429648,0.37603346159216783,0.32756467340057344,0.1475692800414041,-0.12805444308254293,
      0.5195967095394245,0.2524851001382003,-0.04065231596019116,-0.2678812526747653,-0.3718136576347107,-0.3489818212030998,-0.24382057359778303,-0.12182487479311109,-0.03594241516974975,-0.0027892469005269733,0.00019865778200516457,0.016268713740311847,0.07979661440830936,0.19146155036709747,0.3115894402429657,0.37603346159216794,0.32756467340057244,0.147569280041402,-0.1280544430825456,-0.41793663502550105,
      0.2524851001381973,-0.04065231596019389,-0.26788125267476703,-0.3718136576347111,-0.3489818212030989,-0.2438205735977817,-0.12182487479310988,-0.0359424151697491,-0.002789246900526843,0.00019865778200518717,0.01626871374031225,0.07979661440831039,0.1914615503670989,0.3115894402429671,0.3760334615921681,0.3275646734005709,0.14756928004139883,-0.1280544430825496,-0.41793663502550454,-0.6266831461371138,
      
    • Gives this: Waves
    • Need to write a generator that reads in text (words and characters) and produces data tables with stepsizes
    • Need to write a generator that takes an equation as a waveform
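    • A reconstruction sketch of the floating-point rows above (the row/delta indexing is inferred from the output, so treat it as an assumption): each row is sequence_length samples of the function, and each successive row advances the sampling window by step * delta.
      import math

      def generate_float_rows(rows: int = 100, sequence_length: int = 20,
                              step: int = 1, delta: float = 0.4):
          table = []
          for r in range(rows):
              xx0 = r * step * delta
              row = [math.sin(xx0 + i * delta)
                     * math.sin((xx0 + i * delta) / 2.0)
                     * math.cos((xx0 + i * delta) / 4.0)
                     for i in range(sequence_length)]
              table.append(row)
          return table

      if __name__ == "__main__":
          for row in generate_float_rows(rows=3):
              print(",".join("{:.4f}".format(v) for v in row))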
  • USPTO Meeting. Use NNs to produce multiple centrality measures / Laplacians that users interact with
  • Working on my 810 tasks
    • Potentially useful for mapmaking: Learning the Preferences of Ignorant, Inconsistent Agents
      • An important use of machine learning is to learn what people value. What posts or photos should a user be shown? Which jobs or activities would a person find rewarding? In each case, observations of people’s past choices can inform our inferences about their likes and preferences. If we assume that choices are approximately optimal according to some utility function, we can treat preference inference as Bayesian inverse planning. That is, given a prior on utility functions and some observed choices, we invert an optimal decision-making process to infer a posterior distribution on utility functions. However, people often deviate from approximate optimality. They have false beliefs, their planning is sub-optimal, and their choices may be temporally inconsistent due to hyperbolic discounting and other biases. We demonstrate how to incorporate these deviations into algorithms for preference inference by constructing generative models of planning for agents who are subject to false beliefs and time inconsistency. We explore the inferences these models make about preferences, beliefs, and biases. We present a behavioral experiment in which human subjects perform preference inference given the same observations of choices as our model. Results show that human subjects (like our model) explain choices in terms of systematic deviations from optimal behavior and suggest that they take such deviations into account when inferring preferences.
    • An Overview of the Schwartz Theory of Basic Values (Added to normative map making) Schwartz
      • This article presents an overview of the Schwartz theory of basic human values. It discusses the nature of values and spells out the features that are common to all values and what distinguishes one value from another. The theory identifies ten basic personal values that are recognized across cultures and explains where they come from. At the heart of the theory is the idea that values form a circular structure that reflects the motivations each value expresses. This circular structure, that captures the conflicts and compatibility among the ten values is apparently culturally universal. The article elucidates the psychological principles that give rise to it. Next, it presents the two major methods developed to measure the basic values, the Schwartz Value Survey and the Portrait Values Questionnaire. Findings from 82 countries, based on these and other methods, provide evidence for the validity of the theory across cultures. The findings reveal substantial differences in the value priorities of individuals. Surprisingly, however, the average value priorities of most societal groups exhibit a similar hierarchical order whose existence the article explains. The last section of the article clarifies how values differ from other concepts used to explain behavior—attitudes, beliefs, norms, and traits.

Phil 10.31.18

7:00 – ASRC PhD

  • Read this carefully today: Introducing AdaNet: Fast and Flexible AutoML with Learning Guarantees
    • Today, we’re excited to share AdaNet, a lightweight TensorFlow-based framework for automatically learning high-quality models with minimal expert intervention. AdaNet builds on our recent reinforcement learning and evolutionary-based AutoML efforts to be fast and flexible while providing learning guarantees. Importantly, AdaNet provides a general framework for not only learning a neural network architecture, but also for learning to ensemble to obtain even better models.
    • What about data from simulation?
    • Github repo
    • This looks like it’s based deeply on the cloud AI and Machine Learning products, including cloud-based hyperparameter tuning.
    • Time series prediction is here as well, though treated in a more BigQuery manner
      • In this blog post we show how to build a forecast-generating model using TensorFlow’s DNNRegressor class. The objective of the model is the following: Given FX rates in the last 10 minutes, predict FX rate one minute later.
    • Text generation:
      • Cloud poetry: training and hyperparameter tuning custom text models on Cloud ML Engine
        • Let’s say we want to train a machine learning model to complete poems. Given one line of verse, the model should generate the next line. This is a hard problem—poetry is a sophisticated form of composition and wordplay. It seems harder than translation because there is no one-to-one relationship between the input (first line of a poem) and the output (the second line of the poem). It is somewhat similar to a model that provides answers to questions, except that we’re asking the model to be a lot more creative.
      • Codelab: Google Developers Codelabs provide a guided, tutorial, hands-on coding experience. Most codelabs will step you through the process of building a small application, or adding a new feature to an existing application. They cover a wide range of topics such as Android Wear, Google Compute Engine, Project Tango, and Google APIs on iOS.
        Codelab tools on GitHub

  • Add the Range and Length section in my notes to the DARPA measurement section. Done. I need to start putting together the dissertation using these parts
  • Read Open Source, Open Science, and the Replication Crisis in HCI. Broadly, it seems true, but trying to piggyback on GitHub seems like a shallow solution that repurposes something built for coding – an ephemeral activity – for science, which is archival for a reason. Thought needs to be given to an integrated record (collection, raw data, cleaned data, analysis, raw results, paper (with reviews?), slides, and possibly a recording of the talk with questions). What would it take to make this work across all science, from critical ethnographies to particle physics? How will it be accessible in 100 years? 500? 1,000? This is very much an HCI problem. It is about designing a useful socio-cultural interface. Some really good questions would be “how do we use our HCI tools to solve this problem?”, and, “does this point out the need for new/different tools?”.
  • NASA AIMS meeting. Demo in 2 weeks. AIMS is “time series prediction”, A2P is “unstructured data”. Prove that we can actually do ML, as opposed to just saying things.
    • How about cross-point correlation? Could show in a sim?
    • Meeting on Friday with a package
    • We’ve solved A, here’s the vision for B – Z and a roadmap. JPSS is a near-term customer (JPSS Data)
    • Getting actionable intelligence from the system logs
    • Application portfolios for machine learning
    • Umbrella of capabilities for Rich Burns
    • New architectural framework for TTNC
    • Complete situational awareness. Access to commands and sensor streams
    • Software Engineering Division/Code 580
    • A2P as a toolbox, but needs to have NASA-relevant analytic capabilities
    • GMSEC overview

Phil 10.30.18

7:00 – 3:30 ASRC PhD

  • Search as embodied in the “Ten Blue Links” meets the requirements of a Perrow “Normal Accident”
    • The search results are densely connected. That’s how PageRank works. Even latent connections matter.
    • The change in popularity of a page rapidly affects the rank. So the connections are stiff
    • The relationships of the returned links both to each other and to the broader information landscape in general is hidden.
    • An additional density and stiffness issue is that everyone uses Google, so there is a dense, stiff connection between the search engine and the population of users
  • Write up something about how
    • ML can make maps, which decrease the likelihood of IR contributing to normal accidents
    • AI can use these maps to understand the shape of human belief space, and where the positive regions and dangerous sinks are.
  • Two measures for maps are the concepts of Range and Length. Range is the distance over which a trajectory can be placed on the map and remain contiguous. Length is the total distance that a trajectory travels, independent of the map it’s placed on.
  • Write up the basic algorithm for ML map production (a rough sketch follows this list)
    • Take a set of trajectories that are known to be in the same belief region (why JuryRoom is needed) as the input
    • Generate an N-dimensional coordinate frame that best preserves length over the greatest range.
    • What is used as the basis for the trajectory may matter. The range (at a minimum) can go from letters to high-level topics. I think any map reconstruction based on letters would be a tangle, with clumps around TH, ER, ON, and AN. At the other end, an all-encompassing meta-topic, like WORDS, would be a single, accurate, but useless point. So map reconstruction becomes possible somewhere between these two extremes.
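    • A very rough sketch of one way to read the steps above (my own framing, nothing implemented yet): treat each trajectory as a sequence of high-dimensional points (e.g. statement embeddings), fit a low-dimensional coordinate frame with MDS so that pairwise distances, and therefore trajectory lengths, are preserved as well as possible, then compare each trajectory’s length before and after.
      import numpy as np
      from sklearn.manifold import MDS

      def trajectory_length(points):
          return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

      # toy data: three 10-step trajectories in a 50-D "belief space"
      rng = np.random.default_rng(0)
      trajectories = [np.cumsum(rng.normal(size=(10, 50)), axis=0) for _ in range(3)]

      all_points = np.vstack(trajectories)
      embedded = MDS(n_components=2, random_state=0).fit_transform(all_points)

      # split the embedded points back into trajectories and compare lengths
      offset = 0
      for traj in trajectories:
          emb = embedded[offset:offset + len(traj)]
          offset += len(traj)
          print(trajectory_length(traj), trajectory_length(emb))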
  • The Nietzsche text is pretty good. In particular, check out the way the sentences form based on the seed  “s when one is being cursed.
    • the fact that the spirit of the spirit of the body and still the stands of the world
    • the fact that the last is a prostion of the conceal the investion, there is our grust
    • the fact them strongests! it is incoke when it is liuderan of human particiay
    • the fact that she could as eudop bkems to overcore and dogmofuld
    • In this case, the first 2-3 words are the same, followed by random, semi-structured text. That’s promising, since the comparison would be on the seed plus the generated text.
  • Today, see how fast a “Shining” (All work and no play makes Jack a dull boy.) text can be learned and then try each keyword as a start. As we move through the sentence, the probability of the next words should change.
    • Generate the text set
    • Train the Nietzsche model on the new text. Done. Here are examples after one epoch with a batch size of 32, sampled at diversities from 0.2 to 1.2:
      ----- diversity: 0.2
      ----- Generating with seed: "es jack a 
      dull boy all work and no play"
      es jack a 
      dull boy all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes 
      
      ----- diversity: 0.5
      ----- Generating with seed: "es jack a 
      dull boy all work and no play"
      es jack a 
      dull boy all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes 
      
      ----- diversity: 1.0
      ----- Generating with seed: "es jack a 
      dull boy all work and no play"
      es jack a 
      dull boy all work and no play makes jack a dull boy anl wory and no play makes jand no play makes jack a dull boy all work and no play makes jack a 
      
      ----- diversity: 1.2
      ----- Generating with seed: "es jack a 
      dull boy all work and no play"
      es jack a 
      dull boy all work and no play makes jack a pull boy all work and no play makes jack andull boy all work and no play makes jack a dull work and no play makes jack andull

      Note that the errors start with a temperature of 1.0 or greater

    • Rewrite the last part of the code to generate text based on each word in the sentence.
      • So I tried that and got gobbledygook. The issue is that the prediction only works on waveform-sized chunks. To verify this, I created a seed from the input text, truncating it to maxlen (20 in this case):
        sentence = "all work and no play makes jack a dull boy"[:maxlen]

        That worked, but it means that the character-based approach isn’t going to work (a sketch of the truncated-seed sampling follows the output below)

        ----- temperature: 0.2
        ----- Generating with seed: [all work and no play]
        all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes 
        
        ----- temperature: 0.5
        ----- Generating with seed: [all work and no play]
        all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes 
        
        ----- temperature: 1.0
        ----- Generating with seed: [all work and no play]
        all work and no play makes jack a dull boy all work and no play makes jack a dull boy pllwwork wnd no play makes 
        
        ----- temperature: 1.2
        ----- Generating with seed: [all work and no play]
        all work and no play makes jack a dull boy all work and no play makes jack a dull boy all work and no play makes
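      • Here’s roughly what that fix looks like in code (variable names like model, chars, char_indices, and indices_char follow the Keras lstm_text_generation example this is based on, so treat them as assumptions):
        import numpy as np

        def sample_from_seed(model, seed, maxlen, chars, char_indices, indices_char, n_chars=120):
            # truncate the seed to maxlen, then slide a maxlen-character window
            # through the model one prediction at a time
            sentence = seed[:maxlen]
            generated = sentence
            for _ in range(n_chars):
                x_pred = np.zeros((1, maxlen, len(chars)))
                for t, ch in enumerate(sentence):
                    x_pred[0, t, char_indices[ch]] = 1.0
                preds = model.predict(x_pred, verbose=0)[0]
                next_char = indices_char[int(np.argmax(preds))]  # greedy; temperature sampling omitted
                generated += next_char
                sentence = sentence[1:] + next_char
            return generated

        # e.g. sample_from_seed(model, "all work and no play makes jack a dull boy", 20,
        #                       chars, char_indices, indices_char)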

         

    • Based on this result and the ensuing chat with Aaron, we’re going to revisit the whole LSTM with numbers and build out a process that will support words instead of characters.
  • Looking for CMAC models, I found Self Organizing Feature Maps at NeuPy.com
  • Here’s How Much Bots Drive Conversation During News Events
    • Late last week, about 60 percent of the conversation was driven by likely bots. Over the weekend, even as the conversation about the caravan was overshadowed by more recent tragedies, bots were still driving nearly 40 percent of the caravan conversation on Twitter. That’s according to an assessment by Robhat Labs, a startup founded by two UC Berkeley students that builds tools to detect bots online. The team’s first product, a Chrome extension called BotCheck.me, allows users to see which accounts in their Twitter timelines are most likely bots. Now it’s launching a new tool aimed at news organizations called FactCheck.me, which allows journalists to see how much bot activity there is across an entire topic or hashtag

Phil 10.29.18

7:00 – 5:00 ASRC PhD

  • This looks like a Big Deal from Google – Working together to apply AI for social good
    • Google.org is issuing an open call to organizations around the world to submit their ideas for how they could use AI to help address societal challenges. Selected organizations will receive support from Google’s AI experts, Google.org grant funding from a $25M pool, credit and consulting from Google Cloud, and more.
    • We look forward to receiving your application on or before 11:59 p.m. PT on January 22, 2019, and we encourage you to apply early given that we expect high volume within the last few hours of the application window. Thank you!
    • Application Guide
    • Application form (can’t save, compose offline using guide, above)
  • Finished my writeup on Meltdown
  • Waiting for a response from Antonio
  • Meeting with Don at 9:00 to discuss BAA partnership.
    • Don is comfortable with being PI or co-PI, whichever works best. When we call technical POCs, we speak on his behalf
    • We discussed how he could participate with the development of theoretical models based on signed graph Laplacians creating structures that can move in belief space. He thinks the idea has merit, and can put in up to 30% of his time on mathematical models and writing
    • ASRC has already partnered with UMBC. ASRC would sub to UMBC
    • Ordinarily, IP is distributed proportional to the charged hours
    • Don has access to other funding vehicles that can support the Army BAA, but this would make things more complicated. These should be discussed if we can’t make a “clean” agreement that meets our funding needs
  • Pinged Brian about his defense.
  • Some weekend thoughts
    • Opinion dynamics systems describe how communication within a network occurs, but disregard the motion of the network as a whole. In cases where the opinions converge, the network is stiff.
    • Graph Laplacians could model “othering” by having negative weights. It looks like these are known as signed Laplacians, and they are useful for denoting difference. The trick is to discover the equations of motion. How do you model a “social particle”?
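    • A tiny hand-built example (mine, not from any of the signed-Laplacian papers) of what that looks like, with negative edge weights standing in for “othering” and positive weights for affinity:
      import numpy as np

      # agents 0 and 1 agree; both "other" agent 2
      A = np.array([[0.0,  1.0, -0.5],
                    [1.0,  0.0, -0.5],
                    [-0.5, -0.5, 0.0]])

      # the signed Laplacian uses absolute degrees so it stays positive semi-definite
      D = np.diag(np.abs(A).sum(axis=1))
      L = D - A

      print(L)
      print(np.linalg.eigvalsh(L))  # smallest eigenvalue is 0 only if the signed graph is balanced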
  • Just discovered the journal Swarm Intelligence
    • Swarm Intelligence is the principal peer reviewed publication dedicated to reporting research and new developments in this multidisciplinary field. The journal publishes original research articles and occasional reviews on theoretical, experimental, and practical aspects of swarm intelligence. It offers readers reports on advances in the understanding and utilization of systems that are based on the principles of swarm intelligence. Emphasis is given to such topics as the modeling and analysis of collective biological systems; application of biological swarm intelligence models to real-world problems; and theoretical and empirical research in ant colony optimization, particle swarm optimization, swarm robotics, and other swarm intelligence algorithms. Articles often combine experimental and theoretical work.
  • I think it’s time to start ramping up on the text generation!
      • Updated my home box to tensorflow 1.11.0. Testing to see if it still works using the Deep Learning with Keras simple_nueral_net.py example. Hasn’t broken (yet…), but is taking a long time… Worked! And it’s much faster the second time. Don’t know why that is and can’t find anything online that talks to that.
        Loss: 0.5043802047491074
        Accuracy: 0.8782
        Time =  211.42629722093085
      • Found this keras example for generating Nietzsche

     

    • Trying it out. This may be an overnight run… But it is running.
  • Had a good discussion with Aaron about how mapmaking could be framed as an ML problem. More writeup tomorrow.

Phil 10.28.18

We know from the House Intelligence Committee report that the Russians were pushing a Syrian message among the other more “organic” messages. But this seems to indicate that they did get traction.

Mail bomb suspect made numerous references on Facebook to Russian associates and echoed pro-Kremlin views

  • The posts showed fixations on certain subjects, including Miami sports teams, youth soccer, Native American themes and businesses Sayoc was seeking to promote. But in April 2016, after several months of not posting on Facebook, the account abruptly changed subjects to link to videos celebrating Syria’s fight against ISIS.
  • “He just pops up four months later and just relentlessly shares stories about ISIS and terrorists,” said Albright. “The turn is just remarkable… He found ideas that never let go from that point on.”

Phil 10.25.18

7:00 – 5:00 ASRC PhD

  • Two unrelated thoughts.
    • A tangle could be made to heal if each transaction kept track of the transaction that verified it. If that transaction becomes unreachable for more than N heartbeats, it becomes unverified again. Not sure if the verifying transaction needs to track the other way. Being able to query the tangle for these “scars” seems like it should be useful (a speculative sketch follows these thoughts).
    • A death threat is a unique form of dimension reduction, and should probably be tracked/tagged using both emergent topic modeling and hand-tuned heuristics
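    • Going back to the tangle thought: a speculative sketch of the healing mechanism (all names and the heartbeat scheme are invented here, not an existing tangle/IOTA API):
      from dataclasses import dataclass
      from typing import Dict, List, Optional, Set

      N_HEARTBEATS = 3  # how long a verifier can be unreachable before the link breaks

      @dataclass
      class Transaction:
          tx_id: str
          verified_by: Optional[str] = None  # the transaction that verified this one
          missed: int = 0
          scarred: bool = False              # was verified once, then lost its verifier

      class Tangle:
          def __init__(self) -> None:
              self.transactions: Dict[str, Transaction] = {}

          def add(self, tx_id: str) -> None:
              self.transactions[tx_id] = Transaction(tx_id)

          def verify(self, tx_id: str, verifier_id: str) -> None:
              tx = self.transactions[tx_id]
              tx.verified_by, tx.missed, tx.scarred = verifier_id, 0, False

          def heartbeat(self, reachable: Set[str]) -> None:
              # call once per heartbeat with the ids that answered
              for tx in self.transactions.values():
                  if tx.verified_by is None:
                      continue
                  if tx.verified_by in reachable:
                      tx.missed = 0
                  else:
                      tx.missed += 1
                      if tx.missed > N_HEARTBEATS:
                          tx.verified_by = None  # becomes unverified again
                          tx.scarred = True      # ...and leaves a queryable "scar"

          def scars(self) -> List[str]:
              return [t.tx_id for t in self.transactions.values() if t.scarred]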
  • Tim Berners-Lee on the huge sociotechnical design challenge
    • “We must consciously decide on both of these, both the social side and the technical side,” he said. “[These platforms are] anthropogenic, made by people… Facebook and Twitter are anthropogenic. They’re made by people. They’re coded by people. And the people who code them are constantly trying to figure out how to make them better.”
  • Antonio workshop paper
    • Today – Finished hierarchy section, didn’t start Black swan section
    • Took out the hybrid section and used Aaron’s writeup on research opportunities to set up the ensemble of hierarchies parts that Antonio is writing.
    • Tonight, send note to Antonio with thoughts on introduction and Hybrid section. Done. He’s taking a look.

Phil 10.24.18

7:00 – 6:00 ASRC PhD

  • So the BAA is only for academic work, which means partnering with UMD/UMBC. Need to talk to Don about setting this up. Some email this morning about how an NDA would be needed. I’m thinking that it would be restricted to A2P.
  • Inside the Moral Machine : When your experiment survey becomes reaction video material
    • On June 23rd, 2016, we deployed Moral Machine. The website was intended to be a mere companion survey to a paper being published that day. Thirty minutes later, it crashed.
    • Read this to see if there are ways of making JuryRoom go viral in similar ways
  • Respond to the Collective Intelligence journal proposal – done
  • Antonio workshop paper
    • Today – Finish market section – done
    • Thursday – Start hierarchy section, start Black swan section
      • Thursday night, send note to Antonio with thoughts on introduction and Hybrid section.
    • Friday – Hybrid section?
  • Hello, CoLa!
    • This network of character co-occurrence in Les Misérables is positioned by constraint-based optimization using WebCoLa. Compare to d3-force.
    • This should be better than mass-spring-damper systems for building maps. Cola

Phil 10.23.18

7:00 – 4:30 ASRC PhD

  • Respond to the Collective Intelligence journal proposal
  • Antonio workshop paper
    • Today – Introduction, TaaS as a spectrum, part of the Market section
    • Wednesday – Hierarchy section
    • Thursday – Black swan section
      • Thursday night, send note to Antonio with thoughts on introduction and Hybrid section.
    • Friday – Hybrid section?
  • LSTM Encoder-Decoder with Adversarial Network for Text Generation from Keyword
    • Natural Language Generation (NLG), one of the areas of Natural Language Processing (NLP), is a difficult task, but it is also important because it applies to our lives. So far, there have been various approaches to text generation, but in recent years, approaches using artificial neural networks have been used extensively. We propose a model for generating sentences from keywords using Generative Adversarial Network (GAN) composed of a generator and a discriminator among these artificial neural networks. Specifically, the generator uses the Long Short-Term Memory (LSTM) Encoder-Decoder structure, and the discriminator uses the bi-directional LSTM with self-attention. Also, the keyword for input to the encoder of the generator is input together with two words similar to oneself. This method contributes to the creation of sentences containing words that have similar meanings to the keyword. In addition, the number of unique sentences generated increases and diversity can be increased. We evaluate our model with BLEU Score and loss value. As a result, we can see that our model improves the performance compared to the baseline model without an adversarial network.

Phil 10.22.18

7:00 – 5:30 ASRC PhD

      • Need to finish workshop paper this week
      • Jeff Atwood said I should look at 10-year-old code to frighten myself, and I found a permuter class that could be used for hyperparameter tuning! It’s here (a Python sketch of the idea follows this list):
        trunk/Java_folders/Projects/EntryRelationDb/src/main/java/com/edgeti/EntryRelationDb/Permutations.java
      • Fika
      • Meeting with Wayne.
        • We have a 12% chance of getting into the iConference, so don’t expect much. On the other hand, that opens up content for Antonio’s paper?
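      • The Java permuter itself isn’t reproduced here; this is just a Python sketch of the same idea for hyperparameter tuning (the parameter names and run_trial stub are placeholders):
        from itertools import product

        param_grid = {
            "learning_rate": [0.001, 0.01, 0.1],
            "batch_size": [32, 64],
            "hidden_units": [64, 128, 256],
        }

        def run_trial(params):
            # placeholder: train a model with these params and return a score
            return 0.0

        # enumerate every permutation of hyperparameter values
        for values in product(*param_grid.values()):
            params = dict(zip(param_grid.keys(), values))
            print(params, run_trial(params))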

     

Phil 10.21.18

Finished Meltdown. Need to write up some notes.

Think about using a CMAC or Deep CMAC for function learning, because NIST. Also, can it be used for multi-dimensional learning? (A toy CMAC sketch follows the links below.)

  • Cerebellar model articulation controller
  • Adaptive Noise Cancellation Using Deep Cerebellar Model Articulation Controller
  • RCMAC Hybrid Control for MIMO Uncertain Nonlinear Systems Using Sliding-Mode Technology
    • A hybrid control system, integrating principal and compensation controllers, is developed for multiple-input-multiple-output (MIMO) uncertain nonlinear systems. This hybrid control system is based on sliding-mode technique and uses a recurrent cerebellar model articulation controller (RCMAC) as an uncertainty observer. The principal controller containing an RCMAC uncertainty observer is the main controller, and the compensation controller is a compensator for the approximation error of the system uncertainty. In addition, in order to relax the requirement of approximation error bound, an estimation law is derived to estimate the error bound. The Taylor linearization technique is employed to increase the learning ability of RCMAC and the adaptive laws of the control system are derived based on Lyapunov stability theorem and Barbalat’s lemma so that the asymptotical stability of the system can be guaranteed. Finally, the proposed design method is applied to control a biped robot. Simulation results demonstrate the effectiveness of the proposed control scheme for the MIMO uncertain nonlinear system
  • Github CMAC TF projects
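  • A toy 1-D CMAC sketch (my own, not from any of the linked papers): overlapping coarse tilings map the input onto a handful of active weights, and training nudges only those, which is what makes CMACs fast local function learners.
    import numpy as np

    class SimpleCMAC:
        def __init__(self, n_tilings=8, n_bins=32, x_min=0.0, x_max=2 * np.pi, lr=0.1):
            self.n_tilings, self.n_bins, self.lr = n_tilings, n_bins, lr
            self.x_min, self.x_max = x_min, x_max
            self.weights = np.zeros((n_tilings, n_bins + 1))

        def _active_cells(self, x):
            span = (self.x_max - self.x_min) / self.n_bins
            for t in range(self.n_tilings):
                offset = span * t / self.n_tilings  # each tiling is shifted slightly
                idx = int((x - self.x_min + offset) / span)
                yield t, min(idx, self.n_bins)

        def predict(self, x):
            return sum(self.weights[t, i] for t, i in self._active_cells(x))

        def train(self, x, y):
            err = y - self.predict(x)
            for t, i in self._active_cells(x):
                self.weights[t, i] += self.lr * err / self.n_tilings

    if __name__ == "__main__":
        cmac = SimpleCMAC()
        for x in np.random.uniform(0, 2 * np.pi, 5000):
            cmac.train(x, np.sin(x))
        print([round(cmac.predict(v), 3) for v in (0.5, 1.5, 3.0)])  # roughly sin(v)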

 

Phil 10.19.18

Phil 7:00 – 3:30 ASRC PhD

  • Sprint review
  • Reading Meltdown: Why our systems fail and What we can do about it, and I found some really interesting work that relates to social conformity, flocking, stampeding and nomadic behaviors:
    • “We show that a deviation from the group opinion is regarded by the brain as a punishment,” said the study’s lead author, Vasily Klucharev. And the error message combined with a dampened reward signal produces a brain impulse indicating that we should adjust our opinion to match the consensus. Interestingly, this process occurs even if there is no reason for us to expect any punishment from the group. As Klucharev put it, “This is likely an automatic process in which people form their own opinion, hear the group view, and then quickly shift their opinion to make it more compliant with the group view.” (Page 154)
      • Reinforcement Learning Signal Predicts Social Conformity
        • Vasily Klucharev
        • We often change our decisions and judgments to conform with normative group behavior. However, the neural mechanisms of social conformity remain unclear. Here we show, using functional magnetic resonance imaging, that conformity is based on mechanisms that comply with principles of reinforcement learning. We found that individual judgments of facial attractiveness are adjusted in line with group opinion. Conflict with group opinion triggered a neuronal response in the rostral cingulate zone and the ventral striatum similar to the “prediction error” signal suggested by neuroscientific models of reinforcement learning. The amplitude of the conflict-related signal predicted subsequent conforming behavioral adjustments. Furthermore, the individual amplitude of the conflict-related signal in the ventral striatum correlated with differences in conforming behavior across subjects. These findings provide evidence that social group norms evoke conformity via learning mechanisms reflected in the activity of the rostral cingulate zone and ventral striatum.
    • When people agreed with their peers’ incorrect answers, there was little change in activity in the areas associated with conscious decision-making. Instead, the regions devoted to vision and spatial perception lit up. It’s not that people were consciously lying to fit in. It seems that the prevailing opinion actually changed their perceptions. If everyone else said the two objects were different, a participant might have started to notice differences even if the objects were identical. Our tendency for conformity can literally change what we see. (Page 155)
      • Gregory Berns
        • Dr. Berns specializes in the use of brain imaging technologies to understand human – and now, canine – motivation and decision-making.  He has received numerous grants from the National Institutes of Health, National Science Foundation, and the Department of Defense and has published over 70 peer-reviewed original research articles.
      • Neurobiological Correlates of Social Conformity and Independence During Mental Rotation
        • Background

          When individual judgment conflicts with a group, the individual will often conform his judgment to that of the group. Conformity might arise at an executive level of decision making, or it might arise because the social setting alters the individual’s perception of the world.

          Methods

          We used functional magnetic resonance imaging and a task of mental rotation in the context of peer pressure to investigate the neural basis of individualistic and conforming behavior in the face of wrong information.

          Results

          Conformity was associated with functional changes in an occipital-parietal network, especially when the wrong information originated from other people. Independence was associated with increased amygdala and caudate activity, findings consistent with the assumptions of social norm theory about the behavioral saliency of standing alone.

          Conclusions

          These findings provide the first biological evidence for the involvement of perceptual and emotional processes during social conformity.

        • The Pain of Independence: Compared to behavioral research of conformity, comparatively little is known about the mechanisms of non-conformity, or independence. In one psychological framework, the group provides a normative influence on the individual. Depending on the particular situation, the group’s influence may be purely informational – providing information to an individual who is unsure of what to do. More interesting is the case in which the individual has definite opinions of what to do but conforms due to a normative influence of the group due to social reasons. In this model, normative influences are presumed to act through the aversiveness of being in a minority position
      • A Neural Basis for Social Cooperation
        • Cooperation based on reciprocal altruism has evolved in only a small number of species, yet it constitutes the core behavioral principle of human social life. The iterated Prisoner’s Dilemma Game has been used to model this form of cooperation. We used fMRI to scan 36 women as they played an iterated Prisoner’s Dilemma Game with another woman to investigate the neurobiological basis of cooperative social behavior. Mutual cooperation was associated with consistent activation in brain areas that have been linked with reward processing: nucleus accumbens, the caudate nucleus, ventromedial frontal/orbitofrontal cortex, and rostral anterior cingulate cortex. We propose that activation of this neural network positively reinforces reciprocal altruism, thereby motivating subjects to resist the temptation to selfishly accept but not reciprocate favors.
  • Working on Antonio’s paper. I think I’ve found the two best papers to use for the market system. It turns out that freight has been doing this for about 20 years. Agent simulation and everything