Phil 12.13.17

7:00 – 5:00 ASRC MKT

  • Schedule physical
  • Write up fire stampede. Done!
  • Continuing Consensus and Cooperation in Networked Multi-Agent Systems here
  • Would like to see how the credibility cues on the document were presented. What went right and what went wrong: Schumer calls cops after forged sex scandal charge
  • Finished linking the RB components to the use cases. Waiting on Aaron to finish SIGINT use case
  • Working on building maps from trajectories.
    • Updating Labeled2DMatrix to read in string values. I had never finished that part! There are some issues with how to handle column headers; I think I'm going to add explicit headers for the 'Trajectory' sheet.
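A minimal sketch of the string-handling idea, in Python rather than the actual Labeled2DMatrix code (`parse_cell`, `read_sheet`, and the `explicit_headers` parameter are all hypothetical names, not the real API):

```python
from typing import List, Optional, Union

Cell = Union[float, str]

def parse_cell(raw: str) -> Cell:
    """Try numeric first; fall back to keeping the raw string."""
    try:
        return float(raw)
    except ValueError:
        return raw

def read_sheet(rows: List[List[str]],
               explicit_headers: Optional[List[str]] = None):
    """Return (headers, data).

    With explicit headers, every row is treated as data (the 'Trajectory'
    sheet case); otherwise the first row is assumed to be the header row.
    """
    if explicit_headers is not None:
        headers = explicit_headers
        data_rows = rows
    else:
        headers = rows[0]
        data_rows = rows[1:]
    data = [[parse_cell(c) for c in row] for row in data_rows]
    return headers, data

headers, data = read_sheet(
    [["0.0", "1.5", "start"], ["1.0", "2.5", "move"]],
    explicit_headers=["t", "x", "label"])
```

The point of the sketch: mixed numeric/string cells degrade gracefully, and explicit headers sidestep the ambiguity of whether row 0 is data or labels.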
  • Strategized with Aaron about how to approach the event tomorrow. Also discussed Deep Neural Network Capsules and Social Gradient Descent Agents.
    • Deep neural nets learn by back-propagation of errors over the entire network. In contrast, real brains supposedly wire neurons by Hebbian principles: "units that fire together, wire together". Capsules mimic Hebbian learning in that "a lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule".
      • Sure sounds like oscillator frequency locking / flocking to me…
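The routing rule in that quote can be sketched numerically. This is an illustrative one-iteration sketch loosely following Sabour, Frosst & Hinton's dynamic routing, not their implementation; the shapes and names (`squash`, `routing_step`, `u_hat`, `b`) are my own:

```python
import numpy as np

def squash(s):
    """Capsule nonlinearity: keep the direction, squash the length into [0, 1)."""
    norm2 = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + 1e-9)

def routing_step(u_hat, b):
    """One routing iteration.

    u_hat: predictions from lower capsules, shape (n_lower, n_upper, dim)
    b:     routing logits, shape (n_lower, n_upper)
    """
    c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
    s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum per upper capsule
    v = squash(s)                                         # upper capsule activity vector
    # Agreement: a big scalar product between prediction and activity
    # strengthens that route on the next iteration.
    b_new = b + np.einsum('ijk,jk->ij', u_hat, v)
    return v, b_new

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(6, 3, 4))  # 6 lower capsules, 3 upper capsules, dim 4
b = np.zeros((6, 3))                # routing starts uniform
v, b = routing_step(u_hat, b)
```

The scalar-product update is what invites the frequency-locking analogy: routes that already "agree" get reinforced, so the coupling coefficients converge toward a consensus, much like coupled oscillators pulling each other into phase.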