Monthly Archives: April 2019

Phil 4.29.19

7:00 – 3:30 ASRC TL

  • Register for Tech Summit – done
  • Ask for a week of time to prep for talk – done
  • Panos read the paper and has some suggestions. Need to implement
  • This might be important: Neural Logic Machines
    • We propose the Neural Logic Machine (NLM), a neural-symbolic architecture for both inductive learning and logic reasoning. NLMs exploit the power of both neural networks—as function approximators, and logic programming—as a symbolic processor for objects with properties, relations, logic connectives, and quantifiers. After being trained on small-scale tasks (such as sorting short arrays), NLMs can recover lifted rules, and generalize to large-scale tasks (such as sorting longer arrays). In our experiments, NLMs achieve perfect generalization in a number of tasks, from relational reasoning tasks on the family tree and general graphs, to decision making tasks including sorting arrays, finding shortest paths, and playing the blocks world. Most of these tasks are hard to accomplish for neural networks or inductive logic programming alone.
  • Need to read the Nature “Behavior” paper. Notes probably go straight into the dissertation lit review – done
  • Continuing to read Army of None, which is ridiculously good. This figure has been making me think: [image: AoN]. It implies that the idea that a set of diverse ML systems all agreeing is itself a warning condition is worth exploring (a toy sketch of that check follows this list).
  • Finished read through of Tao’s paper
  • Need to find a cardiologist for Arpita
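
Not from the book, just a toy sketch of the check flagged above: train a few deliberately different models and flag the cases where they all agree rather than treating unanimity as extra confidence. The dataset and model choices are arbitrary.

    # Toy illustration only: flag inputs where several diverse models all agree.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_digits(return_X_y=True)
    train, test = slice(0, 1500), slice(1500, 1510)

    models = [LogisticRegression(max_iter=2000), GaussianNB(), DecisionTreeClassifier()]
    for m in models:
        m.fit(X[train], y[train])

    for i, x in enumerate(X[test]):
        preds = [m.predict([x])[0] for m in models]
        if len(set(preds)) == 1:
            print(f"sample {i}: unanimous ({preds[0]}) -- worth a second look")
        else:
            print(f"sample {i}: disagreement {preds}")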

Phil 4.26.19

7:00 – 4:00 ASRC TL

Phil 4.25.19

7:00 – 9:00 ASRC TL

  • Looks like I’ll be giving a talk at the PM Summit in June on misinformation. Need to put together a 30 minute talk
    • Need to come up with a short description. Something along the lines of misinformation happens at all scales, from international news to groupthink leading to the shuttle Columbia launch decision. This talk will show the common patterns that can identify misinformation behaviors and ways to disrupt these “belief stampedes”.
  • Ping Joel and see if I can coordinate him and Wayne…
    • Sent email and made a doodle
  • Continuing with AI whitepaper
  • Walk through WebGL classes with Zach – done
  • ML meeting – walked through the differences between heart attack (ischemic insults) and heart failure
  • Meeting with Aaron M. More discussions of how the paper should go. He’d like a section on how things like latent space hacking can wind up in other areas

Phil 4.23.19

7:00 – 5:30 ASRC TL

  • Reading Army of None and realizing that incorporating AI is a stampede theory and diversity issue:
    • This makes Aegis less like a finished product with a few different modes and more like a customizable system that can be tailored for each mission. Galluch explained that the ship’s doctrine review board, consisting of the officers and senior enlisted personnel who work on Aegis, begin the process of writing doctrine months before deployment. They consider their anticipated missions, intelligence assessments, and information on the region for the upcoming deployment, then make recommendations on doctrine to the ship’s captain for approval. The result is a series of doctrine statements, individually and in packages, that the captain can activate as needed during deployment. (Page 164)
    • Doctrine statements are typically grouped into two general categories: non-saturation and saturation. Non-saturation doctrine is used when there is time to carefully evaluate each potential threat. Saturation doctrine is needed if the ship gets into a combat situation where the number of inbound threats could overwhelm the ability of operators to respond. “If World War III starts and people start throwing a lot of stuff at me,” Galluch said, “I will have grouped my doctrine together so that it’s a one-push button that activates all of them. And what we’ve done is we’ve tested and we’ve looked at how they overlap each other and what the effects are going to be and make sure that we’re getting the defense of the ship that we expect.” This is where something like Auto-Special comes into play, in a “kill or be killed” scenario, as Galluch described it. (Page 164)
    • Extensive testing goes into ensuring that it works properly. Once the ship arrives in theater, the first thing the crew does is test the weapons doctrine to see if there is anything in the environment that might cause it to fire in peacetime, which would not be good. This is done safely by enabling a hardware-level cutout called the Fire Inhibit Switch, or FIS. The FIS includes a key that must be inserted for any of the ship’s weapons to fire. When the FIS key is inserted, a red light comes on; when it is turned to the right, the light turns green, meaning the weapons are live and ready to fire. When the FIS is red—or removed entirely—the ship’s weapons are disabled at the hardware level. (Page 165)
    • But the differences run deeper than merely having more options. The whole philosophy of automation is different. With Aegis, the automation is used to capture the ship captain’s intent. In Patriot, the automation embodies the intent of the designers and testers. The actual operators of the system may not even fully understand the designers’ intent that went into crafting the rules. The automation in Patriot is largely intended to replace warfighters’ decision-making. In Aegis, the automation is used to capture warfighters’ decision-making. (Page 165)
    • Hawley argued that Army Patriot operators train in a “sham environment” that doesn’t accurately simulate the rigors of real-world combat. As a result, he said “the Army deceives itself about how good their people really are. . . . It would be easy to believe you’re good at this, but that’s only because you’ve been able to handle the relatively non-demanding scenarios that they throw at you.” Unfortunately, militaries might not realize their training is ineffective until a war occurs, at which point it may be too late. (Page 171)
    • Hawley explained that the Aegis community was partially protected from this problem because they use their system day in and day out on ships operating around the globe. Aegis operators get “consistent objective feedback from your environment on how well you’re doing,” preventing this kind of self-deception. The Army’s peacetime operating environment for the Patriot, on the other hand, is not as intense, Hawley said. “Even when the Army guys are deployed, I don’t think that the quality of their experience with the system is quite the same. They’re theoretically hot, but they’re really not doing much of anything, other than just monitoring their scopes.” Leadership is also a vital factor. “Navy brass in the Aegis community are absolutely paranoid” about another Vincennes incident, Hawley said. (Page 171)
  • Working on JASS paper
  • Working on AI paper
  • Long chat with Eric H

Phil 4.22.19

7:00 – 4:00 ASRC TL

    • The mission of the Conference on Truth and Trust Online (TTO) is to bring together all parties working on automated approaches to augment manual efforts on improving the truthfulness and trustworthiness of online communications.
      • The inaugural Truth and Trust Online conference will be taking place on October 4th and 5th 2019 at BMA House in London.
      • Key Dates:
        • First call for papers: 2nd of April, 2019 *
        • Deadline for all submissions: 3rd of June, 2019
        • Notification of acceptance: Early July
        • Registration opens: End of June
        • Conference: 4th and 5th of October, 2019, BMA House, London, UK
    • From On Being with Pádraig Ó Tuama, about belonging gone bad and the scale of sectarianism: demonic
    • Fooling automated surveillance cameras: adversarial patches to attack person detection
      • Adversarial attacks on machine learning models have seen increasing interest in the past years. By making only subtle changes to the input of a convolutional neural network, the output of the network can be swayed to output a completely different result. The first attacks did this by changing pixel values of an input image slightly to fool a classifier to output the wrong class. Other approaches have tried to learn “patches” that can be applied to an object to fool detectors and classifiers. Some of these approaches have also shown that these attacks are feasible in the real-world, i.e. by modifying an object and filming it with a video camera. However, all of these approaches target classes that contain almost no intra-class variety (e.g. stop signs). The known structure of the object is then used to generate an adversarial patch on top of it. 
      • In this paper, we present an approach to generate adversarial patches to targets with lots of intra-class variety, namely persons. The goal is to generate a patch that is able to successfully hide a person from a person detector. An attack that could for instance be used maliciously to circumvent surveillance systems, intruders can sneak around undetected by holding a small cardboard plate in front of their body aimed towards the surveillance camera. From our results we can see that our system is able to significantly lower the accuracy of a person detector. Our approach also functions well in real-life scenarios where the patch is filmed by a camera. To the best of our knowledge we are the first to attempt this kind of attack on targets with a high level of intra-class variety like persons.
    • More adding Wayne’s notes into JASS paper. Figured out how to make something that looks like blockquotes without screwing up the JASS formatting:
      \hspace{1cm}\begin{minipage}{\dimexpr\textwidth-2cm}
      	\textit{"Get him home.  And deliver my cut of earnings to the people of Phandalin near Neverwinter, my home". With this, before anyone can stop him, Edmund turns to the dragon. "I make a counter offer.  In exchange for them motions to the two caged people. I offer myself to take their place.  I will remain.  I will starve.  You will lose two peasants, and in return you will gain all that I have to offer.  Edmund of house DeVir of Neverwinter.  The last of a noble bloodline of the ruling class."} - Edmond: Group 2
      \end{minipage}
    • More Machine Teaching paper

 

Phil 4.21.19

Working on the JASS paper. I added enough in the prior work so that the agent research is explained to go along with the picture, and added a section in the lit review for agent-based simulation, because JASS.

Played around with the title for the dissertation a bit. I like this one for now:

[image: candidate dissertation title]

Phil 4.20.19

I think that we might be getting close-ish to a workable AI dungeon master (thisstorydoesnotexist.com). The trick might be to let people know that it was AI, and that it would behave… oddly:

My prompt: you enter a room with an orc and a goblin

The generated text:
The door swung open to a dark room, with no sign of a decoration or decoration for a simple wooden table and a table with no apparent decoration. As my eyes adjusted to the darkness, a pair of eyes I recognized well. He was shorter than the average goblin, even in his long robes, but his build was reminiscent of the ancient orc warrior. He looked at the two on a simple wooden stool, and then looked at me.

There was an air of calm in his tone.

The voice, it sounded like it was coming from inside the dungeon, but that was all I heard. It was a sound so foreign, the only sound I heard was the crackling of fire.
The light was blinding, blinding so much pain that I had to force my eyes open to avoid it. The light was bright enough to completely illuminate the floor of the room, but dim and the brightness did not cause me to see. The eyes of the orc were staring at me, but my mind was racing with questions.

What would make this place like it was?

Who are we, what will make this land like it is?

It’s been a couple millennia, we’re the third tribe in a short time, and were not going to leave until they had seen us. The people of our tribe were not aggressive, and their weapons were not strong, they were not strong enough to be dangerous on their own.

What makes the people of this land so dangerous?


I am the Second of the Glimmering Council. It is my duty to inform the other tribes of our future, and warn those tribes that their tribe will not stand

IntelliJ, Python, and Flask

I wound up using Python because of machine learning, in particular TensorFlow and Keras. And Python is ok. I miss strong typing, but all in all it seems pretty solid with some outstanding science libraries. Mostly, I write tools:

[image: Tool]

I’ve been using IntelliJ Ultimate for my Python work. I came at it from Java and the steaming wreckage that is Eclipse. I like Ultimate a lot. It updates frequently but not too much. Most of the time it’s seamless. And I can switch between C++, Java, Python, and TypeScript/JavaScript.

For a new tool project, I need to have Python analytics served to a thin client running D3. Which meant that it was time to set up a client-server architecture.

These things are always a pain to set up. A “Hello world” app that sends JSON from the server to the client, renders it at interactive rates, and feeds user actions back to the server is still a surprisingly clunky thing to build. So I was expecting some grumbling. I was not expecting everything to be broken. Well, nearly everything.

It is possible to create a Flask project in IntelliJ Ultimate. But you can’t actually launch the server. The goal is to have code like this:

[image: Code]
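
Something like this minimal Flask app is the target (a sketch, not the exact file from the screenshot):

    # app.py -- minimal Flask server; roughly the "Hello world" target
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route('/')
    def hello():
        # eventually this will serve analytics JSON to the D3 client
        return jsonify(message='Hello, world')

    if __name__ == '__main__':
        app.run(debug=True)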

Set up and run a webserver with console output like this:

[image: Console]

It turns out that there is an unresolved bug in IntelliJ that prevents the webserver from launching. In IntelliJ, the way you know that everything is working is that the Python file (app.py in this case) is shown with parentheses around it: (app.py). You can also see this in the launch menu:

[image: launch menu showing (app)]

If you see those parens, then IntelliJ knows that the file contains a Flask webserver, and will run things accordingly.

So how did I get this to work? I bought the Professional Edition of PyCharm (which has Flask), and used the defaults to create the project. Note that it appears that you have to use the Jinja2 template language. The other option, Mako, fails. I have not tried the None option. It’s been that sort of day.

[image: template language options]

By the way, if you already have a subscription to Ultimate, you can extend it to include the whole suite for the low low price of about $40/year.

Phil 4.19.19

8:00 – 4:00 ASRC TL

  • Updating working copies of the paper based on the discussion with Aaron M last night.
  • Based on the diagrams of the weights that I could make with the MNIST model, I think I want to try to make a layer neuron/weight visualizer. This one is very pretty
  • Need to start on framework for data generation and analysis with Zach this morning
  • Got Flask working (see above for rant on how).
  • Flask-RESTful provides an extension to Flask for building REST APIs. Flask-RESTful was initially developed as an internal project at Twilio, built to power their public and internal APIs.
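
A minimal example, lightly adapted from the Flask-RESTful quickstart (the resource and route names are just placeholders):

    from flask import Flask
    from flask_restful import Resource, Api

    app = Flask(__name__)
    api = Api(app)

    class HelloWorld(Resource):
        def get(self):
            # dicts returned from resource methods are serialized to JSON
            return {'hello': 'world'}

    api.add_resource(HelloWorld, '/')

    if __name__ == '__main__':
        app.run(debug=True)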

Phil 4.18.19

7:00 – ASRC TL

  • Added the Talmud to the implications section: “Rav Kahana says: In a Sanhedrin where all the judges saw fit to convict the defendant in a case of capital law, they acquit him.”
  • Changed the title of the dissertation again. Now it’s Stampede Theory: Diversity in Networked Systems.
  • Need to transition Machine Teaching paper to IEEE format before meeting with Aaron
  • Shimei’s ML group – want to talk about narrative embedding
    • Semi-Supervised Classification with Graph Convolutional Networks (a toy sketch of the propagation rule is at the end of this entry)
      • We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.
  • More JASS writing
    • Fixed the prior work image to have all three states
      • Note: In Illustrator, the easiest way to outline an image is to select the image, click on mask, then select the outline stroke, select a color, and you’re done.
    • changed \cite{} to \citep{}, which puts the parens in the right place
  • Meeting with Aaron. Nice LaTeX lesson
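
For reference, a toy numpy sketch of the propagation rule from the GCN paper quoted above: relu of the symmetrically normalized adjacency (with self-loops) times the node features times the layer weights. The graph, features, and weights here are made up for illustration.

    import numpy as np

    def gcn_layer(A, H, W):
        # One graph-convolution layer: relu(D^-1/2 (A+I) D^-1/2 H W)
        A_hat = A + np.eye(A.shape[0])          # adjacency with self-loops
        d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
        A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
        return np.maximum(0, A_norm @ H @ W)

    # Made-up 4-node path graph, 3 input features, 2 hidden units
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    H = np.random.rand(4, 3)
    W = np.random.rand(3, 2)
    print(gcn_layer(A, H, W))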

Phil 4.17.19

7:00 – 5:00 ASRC TL

  • Continuing to read Army of None. Really solid analysis
  • Working on JASS paper
  • The buzz on Twitter about a possible change to topic-based (e.g. Reddit?) rather than person-based following makes me wonder if there should be the ability to follow people in JuryRoom. I’d follow Cricket, for example
    • “Cricket sits down in front of the Troll doing her best to appear completely relaxed and smiles, “come and lay down again while I sing.” Gesturing in front of her she gives a smile, “and if I can just look at the pretty box for a little bit maybe right here in front of me? It will stay really close so you can grab it up once you fall asleep.” At this point Cricket hadn’t actually lied, she currently had no intention of taking the box but if it opened as she suspected she fully intended to open it and hopefully take the contents.”
  • More Grokking MNIST. Here’s a pic of the neurons; the difference about halfway through is the switch from training to testing data: [image: mnist] (a rough sketch of this kind of weight plot follows this list)
  • MorphNet: Towards Faster and Smaller Neural Networks
    • Here we describe MorphNet, a sophisticated technique for neural network model refinement, which takes the latter approach. Originally presented in our paper, “MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks”, MorphNet takes an existing neural network as input and produces a new neural network that is smaller, faster, and yields better performance tailored to a new problem. We’ve applied the technique to Google-scale problems to design production-serving networks that are both smaller and more accurate, and now we have open sourced the TensorFlow implementation of MorphNet to the community so that you can use it to make your models more efficient.
  • Had one of the stupidest, brain-damaged meetings I have ever had. Just destructive for destruction’s sake, as near as I can tell.
  • Updated IntelliJ, which was painful this time, requiring finding Java and Python SDKs that were clearly visible in the settings. I guess it’s that kind of day.
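
A minimal, stand-alone sketch of plotting a layer's weights as an image, along the lines of the mnist pic above (the weight matrix here is random, standing in for real trained weights):

    # Sketch only: show a hidden layer's weight matrix as an image, one column
    # per neuron. Swap the random matrix for real trained MNIST weights.
    import numpy as np
    import matplotlib.pyplot as plt

    weights = np.random.randn(784, 64)      # placeholder: 784 inputs x 64 neurons
    plt.imshow(weights, aspect='auto', cmap='viridis')
    plt.xlabel('hidden neuron')
    plt.ylabel('input pixel')
    plt.colorbar(label='weight value')
    plt.show()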

Phil 4.15.19

7:00 – ASRC TL

  • I’ve been hunting around for what a core message of the iSchool should be (and I like LAMDA), but I think this sums it up nicely. From The Library Book: [image: Library]
  • use arxiv2bibtex to get bibtex information for arXiv submissions for use in BibTeX, on web pages or in Wikis. You can enter:
    • one or several paper IDs like “1510.01797” or “math/0506203”.
    • your arXiv author ID looking similar to “grafvbothmer_h_1” to get a list of all your submitted papers.
    • your ORCID ID looking similar to “0000-0003-0136-444X” which you should register with your arXiv-account.
  • Here’s hoping the proposal goes in. It did!
  • Start on IEEE paper? Nope. Did get back to Grokking Deep Learning. Trying to get the system working with MNIST.
  • Something for the arousal potential/Clockwork Muse file: Accelerating dynamics of collective attention
    • With news pushed to smart phones in real time and social media reactions spreading across the globe in seconds, the public discussion can appear accelerated and temporally fragmented. In longitudinal datasets across various domains, covering multiple decades, we find increasing gradients and shortened periods in the trajectories of how cultural items receive collective attention. Is this the inevitable conclusion of the way information is disseminated and consumed? Our findings support this hypothesis. Using a simple mathematical model of topics competing for finite collective attention, we are able to explain the empirical data remarkably well. Our modeling suggests that the accelerating ups and downs of popular content are driven by increasing production and consumption of content, resulting in a more rapid exhaustion of limited attention resources. In the interplay with competition for novelty, this causes growing turnover rates and individual topics receiving shorter intervals of collective attention.
  • Chasing down narrative embedding using force-directed graphs and found Tulip
    • Tulip is an information visualization framework dedicated to the analysis and visualization of relational data. Tulip aims to provide the developer with a complete library, supporting the design of interactive information visualization applications for relational data that can be tailored to the problems he or she is addressing.
    • There are Python bindings (a minimal usage sketch is at the end of this entry). The following are for large layouts:
      • FM^3 (OGDF)
        • Implements the FM³ layout algorithm by Hachul and Jünger. It is a multilevel, force-directed layout algorithm that can be applied to very large graphs.
      • H3 (GRIP)
        • Implements the H3 layout technique for drawing large directed graphs as node-link diagrams in 3D hyperbolic space. That algorithm can lay out much larger structures than can be handled using traditional techniques for drawing general graphs because it assumes a hierarchical nature of the data. It was first published as: H3: Laying out Large Directed Graphs in 3D Hyperbolic Space. Tamara Munzner. Proceedings of the 1997 IEEE Symposium on Information Visualization, Phoenix, AZ, pp 2-10, 1997. The implementation in Python (MIT License) has been written by BuzzFeed (https://github.com/buzzfeed/pyh3).
  • Mahzarin R. Banaji
    • Professor Banaji studies thinking and feeling as they unfold in social context, with a focus on mental systems that operate in implicit or unconscious mode. She studies social attitudes and beliefs in adults and children, especially those that have roots in group membership.  She explores the implications of her work for questions of individual responsibility and social justice in democratic societies. Her current research interests focus on the origins of social cognition and applications of implicit cognition to improve individual decisions and organizational policies. 
      • What do Different Beliefs Tell us? An Examination of Factual, Opinion-Based, and Religious Beliefs 
        • Children and adults differentiate statements of religious belief from statements of fact and opinion, but the basis of that differentiation remains unclear. Across three experiments, adults and 8-10-year-old children heard statements of factual, opinion-based, and religious belief. Adults and children judged that statements of factual belief revealed more about the world, statements of opinion revealed more about individuals, and statements of religious belief provided information about both. Children—unlike adults—judged that statements of religious belief revealed more about the world than the believer. These results led to three conclusions. First, judgments concerning the relative amount of information statements of religious belief provide about individuals change across development, perhaps because adults have more experience with diversity. Second, recognizing that statements of religious belief provide information about both the world and the believer does not require protracted learning. Third, statements of religious belief are interpreted as amalgams of factual and opinion-based statements.
          • My sense is that these three regions (factual, religious, and opinion) are huge attractors in our belief landscape
      • Studying Implicit Social Cognition with Noninvasive Brain Stimulation
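
As referenced above, a minimal sketch of driving one of those layouts through the Tulip Python bindings. This follows the pattern in the tulip-python docs (untested here); the graph is a made-up star and the plugin name is the one Tulip lists for FM^3.

    # Toy sketch of the tulip-python bindings: build a small star graph and
    # run the FM^3 layout plugin on it. Based on the Tulip docs.
    from tulip import tlp

    graph = tlp.newGraph()
    center = graph.addNode()
    for _ in range(20):
        graph.addEdge(center, graph.addNode())

    layout = graph.getLayoutProperty("viewLayout")
    graph.applyLayoutAlgorithm("FM^3 (OGDF)", layout)

    for node in graph.getNodes():
        print(node, layout[node])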