# Phil 4.23.19

7:00 – 5:30 ASRC TL

• Reading Army of None and realizing that incorporating AI is a stampede theory and diversity issue:
• This makes Aegis less like a finished product with a few different modes and more like a customizable system that can be tailored for each mission. Galluch explained that the ship’s doctrine review board, consisting of the officers and senior enlisted personnel who work on Aegis, begin the process of writing doctrine months before deployment. They consider their anticipated missions, intelligence assessments, and information on the region for the upcoming deployment, then make recommendations on doctrine to the ship’s captain for approval. The result is a series of doctrine statements, individually and in packages, that the captain can activate as needed during deployment. (Page 164)
• Doctrine statements are typically grouped into two general categories: non-saturation and saturation. Non-saturation doctrine is used when there is time to carefully evaluate each potential threat. Saturation doctrine is needed if the ship gets into a combat situation where the number of inbound threats could overwhelm the ability of operators to respond. “If World War III starts and people start throwing a lot of stuff at me,” Galluch said, “I will have grouped my doctrine together so that it’s a one-push button that activates all of them. And what we’ve done is we’ve tested and we’ve looked at how they overlap each other and what the effects are going to be and make sure that we’re getting the defense of the ship that we expect.” This is where something like Auto-Special comes into play, in a “kill or be killed” scenario, as Galluch described it. (Page 164)
• Extensive testing goes into ensuring that it works properly. Once the ship arrives in theater, the first thing the crew does is test the weapons doctrine to see if there is anything in the environment that might cause it to fire in peacetime, which would not be good. This is done safely by enabling a hardware-level cutout called the Fire Inhibit Switch, or FIS. The FIS includes a key that must be inserted for any of the ship’s weapons to fire. When the FIS key is inserted, a red light comes on; when it is turned to the right, the light turns green, meaning the weapons are live and ready to fire. When the FIS is red—or removed entirely—the ship’s weapons are disabled at the hardware level. (Page 165)
• But the differences run deeper than merely having more options. The whole philosophy of automation is different. With Aegis, the automation is used to capture the ship captain’s intent. In Patriot, the automation embodies the intent of the designers and testers. The actual operators of the system may not even fully understand the designers’ intent that went into crafting the rules. The automation in Patriot is largely intended to replace warfighters’ decision-making. In Aegis, the automation is used to capture warfighters’ decision-making. (Page 165)
• Hawley argued that Army Patriot operators train in a “sham environment” that doesn’t accurately simulate the rigors of real-world combat. As a result, he said “the Army deceives itself about how good their people really are. . . . It would be easy to believe you’re good at this, but that’s only because you’ve been able to handle the relatively non-demanding scenarios that they throw at you.” Unfortunately, militaries might not realize their training is ineffective until a war occurs, at which point it may be too late. (Page 171)
• Hawley explained that the Aegis community was partially protected from this problem because they use their system day in and day out on ships operating around the globe. Aegis operators get “consistent objective feedback from your environment on how well you’re doing,” preventing this kind of self-deception. The Army’s peacetime operating environment for the Patriot, on the other hand, is not as intense, Hawley said. “Even when the Army guys are deployed, I don’t think that the quality of their experience with the system is quite the same. They’re theoretically hot, but they’re really not doing much of anything, other than just monitoring their scopes.” Leadership is also a vital factor. “Navy brass in the Aegis community are absolutely paranoid” about another Vincennes incident, Hawley said. (Page 171)
• Working on JASS paper
• Working on AI paper
• Long chat with Eric H

# Phil 4.22.19

7:00 – 4:00 ASRC TL

• The mission of the Conference on Truth and Trust Online (TTO) is to bring together all parties working on automated approaches to augment manual efforts on improving the truthfulness and trustworthiness of online communications.
• The inaugural Truth and Trust Online conference will be taking place on October 4th and 5th 2019 at BMA House in London.

### Key Dates

• First call for papers: 2nd of April, 2019
• Deadline for all submissions: 3rd of June, 2019
• Notification of acceptance: Early July
• Registration opens: End of June
• Conference: 4th and 5th of October, 2019, BMA House, London, UK
• From On Being with Pádraig Ó Tuama, about belonging gone bad and the scale of sectarianism:
• Fooling automated surveillance cameras: adversarial patches to attack person detection
• Adversarial attacks on machine learning models have seen increasing interest in the past years. By making only subtle changes to the input of a convolutional neural network, the output of the network can be swayed to output a completely different result. The first attacks did this by changing pixel values of an input image slightly to fool a classifier to output the wrong class. Other approaches have tried to learn “patches” that can be applied to an object to fool detectors and classifiers. Some of these approaches have also shown that these attacks are feasible in the real world, i.e. by modifying an object and filming it with a video camera. However, all of these approaches target classes that contain almost no intra-class variety (e.g. stop signs). The known structure of the object is then used to generate an adversarial patch on top of it.
• In this paper, we present an approach to generate adversarial patches to targets with lots of intra-class variety, namely persons. The goal is to generate a patch that is able to successfully hide a person from a person detector. An attack like this could, for instance, be used maliciously to circumvent surveillance systems: intruders can sneak around undetected by holding a small cardboard plate in front of their body aimed towards the surveillance camera. From our results we can see that our system is able to significantly lower the accuracy of a person detector. Our approach also functions well in real-life scenarios where the patch is filmed by a camera. To the best of our knowledge we are the first to attempt this kind of attack on targets with a high level of intra-class variety like persons.
• More adding Wayne’s notes into JASS paper. Figured out how to make something that looks like blockquotes without screwing up the JASS formatting:
\hspace{1cm}\begin{minipage}{\dimexpr\textwidth-2cm}
\textit{"Get him home.  And deliver my cut of earnings to the people of Phandalin near Neverwinter, my home". With this, before anyone can stop him, Edmund turns to the dragon. "I make a counter offer.  In exchange for them motions to the two caged people. I offer myself to take their place.  I will remain.  I will starve.  You will lose two peasants, and in return you will gain all that I have to offer.  Edmund of house DeVir of Neverwinter.  The last of a noble bloodline of the ruling class."} - Edmond: Group 2
\end{minipage}
• More Machine Teaching paper
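The patch attack above works by gradient descent on the *input* rather than the weights. A toy sketch of the same core idea, an FGSM-style one-step attack on a hand-rolled logistic-regression "detector" (everything here is illustrative and mine, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a person detector: logistic regression on 64-"pixel"
# inputs whose two classes differ by a small per-pixel shift buried in noise.
n, d = 200, 64
X = np.vstack([rng.normal(-0.5, 1.0, (n // 2, d)),   # class 0
               rng.normal(+0.5, 1.0, (n // 2, d))])  # class 1 ("person")
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

# Train by plain gradient descent on the cross-entropy loss.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / n
    b -= 0.5 * float(np.mean(p - y))

def predict(x):
    return int(x @ w + b > 0)

# FGSM-style attack on one detected "person": for logistic regression the
# loss gradient w.r.t. the input is (p - y) * w, so for a true label of 1
# its sign is just -sign(w). Step every pixel one unit (the same scale as
# the noise already in the image) in that direction.
x = X[150]
x_adv = x - 1.0 * np.sign(w)

print(predict(x), predict(x_adv))
```

Even though each pixel only moves on the scale of the noise already present, the coordinated nudges should drop the logit below zero and "hide" the person, which is the effect the patch papers scale up to real detectors.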

# Phil 4.21.19

Working on the JASS paper. I added enough to the prior-work section that the agent research that goes along with the picture is explained, and added a section in the lit review for agent-based simulation, because JASS.

Played around with the title for the dissertation a bit. I like this one for now:

# Phil 4.20.19

I think that we might be getting close-ish to a workable AI dungeon master (thisstorydoesnotexist.com). The trick might be to let people know that it was AI, and that it would behave… oddly:

My prompt: you enter a room with an orc and a goblin

The generated text:
The door swung open to a dark room, with no sign of a decoration or decoration for a simple wooden table and a table with no apparent decoration. As my eyes adjusted to the darkness, a pair of eyes I recognized well. He was shorter than the average goblin, even in his long robes, but his build was reminiscent of the ancient orc warrior. He looked at the two on a simple wooden stool, and then looked at me.

There was an air of calm in his tone.

The voice, it sounded like it was coming from inside the dungeon, but that was all I heard. It was a sound so foreign, the only sound I heard was the crackling of fire.
The light was blinding, blinding so much pain that I had to force my eyes open to avoid it. The light was bright enough to completely illuminate the floor of the room, but dim and the brightness did not cause me to see. The eyes of the orc were staring at me, but my mind was racing with questions.

What would make this place like it was?

Who are we, what will make this land like it is?

It’s been a couple millennia, we’re the third tribe in a short time, and were not going to leave until they had seen us. The people of our tribe were not aggressive, and their weapons were not strong, they were not strong enough to be dangerous on their own.

What makes the people of this land so dangerous?

I am the Second of the Glimmering Council. It is my duty to inform the other tribes of our future, and warn those tribes that their tribe will not stand

I wound up using Python because of machine learning, in particular TensorFlow and Keras. And Python is ok. I miss strong typing, but all in all it seems pretty solid with some outstanding science libraries. Mostly, I write tools:

I’ve been using IntelliJ Ultimate for my Python work. I came at it from Java and the steaming wreckage that is Eclipse. I like Ultimate a lot. It updates frequently, but not too much. Most of the time it’s seamless. And I can switch between C++, Java, Python, and TypeScript/JavaScript.

For a new tool project, I need to have Python analytics served to a thin client running D3. Which meant that it was time to set up a client-server architecture.

These things are always a pain to set up. A “Hello world” app that sends JSON from the server to the client where it is rendered at interactive rates, with user actions being fed back to the server is still a surprisingly clunky thing to set up. So I was expecting some grumbling. I was not expecting everything to be broken. Well, nearly everything.

It is possible to create a Flask project in IntelliJ Ultimate. But you can’t actually launch the server. The goal is to have code like this:

Set up and run a webserver with console output like this:

It turns out that there is an unresolved bug in IntelliJ that prevents the webserver from launching. In IntelliJ, the way you know that everything is working is that the Python file (app.py in this case) has parentheses around it. You can also see this in the launch menu:

If you see those parens, then IntelliJ knows that the file contains a Flask webserver, and will run things accordingly.

So how did I get this to work? I bought the Professional Edition of PyCharm (which has Flask), and used the defaults to create the project. Note that it appears that you have to use the Jinja2 template language. The other option, Mako, fails. I have not tried the None option. It’s been that sort of day.

By the way, if you already have a subscription to Ultimate, you can extend it to include the whole suite for the low low price of about \$40/year.

# Phil 4.19.19

8:00 – 4:00 ASRC TL

• Updating working copies of the paper based on the discussion with Aaron M last night.
• Based on the diagrams of the weights that I could make with the MNIST model, I think I want to try to make a layer neuron/weight visualizer. This one is very pretty
• Need to start on framework for data generation and analysis with Zach this morning
• Got Flask working (see above for rant on how).
• Flask-RESTful provides an extension to Flask for building REST APIs. Flask-RESTful was initially developed as an internal project at Twilio, built to power their public and internal APIs.

# Phil 4.18.19

7:00 – ASRC TL

• Added the Talmud to the implications section: “Rav Kahana says: In a Sanhedrin where all the judges saw fit to convict the defendant in a case of capital law, they acquit him.”
• Changed the title of the dissertation again. Now it’s Stampede Theory: Diversity in Networked Systems.
• Need to transition Machine Teaching paper to IEEE format before meeting with Aaron
• Shimei’s ML group – want to talk about narrative embedding
• Semi-Supervised Classification with Graph Convolutional Networks
• We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.
• More JASS writing
• Fixed the prior work image to have all three states
• Note: In Illustrator, the easiest way to outline an image is to select the image, click on mask, then select the outline stroke, select a color, and you’re done.
• changed \cite{} to \citep{}, which puts the parens in the right place
• Meeting with Aaron. Nice LaTex lesson
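The propagation rule behind the GCN paper above fits in a few lines of NumPy. A toy single layer, assuming the renormalization trick from the paper (the graph and sizes here are made up):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One layer of H' = ReLU(D^(-1/2) (A+I) D^(-1/2) H W),
    the renormalized propagation rule from Kipf & Welling."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)       # symmetric degree normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy graph: a 4-node path, 3 input features per node, 2 hidden units.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(1)
H = rng.normal(size=(4, 3))   # node feature matrix
W = rng.normal(size=(3, 2))   # learnable weights
print(gcn_layer(A, H, W).shape)  # (4, 2)
```

Each output row mixes a node’s own features with its neighbors’, which is why the model scales linearly in the number of edges.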

# Phil 4.17.19

7:00 – 5:00 ASRC TL

• Continuing to read Army of None. Really solid analysis
• Working on JASS paper
• The buzz on Twitter about the possible change to topic-based (e.g. Reddit?) rather than person-based following makes me wonder if there should be the ability to follow people in JuryRoom. I’d follow Cricket, for example
• “Cricket sits down in front of the Troll doing her best to appear completely relaxed and smiles, “come and lay down again while I sing.” Gesturing in front of her she gives a smile, “and if I can just look at the pretty box for a little bit maybe right here in front of me? It will stay really close so you can grab it up once you fall asleep.” At this point Cricket hadn’t actually lied, she currently had no intention of taking the box but if it opened as she suspected she fully intended to open it and hopefully take the contents.”
• More Grokking MNIST. Here’s a pix of the neurons. The difference about halfway through is the switch from training to testing data:
• MorphNet: Towards Faster and Smaller Neural Networks
• Here we describe MorphNet, a sophisticated technique for neural network model refinement, which takes the latter approach. Originally presented in our paper, “MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks”, MorphNet takes an existing neural network as input and produces a new neural network that is smaller, faster, and yields better performance tailored to a new problem. We’ve applied the technique to Google-scale problems to design production-serving networks that are both smaller and more accurate, and now we have open sourced the TensorFlow implementation of MorphNet to the community so that you can use it to make your models more efficient.
• Had one of the stupidest, brain-damaged meetings I have ever had. Just destructive for destruction’s sake, as near as I can tell.
• Updated IntelliJ, which was painful this time, requiring finding Java and Python SDKs that were clearly visible in the settings. I guess it’s that kind of day.

# Phil 4.15.19

7:00 – ASRC TL

• I’ve been hunting around for what a core message of the iSchool should be (And I like LAMDA), but I think this sums it up nicely. From The Library Book:
• use arxiv2bibtex to get bibtex information for arXiv submissions for use in BibTeX, on web pages or in Wikis. You can enter:
• one or several paper IDs like “1510.01797” or “math/0506203”.
• your arXiv author ID looking similar to “grafvbothmer_h_1” to get a list of all your submitted papers.
• your ORCID ID looking similar to “0000-0003-0136-444X” which you should register with your arXiv-account.
• Here’s hoping the proposal goes in. It did!
• Start on IEEE paper? Nope. Did get back to Grokking Deep learning. Trying to get the system working with MNIST.
• Something for the arousal potential/Clockwork Muse file: Accelerating dynamics of collective attention
• With news pushed to smart phones in real time and social media reactions spreading across the globe in seconds, the public discussion can appear accelerated and temporally fragmented. In longitudinal datasets across various domains, covering multiple decades, we find increasing gradients and shortened periods in the trajectories of how cultural items receive collective attention. Is this the inevitable conclusion of the way information is disseminated and consumed? Our findings support this hypothesis. Using a simple mathematical model of topics competing for finite collective attention, we are able to explain the empirical data remarkably well. Our modeling suggests that the accelerating ups and downs of popular content are driven by increasing production and consumption of content, resulting in a more rapid exhaustion of limited attention resources. In the interplay with competition for novelty, this causes growing turnover rates and individual topics receiving shorter intervals of collective attention.
• Chasing down narrative embedding using force-directed graphs and found Tulip
• Tulip is an information visualization framework dedicated to the analysis and visualization of relational data. Tulip aims to provide the developer with a complete library, supporting the design of interactive information visualization applications for relational data that can be tailored to the problems he or she is addressing.
• There are Python bindings. The following are for large layouts
• FM^3 (OGDF)
• Implements the FM³ layout algorithm by Hachul and Jünger. It is a multilevel, force-directed layout algorithm that can be applied to very large graphs.
• H3 (GRIP)
• Implements the H3 layout technique for drawing large directed graphs as node-link diagrams in 3D hyperbolic space. That algorithm can lay out much larger structures than can be handled using traditional techniques for drawing general graphs because it assumes a hierarchical nature of the data. It was first published as: H3: Laying out Large Directed Graphs in 3D Hyperbolic Space, Tamara Munzner, Proceedings of the 1997 IEEE Symposium on Information Visualization, Phoenix, AZ, pp. 2–10, 1997. The implementation in Python (MIT License) has been written by BuzzFeed (https://github.com/buzzfeed/pyh3).
• Mahzarin R. Banaji
• Professor Banaji studies thinking and feeling as they unfold in social context, with a focus on mental systems that operate in implicit or unconscious mode. She studies social attitudes and beliefs in adults and children, especially those that have roots in group membership.  She explores the implications of her work for questions of individual responsibility and social justice in democratic societies. Her current research interests focus on the origins of social cognition and applications of implicit cognition to improve individual decisions and organizational policies.
• What do Different Beliefs Tell us? An Examination of Factual, Opinion-Based, and Religious Beliefs
• My sense is that these three regions – factual, religious, and opinion – are huge attractors in our belief landscape
• Studying Implicit Social Cognition with Noninvasive Brain Stimulation

# Phil 4.14.19

An interesting take on diversity science that I had never heard of:

UnTangle Map: Visual Analysis of Probabilistic Multi-Label Data

• Data with multiple probabilistic labels are common in many situations. For example, a movie may be associated with multiple genres with different levels of confidence. Despite their ubiquity, the problem of visualizing probabilistic labels has not been adequately addressed. Existing approaches often either discard the probabilistic information, or map the data to a low-dimensional subspace where their associations with original labels are obscured. In this paper, we propose a novel visual technique, UnTangle Map, for visualizing probabilistic multi-labels. In our proposed visualization, data items are placed inside a web of connected triangles, with labels assigned to the triangle vertices such that nearby labels are more relevant to each other. The positions of the data items are determined based on the probabilistic associations between items and labels. UnTangle Map provides both (a) an automatic label placement algorithm, and (b) adaptive interactions that allow users to control the label positioning for different information needs. Our work makes a unique contribution by providing an effective way to investigate the relationship between data items and their probabilistic labels, as well as the relationships among labels. Our user study suggests that the visualization effectively helps users discover emergent patterns and compare the nuances of probabilistic information in the data labels.

Spring Embedders and Force Directed Graph Drawing Algorithms

• Force-directed algorithms are among the most flexible methods for calculating layouts of simple undirected graphs. Also known as spring embedders, such algorithms calculate the layout of a graph using only information contained within the structure of the graph itself, rather than relying on domain-specific knowledge. Graphs drawn with these algorithms tend to be aesthetically pleasing, exhibit symmetries, and tend to produce crossing-free layouts for planar graphs. In this survey we consider several classical algorithms, starting from Tutte’s 1963 barycentric method, and including recent scalable multiscale methods for large and dynamic graphs.
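To make the spring-embedder idea above concrete, here is a toy Fruchterman-Reingold-style loop in NumPy (my own sketch of the classic scheme, not one of the surveyed implementations):

```python
import numpy as np

def spring_layout(A, iters=200, seed=0):
    """Toy Fruchterman-Reingold: all node pairs repel, edges attract
    like springs. A is a symmetric 0/1 adjacency matrix."""
    n = A.shape[0]
    k = 1.0 / np.sqrt(n)                       # ideal edge length
    pos = np.random.default_rng(seed).uniform(-1, 1, (n, 2))
    for t in range(iters):
        disp = np.zeros((n, 2))
        for i in range(n):
            delta = pos[i] - pos               # vectors from each node to i
            dist = np.linalg.norm(delta, axis=1) + 1e-9
            unit = delta / dist[:, None]
            disp[i] += (unit * (k**2 / dist)[:, None]).sum(axis=0)      # repulsion
            disp[i] -= (unit * (A[i] * dist / k)[:, None]).sum(axis=0)  # spring attraction
        step = 0.1 * (1.0 - t / iters)         # simple cooling schedule
        lengths = np.linalg.norm(disp, axis=1) + 1e-9
        pos += disp / lengths[:, None] * np.minimum(lengths, step)[:, None]
    return pos

# Lay out a 4-node path; adjacent nodes should end up near each other.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(spring_layout(A).round(2))
```

This uses only the graph structure itself, which is exactly the property the survey highlights; the O(n²) repulsion loop is what the multiscale methods (like FM³ above) are built to avoid.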

# Phil 4.12.19

9:00 – 5:00 ASRC TL

• Finished the BAA white paper(?), and asked for hours to write the full paper for the Symposium on Technologies for Homeland Security
• These are appropriate:
• Meaningful Human Control over Autonomous Systems: A Philosophical Account
• In this paper, we provide an analysis of the sort of control humans need to have over (semi)autonomous systems such that unreasonable risks are avoided, that human responsibility will not evaporate, and that there is a place to turn to in case of untoward outcomes. We argue that higher levels of autonomy of systems can and should be combined with human control and responsibility. We apply the notion of guidance control that has been developed by Fischer and Ravizza (1998) in the philosophical debate about moral responsibility and free will, and we adapt it as to cover actions mediated by the use of (semi)autonomous robotic systems. As we will show, this analysis can be fruitfully applied in the context of autonomous weapon systems as well as of autonomous systems more generally. We think we herewith provide a first full-fledged philosophical account of “meaningful human control over autonomous systems.”
• The following is the preprint PDF of our paper on driver functional vigilance during Tesla Autopilot assisted driving: Human Side of Tesla Autopilot: Exploration of Functional Vigilance in Real-World Human-Machine Collaboration. It is part of the MIT-AVT large-scale naturalistic driving study
• What I Learned from a Year of ChinAI
• Finally, Chinese thinkers are engaged on broader issues of AI ethics, including the risks of human-level machine intelligence and beyond. Zhao Tingyang, an influential philosopher at the Chinese Academy of Social Sciences, has written a long essay on near-term and long-term AI safety issues, including the prospect of superintelligence. Professor Zhihua Zhou, who leads an impressive lab at Nanjing University, argued in an article for the China Computer Federation that even if strong AI is possible, it is something that AI researchers should stay away from.
• And so ends a long, hectic, but satisfying week.

# Phil 4.11.19

7:00 – 9:00 ASRC TL

• Continuing with the BAA
• Fixed the acknowledgements section and updated ArXiv
• Meeting/presentation with Wayne at noon
• ML seminar, presenting robot stampede
• Wound up having an impromptu meeting with Aaron M.

# Phil 4.10.19

9:00 – 5:00 ASRC TL

• Adversarial herding and density-stiffness in legislation
• Need to make arXiv version of the DfS paper
• NASA Meeting
• Pivot to BAA
• Refamiliarizing myself with the Call “BROAD AGENCY ANNOUNCEMENT (BAA) TITLE: MASTERING COMPLEXITY IN MULTI-DOMAIN COMMAND & CONTROL” BAA FA8750-18-S-7014
• FOCUS AREA 3- Machine Intelligence Harnessing the speed and scale of machines to exponentially increase human capacity to command & control in an increasingly complex battlespace. The complexity of MD operations will quickly exceed human capacity and will require machine reasoning to augment the human decision maker across all stages of the C&C Monitor, Assess, Plan, and Execute (MAPE) cycle.
• CONTENT AND FORMAT:  Offerors are required to submit 3 copies of a 3 to 5 page white paper summarizing their proposed approach/solution. The purpose of the white paper is to preclude unwarranted effort on the part of an offeror whose proposed work is not of interest to the Government.
• The white paper will be formatted as follows:
• Section A: Title, Period of Performance, Estimated Cost, Name/Address of Company, Technical and Contracting Points of Contact (phone and email) (this section is NOT included in the page count);
• Section B: Task Objective; and
• Section C: Technical Summary and Proposed Deliverables.

• All white papers shall be double spaced with a font no smaller than 12 point.  In addition, respondents are requested to provide their Commercial and Government Entity (CAGE) Code, their unique entity identifier and electronic funds transfer (EFT) indicator (if applicable), an e-mail address and reference BAA FA8750-18-S-7014 with their submission.