# Phil 5.14.19

7:00 – 8:00 ASRC NASA GOES-R

• More Dissertation
• Break out the network slides to “island” (initial state), “star” (radio), “cyclic star” (talk radio), and “dense” (social media)
• MatrixScalar
• 7:30 Waikato meeting.
• Walked through today’s version, which is looking very nice
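The four network states from the slide breakdown above can be sketched as simple adjacency builders. This is just a sketch; the function names and dict-of-sets representation are mine, not anything from the actual slides:

```python
def island(n):
    """Initial state: no connections between agents."""
    return {i: set() for i in range(n)}

def star(n, hub=0):
    """Radio: one broadcaster connected to everyone."""
    g = {i: set() for i in range(n)}
    for i in range(n):
        if i != hub:
            g[hub].add(i)
            g[i].add(hub)
    return g

def cyclic_star(n, hub=0):
    """Talk radio: the star plus a ring, so listeners also talk to neighbors."""
    g = star(n, hub)
    spokes = [i for i in range(n) if i != hub]
    for a, b in zip(spokes, spokes[1:] + spokes[:1]):
        g[a].add(b)
        g[b].add(a)
    return g

def dense(n):
    """Social media: everyone connected to everyone."""
    return {i: {j for j in range(n) if j != i} for i in range(n)}
```

The nice thing about laying them out this way is that each state is a strict superset of connectivity over the previous one, which is the point of the slide progression.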

# Phil 5.13.19

7:00 – 3:00 ASRC NASA GOES-R

• Really good Ted Radio Hour on Jumpstarting Creativity
• More dissertation
• Integrating Artificial Intelligence into Weapon Systems is up!
• Played around with the speed of agents, hoping to make some super-slow ones. The speed update is an accumulation of speed influences from neighbors, weighted by influence distance. To get super-slow agents, I need to set the speed min/max separately from the initial variance
• Help Aaron with MatrixScalar – Nope
• Slides for talk – Yep, all day
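The speed-update note above, as a sketch. All the names and constants here are mine (the real sim's parameters aren't in this entry); the point is that the min/max clamp is a separate knob from the initial variance, which is what lets a deliberately super-slow agent exist at all:

```python
import random

SPEED_MIN, SPEED_MAX = 0.1, 2.0   # global clamp, set separately from initial variance
INITIAL_VARIANCE = 0.25
INFLUENCE_RADIUS = 1.0

def initial_speed(base=1.0):
    # initial spread comes from the variance, but is NOT the same as the clamp
    return base + random.uniform(-INITIAL_VARIANCE, INITIAL_VARIANCE)

def updated_speed(my_speed, neighbors):
    """neighbors: list of (speed, distance). Accumulate influence, then clamp."""
    accum = my_speed
    for speed, dist in neighbors:
        if dist < INFLUENCE_RADIUS:
            weight = 1.0 - dist / INFLUENCE_RADIUS  # closer neighbors pull harder
            accum += weight * (speed - my_speed)
    # the separate min/max clamp is what keeps super-slow agents from being
    # pulled back up to the population mean in one step
    return max(SPEED_MIN, min(SPEED_MAX, accum))
```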

# Phil 5.10.19

7:00 – 4:00 ASRC NASA GOES

• Tensorflow Graphics?
• An End-to-End AutoML Solution for Tabular Data at KaggleDays
• More dissertation writing. Added a bit on The Sorcerer’s Apprentice and finished my first pass at Moby-Dick
• Add pickling to MatrixScalar – done!
import pickle

def save_class(the_class, filename:str):
    print("save_class")
    # It's important to use binary mode; 'wb' so repeated saves overwrite
    # rather than append a second pickle to the same file
    dbfile = open(filename, 'wb')
    pickle.dump(the_class, dbfile)
    dbfile.close()

def restore_class(filename:str) -> MatrixScalar:
    print("restore_class")
    # binary mode matters for reading, too
    dbfile = open(filename, 'rb')
    db = pickle.load(dbfile)  # load before closing!
    dbfile.close()
    return db
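A quick round-trip sanity check of the same pattern, using a stand-in class (MatrixScalar itself isn't shown in this entry) and `with` blocks so the file handles close themselves:

```python
import os
import pickle
import tempfile

class Stub:
    """Stand-in for MatrixScalar, which isn't defined here."""
    def __init__(self, rows):
        self.rows = rows

def save_class(the_class, filename: str):
    with open(filename, 'wb') as dbfile:  # binary mode for writing
        pickle.dump(the_class, dbfile)

def restore_class(filename: str):
    with open(filename, 'rb') as dbfile:  # binary mode for reading, too
        return pickle.load(dbfile)

fname = os.path.join(tempfile.gettempdir(), "matrix_scalar_stub.pkl")
save_class(Stub([1, 2, 3]), fname)
restored = restore_class(fname)
```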
• Added a flag to allow unlimited input buffer cols. The buffer automatically sizes to the max if no input_size arg is given
• NOTE: Add a “notes” dict that is added to the setup tab for run information
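For that “notes” dict, a minimal sketch of the flattening step. `notes_rows` is a hypothetical helper, not anything in MatrixScalar yet; with xlsxwriter, each resulting tuple maps directly onto a `worksheet.write(row, col, value)` call on the setup tab:

```python
def notes_rows(notes: dict, start_row: int = 0):
    """Flatten a run-information dict into (row, col, value) cells
    for the setup tab: key in column 0, value in column 1."""
    cells = []
    for i, (key, val) in enumerate(sorted(notes.items()), start=start_row):
        cells.append((i, 0, str(key)))
        cells.append((i, 1, str(val)))
    return cells
```

Keeping the flattening separate from the workbook calls also makes the run information trivially testable without opening a spreadsheet.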

# Phil 5.9.19

Finished Army of None. One of the deepest, most thorough analyses of human-centered AI/ML I’ve ever read.

7:00 – 4:00 ASRC NASA GOES-R

• Well, I can write everything, but xlsxwriter won’t read anything back in – it’s a write-only library
• Price to win analytic?

4:30 – 7:00 ML Seminar

7:00 – 9:00 Meeting with Aaron M

• Tried to get biber working, but it produces a blank bib file. Need to look into that
• Got the AI paper uploaded to Aaron’s new account. Arxiv also has problems with biber
• Spent the rest of the meeting figuring out the next steps. It’s potentially something along the lines of using ML to build an explainable model for different sorts of ML systems (e.g. Humans-on-the-loop <-> Forensic, post-hoc interaction)

# Phil 5.7.19

7:00 – 8:00 ASRC NASA GOES-R

• Via CSAIL: “The team’s approach isn’t particularly efficient now – they must train and “prune” the full network several times before finding the successful subnetwork. However, MIT professor Michael Carbin says that his team’s findings suggest that, if we can determine precisely which part of the original network is relevant to the final prediction, scientists might one day be able to skip this expensive process altogether. Such a revelation has the potential to save hours of work and make it easier for meaningful models to be created by individual programmers and not just huge tech companies.”
• From the abstract of The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
: We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the “lottery ticket hypothesis:” dense, randomly-initialized, feed-forward networks contain subnetworks (“winning tickets”) that – when trained in isolation – reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective.
• Sounds like a good opportunity for evolutionary systems
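Not their code, but the iterative magnitude-pruning-plus-rewind recipe from the abstract can be sketched in miniature. All names are mine, the weights are flat lists rather than tensors, and the retraining between pruning rounds is elided:

```python
def prune_mask(weights, mask, fraction):
    """Zero out the smallest-magnitude surviving weights (one pruning round)."""
    alive = sorted((i for i, m in enumerate(mask) if m),
                   key=lambda i: abs(weights[i]))
    for i in alive[:int(len(alive) * fraction)]:
        mask[i] = 0
    return mask

def winning_ticket(initial_weights, trained_weights, rounds=3, fraction=0.2):
    """Lottery-ticket recipe: prune by trained magnitude, then rewind the
    survivors to their ORIGINAL initialization (that's the 'ticket')."""
    mask = [1] * len(initial_weights)
    for _ in range(rounds):
        mask = prune_mask(trained_weights, mask, fraction)
        # (a real run would retrain the masked network here before pruning again)
    return [w * m for w, m in zip(initial_weights, mask)]
```

The evolutionary-systems angle would presumably replace the deterministic magnitude sort with a population of candidate masks.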
• Finished with text mods for IEEE letter
• Added Kaufman and Olfati-Saber to the discussion on Social Influence Horizon
• Started the draft deck for the tech summit
• More MatrixScalar
• Core functions work
• Change test and train within the class to input and target
• Create a coordinating class that loads and creates test and train matrices
• JuryRoom meeting
• Progress is good enough to start tracking it. Going to create a set of Google sheets that keep track of tasks and bugs

# Phil 5.6.19

7:00 – 5:00 ASRC GOES-R

• Finished the AI/ML paper with Aaron M over the weekend. I need to have him ping me when it goes in. I think it turned out pretty well, even when cut down to 7 pages (with references!! Why, IEEE, why?)
• Sent a copy to Wayne, and distributed around work. Need to put in on ArXiv on Thursday
• Starting to pull parts from phifel.com to make the lit review for the dissertation. Those reviews may have had a reason after all!
• And oddly (though satisfying), I wound up adding a section on Moby-Dick as a way of setting up the rest of the lit review
• More Matrix scalar class. Basically a satisfying day of just writing code.
• Need to fix IEEE letter and take a self-portrait. Need to charge up the good camera

# Phil 5.2.19

7:00 – 9:00 ASRC NASA

• Wrote up my notes from yesterday
• Need to make an Aikido Drone image, maybe even a sim in Zach’s environment?
• Changed the title of the Dissertation
• Need to commit the changes to LMN from the laptop – done
• Need to create an instance of the JASSS paper in overleaf and make sure it runs
• Put the jasss.bst file in the svn repo – done
• Thinking about putting my dict find on stackoverflow, but I did see this page on xpath for dicts that makes me wonder if I shouldn’t just point there.
• Did meaningless 2019 goal stuff
• Adding ragged edge argument and generate a set of curves for eval
• ML seminar 4:30
• Meeting with Aaron M at 7:00
• Spent a good deal of time discussing the structure of the paper and the arguments. Aaron wants the point made that the “arc to full autonomy” is really only the beginning, predictable part of the process. In this part, the humans own the “reflective part” of the process, either as a human in the loop, where they decide to pull the trigger, or in the full autonomy mode where they select the training data and evaluation criteria for the reflexive system that’s built. The next part of that sequence is when machines begin to develop reflective capabilities. When that happens, many of the common assumptions that sets of human adversaries make about conflict (OODA, for example), may well be disrupted by systems that do not share the common background and culture, but have been directed to perform the same mission.

# Phil 5.1.19

7:00 – 7:00 ASRC NASA AIMS

• Added lit review section to the dissertation, and put the seven steps of sectarianism in.
• Spent most of yesterday helping Aaron with TimeSeriesML. Currently working on a JSON util that will get a value on a provided path
• Had to set up python at the module and not project level, which was odd. Here’s how: www.jetbrains.com/help/idea/2016.1/configuring-global-project-and-module-sdks.html#module_sdk
• Done!
    def lfind(self, query_list:List, target_list:List, targ_str:str = "???"):
        for tval in target_list:
            if isinstance(tval, dict):
                return self.dfind(query_list[0], tval, targ_str)
            elif tval == query_list[0]:
                return tval

    def dfind(self, query_dict:Dict, target_dict:Dict, targ_str:str = "???"):
        for key, qval in query_dict.items():
            # print("key = {}, qval = {}".format(key, qval))
            tval = target_dict[key]
            if isinstance(qval, dict):
                return self.dfind(qval, tval, targ_str)
            elif isinstance(qval, list):
                return self.lfind(qval, tval, targ_str)
            else:
                if qval == targ_str:
                    return tval
                if qval != tval:
                    return None

    def find(self, query_dict:Dict):
        # pprint.pprint(query_dict)
        result = self.dfind(query_dict, self.json_dict)
        return result


• It’s called like this (the query dicts passed to find() are elided here):
ju = JsonUtils("../../data/output_data/lstm_structure.json")
# ju.pprint()
result = ju.find(...)  # first query dict elided
print("result 1 = {}".format(result))
result = ju.find(...)  # second query dict elided
print("result 2 = {}".format(result))
• Here’s the results:
result 1 = [None, 12, 1]
result 2 = 666.0
• Got Aaron’s code running!
• Meeting with Joel
• A quicker demo than I was expecting, though I was able to walk through how to create and use Corpus Manager and LMN. Also, we got a bug where the column index for the eigenvector didn’t exist. Fixed that in JavaUtils.math.Labeled2DMatrix.java
• Meeting with Wayne
• Walked through the JASSS paper. Need to make sure that the lit review is connected and in the proper order
• Changed the title of the dissertation to
• Stampede Theory: Mapping Dangerous Misinformation at Scale
• Solidifying defense over the winter break, with diploma in the Spring
• Mentioned the “aikido with drones” concept. Need to make an image. Actually, I wonder if there is a way for that model to be used for actually getting a grant to explore weaponized AI in a way that isn’t directly mappable to weapons systems, but is close enough to reality that people will get the point.
• Also discussed the concept of managing runaway AI with the Sanhedrin-17a concept, where unanimous agreement to convict means acquittal.  Cities had Sanhedrin of 23 Judges and the Great Sanhedrin had 71 Judges en.wikipedia.org/wiki/Sanhedrin
• Rav Kahana says: In a Sanhedrin where all the judges saw fit to convict the defendant in a case of capital law, they acquit him. The Gemara asks: What is the reasoning for this halakha? It is since it is learned as a tradition that suspension of the trial overnight is necessary in order to create a possibility of acquittal. The halakha is that they may not issue the guilty verdict on the same day the evidence was heard, as perhaps over the course of the night one of the judges will think of a reason to acquit the defendant. And as those judges all saw fit to convict him they will not see any further possibility to acquit him, because there will not be anyone arguing for such a verdict. Consequently, he cannot be convicted.

# Phil 4.29.19

7:00 – 3:30 ASRC TL

• Register for Tech Summit – done
• Ask for a week of time to prep for talk – done
• Panos read the paper and has some suggestions. Need to implement
• This might be important: Neural Logic Machines
• We propose the Neural Logic Machine (NLM), a neural-symbolic architecture for both inductive learning and logic reasoning. NLMs exploit the power of both neural networks—as function approximators, and logic programming—as a symbolic processor for objects with properties, relations, logic connectives, and quantifiers. After being trained on small-scale tasks (such as sorting short arrays), NLMs can recover lifted rules, and generalize to large-scale tasks (such as sorting longer arrays). In our experiments, NLMs achieve perfect generalization in a number of tasks, from relational reasoning tasks on the family tree and general graphs, to decision making tasks including sorting arrays, finding shortest paths, and playing the blocks world. Most of these tasks are hard to accomplish for neural networks or inductive logic programming alone.
• Need to read the Nature “Behavior” paper. Notes probably go straight into the dissertation lit review – done
• Continuing to read Army of None, which is ridiculously good. One figure has been making me think: the idea that a set of diverse ML systems all agreeing could itself be a warning condition seems worth exploring.
• Finished read through of Tao’s paper
• Need to find a cardiologist for Arpita
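That “diverse systems all agreeing is a warning condition” idea from the Army of None note above reduces to an almost trivially small check. A sketch, with names of my own invention:

```python
def consensus_alert(votes, threshold=1.0):
    """Flag when a diverse ensemble agrees too perfectly.

    votes: dict of model name -> predicted label. For genuinely diverse
    models, unanimous (or near-unanimous) agreement is suspicious, so
    the flag fires when the top label's share reaches the threshold.
    """
    counts = {}
    for label in votes.values():
        counts[label] = counts.get(label, 0) + 1
    top_share = max(counts.values()) / len(votes)
    return top_share >= threshold
```

The interesting part isn't the check, it's the inversion: the ensemble's agreement score becomes an alarm input rather than a confidence score.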

# Phil 4.23.19

7:00 – 5:30 ASRC TL

• Reading Army of None and realizing that incorporating AI is a stampede theory and diversity issue:
• This makes Aegis less like a finished product with a few different modes and more like a customizable system that can be tailored for each mission. Galluch explained that the ship’s doctrine review board, consisting of the officers and senior enlisted personnel who work on Aegis, begin the process of writing doctrine months before deployment. They consider their anticipated missions, intelligence assessments, and information on the region for the upcoming deployment, then make recommendations on doctrine to the ship’s captain for approval. The result is a series of doctrine statements, individually and in packages, that the captain can activate as needed during deployment. (Page 164)
• Doctrine statements are typically grouped into two general categories: non-saturation and saturation. Non-saturation doctrine is used when there is time to carefully evaluate each potential threat. Saturation doctrine is needed if the ship gets into a combat situation where the number of inbound threats could overwhelm the ability of operators to respond. “If World War III starts and people start throwing a lot of stuff at me,” Galluch said, “I will have grouped my doctrine together so that it’s a one-push button that activates all of them. And what we’ve done is we’ve tested and we’ve looked at how they overlap each other and what the effects are going to be and make sure that we’re getting the defense of the ship that we expect.” This is where something like Auto-Special comes into play, in a “kill or be killed” scenario, as Galluch described it. (Page 164)
• Extensive testing goes into ensuring that it works properly. Once the ship arrives in theater, the first thing the crew does is test the weapons doctrine to see if there is anything in the environment that might cause it to fire in peacetime, which would not be good. This is done safely by enabling a hardware-level cutout called the Fire Inhibit Switch, or FIS. The FIS includes a key that must be inserted for any of the ship’s weapons to fire. When the FIS key is inserted, a red light comes on; when it is turned to the right, the light turns green, meaning the weapons are live and ready to fire. When the FIS is red—or removed entirely—the ship’s weapons are disabled at the hardware level. (Page 165)
• But the differences run deeper than merely having more options. The whole philosophy of automation is different. With Aegis, the automation is used to capture the ship captain’s intent. In Patriot, the automation embodies the intent of the designers and testers. The actual operators of the system may not even fully understand the designers’ intent that went into crafting the rules. The automation in Patriot is largely intended to replace warfighters’ decision-making. In Aegis, the automation is used to capture warfighters’ decision-making. (Page 165)
• Hawley argued that Army Patriot operators train in a “sham environment” that doesn’t accurately simulate the rigors of real-world combat. As a result, he said “the Army deceives itself about how good their people really are. . . . It would be easy to believe you’re good at this, but that’s only because you’ve been able to handle the relatively non-demanding scenarios that they throw at you.” Unfortunately, militaries might not realize their training is ineffective until a war occurs, at which point it may be too late. (Page 171)
• Hawley explained that the Aegis community was partially protected from this problem because they use their system day in and day out on ships operating around the globe. Aegis operators get “consistent objective feedback from your environment on how well you’re doing,” preventing this kind of self-deception. The Army’s peacetime operating environment for the Patriot, on the other hand, is not as intense, Hawley said. “Even when the Army guys are deployed, I don’t think that the quality of their experience with the system is quite the same. They’re theoretically hot, but they’re really not doing much of anything, other than just monitoring their scopes.” Leadership is also a vital factor. “Navy brass in the Aegis community are absolutely paranoid” about another Vincennes incident, Hawley said. (Page 171)
• Working on JASSS paper
• Working on AI paper
• Long chat with Eric H

# Phil 4.22.19

7:00 – 4:00 ASRC TL

• The mission of the Conference on Truth and Trust Online (TTO) is to bring together all parties working on automated approaches to augment manual efforts on improving the truthfulness and trustworthiness of online communications.
• The inaugural Truth and Trust Online conference will be taking place on October 4th and 5th 2019 at BMA House in London.

### Key Dates

• First call for papers: 2nd of April, 2019

• Deadline for all submissions: 3rd of June, 2019
• Notification of acceptance: Early July
• Registration opens: End of June
• Conference: 4th and 5th of October, 2019, BMA House, London, UK
• From On Being with Pádraig Ó Tuama, about belonging gone bad and the scale of sectarianism:
• Fooling automated surveillance cameras: adversarial patches to attack person detection
• Adversarial attacks on machine learning models have seen increasing interest in the past years. By making only subtle changes to the input of a convolutional neural network, the output of the network can be swayed to output a completely different result. The first attacks did this by changing pixel values of an input image slightly to fool a classifier to output the wrong class. Other approaches have tried to learn “patches” that can be applied to an object to fool detectors and classifiers. Some of these approaches have also shown that these attacks are feasible in the real-world, i.e. by modifying an object and filming it with a video camera. However, all of these approaches target classes that contain almost no intra-class variety (e.g. stop signs). The known structure of the object is then used to generate an adversarial patch on top of it.
• In this paper, we present an approach to generate adversarial patches to targets with lots of intra-class variety, namely persons. The goal is to generate a patch that is able successfully hide a person from a person detector. An attack that could for instance be used maliciously to circumvent surveillance systems, intruders can sneak around undetected by holding a small cardboard plate in front of their body aimed towards the surveillance camera. From our results we can see that our system is able significantly lower the accuracy of a person detector. Our approach also functions well in real-life scenarios where the patch is filmed by a camera. To the best of our knowledge we are the first to attempt this kind of attack on targets with a high level of intra-class variety like persons.
• More adding Wayne’s notes into the JASSS paper. Figured out how to make something that looks like a blockquote without screwing up the JASSS formatting:
\hspace{1cm}\begin{minipage}{\dimexpr\textwidth-2cm}
\textit{"Get him home.  And deliver my cut of earnings to the people of Phandalin near Neverwinter, my home". With this, before anyone can stop him, Edmund turns to the dragon. "I make a counter offer.  In exchange for them motions to the two caged people. I offer myself to take their place.  I will remain.  I will starve.  You will lose two peasants, and in return you will gain all that I have to offer.  Edmund of house DeVir of Neverwinter.  The last of a noble bloodline of the ruling class."} - Edmond: Group 2
\end{minipage}
• More Machine Teaching paper

# Phil 4.15.19

7:00 – ASRC TL

• I’ve been hunting around for what a core message of the iSchool should be (And I like LAMDA), but I think this sums it up nicely. From The Library Book:
• use arxiv2bibtex to get bibtex information for arXiv submissions for use in BibTeX, on web pages or in Wikis. You can enter:
• one or several paper IDs like “1510.01797” or “math/0506203”.
• your arXiv author ID looking similar to “grafvbothmer_h_1” to get a list of all your submitted papers.
• your ORCID ID looking similar to “0000-0003-0136-444X” which you should register with your arXiv-account.
• Here’s hoping the proposal goes in. It did!
• Start on IEEE paper? Nope. Did get back to Grokking Deep learning. Trying to get the system working with MNIST.
• Something for the arousal potential/Clockwork Muse file: Accelerating dynamics of collective attention
• With news pushed to smart phones in real time and social media reactions spreading across the globe in seconds, the public discussion can appear accelerated and temporally fragmented. In longitudinal datasets across various domains, covering multiple decades, we find increasing gradients and shortened periods in the trajectories of how cultural items receive collective attention. Is this the inevitable conclusion of the way information is disseminated and consumed? Our findings support this hypothesis. Using a simple mathematical model of topics competing for finite collective attention, we are able to explain the empirical data remarkably well. Our modeling suggests that the accelerating ups and downs of popular content are driven by increasing production and consumption of content, resulting in a more rapid exhaustion of limited attention resources. In the interplay with competition for novelty, this causes growing turnover rates and individual topics receiving shorter intervals of collective attention.
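This isn't the paper's actual model, but the mechanism the abstract describes (topics competing for a finite collective-attention budget) can be sketched in a few lines. All parameter names are mine:

```python
def step(attention, growth, decay, capacity=1.0):
    """One step of topics competing for a finite attention budget.

    attention: current attention per topic; growth: production rate per
    topic; decay: per-step attention loss. Growth is throttled by how
    crowded the shared budget already is, so more topics in play means
    each one peaks lower and dies faster.
    """
    total = sum(attention)
    crowding = max(0.0, 1.0 - total / capacity)
    return [max(0.0, a + g * a * crowding - decay * a)
            for a, g in zip(attention, growth)]
```

Running this with progressively more simultaneous topics is the Clockwork Muse connection: the same decay rate produces shorter, sharper attention spikes as production goes up.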
• Chasing down narrative embedding using force-directed graphs and found Tulip
• Tulip is an information visualization framework dedicated to the analysis and visualization of relational data. Tulip aims to provide the developer with a complete library, supporting the design of interactive information visualization applications for relational data that can be tailored to the problems he or she is addressing.
• There are Python bindings. The following are for large layouts
• FM^3 (OGDF)
• Implements the FM³ layout algorithm by Hachul and Jünger. It is a multilevel, force-directed layout algorithm that can be applied to very large graphs.
• H3 (GRIP)
• Implements the H3 layout technique for drawing large directed graphs as node-link diagrams in 3D hyperbolic space. That algorithm can lay out much larger structures than can be handled using traditional techniques for drawing general graphs because it assumes a hierarchical nature of the data. It was first published as: H3: Laying out Large Directed Graphs in 3D Hyperbolic Space . Tamara Munzner. Proceedings of the 1997 IEEE Symposium on Information Visualization, Phoenix, AZ, pp 2-10, 1997. The implementation in Python (MIT License) has been written by BuzzFeed (https://github.com/buzzfeed/pyh3).
• Mahzarin R. Banaji
• Professor Banaji studies thinking and feeling as they unfold in social context, with a focus on mental systems that operate in implicit or unconscious mode. She studies social attitudes and beliefs in adults and children, especially those that have roots in group membership.  She explores the implications of her work for questions of individual responsibility and social justice in democratic societies. Her current research interests focus on the origins of social cognition and applications of implicit cognition to improve individual decisions and organizational policies.
• What do Different Beliefs Tell us? An Examination of Factual, Opinion-Based, and Religious Beliefs
• Children and adults differentiate statements of religious belief from statements of fact and opinion, but the basis of that differentiation remains unclear. Across three experiments, adults and 8-10-year-old children heard statements of factual, opinion-based, and religious belief. Adults and children judged that statements of factual belief revealed more about the world, statements of opinion revealed more about individuals, and statements of religious belief provided information about both. Children—unlike adults—judged that statements of religious belief revealed more about the world than the believer. These results led to three conclusions. First, judgments concerning the relative amount of information statements of religious belief provide about individuals change across development, perhaps because adults have more experience with diversity. Second, recognizing that statements of religious belief provide information about both the world and the believer does not require protracted learning. Third, statements of religious belief are interpreted as amalgams of factual and opinion-based statements.
• My sense is that these three regions – factual, religious, and opinion – are huge attractors in our belief landscape
• Studying Implicit Social Cognition with Noninvasive Brain Stimulation

# Phil 4.14.19

An interesting take on diversity science that I had never heard of:

UnTangle Map: Visual Analysis of Probabilistic Multi-Label Data

• Data with multiple probabilistic labels are common in many situations. For example, a movie may be associated with multiple genres with different levels of confidence. Despite their ubiquity, the problem of visualizing probabilistic labels has not been adequately addressed. Existing approaches often either discard the probabilistic information, or map the data to a low-dimensional subspace where their associations with original labels are obscured. In this paper, we propose a novel visual technique, UnTangle Map, for visualizing probabilistic multi-labels. In our proposed visualization, data items are placed inside a web of connected triangles, with labels assigned to the triangle vertices such that nearby labels are more relevant to each other. The positions of the data items are determined based on the probabilistic associations between items and labels. UnTangle Map provides both (a) an automatic label placement algorithm, and (b) adaptive interactions that allow users to control the label positioning for different information needs. Our work makes a unique contribution by providing an effective way to investigate the relationship between data items and their probabilistic labels, as well as the relationships among labels. Our user study suggests that the visualization effectively helps users discover emergent patterns and compare the nuances of probabilistic information in the data labels.

Spring Embedders and Force Directed Graph Drawing Algorithms

• Force-directed algorithms are among the most flexible methods for calculating layouts of simple undirected graphs. Also known as spring embedders, such algorithms calculate the layout of a graph using only information contained within the structure of the graph itself, rather than relying on domain-specific knowledge. Graphs drawn with these algorithms tend to be aesthetically pleasing, exhibit symmetries, and tend to produce crossing-free layouts for planar graphs. In this survey we consider several classical algorithms, starting from Tutte’s 1963 barycentric method, and including recent scalable multiscale methods for large and dynamic graphs.
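The core spring-embedder loop the survey covers is small enough to sketch whole: pairwise repulsion, spring attraction along edges, and a capped per-step move (a crude "temperature") for stability. This is a minimal Fruchterman-Reingold-style sketch of my own, not any of the surveyed implementations:

```python
import math
import random

def spring_layout(edges, n, iters=200, k=0.3, step=0.05):
    """Minimal 2D spring embedder: returns a list of (x, y) per node."""
    random.seed(1)
    pos = [(random.random(), random.random()) for _ in range(n)]
    for _ in range(iters):
        disp = [[0.0, 0.0] for _ in range(n)]
        # repulsion between every pair of nodes
        for i in range(n):
            for j in range(i + 1, n):
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[i][0] += f * dx / d; disp[i][1] += f * dy / d
                disp[j][0] -= f * dx / d; disp[j][1] -= f * dy / d
        # spring attraction along edges
        for i, j in edges:
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[i][0] -= f * dx / d; disp[i][1] -= f * dy / d
            disp[j][0] += f * dx / d; disp[j][1] += f * dy / d
        # cap each move so the layout settles instead of oscillating
        new_pos = []
        for (x, y), (ddx, ddy) in zip(pos, disp):
            length = math.hypot(ddx, ddy) or 1e-9
            cap = min(length, step)
            new_pos.append((x + ddx / length * cap, y + ddy / length * cap))
        pos = new_pos
    return pos
```

The O(n²) repulsion loop is exactly what the multiscale methods in the survey (FM³, GRIP) exist to avoid.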