# Phil 8.7.20

#COVID

• The Arabic translation program is chugging along. It’s translated over 27,000 tweets so far. I think I’m seeing the power and risks of AI/ML in this tiny example. See, I’ve been programming since the late 1970s, in many, many languages and environments, and the common thread in everything I’ve done was the idea of deterministic execution. That’s the idea that you can, if you have the time and skills, step through a program line by line in a debugger and figure out what’s going on. It wasn’t always true in practice, but the idea was conceptually sound.
• This translation program is entirely different. To understand why, it helps to look at the code:

• This is the core of the code. It looks a lot like code I’ve written over the years. I open a database, get some lines, manipulate them, and put them back. Rinse, lather, repeat.
• That manipulation, though…
• The six lines in yellow are the Huggingface API, which allows me to access Microsoft’s Marian Neural Machine Translation models and have them use the pretrained models generated by the University of Helsinki. The one I’m using translates Arabic (src = ‘ar’) to English (trg = ‘en’). The lines that do the work are in the inner loop:
batch = tok.prepare_translation_batch(src_texts=[d['contents']])  # Arabic text -> token ids
gen = model.generate(**batch)  # run the model (for a plain forward pass: model(**batch))
words: List[str] = tok.batch_decode(gen, skip_special_tokens=True)  # token ids -> English text
• The first line is straightforward. It converts the Arabic words to tokens (numbers) that the language model works with. The last line does the reverse, converting result tokens to English.
• The middle line is the new part. The input vector of tokens goes to the input layer of the model, where it gets sent through a 12-layer, 512-hidden, 8-head, ~74M-parameter model. Tokens that can be converted to English pop out the other side. I know (roughly) how it works at the neuron and layer level, but the idea of stepping through the execution of such a model to understand the translation process is meaningless.
• In the time it took to write this, it’s translated about 1,000 more tweets. I can have my Arabic-speaking friends do a sanity check on a sample of these translations, but we’re going to have to trust the overall behavior of the model to do our research, because some of these systems only work on English text.
• So we’re trusting a system that we cannot verify to do research at a scale that would otherwise be impossible. If the model is good enough, the results should be valid. If the model behaves poorly, then we have bad science. The problem is that right now there is only one Arabic-to-English translation model available, so there is no way to statistically examine the results for validity.
• And I guess that’s really how we’ll have to proceed in this new world where ML becomes just another API. Validity of results will depend on diversity of model architectures and training sets. That may occur naturally in some areas, but in others, there may only be one model, and we may never know the influences that it has on us.

GOES

• More quaternions. Need to do multiple axis movement properly. Can you average two quaternions and have something meaningful?
• Here’s the reference frame with two rotations based off of the origin, so no drift. Now I need to do an incremental rotation to track these points:
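On the averaging question, here’s a minimal sketch (my own toy code, not from the project): normalized linear interpolation (nlerp) gives a meaningful average for two nearby unit quaternions, as long as one is first flipped into the same hemisphere as the other:

```python
import math

def nlerp(q0, q1, t=0.5):
    """Normalized linear interpolation between two unit quaternions (w, x, y, z).

    The average (t=0.5) is meaningful for nearby rotations: flip q1 into
    the same hemisphere as q0 (q and -q are the same rotation), blend
    component-wise, then renormalize back onto the unit sphere.
    """
    if sum(a * b for a, b in zip(q0, q1)) < 0.0:
        q1 = [-b for b in q1]
    blend = [(1 - t) * a + t * b for a, b in zip(q0, q1)]
    norm = math.sqrt(sum(c * c for c in blend))
    return [c / norm for c in blend]

# Identity vs. a 90-degree rotation about z: the average is 45 degrees about z.
q_id = [1.0, 0.0, 0.0, 0.0]
q_90z = [math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4)]
q_avg = nlerp(q_id, q_90z)
print(q_avg)  # ~[0.924, 0.0, 0.0, 0.383], i.e. cos/sin of 22.5 degrees
```

For widely separated rotations, or for averaging more than two, slerp or an eigenvector-based averaging method is the usual answer.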

GPT-2 Agents

• Start digging into knowledge graphs

# Phil 2.10.20

7:00 – 5:30 ASRC GOES

• Defense
• Slides and walkthrough
• First pass is thirty minutes too long!
• Trying to get back admin – maybe? Need to get the machine unlocked (again) tomorrow

# Phil 6.10.19

ASRC GEOS 7:00 – 3:00

• I’ve been thinking about the implications of this article: Training a single AI model can emit as much carbon as five cars in their lifetimes
• There is something in this that has to do with the idea of cost. NN architectures have no direct concept of cost. Inevitably, the “current best network” takes a building full of specialized processors 200 hours to train. This has been true for Inception, AmoebaNet, and AlphaGo. I wonder what would happen if there was a cost for computation that was part of the fitness function?
• My sense is that evolution has two interrelated parameters:
• a mutation needs to “work better” (whatever that means in the context) than the current version
• the organism that embodies the mutation has to reproduce
• In other words, neural structures in our brains have an unbroken chain of history back to the initial sensor neurons in multicellular organisms. All the mutations that didn’t work better never lived to make an effect, and those that weren’t able to reproduce didn’t get passed on.
• Randomness is important too. Systems that are too similar are fragile. Consider Aspen trees that have given up on sexual reproduction and are essentially all clones reproducing by rhizome. These live long enough to have an impact on the environment, particularly where they can crowd out other species, but the species itself is doomed.
• I’d like to see an approach to developing NNs that involves more of the constraints of “natural” evolution. I think it would lead to better, and potentially less destructive results.
• SHAP (SHapley Additive exPlanations) is a unified approach to explain the output of any machine learning model. SHAP connects game theory with local explanations, uniting several previous methods [1-7] and representing the only possible consistent and locally accurate additive feature attribution method based on expectations (see our papers for details).
• Working on clustering. I’ve been going around in circles on how to take a set of relative distance measures and use them as a basis for clustering. To revisit, here’s a screenshot of a spreadsheet containing the DTW distances from every sequence to every other sequence:
• My approach is to treat each line of relative distances as a high-dimensional coordinate (in this case, 50 dimensions), and cluster with respect to the point that it defines. This takes care of the problem that the data in this case is very symmetric about the diagonal. Using this approach, an orange/green coordinate is in a different location from the mirrored green/orange coordinate. It’s basically the difference between (1, 2) and (2, 1). That should be a reliable clustering mechanism. Here are the results:
       cluster_id
ts_0            0
ts_1            0
ts_2            0
ts_3            0
ts_4            0
ts_5            0
ts_6            0
ts_7            0
ts_8            0
ts_9            0
ts_10           0
ts_11           0
ts_12           0
ts_13           0
ts_14           0
ts_15           0
ts_16           0
ts_17           0
ts_18           0
ts_19           0
ts_20           0
ts_21           0
ts_22           0
ts_23           0
ts_24           0
ts_25           1
ts_26           1
ts_27           1
ts_28           1
ts_29           1
ts_30           1
ts_31           1
ts_32           1
ts_33           1
ts_34           1
ts_35           1
ts_36           1
ts_37           1
ts_38           1
ts_39           1
ts_40           1
ts_41           1
ts_42           1
ts_43           1
ts_44           1
ts_45           1
ts_46           1
ts_47           1
ts_48           1
ts_49           1
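To make the rows-as-coordinates idea concrete, here’s a toy sketch (the distances are made up, not the real DTW matrix): each row of a small symmetric distance matrix is treated as a point, so mirrored entries land at different coordinates, just like (1, 2) vs. (2, 1), and a single k=2 assignment step separates the two groups:

```python
import math

# Toy symmetric DTW-style distance matrix for four sequences; values are
# invented, with two tight groups {s0, s1} and {s2, s3}.
D = [
    [0.0, 1.0, 9.0, 8.0],
    [1.0, 0.0, 8.5, 9.5],
    [9.0, 8.5, 0.0, 1.2],
    [8.0, 9.5, 1.2, 0.0],
]

def dist(p, q):
    """Euclidean distance between two coordinate vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Mirrored pairs are distinct points -- the (1, 2) vs (2, 1) argument:
assert dist((1, 2), (2, 1)) > 0

# Treat each row as a coordinate in R^4 and assign it to the nearer of
# two seed rows (a single k-means assignment step with k=2).
seeds = [D[0], D[2]]
labels = [min(range(2), key=lambda k: dist(row, seeds[k])) for row in D]
print(labels)  # [0, 0, 1, 1]
```

The 50-dimensional case works the same way, just with a real clustering pass instead of one assignment step.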
• First-Order Adversarial Vulnerability of Neural Networks and Input Dimension
• Carl-Johann Simon-Gabriel, Yann Ollivier, Bernhard Schölkopf, Léon Bottou, David Lopez-Paz
• Over the past few years, neural networks were proven vulnerable to adversarial images: Targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs. Surprisingly, vulnerability does not depend on network topology: For many standard network architectures, we prove that at initialization, the l1-norm of these gradients grows as the square root of the input dimension, leaving the networks increasingly vulnerable with growing image size. We empirically show that this dimension-dependence persists after either usual or robust training, but gets attenuated with higher regularization.
• More JASSS paper. Got through the corrections up to the Results section. Kind of surprised to be leaning so hard on Homer, but I need a familiar story from before world maps.
• Oh yeah, the Age of Discovery correlates with the development of the Mercator projection and usable world maps

# IntelliJ, Python, and Flask

I wound up using Python because of machine learning, in particular TensorFlow and Keras. And Python is ok. I miss strong typing, but all in all it seems pretty solid with some outstanding science libraries. Mostly, I write tools:

I’ve been using IntelliJ Ultimate for my Python work. I came at it from Java and the steaming wreckage that is Eclipse. I like Ultimate a lot. It updates frequently, but not too much. Most of the time it’s seamless. And I can switch between C++, Java, Python, and TypeScript/JavaScript.

For a new tool project, I need to have Python analytics served to a thin client running D3. Which meant that it was time to set up a client-server architecture.

These things are always a pain to set up. A “Hello world” app that sends JSON from the server to the client where it is rendered at interactive rates, with user actions being fed back to the server is still a surprisingly clunky thing to set up. So I was expecting some grumbling. I was not expecting everything to be broken. Well, nearly everything.

It is possible to create a Flask project in IntelliJ Ultimate. But you can’t actually launch the server. The goal is to have code like this:

Set up and run a webserver with console output like this:

It turns out that there is an unresolved bug in IntelliJ that prevents the webserver from launching. In IntelliJ, the way you know that everything is working is that the Python file – app.py in this case – has parentheses around it (app.py). You can also see this in the launch menu:

If you see those parens, then IntelliJ knows that the file contains a Flask webserver, and will run things accordingly.

So how did I get this to work? I bought the Professional Edition of PyCharm (which has Flask), and used the defaults to create the project. Note that it appears that you have to use the Jinja2 template language. The other option, Mako, fails. I have not tried the None option. It’s been that sort of day.

By the way, if you already have a subscription to Ultimate, you can extend it to include the whole suite for the low low price of about \$40/year.

# Phil 1.19.19

Listening to World Affairs Council

In today’s reality, democracy no longer ends with a revolution or military coup, but with a gradual erosion of political norms. As a growing number of countries are chipping away at liberal democratic values, are these institutions safe from elected, authoritarian leaders? Daniel Ziblatt, professor at Harvard University and co-author of How Democracies Die, discusses the future of liberal democracies with World Affairs CEO Jane Wales.

This is connecting with Clockwork Muse. Martindale talks about the oscillation between primordial and stylistic change. Primordial is big jumps on a rugged fitness landscape, and stylistic change is hill climbing through refinement. In politics, this may compare to reactionary/populist – big jumps to simpler answers – and progressivism, which is hill climbing to locally optimal solutions. In both cases, the roles of habituation and arousal potential are important. Elites making incremental progress is not exciting. MAGA is exciting, and for both sides.

# Phil 12.2.18

This is a story about information at rest and information in motion. Actually, it’s really just a story about information in motion, mediated by computers. Information at rest is pretty predictable. Go pick up an actual, physical, book. Alone, it’s not going to do much. But it is full of information. It’s your job to put it in motion. The right kind of motion can change the world. The thing is, that change, be it the creation of a political movement, or the discovery of a new field of study, is oddly physical. Our terms that describe it (field, movement) are physical. They have weight, and inertia. But that is a property of us – human beings – evolved meat machines that interact with information using mechanisms evolved over millennia to deal with physical interactions. Information in motion isn’t physical. But we aren’t equipped to deal with that intuitively. The machines that we have built to manipulate information are. And though they are stunningly effective in this, they do not share our physics-based biases about how to interpret the world.

And that may lead to some ugly surprises.

The laws of physics don’t apply in information space.

Actually, we rarely deal with information. We deal in belief, which is the subset of information that we have opinions about. We don’t care how flat a table is as long as it’s flat enough. But we care a lot about the design of the dining room table that we’re putting in our dining room.

In this belief space, we interpret information using a brain that evolved based on the behavior of the physical world. That’s a possible reason that we have so many movement terms for describing belief behavior. It is unlikely that we could develop any other intuition, given the brief time that there has even been a concept of information.

There are also some benefits to treating belief as if it has physical properties. It affords group coordination. Beliefs that change gradually can be aligned more easily (dimension reduction), allowing groups to reach consensus and compromise. This, combined with our need for novelty, creates somewhat manageable trajectories. Much of the way that we communicate depends on this linearity. Spoken and written language are linear constructs. Sequential structures like stories contain both information and the order of delivery. Only the sequence differs in music. The notes are the same.

But belief merely creates the illusion that information has qualities like weight. Although the neurons in our brain are slower than solid-state circuits, the electrochemical network that they build is capable of behaving in far less linear and inertial ways. Mental illness can be regarded as a state where the brain network is misbehaving. It can be underdamped, leading to racing thoughts, or overdamped, manifesting as depression. It can have runaway resonances, as with seizures. In these cases, the functioning of the brain no longer maps successfully to the external, physical environment. There seems to be an evolutionary sweet spot where enough intelligence to model and predict possible outcomes is useful. Functional intelligence appears to be a saddle point, surrounded by regions of instability and stasis.

Computers, which have not evolved under these rules, treat information very differently. They are stable in their function as sets of transistors. But the instructions that those transistors execute – the network of action and information – are not so well defined. For example, computers can access all information simultaneously. This is one of the qualities of computers that makes them so effective at search. But this capability leads to deeply connected systems with complicated implications that we tend to mask with interfaces that we, the users, find intuitive.

For example, it is possible to add the illusion of physical properties to information. In simulation, we model masses, springs and dampers to create sophisticated representations of real-world behaviors. But simply because these systems mimic their real-world counterparts doesn’t mean that they have these intrinsic properties. Consider the simulation of a set of masses and springs below:

Depending on the solver (physics algorithm), damping, and stiffness, the system will behave in a believable way. Choose the Euler solver, turn down the damping and wind up the stiffness, and the system becomes unstable, or explodes:

The computer, of course, doesn’t know the difference. It can detect instability only if we program or train it specifically to do so. This is not just true in simulations of physical systems, but also in training-based systems like neural networks (gradient descent) and genetic algorithms (mutation rate). In all these cases, systems can converge or explode based on the algorithms used and the hyperparameters that configure them.
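A minimal sketch of that kind of explosion (a hypothetical undamped spring, not the simulation pictured above): explicit Euler integration of x'' = -kx adds energy on every step, so with a stiff spring or a large step the trajectory blows up while the same code with gentle parameters looks perfectly believable:

```python
def euler_spring(k, dt, steps):
    """Explicit (Euler) integration of an undamped spring x'' = -k*x,
    starting from x=1, v=0.  Each step multiplies the state by a matrix
    whose eigenvalues have magnitude sqrt(1 + k*dt**2) > 1, so a stiff
    spring or a big step makes the trajectory grow without bound."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = x + v * dt, v - k * x * dt  # update from the old state
    return x

print(abs(euler_spring(k=1.0, dt=0.01, steps=500)))    # stays bounded, order 1
print(abs(euler_spring(k=400.0, dt=0.1, steps=500)))   # astronomically large
```

Nothing in the code signals the difference between the two runs; detecting the explosion is a separate job we have to program in.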

This is the core of an implicit user interface problem. The more we make our intelligent machines so that they appear to be navigating in belief spaces, the more we will be able to work productively with them in intuitive ways. Rather than describing carefully what we want them to do (either with programming or massive training sets), we will be able to negotiate with them, and arrive at consensus or compromise. The fundamental problem is that this is a facade that does not reflect the underlying hardware. Because no design is perfect, and accidents are inevitable, I think that it is impossible to design systems that will not “explode” in reaction to unpredictable combinations of inputs.

But I think that we can reduce the risks. If we legislate an environment that requires a minimum level of diversity in these systems, from software design through training data, to hardware platform, we can increase the likelihood that when a nonlinear accident happens, it will only happen in one of several loosely coupled systems. The question of design is a socio-cultural one that consists of several elements:

1. How will these systems communicate in a way that guarantees loose coupling?
2. What is the minimum number of systems that should be permitted, and under what contexts?
3. What is the maximum “size” of a single system?

By addressing these issues early, in technical and legislative venues, we have an opportunity to create a resilient socio-technical ecosystem, where novel interactions of humans and machines can create new capabilities and opportunities, but also a resilient environment that is fundamentally resistant to catastrophe.

# Phil 12.1.18

Trying to think about how intelligence at an individual level is different from intelligence at a population level. At an individual level, the question is how much computation to spend in the presence of imperfect / incomplete information. Does it make sense to be an unquestioning acolyte? Follow fashion? Go your own way? These define a spectrum from lowest to highest amount of computation. A population that is evolving over time works with different demands. There is little or no sense of social coordination at a population’s genetic level (though there is coevolution). It seems to me that it is more a question of how to allocate traits in the population in such a way that optimises how long the genetic pattern that defines the population endures. The whole population needs a level of diversity. Clone populations (Aspens, etc.) fail quickly. Gene exchange increases the likelihood of survival, even though it is costly. Similarly, explore/exploit and other social traits may be distributed unevenly so that there are always nomadic individuals that move through the larger ecosystem, producing a diaspora that a population can use to recover from a disaster that decimates the main population centers. Genes probably don’t “care” if these nomads are outcasts or willing explorers, but the explorers will probably be better equipped to survive and create a new population, perpetuating the “explorer” genes at some level in the larger genome, at least within some time horizon where there is a genetic, adaptive “memory” of catastrophe.

# Phil 10.31.18

7:00 – ASRC PhD

• Read this carefully today: Introducing AdaNet: Fast and Flexible AutoML with Learning Guarantees
• Today, we’re excited to share AdaNet, a lightweight TensorFlow-based framework for automatically learning high-quality models with minimal expert intervention. AdaNet builds on our recent reinforcement learning and evolutionary-based AutoML efforts to be fast and flexible while providing learning guarantees. Importantly, AdaNet provides a general framework for not only learning a neural network architecture, but also for learning to ensemble to obtain even better models.
• What about data from simulation?
• Github repo
• AdaNet is a lightweight and scalable TensorFlow AutoML framework for training and deploying adaptive neural networks using the AdaNet algorithm [Cortes et al. ICML 2017]. AdaNet combines several learned subnetworks in order to mitigate the complexity inherent in designing effective neural networks. This is not an official Google product.
• Tutorials: for understanding the AdaNet algorithm and learning to use this package
• Welcome to adanet! For a tour of this python package’s capabilities, please work through the following notebooks:
• This looks like it’s based deeply on the Cloud AI and Machine Learning products, including cloud-based hyperparameter tuning.
• Time series prediction is here as well, though treated in a more BigQuery manner
• In this blog post we show how to build a forecast-generating model using TensorFlow’s DNNRegressor class. The objective of the model is the following: Given FX rates in the last 10 minutes, predict FX rate one minute later.
• Text generation:
• Cloud poetry: training and hyperparameter tuning custom text models on Cloud ML Engine
• Let’s say we want to train a machine learning model to complete poems. Given one line of verse, the model should generate the next line. This is a hard problem—poetry is a sophisticated form of composition and wordplay. It seems harder than translation because there is no one-to-one relationship between the input (first line of a poem) and the output (the second line of the poem). It is somewhat similar to a model that provides answers to questions, except that we’re asking the model to be a lot more creative.
• Codelab: Google Developers Codelabs provide a guided, tutorial, hands-on coding experience. Most codelabs will step you through the process of building a small application, or adding a new feature to an existing application. They cover a wide range of topics such as Android Wear, Google Compute Engine, Project Tango, and Google APIs on iOS.
Codelab tools on GitHub

• Add the Range and Length section in my notes to the DARPA measurement section. Done. I need to start putting together the dissertation using these parts
• Read Open Source, Open Science, and the Replication Crisis in HCI. Broadly, it seems true, but trying to piggyback on GitHub seems like a shallow solution that repurposes something built for coding – an ephemeral activity – to science, which is archival for a reason. Thought needs to be given to an integrated record (collection, raw data, cleaned data, analysis, raw results, paper (with reviews?), slides, and possibly a recording of the talk with questions). What would it take to make this work across all science, from critical ethnographies to particle physics? How will it be accessible in 100 years? 500? 1,000? This is very much an HCI problem. It is about designing a useful socio-cultural interface. Some really good questions would be “how do we use our HCI tools to solve this problem?”, and, “does this point out the need for new/different tools?”.
• NASA AIMS meeting. Demo in 2 weeks. AIMS is “time series prediction”, A2P is “unstructured data”. Prove that we can actually do ML, as opposed to just saying things.
• How about cross-point correlation? Could show in a sim?
• Meeting on Friday with a package
• We’ve solved A, here’s the vision for B – Z and a roadmap. JPSS is a near-term customer (JPSS Data)
• Getting actionable intelligence from the system logs
• Application portfolios for machine learning
• Umbrella of capabilities for Rich Burns
• New architectural framework for TTNC
• Complete situational awareness. Access to commands and sensor streams
• Software Engineering Division/Code 580
• A2P as a toolbox, but needs to have NASA-relevant analytic capabilities
• GMSEC overview

# Phil 10.8.18

7:00 – 12:00, 2:00 – 5:00 ASRC Research

• Finish up At Home in the Universe notes – done!
• Get started on framing out Antonio’s paper – good progress!
• Basically, Aaron and I think there is a spectrum of interaction that can occur in these systems. At one end is some kind of market, where communication is mediated through price, time, and convenience to the transportation user. At the other is a more top down, control system way of dealing with this. NIST RCS would be an example of this. In between these two extremes are control hierarchies that in turn interact through markets
• Wrote up some early thoughts on how simulation and machine learning can be a thinking fast and slow solution to understandable AI

# Phil 9.20.18

7:00 – 5:00 ASRC MKT

• Submit pre-approval for school – done!
• Call bank – done!
• Tried to do stuff on the Lufthansa site but couldn’t log in
• Read through the USPTO RFI and realized it was a good fit for the Research Browser. Sent the RB white paper to those in the decision loop.
• Updated the JuryRoom white paper to include an appendix on self-governance and handling hate speech, etc.
• Introducing Cloud Inference API: uncover insights from large scale, typed time-series data
• Today, we’re announcing the Cloud Inference API to address this need. Cloud Inference API is a simple, highly efficient and scalable system that makes it easier for businesses and developers to quickly gather insights from typed time series datasets. It’s fully integrated with Google Cloud Storage and can handle datasets as large as tens of billions of event records. If you store any time series data in Cloud Storage, you can use the Cloud Inference API to begin generating predictions.
• Thread by Jeff Dean
• Realized that there are additional matrices that can post-multiply the Laplacian. That way we can break down the individual components that contribute to “stiffness”. The reason for this is that only identical oscillators will synchronize. Similarity is a type of implicit coordination.
• Leave the Master matrix [M] as degree on the diagonal, with “1” for a connection and “0” for no connection
• Bandwidth matrix [B]: has a value in (0, 1) for each connection
• Alignment matrix [A]: calculates the direction cosine between each connected node. Completely aligned nodes get an edge value of 1.0
• There can also be a Weight vector W: which contains the “mass” of the node. A high mass node will be more influential in the network.
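Here’s a small sketch of my reading of that decomposition (the function name and the 2-node example are mine): the effective edge weight is the element-wise product of the connectivity, bandwidth, and alignment matrices, and the Laplacian is rebuilt from the combined weights:

```python
def combined_laplacian(M, B, A):
    """Graph Laplacian L = D - W, where the off-diagonal weight
    W[i][j] is the element-wise product of connectivity [M],
    bandwidth [B], and alignment [A] on that edge."""
    n = len(M)
    W = [[0.0 if i == j else M[i][j] * B[i][j] * A[i][j]
          for j in range(n)] for i in range(n)]
    return [[sum(W[i]) if i == j else -W[i][j]
             for j in range(n)] for i in range(n)]

# Two connected, fully aligned, full-bandwidth nodes:
M = [[1, 1], [1, 1]]      # connectivity ("1" for a connection)
B = [[0, 1.0], [1.0, 0]]  # per-edge bandwidth
A = [[0, 1.0], [1.0, 0]]  # direction cosines between connected nodes
L = combined_laplacian(M, B, A)
print(L)  # [[1.0, -1.0], [-1.0, 1.0]]
```

Rows summing to zero is the property that makes the synchronization analysis work, so it is worth asserting after any re-weighting; a mass vector W could scale the rows in the same fashion.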
• Had a few thoughts about JuryRoom self governance. The major social networks seem to be a mess with respect to what rights users have, and what constitutes a violation of terms of service. The solutions seem pretty brittle (Radiolab podcast on facebook rule making). JuryRoom has built in a mechanism for deliberation. Can that be used to create an online legal framework for crowdsourcing the rules and the interpretation? Roughly, I think that this requires the following:
• A constitution – a simple document that lays out how JuryRoom will be governed.
• A bill of rights. What are users entitled to?
• The concept of petition, trial, binding decisions, and precedent.
• Is there a concept of testifying under oath?
• The addition of “evidence” attachments that can be linked to posts. This could be existing documents, commissioned expert opinion, etc.
• A special location for the “legal decisions”. These will become the basis for the precedent in future deliberations. Links to these prior decisions are done as attachments? Or as something else?
• Localization. Since what is acceptable (within the bounds of the constitution and the bill of rights) changes as a function of culture, there needs to be a way that groups can split off from the main group to construct and use their own legal history. Voting/membership may need to be a part of this.
• What is visible to non-members?
• What are the requirements to be a member?
• How are legal decisions implemented in software?
• What are the duties of a “citizen”?
• More iConf paper
• I wanted to make figures align on the bottom. Turns out that the way that you do this is to set top alignment [t] for each minipage. Here’s my example:
\begin{figure}[h]
\centering
\begin{minipage}[t]{.5\textwidth}
\centering
% figure graphic (e.g. \includegraphics) goes here
\caption{\label{fig:N-F-S} Evolved systems}
\end{minipage}%
\begin{minipage}[t]{.5\textwidth}
\centering
% second figure graphic and caption go here
\end{minipage}%
\end{figure}

# Phil 7.1.18

On vacation, but oddly enough, I’m back on my morning schedule, so here I am in Bormio, Italy at 4:30 am.

I forgot my HDMI adaptor for the laptop. Need to order one and have it delivered to Zurich – Hmmm. Can’t seem to get it delivered from Amazon to a hotel. Will have to buy in Zurich

Need to add Gamerfate to the lit review timeline to show where I started to get interested in the problem – tried it but didn’t like it. I’d have to redo the timeline and I’m not sure I have the excel file

Add vacation pictures to slides – done!

Some random thoughts

• When using the belief space example of the table, note that if we sum up all the discussions about tables, we would be able to build a pretty good map of what matters to people with regard to tables
• Manifold learning is what intelligent systems do as a way of determining relationships between things (see the curse of dimensionality). As groups of individuals, we need to coordinate our manifold learning activities so that we can use the power of group cognition. When looking at how manifold learning schemes like t-SNE and particularly embedding systems such as word2vec create their own unique embeddings, it becomes clear that our machines are not yet engaged in group cognition, except in the simplest way of re-using trained networks and copied hyperparameters. This is very prone to stampedes.
• In conversation at dinner, Mike M mentioned that he’d like a language app that is able to indicate the centrality of a term and order that list, so that it’s possible to learn a language in a “prioritized” way that can be context-dependent. I think that LMN with a few tweaks could do that.

Continuing The Evolution of Cooperation. A thing that strikes me is that once TIT FOR TAT successfully takes over, it becomes computationally easier to ALWAYS COOPERATE. That could evolve to become dominant and then be completely vulnerable to ALWAYS DEFECT.
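A quick sketch of that vulnerability (standard payoffs, ten rounds, my own toy code): TIT FOR TAT loses only the first round to ALWAYS DEFECT, while ALWAYS COOPERATE is exploited on every round:

```python
# Standard prisoner's dilemma payoffs: (my move, your move) -> (my score, your score)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history): return opponent_history[-1] if opponent_history else 'C'
def always_cooperate(opponent_history): return 'C'
def always_defect(opponent_history): return 'D'

def play(p1, p2, rounds=10):
    """Iterate the game; each player sees only the other's past moves."""
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)
        a, b = PAYOFF[(m1, m2)]
        s1, s2 = s1 + a, s2 + b
        h1.append(m1)
        h2.append(m2)
    return s1, s2

print(play(tit_for_tat, always_defect))      # (9, 14): loses only the first round
print(play(always_cooperate, always_defect)) # (0, 50): exploited every round
```

Against each other, TIT FOR TAT and ALWAYS COOPERATE behave identically, which is exactly why the drift toward the cheaper strategy is invisible until a defector shows up.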

# Phil 3.21.18

7:00 – 6:00 ASRC MKT, with some breaks for shovelling

• First day of spring. Snow on the ground and more in the forecast.
• I’ve been thinking of ways to describe the differences between information visualizations with respect to maps. Here’s The Odyssey as a geographic map:
• The first thing that I notice is just how far Odysseus travelled. That’s about half of the Mediterranean! I thought that it all happened close to Greece. Maps afford this understanding. They are diagrams that support the plotting of trajectories. Which brings me to the point that we lose a lot of information about relationships in narratives. That’s not their point. This doesn’t mean that non-map diagrams don’t help sometimes. Here’s a chart of the characters and their relationships in the Odyssey:
• There is a lot of information here that is helpful. And this I do remember and understand from reading the book. Stories are good at depicting how people interact. But though this chart shows relationships, the layout does not really support navigation. For example, the gods are all related by blood and can pretty much contact each other at will. This chart would have Poseidon accessing Aeolus and Circe by going through Odysseus. So this chart is not a map.
• Lastly, there is the relationship that comes at us through search. Because the implicit geographic information about the Odyssey is not specifically in the text, a search request within the corpora cannot produce a result that lets us integrate it:
• There is a lot of ambiguity in this result, which is similar to other searches that I tried which included travel, sail and other descriptive terms. This doesn’t mean that it’s bad, it just shows how search does not handle context well. It’s not designed to. It’s designed around precision and recall. Context requires a deeper understanding about meaning, and even such recent innovations such as sharded views with cards, single answers, and pro/con results only skim the surface of providing situationally appropriate, meaningful context.
• Ok, back to tensorflow. Need to update my computer first….
• Updating python to 64-bit – done
• Installing Visual Studio – sloooooooooooooooooooooowwwwwwwwwwwww. Done
• Updating graphics drivers – done
• Updating tensorflow
• Updating numpy with intel math
• At the Validation section in the TF crash course. Good progress, drilling down into all the parts of Python that I’ve forgotten. And I got to make a pretty picture:
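The validation idea from the crash course boils down to holding out data that training never sees; a minimal NumPy sketch of a shuffled train/validation split (the 80/20 ratio and toy data are my choices, not the course's):

```python
import numpy as np

rng = np.random.RandomState(0)        # fixed seed so the split is repeatable
X = np.arange(100).reshape(50, 2)     # 50 toy examples, 2 features each
y = np.arange(50)

# Shuffle the indices, then hold out the last 20% for validation.
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train_idx, val_idx = idx[:split], idx[split:]

X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]

print(len(X_train), len(X_val))  # 40 10
```

Shuffling before splitting matters: if the data is ordered (say, by date), a naive head/tail split gives a validation set drawn from a different distribution than training.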

# Phil 2.15.18

ASRC MKT 7:00 – 8:00

• Taking most of the day off, but spent the early morning tweaking the CI 2018 paper and sending it out to the Fika writing group
• We have discussions, but we do not have discussions about the axis that we are choosing to decide along

Sent this to my representative:

Dear Rep. Cummings,

I would like to suggest a simple piece of legislation that may begin to address gun violence.

“For every student killed or wounded with a firearm in the preceding year, a 1-cent tax will be added to the price of the type of bullet used in the attack. The funds collected will be used to support the victims.”

This approach will do two things: 1) It will incentivize gun owners to demand action, since it could substantially increase the cost of using their guns. 2) It will place the onus of determining effective gun control within the gun community. As a result, there should be no Second Amendment concerns.

I realize that this is small in scope, and targeted at only the most innocent victims of gun violence, but I’m hoping that the simplicity and strength of the message may help move the process forward.

# Phil 1.17.18

7:00 – 3:30 ASRC MKT

• Harbinger, another DiscussionGame comparable: We are investigating how people make predictions and how to improve forecasting of current events.
• Working over time, constructing a project based on beliefs and ideas, can be regarded as working with a group of yourself. You communicate with your future self through construction. You perceive your past self through artifacts. Polarization should happen here as a matter of course, since the social similarity (and therefore influence) is very high.
• Back to Beyond Individual Choice
• Back to Angular – prepping for integration of PolarizationGame into the A2P platform. Speaking of which, there needs to be a REST API that will support registered, (optionally?) identified bots. A bot that is able to persuade a group of people over time to reach a unanimous vote would be an interesting Turing-style test. And a prize.
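A sketch of what the registration data model behind such a bot endpoint might look like. Everything here is hypothetical (no such code exists in A2P); it just shows the registered-and-optionally-identified distinction as data:

```python
from dataclasses import dataclass, field
from typing import Optional
from uuid import uuid4

@dataclass
class Bot:
    name: str
    identified: bool                                # optional identification, per the note above
    token: str = field(default_factory=lambda: uuid4().hex)

class BotRegistry:
    """In-memory stand-in for a hypothetical POST /bots endpoint."""

    def __init__(self):
        self._bots = {}

    def register(self, name: str, identified: bool = False) -> Bot:
        bot = Bot(name=name, identified=identified)
        self._bots[bot.token] = bot                 # token is the bot's API credential
        return bot

    def lookup(self, token: str) -> Optional[Bot]:
        return self._bots.get(token)

registry = BotRegistry()
bot = registry.register("persuader-01", identified=True)
print(registry.lookup(bot.token).name)  # persuader-01
```

The token-per-bot design means the platform can audit (and, for a Turing-style test, conceal or reveal) which participants are bots.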
• Got Tour of Heroes running again, though it seems broken…
• Nice chat with Jeremy.
• He’ll talk to Heath about what it would take to set up an A2P instance for the discussion system that could scale to millions of players
• Also mentioned that there would need to be a REST interface for bots
• Look through Material Design
• Don’t see any direct Forum (threaded discussion) details on the home site, but I found this Forum example GIF
• Add meeting with Heath and Jeremy early in the sprint to lay out initial detailed design
• Stub out non-functional pages as a deliverable for this (next?) sprint
• He sent me an email with all the things to set up. Got the new Node, Yarn and CLI on my home machine. Will do that again tomorrow and test the VPN connections
• Sprint planning
• A2P GUI and Detailed Design are going to overlap

# Phil 1.12.18

7:00 – 3:30 ASRC MKT

• Continuing to write up thoughts here. Done! Posted to Phlog
• Would expect this, based on M&D’s work: The Wisdom of Polarized Crowds
• As political polarization in the United States continues to rise, the question of whether polarized individuals can fruitfully cooperate becomes pressing. Although diversity of individual perspectives typically leads to superior team performance on complex tasks, strong political perspectives have been associated with conflict, misinformation and a reluctance to engage with people and perspectives beyond one’s echo chamber. It is unclear whether self-selected teams of politically diverse individuals will create higher or lower quality outcomes. In this paper, we explore the effect of team political composition on performance through analysis of millions of edits to Wikipedia’s Political, Social Issues, and Science articles. We measure editors’ political alignments by their contributions to conservative versus liberal articles. A survey of editors validates that those who primarily edit liberal articles identify more strongly with the Democratic party and those who edit conservative ones with the Republican party. Our analysis then reveals that polarized teams—those consisting of a balanced set of politically diverse editors—create articles of higher quality than politically homogeneous teams. The effect appears most strongly in Wikipedia’s Political articles, but is also observed in Social Issues and even Science articles. Analysis of article “talk pages” reveals that politically polarized teams engage in longer, more constructive, competitive, and substantively focused but linguistically diverse debates than political moderates. More intense use of Wikipedia policies by politically diverse teams suggests institutional design principles to help unleash the power of politically polarized teams.
• C&C is not in the citations, but overall this looks good. Add this to the initial game paper.
• Nice article on how establishment of norms can be a tipping point on which gradient to climb in a complex landscape: Tipping into the future
• A history of tipping points from an ecological perspective and how they inform resilience thinking in global development.
• The NOAA demo went well, it seems.