Phil 10.26.19

The dynamics of norm change in the cultural evolution of language

  • What happens when a new social convention replaces an old one? While the possible forces favoring norm change—such as institutions or committed activists—have long been identified, little is known about how a population adopts a new convention, due to the difficulties of finding representative data. Here, we address this issue by looking at changes that occurred to 2,541 orthographic and lexical norms in English and Spanish through the analysis of a large corpus of books published between the years 1800 and 2008. We detect three markedly distinct patterns in the data, depending on whether the behavioral change results from the action of a formal institution, an informal authority, or a spontaneous process of unregulated evolution. We propose a simple evolutionary model able to capture all of the observed behaviors, and we show that it reproduces quantitatively the empirical data. This work identifies general mechanisms of norm change, and we anticipate that it will be of interest to researchers investigating the cultural evolution of language and, more broadly, human collective behavior.

When Hillclimbers Beat Genetic Algorithms in Multimodal Optimization

  • It has been shown in the past that a multistart hillclimbing strategy compares favourably to a standard genetic algorithm with respect to solving instances of the multimodal problem generator. We extend that work and verify if the utilization of diversity preservation techniques in the genetic algorithm changes the outcome of the comparison. We do so under two scenarios: (1) when the goal is to find the global optimum, (2) when the goal is to find all optima.
    A mathematical analysis is performed for the multistart hillclimbing algorithm and a thorough empirical study is conducted for solving instances of the multimodal problem generator with an increasing number of optima, both with the hillclimbing strategy and with genetic algorithms with niching. Although niching improves the performance of the genetic algorithm, it is still inferior to the multistart hillclimbing strategy on this class of problems.
    An idealized niching strategy is also presented and it is argued that its performance should be close to a lower bound of what any evolutionary algorithm can do on this class of problems.

Phil 10.25.19

7:00 – 4:00 ASRC GOES

Phil 10.24.19

AI_weird

 The Danger of AI is Weirder than you Think

Janelle Shane’s website

7:00 – ASRC GOES

  • Dissertation
    • Nice chapter on force-directed graphs here
    • Explaining Strava heatmap.
      • Also, added a better transition from Moscovici to Simon’s Ant and mapping. This is turning into a lot of writing…
    • Explain the approach for cells (sum of all agent time spent in a cell, and sum of all unique agent visits to a cell); see the sketch after this list
    • Explain agent trajectory construction (append the current cell to the trajectory vector only if cur != prev)
  • Good discussion with Aaron about time series approaches to trajectory detection
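
A minimal sketch of the cell and trajectory bookkeeping described above, in Python. The names (CELL_SIZE, agent_paths, accumulate) and the grid discretization are illustrative assumptions, not the actual simulator code:

    from collections import defaultdict

    CELL_SIZE = 1.0  # width/height of a map cell (arbitrary units)

    def to_cell(pos):
        """Map a continuous (x, y) position to integer cell coordinates."""
        return (int(pos[0] // CELL_SIZE), int(pos[1] // CELL_SIZE))

    def accumulate(agent_paths, dt=1.0):
        """agent_paths: dict of agent_id -> list of (x, y) samples, one per tick."""
        time_in_cell = defaultdict(float)  # sum of all agent time spent in each cell
        visitors = defaultdict(set)        # which agents ever touched each cell
        trajectories = {}                  # agent_id -> list of cells
        for agent_id, path in agent_paths.items():
            traj = []
            for pos in path:
                cell = to_cell(pos)
                time_in_cell[cell] += dt
                visitors[cell].add(agent_id)
                if not traj or cell != traj[-1]:  # only record a cell transition (cur != prev)
                    traj.append(cell)
            trajectories[agent_id] = traj
        unique_visits = {cell: len(agents) for cell, agents in visitors.items()}
        return time_in_cell, unique_visits, trajectories

One pass produces both cell measures and the per-agent trajectories, since a trajectory entry is only appended when the current cell differs from the previous one.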

Phil 10.22.19

7:00 – 4:00 ASRC

  • Dissertation – starting the maps section
  • Need to finish the financial OODA loop section
  • Spending the day at a Navy-sponsored miniconference on AI, ethics, and the military (no wifi at Annapolis, so I’ll put up notes later). This was an odd mix of higher-level execs in suits, retirees, and midshipmen, with a few technical folks sprinkled in. It is clear that for this audience, the technology in question is AI/ML. The idea that AI is a thing we don’t do yet does not emerge at this level; rather, AI is understood as something implemented with machine learning, and in particular deep learning.

Phil 10.21.19

7:00 – 8:00 ASRC / Phd

The Journal of Design and Science (JoDS), a joint venture of the MIT Media Lab and the MIT Press, forges new connections between science and design, breaking down the barriers between traditional academic disciplines in the process.

There is a style of propaganda on the rise that isn’t interested in persuading you that something is true. Instead, it’s interested in persuading you that everything is untrue. Its goal is not to influence opinion, but to stratify power, manipulate relationships, and sow the seeds of uncertainty.

Unreal explores the first order effects recent attacks on reality have on political discourse, civics & participation, and its deeper effects on our individual and collective psyche. How does the use of media to design unreality change our trust in the reality we encounter? And, most important, how does cleaving reality into different camps—political, social or philosophical—impact our society and our future?

This looks really nice: The Illustrated GPT-2 (Visualizing Transformer Language Models)

Phil 10.17.19

ASRC GOES 7:00 – 5:30

  • How A Massive Facebook Scam Siphoned Millions Of Dollars From Unsuspecting Boomers (adversarial herding for profit)
    • But the subscription trap was just one part of Ads Inc.’s shady business practices. Burke’s genius was in fusing the scam with a boiler room–style operation that relied on convincing thousands of average people to rent their personal Facebook accounts to the company, which Ads Inc. then used to place ads for its deceptive free trial offers. That strategy enabled his company to run a huge volume of misleading Facebook ads, targeting consumers all around the world in a lucrative and sophisticated enterprise, a BuzzFeed News investigation has found.
  • Finished writing up my post on ensemble NNs: A simple example of ensemble training
  • Dissertation. Working on robot stampedes, though I’m not sure that this is the right place. It could be though, as a story to reinforce the previous sections. Of course, this has caused a lot of rework, but I think I like where it’s going?
  • Good talk with Vadim and Bruce yesterday that was kind of road map-ish
  • Working on the GSAW extended abstract for the rest of the week
    • About a page in. Finished Dr. Li’s paper for reference
  • Artificial Intelligence and Machine Learning in Defense Applications

Phil 10.16.19

7:00 – ASRC GOES

  • Listening to Rachel Maddow on City Arts and Lectures. She’s talking about the power of oil and gas, and how they are essentially anti-democratic. I think that may be true for most extractive industries. They are incentivised to take advantage of the populations that are the gatekeepers to the resource, which is why you get corruption – it’s cost effective. This also makes me wonder about advertising, which regards consumers as a resource to extract money/votes/etc. from.
  • Dissertation:
    • Something to add to the discussion section. Primordial jumps are not made only by an individual moving on a fixed fitness landscape. Sometimes the landscape itself can change, as with a natural disaster. The survivors are presented with an entirely new fitness landscape, often devoid of competition, that they can now climb.
    • This implies that sustained stylistic change creates sophisticated ecosystems, while primordial change disrupts that, and sets the stage for the creation of new ecosystems.
    • Had a really scary moment. Everything with \includegraphics wouldn’t compile. It seems to be a problem with MikTex, as described here. The fix is to place this code after \documentclass:
      \makeatletter
      \def\set@curr@file#1{%
      	\begingroup
      	\escapechar\m@ne
      	\xdef\@curr@file{\expandafter\string\csname #1\endcsname}%
      	\endgroup
      }
      \def\quote@name#1{"\quote@@name#1\@gobble""}
      \def\quote@@name#1"{#1\quote@@name}
      \def\unquote@name#1{\quote@@name#1\@gobble"}
      \makeatother
    • Finished the intro simulation description and results. Next is robot stampedes, then adversarial herding
  • Evolver
    • Check on status
    • Write abstract for GSAW if things worked out
  • GOES-R AI/ML Meeting
    • Lots of AIMS deployment discussion. Config files, version control, etc.
  • AIMS / A2P Meeting
    • Walked through report
    • Showed Vadim’s physics
    • Showed video of the Deep Mind robot Rubik’s cube to drive home the need for simulation
    • Send an estimate for travel expenses for 2020
    • Put together a physics roadmap with Vadim and Bruce

Phil 10.15.19

7:00 – ASRC GOES

  • Well, I’m pretty sure I missed the filing deadline for a defense in February. Looks like April 20 now?
  • Dissertation – More simulation. Nope, worked on making sure that I actually have all the paperwork done that will let me defend in February.
  • Evolver? Test? Done! It seems to be working. Here’s what I’ve got
  • Ground Truth: Because the MLP is trained on a set of mathematical functions, I have a definitive ground truth that I can extend infinitely. It’s simply a set of ten sin(x) waves of varying frequency:

GroundTruth
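
For reference, a minimal sketch of how a ground truth like this could be generated; the number of samples and the particular frequencies are illustrative assumptions, not the values used for the real training set:

    import numpy as np

    def make_ground_truth(num_waves=10, num_steps=200, base_freq=0.5):
        """Return an array of shape (num_waves, num_steps): one sin wave per row, each at a different frequency."""
        x = np.linspace(0.0, 4.0 * np.pi, num_steps)
        freqs = base_freq * (1.0 + np.arange(num_waves))  # ten evenly spaced frequencies
        return np.sin(np.outer(freqs, x))

    ground_truth = make_ground_truth()
    print(ground_truth.shape)  # (10, 200)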

  • All Predictions: If you read back through my posts, you’ll see how variable a neural network can be even when it has the same architecture and training parameters. This variation is based solely on the different random initialization of the weights between layers.
  • I’ve put together a genetic-algorithm-based evolver to determine the best hyperparameters, but because of the variation due to initialization, I have to train an ensemble of models and do a statistical analysis just to see if one set of hyperparameters is truly better than another. The reason is easy to see in the following image. What you are looking at is the input vector being run through ten models that are used to calculate the statistical values of the ensemble. You can see that most values are pretty good, some are a bit off, and some are pretty bonkers.

All_predictions
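
A hedged sketch of the ensemble step: train several identically-configured MLPs that differ only in their random weight initialization, then stack their predictions so the spread between members can be examined. The layer sizes and training settings are placeholders, not the architecture the evolver actually found:

    import numpy as np
    import tensorflow as tf

    def build_model(input_len, output_len):
        # Same architecture every time; only the random initial weights differ
        return tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(input_len,)),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(output_len),
        ])

    def train_ensemble(x_train, y_train, num_models=10, epochs=50):
        models = []
        for _ in range(num_models):
            m = build_model(x_train.shape[1], y_train.shape[1])
            m.compile(optimizer="adam", loss="mse")
            m.fit(x_train, y_train, epochs=epochs, verbose=0)
            models.append(m)
        return models

    def ensemble_predict(models, x):
        # Shape (num_models, num_samples, output_len): one prediction set per member
        return np.stack([m.predict(x, verbose=0) for m in models])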

  • Ensemble Average: On the whole though, if you take the average of all the ensemble, you get a pretty nice result. And, unlike the single-shot method of training, the likelihood that another ensemble produced with the same architecture will be the same is much higher.

Ensemble_average
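
The averaging itself is one line; a short snippet, assuming preds comes from the ensemble_predict sketch above:

    import numpy as np

    def ensemble_average(preds):
        """Collapse the model axis of an array shaped (num_models, num_samples, output_len)."""
        return preds.mean(axis=0)

    def ensemble_spread(preds):
        """Per-point standard deviation across members; large values flag the bonkers predictions."""
        return preds.std(axis=0)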

  • This is not to say that the model is perfect. The orange curve at the top of the last chart is too low. This model had a mean accuracy of 67%. I’ve just kicked off a much longer run to see if I can find a better architecture using the evolver over 50 generations rather than just 2.
  • Ok, it’s now tomorrow, and I have the full run of 50 generations. Things did get better. We end with a higher mean, but we also have a higher variance. This means that it’s possible that the architecture around generation 23 might actually be better:

50_generations
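
Since the run log keeps per-generation statistics, a variance-aware score is one way to check whether the steadier generation-23 architecture beats the higher-mean, higher-variance later ones. A hedged sketch; the file name and column names are assumptions about the spreadsheet layout:

    import pandas as pd

    def best_generation(log_path="evolver_log.csv", penalty=1.0):
        """Rank generations by mean fitness penalized by its standard deviation."""
        df = pd.read_csv(log_path)                      # assumed columns: generation, mean, std
        df["score"] = df["mean"] - penalty * df["std"]  # conservative, variance-aware score
        return df.sort_values("score", ascending=False).iloc[0]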

  • Because all the values are saved in the spreadsheet, I can try that scenario, but let’s see what the best mean looks like as an ensemble when compared to the early run:

Best_all_predictions

  • Wow, that is a lot better. All the models are much closer to each other, and appear to be clustered around the right places. I am genuinely surprised how tidy the clustering is, based on the previous “All Predictions” plot towards the top of this post. On to the ensemble average:

Best_ensemble_average

  • That is extremely close to the “Ground Truth” chart. The orange line is in the right place, for example. The only error that I can see with a cursory visual inspection is that the height of the olive line is a little lower than it should be.
  • Now, I am concerned that there may be two peaks in this fitness landscape that we’re trying to climb. The one that we are looking for is a generalized model that can approximate curves it hasn’t seen before. The other case is that the network has simply memorized the training curves and will blow up when it sees something different. Let’s test that.
  • First, let’s revisit the training set. This model was trained with extremely clean data. The input is a sin function with varying frequencies, and the evaluation data is the same sin function, picking up where we cut off the training data. Here’s an example of the clean data that was used to train the model:

Clean_input

  • Now let’s try noising that up, so that the model has to figure out what to do based on data it has never seen before:

Noisy_input
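
A minimal sketch of the noising step, assuming clean_inputs holds the clean sin-wave input windows; the Gaussian noise level is an illustrative choice:

    import numpy as np

    def add_noise(clean_inputs, noise_std=0.1, seed=None):
        """Return a copy of the inputs with zero-mean Gaussian noise added."""
        rng = np.random.default_rng(seed)
        return clean_inputs + rng.normal(0.0, noise_std, size=clean_inputs.shape)

    # noisy_inputs = add_noise(clean_inputs)
    # noisy_preds = ensemble_predict(models, noisy_inputs)  # from the earlier ensemble sketch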

  • Let’s see what happened! First, let’s look at all the predictions from the ensemble:

Noisy_predictions

  • The first thing that I notice is that it didn’t blow up. Although the paths from each model are somewhat different, each one got all the paths approximately right, and there is no wild deviation. The worst behavior (as usual?) is the orange band, and possibly the green band. But this looks like it should average well. Let’s take a look:

Noisy_average

  • That seems pretty good. And the orange / green lines are in the right place. It’s the blue, olive, and grey lines that are a little low. Still, pretty happy with this.
  • So, ensembles seem to work very well, and make for resilient, predictable behavior in NN architectures. The cost is that there is much more time required to run many, many models through the system.
  • Work on AI paper
    • Good chat with Aaron – the span of approaches to the “model brittleness problem” can be described using three scenarios:
      • Military: Models used in training and at the start of a conflict may not be worth much during hostilities
      • Waste, Fraud, and Abuse: Clever criminals can figure out how not to get caught. If they know the models being used, they may be able to thwart them more effectively.
      • Facial recognition and protest: Currently, protesters in cultures that support large-scale ML-based surveillance try to disguise their identity from facial recognizers. Developing patterns that are likely to cause errors in recognizers and classifiers may support civil disobedience.
  • Solving Rubik’s Cube with a Robot Hand (openAI)
    • To overcome this, we developed a new method called Automatic Domain Randomization (ADR), which endlessly generates progressively more difficult environments in simulation. This frees us from having an accurate model of the real world, and enables the transfer of neural networks learned in simulation to be applied to the real world.

Phil 10.14.19

7:00 – 7:00 School

  • Dissertation
    • Starting on the simulation section. Smoother going. Up to Detecting Emergent Group Behavior
  • Rachel’s defense!
    • Did you vary the cognitive load of the radio message? Acknowledgement vs. math problems? Garbled? How did you decide on the type of radio message?
    • Audio message icon instead of visual (in-layer inhibition)
    • Are ambient displays better for this type of activity? Acute displays are bad? Tesla experience
  • ML Seminar
  • Meeting with Aaron M

Doctoral Degree

 

  • Application for Admission to Candidacy for the Degree of Doctor of Philosophy [DOC | PDF]
  • 5 Year Rule Extension Request [PDF]
  • Nomination of Members for the Final Doctoral Dissertation Examination Committee [DOC | PDF]
    • Due 6 months prior to defense
  • Online Application for Diploma [Info | Apply]
  • 4 Year Rule Extension Request [PDF]
  • Certification of Readiness to Defend the Doctoral Dissertation [DOC | PDF]
    • Due 2 weeks prior to defense
  • PhD Defense Announcement [DOC | PDF]
    • Due 2 weeks prior to defense
  • Approval Sheet [DOC | PDF]
  • Thesis and Dissertation Electronic Publication form [DOC | PDF]
    • Due November 30; April 30; July 31 (Fall, Spring, Summer)
  • Ph.D. Exit Survey [ Info ]

Phil 10.11.19

7:00 – 5:00 ASRC

  • Got Will’s extractor working last night
  • A thought about how Trump’s popularity appears to be crumbling. Primordial jumps don’t have the same sort of sunk cost that stylistic change has. If one big jump doesn’t work out, try something else drastic. It’s the same or less effort than hillclimbing out of a sinking region
  • Dissertation. Fix the proposal sections as per yesterday’s notes
  • Evolver
    • Write out model files as eval_0 … eval_n. If the new fitness is better than the old fitness, replace best_0 … best_n
    • Which is turning out to be tricky. Had to add a save function to save at the right time in the eval loop
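
A hedged sketch of that bookkeeping: save the evaluated ensemble as eval_0 … eval_n inside the eval loop, then copy those files over best_0 … best_n only when the new fitness beats the old one. The file naming and save format are assumptions, not the actual evolver code:

    import os
    import shutil

    def save_ensemble(models, directory, prefix="eval"):
        """Write each Keras model in the ensemble out as <prefix>_<i>.h5."""
        os.makedirs(directory, exist_ok=True)
        for i, m in enumerate(models):
            m.save(os.path.join(directory, f"{prefix}_{i}.h5"))

    def promote_if_better(new_fitness, best_fitness, directory, count):
        """Replace best_i with eval_i only when the new ensemble scores better."""
        if best_fitness is None or new_fitness > best_fitness:
            for i in range(count):
                shutil.copyfile(os.path.join(directory, f"eval_{i}.h5"),
                                os.path.join(directory, f"best_{i}.h5"))
            return new_fitness
        return best_fitness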

Phil 10.10.19

7:00 – 4:00 ASRC GOES

  • The Daily has an episode on how to detach from environmental reality and create a social reality stampede
  • Dissertation, working on finishing up the “unexpected findings” piece of the research plan
    • Tie together explore/exploit, the Three Patterns, and M&R three behaviors.
    • Also, set up the notion that it was initially explore OR exploit, with no thought given to the middle ground. M&R foreshadowed that there would be, though
  • Registered for Navy AI conference Oct 22
  • Get together with Vadim to see how the physics are going on Tuesday?
  • More evolver
    • installed the new timeseriesML2
    • The test run blew up with a tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at cwise_ops_common.cc:82 error. Can’t find any direct help, though maybe try this?
      • Reduce the batch size in datagen.flow (the default is 32, so try 8, 16, or 24)
    • Figured it out – I’m saving models in memory. Need to write them out instead.
  • Swing by campus and check on Will

Phil 10.9.19

7:00 – 5:00 ASRC

Dissertation – more work on the research design section. Adding unexpected results

FAA

  • Call

GOES

  • Adding the storing of step and model to the genome
  • Added step
  • While working on adding data, I realized that I was re-calculating fitness for genomes that had already been tested. Added a skip function if there was already a population’s worth of data (see the caching sketch after this list)
  • AI/ML meeting – showed the current work with the evolver and the motivation for ensembles
  • AIMS/A2P meeting
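
A hedged sketch of the skip logic: cache fitness results keyed on the genome’s hyperparameter values so a genome that has already been tested is not retrained. The dict-based genome representation and the train_and_score callable are assumptions for illustration:

    fitness_cache = {}

    def genome_key(genome):
        """Turn a hyperparameter dict into a hashable cache key."""
        return tuple(sorted(genome.items()))

    def evaluate(genome, train_and_score, ensemble_size=10):
        key = genome_key(genome)
        if key in fitness_cache:
            return fitness_cache[key]  # already have a population's worth of data for this genome
        fitness = train_and_score(genome, ensemble_size)  # expensive: trains the whole ensemble
        fitness_cache[key] = fitness
        return fitness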

Phil 10.8.19

7:00 – 5:00 ASRC GOES

  • Had a really good discussion in seminar about weight randomness and hyperparameter tuning
  • Got  Will to show me the issue he’s having with the data. The first element of an item is being REPLACED INTO twice, and we’re not seeing the last one
  • Chat with Aaron about the AI/ML weapons paper.
    • He gave me The ethics of algorithms: Mapping the debate to read
      • In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
    • An issue that we’re working through is when an inert object like a hammer becomes something that has a level of (for lack of a better term) agency imbued by the creator, which creates a mismatch in the user’s head as to what should happen. The more intelligent the system, the greater the opportunity for mismatch. My thinking was that Dourish, in Where the Action Is, had some insight (pg 109):
      • This aspect of Heidegger’s phenomenology is already known in HCI. It was one of the elements on which Winograd and Flores (1986) based their analysis of computational theories of cognition. In particular, they were concerned with Heidegger’s distinction between “ready-to-hand” (zuhanden) and “present-at-hand” (vorhanden). These are ways, Heidegger explains, that we encounter the world and act through it. As an example, consider the mouse connected to my computer. Much of the time, I act through the mouse; the mouse is an extension of my hand as I select objects, operate menus, and so forth. The mouse is, in Heidegger’s terms, ready-to-hand. Sometimes, however, such as when I reach the edge of the mousepad and cannot move the mouse further, my orientation toward the mouse changes. Now, I become conscious of the mouse mediating my action, precisely because of the fact that it has been interrupted. The mouse becomes the object of my attention as I pick it up and move it back to the center of the mousepad. When I act on the mouse in this way, being mindful of it as an object of my activity, the mouse is present-at-hand.
  • Dissertation – working on Research Design. Turns out that I had done the pix but only had placeholder text.
  • Left the evolver cooking last night. Hopefully results today, then break up the class and build the lazy version. Arrgh! Misspelled variable. Trying a short run to verify.
  • That seems to work nicely:

Evolver

  • The mean improves from 57% to 68%, so that’s really nice. But notice also that the range from min to max on line 5 is between 100% and 20%. Wow.
  • Here’s 50 generations. I need to record steps and best models. That’s next:

Evolver50
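
A hedged sketch of what that recording could look like: per-generation fitness statistics, the training steps, and the best genome accumulated into rows and written out to a spreadsheet. The column names and CSV format are assumptions:

    import numpy as np
    import pandas as pd

    rows = []

    def record_generation(generation, fitnesses, best_genome, best_steps=None):
        """Accumulate one row of statistics per generation."""
        rows.append({
            "generation": generation,
            "min": float(np.min(fitnesses)),
            "max": float(np.max(fitnesses)),
            "mean": float(np.mean(fitnesses)),
            "std": float(np.std(fitnesses)),
            "steps": best_steps,          # training steps used by the best genome (assumption)
            "best_genome": str(best_genome),
        })

    def write_log(path="evolver_log.csv"):
        pd.DataFrame(rows).to_csv(path, index=False)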

  • Waikato meeting tonight. Chris is pretty much done. Suggested using word clouds to show group discussion markers