Category Archives: Dissertation

Phil 11.3.19

Listening to the On Being interview with angel Kyodo williams

We are in this amazing moment of evolving, where the values of some of us are evolving at rates that are faster than can be taken in and integrated for peoples that are oriented by place and the work that they’ve inherited as a result of where they are.

This really makes me think of the Wundt curve (fMRI analysis here?), and of how misalignment can develop between a bourgeois class (think elites) and a proletarian class. Without day-to-day existence constraints, it’s possible for elites to move individually and in small groups through less traveled belief spaces. Proletarian concerns have more “red queen” elements, so you need larger workers’ movements to make progress.

Phil 11.1.19

7:00 – 3:00 ASRC GOES

KerasTuner

  • Hugging Face: State-of-the-Art Natural Language Processing in ten lines of TensorFlow 2.0
    • Hugging Face is an NLP-focused startup with a large open-source community, in particular around the Transformers library. 🤗/Transformers is a Python-based library that exposes an API to use many well-known transformer architectures, such as BERT, RoBERTa, GPT-2 or DistilBERT, that obtain state-of-the-art results on a variety of NLP tasks like text classification, information extraction, question answering, and text generation. Those architectures come pre-trained with several sets of weights.
  • Dissertation
    • Starting on Human Study section!
    • For once there was something there that I could work with pretty directly. Fleshing out the opening
  • OODA paper:
    • Maximin (Cass Sunstein)
      • For regulation, some people argue in favor of the maximin rule, by which public officials seek to eliminate the worst worst-cases. The maximin rule has not played a formal role in regulatory policy in the United States, but in the context of climate change or new and emerging technologies, regulators who are unable to conduct standard cost-benefit analysis might be drawn to it. In general, the maximin rule is a terrible idea for regulatory policy, because it is likely to reduce rather than to increase well-being. But under four imaginable conditions, that rule is attractive.
        1. The worst-cases are very bad, and not improbable, so that it may make sense to eliminate them under conventional cost-benefit analysis.
        2. The worst-case outcomes are highly improbable, but they are so bad that even in terms of expected value, it may make sense to eliminate them under conventional cost-benefit analysis.
        3. The probability distributions may include “fat tails,” in which very bad outcomes are more probable than merely bad outcomes; it may make sense to eliminate those outcomes for that reason.
        4. In circumstances of Knightian uncertainty, where observers (including regulators) cannot assign probabilities to imaginable outcomes, the maximin rule may make sense. (It may be possible to combine (3) and (4).) With respect to (3) and (4), the challenges arise when eliminating dangers also threatens to impose very high costs or to eliminate very large gains. There are also reasons to be cautious about imposing regulation when technology offers the promise of “moonshots,” or “miracles,” offering a low probability or an uncertain probability of extraordinarily high payoffs. Miracles may present a mirror-image of worst-case scenarios.
  • Reaction wheel efficiency inference
    • Since I have this spiffy accurate model, I think I’m going to try using it before spending a lot of time evolving an ensemble
    • Realized that I only trained it with a voltage of +1, so I’ll need to abs(delta) before inference (a rough sketch of that step follows the image below)
    • It’s working!

WorkingInference
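
Since the network only ever saw the +1V case, the inference step has to fold the sign out of the star-tracker deltas before calling the model. A minimal sketch of that idea, with placeholder names rather than the actual project code:

    # Minimal sketch (placeholder names): the efficiency model was trained only on
    # +1V commands, so inference runs on the magnitude of the deltas.
    import numpy as np
    import tensorflow as tf

    def infer_efficiency(model: tf.keras.Model, deltas: np.ndarray) -> np.ndarray:
        """deltas: shape (batch, 3) pitch/roll/yaw deltas, either sign."""
        mags = np.abs(deltas)                  # fold negative commands onto the trained side
        return model.predict(mags, verbose=0)  # inferred efficiency per reaction wheel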

  • Next steps:
    • I can’t do ensemble real-time inference because I’d need a GPU for each model. This means that I need to get the best “best” model and use that
    • Run the evolver to see if something better can be found
    • Add “flywheel mass” and “vehicle mass” to dictionary and get rid of the 0.05 value
    • Set up a second model that uses the inferred efficiency to move in accordance with the actual commands. Have them sit on either side of the origin
  • Committed everything. I think I’m done for the day

Phil 10.31.19

8:00 – 4:00 ASRC

  • Got my dissertation paperwork in!
  • To Persuade As an Expert, Order Matters: ‘Information First, then Opinion’ for Effective Communication
    • Participants whose stated preference was to follow the doctor’s opinion had significantly lower rates of antibiotic requests when given “information first, then opinion” compared to “opinion first, then information.” Our evidence suggests that “information first, then opinion” is the most effective approach. We hypothesize that this is because it is seen by non-experts as more trustworthy and more respectful of their autonomy.
    • This matters a lot because what is presented, and the order of presentation, is itself an opinion. Maps lay out the information in a way that provides a larger, less edited selection of information.
  • Working on RW training set. Got the framework methods working. Here’s a particularly good run – 99% accuracy for 50 “functions” repeated 20 times each:
  • Tomorrow I’ll roll them into the optimizer. I’ve already built the subclass, but had to flail a bit to find the right way to structure and scale the data (a rough sketch of that structuring is below)
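
I’m not reproducing the actual framework here, but the shape of the data problem looks roughly like this (every name and constant below is a placeholder): build one row per sample, with each of the 50 “functions” repeated 20 times, then scale everything into a fixed range before training.

    # Rough sketch: 50 parameterized functions x 20 repeats, scaled to [0, 1]
    import numpy as np

    NUM_FUNCS = 50
    REPEATS = 20
    VEC_LEN = 100

    rows = []
    x = np.linspace(0, 2.0 * np.pi, VEC_LEN)
    for f in range(NUM_FUNCS):
        freq = 1.0 + f * 0.1                   # hypothetical per-function parameter
        for _ in range(REPEATS):
            rows.append(np.sin(freq * x))

    data = np.array(rows)                                    # shape (1000, 100)
    data = (data - data.min()) / (data.max() - data.min())   # scale to [0, 1]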

Phil 10.30.19

7:00 – 5:00 GOES

starbird

  • Dissertation – finish up the maps chapter – done!
  • Try writing up more expensive information thoughts (added to discussion section as well)
    • Game theory comes from an age of incomplete information. Now we have access to mostly complete, but potentially expensive information
      • Expense in time – throwing the breakers on high-frequency trading
      • Expense in $$ – Buying the information you need from available resources
      • Expense in resources – developing the hardware and software to obtain the information (Operation Hummingbird to TPU/DNN development)
    • By handing the information management to machines, we create a human-machine social structure, governed by the rules of dense/sparse, stiff/slack networks
      • AI combat is a very good example of an extremely stiff network (varies in density) and the associated time expense. Combat has to happen as fast as possible, due to OODA loop constraints. But if the system does not have designed-in capacity to negotiate a ceasefire (on both/all sides!), there may be no way to introduce it in human time scales, even though the information that one side is losing is readily apparent.
      • Online advertising is a case where existing information is hidden from the target of the advertiser, but available to the platform and, to a lesser degree, the client. Because of this information asymmetry, the user’s behavior/beliefs are more likely to be exploited in a way that denies the user agency, while granting maximum agency to the platform and clients.
      • Deepfakes, spam and the costs of identifying deliberate misinformation
      • Call to action: the creation of an information-environment impact body that can examine these issues and determine costs. This is too complex a process for the creators to do on their own, and there would be rampant conflicts of interest anyway. An EPA-like structure, where experts in this topic act as a counterbalance to unconstrained development and exploitation of the information ecosystem, could fill that role.
  • The Knowledge, Analytics, Cognitive and Cloud Computing (KnACC) lab in the Information Systems department in UMBC aims to address challenging issues at the intersection of Data Science and Cloud Computing. We are located in ITE 415.
  • GOES
    • Start creating NN that takes pitch/roll/yaw star tracker deltas and tries to calculate reaction wheel efficiency (a rough sketch of the intended model follows this list)
      • input vector is dp, dr, dy. Assume a fixed timestep
      • output vector is effp, effr, effy
      • once everything trains up, try running the inferencer on the running sim and display “inferred RW efficiency” for each RW
      • Broke out the base class parts of TF2OptimizerTest. I just need to generate the test/train data for now, no sim needed
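
For reference, the model described above is just a small regression MLP; a minimal sketch, assuming TF2/Keras and guessed layer sizes (this is not the TF2OptimizerTest code itself):

    # Sketch only: 3 delta inputs (dp, dr, dy) -> 3 per-wheel efficiencies
    import tensorflow as tf

    def build_efficiency_model() -> tf.keras.Model:
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(3,)),              # dp, dr, dy for one fixed timestep
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(3, activation="sigmoid")  # effp, effr, effy in [0, 1]
        ])
        model.compile(optimizer="adam", loss="mse")
        return model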

Twitter

big ending news for the day

Phil 10.29.19

7:00 – 5:00 ASRC GOES

  • Dissertation – more maps
  • CTO presentation at 2:00
    • Delayed a bit, but I think it went well. A lot of the things that Eric tried to put in place look to be resurfacing. I wonder if it will work this time?
  • Meeting with Wayne? Yep – got his signature! Need to email Nicole and see where this goes – done
  • Starting to negotiate the new-ish “expensive information” paper with Aaron M

Phil 10.28.19

Language

Capacity, Bandwidth, and Compositionality in Emergent Language Learning

  • Many recent works have discussed the propensity, or lack thereof, for emergent languages to exhibit properties of natural languages. A favorite in the literature is learning compositionality. We note that most of those works have focused on communicative bandwidth as being of primary importance. While important, it is not the only contributing factor. In this paper, we investigate the learning biases that affect the efficacy and compositionality of emergent languages. Our foremost contribution is to explore how capacity of a neural network impacts its ability to learn a compositional language. We additionally introduce a set of evaluation metrics with which we analyze the learned languages. Our hypothesis is that there should be a specific range of model capacity and channel bandwidth that induces compositional structure in the resulting language and consequently encourages systematic generalization. While we empirically see evidence for the bottom of this range, we curiously do not find evidence for the top part of the range and believe that this is an open question for the community.

Radiolab: Tit for Tat

  • In the early 60s, Robert Axelrod was a math major messing around with refrigerator-sized computers. Then a dramatic global crisis made him wonder about the space between a rock and a hard place, and whether being good may be a good strategy. With help from Andrew Zolli and Steve Strogatz, we tackle the prisoner’s dilemma, a classic thought experiment, and learn about a simple strategy to navigate the waters of cooperation and betrayal. Then Axelrod, along with Stanley Weintraub, takes us back to the trenches of World War I, to the winter of 1914, and an unlikely Christmas party along the Western Front.
    • Need to send a note for them to look into Axelrod’s “bully” saddle point

7:00 – ASRC GOES

  • Dissertation – Nearly done with the agent cartography section?
  • CTO Rehearsal – 10:30 – 12:00 done
  • ML Dinner – 4:30 fun!
  • Meeting With Aaron M
    • More thinking about what to do with the paper. We decided to try for the CHI4EVIL workshop, and then try something like IEEE Spectrum. I think I’d like to reframe it around the concept of Expensive Information and Automation. Try to tie together AI weapons, spam filters, and deepfakes
      • Automation makes negotiation more difficult, locks in trajectories
      • Handing off responsibility to automation amplifies opportunities and destructive potential
      • OODA loop could be generalized if you look at it from the perspective of attention.

Phil 10.25.19

7:00 – 4:00 ASRC GOES

Phil 10.24.19

AI_weird

 The Danger of AI is Weirder than you Think

Janelle Shane’s website

7:00 – ASRC GOES

  • Dissertation
    • Nice chapter on force-directed graphs here
    • Explaining Strava heatmap.
      • Also, added a better transition from Moscovici to Simon’s Ant and mapping. This is turning into a lot of writing…
    • Explain approach for cells (sum of all agent time, and sum all unique agent visits)
    • Explain agent trajectory (add to vector if cur != prev) – a rough sketch of both measures follows this list
  • Good discussion with Aaron about time series approaches to trajectory detection
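
A minimal sketch of the two cell measures and the trajectory rule above, with my own placeholder names (not the dissertation code): per-cell sum of all agent time, per-cell count of unique visiting agents, and a trajectory that only records a cell when the agent changes cells.

    from collections import defaultdict

    total_time = defaultdict(float)     # cell -> summed time of all agents in that cell
    unique_visits = defaultdict(set)    # cell -> set of agent ids that ever touched it

    def record_step(agent_id, cur_cell, dt, trajectory):
        total_time[cur_cell] += dt
        unique_visits[cur_cell].add(agent_id)
        if not trajectory or cur_cell != trajectory[-1]:   # add to vector if cur != prev
            trajectory.append(cur_cell)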

Phil 10.22.19

7:00 – 4:00 ASRC

  • Dissertation – starting the maps section
  • Need to finish the financial OODA loop section
  • Spending the day at a Navy-sponsored miniconference on AI, ethics, and the military (no wifi at Annapolis, so I’ll put up notes later). This was an odd mix of higher-level execs in suits, retirees, and midshipmen, with a few technical folks sprinkled in. It is clear that for these people, the technology(?) is viewed as AI/ML. The idea that AI is something we don’t actually do yet does not surface at this level. Rather, AI is taken to be something implemented with machine learning, and in particular deep learning.

Phil 10.21.19

7:00 – 8:00 ASRC / Phd

The Journal of Design and Science (JoDS), a joint venture of the MIT Media Lab and the MIT Press, forges new connections between science and design, breaking down the barriers between traditional academic disciplines in the process.

There is a style of propaganda on the rise that isn’t interested in persuading you that something is true. Instead, it’s interested in persuading you that everything is untrue. Its goal is not to influence opinion, but to stratify power, manipulate relationships, and sow the seeds of uncertainty.

Unreal explores the first order effects recent attacks on reality have on political discourse, civics & participation, and its deeper effects on our individual and collective psyche. How does the use of media to design unreality change our trust in the reality we encounter? And, most important, how does cleaving reality into different camps—political, social or philosophical—impact our society and our future?

This looks really nice: The Illustrated GPT-2 (Visualizing Transformer Language Models)

Phil 10.17.19

ASRC GOES 7:00 – 5:30

  • How A Massive Facebook Scam Siphoned Millions Of Dollars From Unsuspecting Boomers (adversarial herding for profit)
    • But the subscription trap was just one part of Ads Inc.’s shady business practices. Burke’s genius was in fusing the scam with a boiler room–style operation that relied on convincing thousands of average people to rent their personal Facebook accounts to the company, which Ads Inc. then used to place ads for its deceptive free trial offers. That strategy enabled his company to run a huge volume of misleading Facebook ads, targeting consumers all around the world in a lucrative and sophisticated enterprise, a BuzzFeed News investigation has found.
  • Finished writing up my post on ensemble NNs: A simple example of ensemble training
  • Dissertation. Working on robot stampedes, though I’m not sure that this is the right place. It could be though, as a story to reinforce the previous sections. Of course, this has caused a lot of rework, but I think I like where it’s going?
  • Good talk with Vadim and Bruce yesterday that was kind of road map-ish
  • Working on the GSAW extended abstract for the rest of the week
    • About a page in. Finished Dr. Li’s paper for reference
  • Artificial Intelligence and Machine Learning in Defense Applications

Phil 10.16.19

7:00 – ASRC GOES

  • Listening to Rachel Maddow on City Arts and Lectures. She’s talking about the power of oil and gas, and how they are essentially anti-democratic. I think that may be true for most extracting industries. They are incentivised to take advantage of the populations that are the gatekeepers to the resource. Which is why you get corruption – it’s cost effective. This also makes me wonder about advertising, which regards consumers as the source to extract money/votes/etc from.
  • Dissertation:
    • Something to add to the discussion section. Primordial jumps are not just made by an individual on a fitness landscape. Sometimes the landscape itself can change, as with a natural disaster. The survivors are presented with an entirely new fitness landscape, often devoid of competition, that they can now climb.
    • This implies that sustained stylistic change creates sophisticated ecosystems, while primordial change disrupts that, and sets the stage for the creation of new ecosystems.
    • Had a really scary moment. Everything with \includegraphics wouldn’t compile. It seems to be a problem with MikTex, as described here. The fix is to place this code after \documentclass:
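      % MiKTeX workaround (see the link above): redefine \set@curr@file so that \includegraphics filenames parse again; goes right after \documentclass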
      \makeatletter
      \def\set@curr@file#1{%
      	\begingroup
      	\escapechar\m@ne
      	\xdef\@curr@file{\expandafter\string\csname #1\endcsname}%
      	\endgroup
      }
      \def\quote@name#1{"\quote@@name#1\@gobble""}
      \def\quote@@name#1"{#1\quote@@name}
      \def\unquote@name#1{\quote@@name#1\@gobble"}
      \makeatother
    • Finished the intro simulation description and results. Next is robot stampedes, then adversarial herding
  • Evolver
    • Check on status
    • Write abstract for GSAW if things worked out
  • GOES-R AI/ML Meeting
    • Lots of AIMS deployment discussion. Config files, version control, etc.
  • AIMS / A2P Meeting
    • Walked through report
    • Showed Vadim’s physics
    • Showed video of the OpenAI robot Rubik’s cube demo to drive home the need for simulation
    • Send an estimate for travel expenses for 2020
    • Put together a physics roadmap with Vadim and Bruce

Phil 10.15.19

7:00 – ASRC GOES

  • Well, I’m pretty sure I missed the filing deadline for a defense in February. Looks like April 20 now?
  • Dissertation – More simulation. Nope, worked on making sure that I actually have all the paperwork done that will let me defend in February.
  • Evolver? Test? Done! It seems to be working. Here’s what I’ve got
  • Ground Truth: Because the MLP is trained on a set of mathematical functions, I have a definitive ground truth that I can extend infinitely. It’s simply a set of ten sin(x) waves of varying frequency:

GroundTruth

  • All Predictions: If you read back through my posts, I’ve discovered how variable a neural network can be when it has the same architecture and training parameters. This variation is based solely on the different random initialization of the weights between layers.
  • I’ve put together a genetic-algorithm-based evolver to determine the best hyperparameters, but because of the variation due to initialization, I have to train an ensemble of models and do a statistical analysis just to see if one set of hyperparameters is truly better than another. The reason is easy to see in the following image. What you are looking at is the input vector being run through ten models that are used to calculate the statistical values of the ensemble. You can see that most values are pretty good, some are a bit off, and some are pretty bonkers. (A minimal sketch of the ensemble setup follows the image.)

All_predictions
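
The ensemble setup behind these plots is conceptually simple: train several copies of the same architecture, which differ only in their random weight initialization, then average their predictions. A minimal sketch with a toy stand-in model and the ten-sin-wave data, not the actual evolver code:

    import numpy as np
    import tensorflow as tf

    def build_model(in_len=100, out_len=100):
        # same architecture every time; only the random init differs between copies
        m = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(in_len,)),
            tf.keras.layers.Dense(200, activation="relu"),
            tf.keras.layers.Dense(out_len)])
        m.compile(optimizer="adam", loss="mse")
        return m

    x = np.linspace(0, 2 * np.pi, 200)
    freqs = np.arange(1.0, 2.0, 0.1)                                   # ten sin(x) waves
    x_train = np.array([np.sin(f * x[:100]) for f in freqs])           # first half of each wave
    y_train = np.array([np.sin(f * x[100:]) for f in freqs])           # its continuation

    models = [build_model() for _ in range(10)]
    for m in models:
        m.fit(x_train, y_train, epochs=200, verbose=0)                 # each copy gets its own init

    preds = np.stack([m.predict(x_train, verbose=0) for m in models])  # (models, waves, samples)
    ensemble_mean = preds.mean(axis=0)                                 # the “Ensemble Average” below
    ensemble_std = preds.std(axis=0)                                   # spread across the ensemble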

  • Ensemble Average: On the whole though, if you take the average of all the ensemble, you get a pretty nice result. And, unlike the single-shot method of training, the likelihood that another ensemble produced with the same architecture will be the same is much higher.

Ensemble_average

  • This is not to say that the model is perfect. The orange curve at the top of the last chart is too low. This model had a mean accuracy of 67%. I’ve just kicked off a much longer run to see if I can find a better architecture using the evolver over 50 generations rather than just 2.
  • Ok, it’s now tomorrow, and I have the full run of 50 generations. Things did get better. We end with a higher mean, but we also have a higher variance. This means that it’s possible that the architecture around generation 23 might actually be better:

50_generations

  • Because all the values are saved in the spreadsheet, I can try that scenario, but let’s see what the best mean looks like as an ensemble when compared to the early run:

Best_all_predictions

  • Wow, that is a lot better. All the models are much closer to each other, and appear to be clustered around the right places. I am genuinely surprised how tidy the clustering is, based on the previous “All Predictions” plot towards the top of this post. On to the ensemble average:

Best_ensemble_average

  • That is extremely close to the “Ground Truth” chart. The orange line is in the right place, for example. The only error that I can see with a cursory visual inspection is that the height of the olive line is a little lower than it should be.
  • Now, I am concerned that there may be two peaks in this fitness landscape that we’re trying to climb. The one that we are looking for is a generalized model that can fit approximate curves. The other case is that the network has simply memorized the curves and will blow up when it sees something different. Let’s test that.
  • First, let’s revisit the training set. This model was trained with extremely clean data. The input is a sin function with varying frequencies, and the evaluation data is the same sin function, picking up where we cut off the training data. Here’s an example of the clean data that was used to train the model:

Clean_input

  • Now let’s try noising that up, so that the model has to figure out what to do based on data it has never seen before (a rough sketch of this noising follows the image):

Noisy_input
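
The noised test input is nothing fancy; a sketch of the idea, with illustrative constants (the real noise level may differ): take the same varying-frequency sin data the model was trained on and add Gaussian noise to it.

    import numpy as np

    x = np.linspace(0, np.pi, 100)
    freqs = np.arange(1.0, 2.0, 0.1)
    clean_input = np.array([np.sin(f * x) for f in freqs])               # the “Clean_input” above
    rng = np.random.default_rng()
    noisy_input = clean_input + rng.normal(0.0, 0.1, clean_input.shape)  # the “Noisy_input” above
    # noisy_input is then run through every model in the ensemble, as before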

  • Let’s see what happened! First, let’s look at all the predictions from the ensemble:

Noisy_predictions

  • The first thing that I notice is that it didn’t blow up. Although the paths from each model are somewhat different, each one got all the paths approximately right, and there is no wild deviation. The worst behavior (as usual?) is the orange band, and possibly the green band. But this looks like it should average well. Let’s take a look:

Noisy_average

  • That seems pretty good. And the orange / green lines are in the right place. It’s the blue, olive, and grey lines that are a little low. Still, pretty happy with this.
  • So, ensembles seem to work very well, and make for resilient, predictable behavior in NN architectures. The cost is that there is much more time required to run many, many models through the system.
  • Work on AI paper
    • Good chat with Aaron – the span of approaches to the “model brittleness problem” can be described using three scenarios:
      • Military: Models used in training and at the start of a conflict may not be worth much during hostilities
      • Waste, Fraud, and Abuse. Clever criminals can figure out how not to get caught. If they know the models being used, they may be able to thwart them better
      • Facial recognition and protest. Currently, protesters in cultures that support large-scale ML-based surveillance try to disguise their identity to the facial recognizers. Developing patterns that are likely to cause errors in recognizers and classifiers may support civil disobedience.
  • Solving Rubik’s Cube with a Robot Hand (openAI)
    • To overcome this, we developed a new method called Automatic Domain Randomization (ADR), which endlessly generates progressively more difficult environments in simulation. This frees us from having an accurate model of the real world, and enables the transfer of neural networks learned in simulation to be applied to the real world.