Monthly Archives: December 2019

Phil 12.31.19


Get tix for ET 2020

7:00 – 4:30 PhD

  • Starting slides as a way to do the chapter overviews and summaries
  • GPT-2 agents
    • Got rid of Huggingface’s transformers library. Too much hidden stuff to understand
    • Aaron found a couple of other projects on GitHub – trying those
    • Downloaded the 715M model and associated files

And I’m a guest editor! (IEEE)

Phil 12.30.19

7:00 – 7:00 ASRC PhD


  • Nice visualization, with map-like aspects: The Climate Learning Tree
  •  Dissertation
    • Start JuryRoom section – done!
    • Finished all content!
  • GPT-2 Agents
    • Download big model and try to run it
    • Move models and code out of the transformers project
  • GOES
    • Learning by Cheating (sounds like a mechanism for simulation to work with)
      • Vision-based urban driving is hard. The autonomous system needs to learn to perceive the world and act in it. We show that this challenging learning problem can be simplified by decomposing it into two stages. We first train an agent that has access to privileged information. This privileged agent cheats by observing the ground-truth layout of the environment and the positions of all traffic participants. In the second stage, the privileged agent acts as a teacher that trains a purely vision-based sensorimotor agent. The resulting sensorimotor agent does not have access to any privileged information and does not cheat. This two-stage training procedure is counter-intuitive at first, but has a number of important advantages that we analyze and empirically demonstrate. We use the presented approach to train a vision-based autonomous driving system that substantially outperforms the state of the art on the CARLA benchmark and the recent NoCrash benchmark. Our approach achieves, for the first time, 100% success rate on all tasks in the original CARLA benchmark, sets a new record on the NoCrash benchmark, and reduces the frequency of infractions by an order of magnitude compared to the prior state of the art. For the video that summarizes this work, see this https URL
  • Meeting with Aaron
    • Overview at the beginning of each chapter – look at Aaron’s chapter 5 for an example intro and summary.
    • Callouts in text should match the label
    • hfill to right-justify
    • Footnote goes after punctuation
    • Punctuation goes inside quotes
    • for URL monospace use \texttt{}
    • indent blockquotes 1/2 more tab
    • Non-breaking spaces in names
    • Increase figure sizes in intro
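    • Several of these notes can be collected into one snippet (everything here is illustrative – the label, footnote text, URL, and name are placeholders):

```latex
% Illustrative snippet collecting the formatting notes above
Callouts match the label: see Figure~\ref{fig:overview}.
A claim that needs support.\footnote{Footnote goes after punctuation.}
``Punctuation goes inside quotes,'' like so.
URLs in monospace: \texttt{}
Non-breaking spaces in names: Dr.~Smith
\hfill this text is right-justified
\begin{quote}
  Blockquotes get extra indent -- the standard quote environment handles this.
\end{quote}
```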

Phil 12.28.19

Calculating Political Bias and Fighting Partisanship with AI

  • I tried this with the abstract to a paper that Google Scholar has been suggesting I read:
  • The Thorny Challenge of Making Moral Machines: Ethical Dilemmas with Self-Driving Cars
    • The algorithms that control AVs will need to embed moral principles guiding their decisions in situations of unavoidable harm. Manufacturers and regulators are confronted with three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers. The presented moral machine study is a step towards solving this problem as it tries to learn how people all over the world feel about the alternative decisions the AI of self-driving vehicles might have to make. The global study displayed broad agreement across regions regarding how to handle unavoidable accidents. To master the moral challenges, all stakeholders should embrace the topic of machine ethics: this is a unique opportunity to decide as a community what we believe to be right or wrong, and to make sure that machines, unlike humans, unerringly follow the agreed-upon moral preferences. The integration of autonomous cars will require a new social contract that provides clear guidelines about who is responsible for different kinds of accidents, how monitoring and enforcement will be performed, and how trust among all stakeholders can be engendered.

      The online analyzer found this to have some left bias. I think it’s unable to distinguish between opinion presentation and fact presentation?

Phil 12.27.19

ASRC PhD 7:00 –

  • The difference between “more” (low dimension stampede-ish), and “enough” (grounded and comparative) – from Rebuilding the Social Contract, Part 2
  • Dissertation – finished Limitations!
  • GPT-2
    • Having installed all the transformers-related libraries, I’m testing the evolver to see if it still works. Woohoo! Onward
    • Is this good? It seems to have choked on the Torch examples, which makes sense
      D:\Development\Sandboxes\transformers>make test-examples
      python -m pytest -n auto --dist=loadfile -s -v ./examples/
      ================================================= test session starts =================================================
      platform win32 -- Python 3.7.4, pytest-5.3.2, py-1.8.0, pluggy-0.13.1 -- D:\Program Files\Python37\python.exe
      cachedir: .pytest_cache
      rootdir: D:\Development\Sandboxes\transformers
      plugins: forked-1.1.3, xdist-1.31.0
      [gw0] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw1] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw2] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw3] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw4] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw5] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw6] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw7] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw0] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw1] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw2] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw3] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw4] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw5] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw6] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw7] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      gw0 [0] / gw1 [0] / gw2 [0] / gw3 [0] / gw4 [0] / gw5 [0] / gw6 [0] / gw7 [0]
      scheduling tests via LoadFileScheduling
      ======================================================= ERRORS ========================================================
      _____________________________________ ERROR collecting examples/ ______________________________________
      ImportError while importing test module 'D:\Development\Sandboxes\transformers\examples\'.
      Hint: make sure your test modules/packages have valid Python names.
      examples\ in 
          import run_generation
      examples\ in 
          import torch
      E   ModuleNotFoundError: No module named 'torch'
      _________________________ ERROR collecting examples/summarization/ _________________________
      ImportError while importing test module 'D:\Development\Sandboxes\transformers\examples\summarization\'.
      Hint: make sure your test modules/packages have valid Python names.
      examples\summarization\ in 
          import torch
      E   ModuleNotFoundError: No module named 'torch'
      ================================================== 2 errors in 1.57s ==================================================
      make: *** [test-examples] Error 1
    • Hmm, it seems to need Torch. This sets off a whole bunch of issues. First, installing: the Torch site provides a cool little tool to determine what to install.
    • Note that the available versions of CUDA for Torch are 9.2 and 10.1. This is a problem, because at the moment, TF only works with 10.0 – mostly because the user community hates upgrading drivers.
    • That being said, it may be true that the release-candidate TF is using CUDA 10.1.
    • I think I’m going to wait until Aaron shows up to decide if I want to jump down this rabbit hole. In the meantime, I’m going to look at other TF implementations of GPT-2. Also, the actual use of Torch seems pretty minor, so maybe it’s avoidable?
      • It appears to be just this method
        def set_seed(args):
            torch.manual_seed(args.seed)
            if args.n_gpu > 0:
                torch.cuda.manual_seed_all(args.seed)
      • And the code that calls it
            args.device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu")
            args.n_gpu = torch.cuda.device_count()
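    • Since the torch usage is that localized, one escape hatch would be making the import optional (a sketch – get_device and no_cuda here are my own names, not from the transformers code):

```python
# Sketch: make torch optional so the rest of the code can run CPU-only
# when torch/CUDA isn't installed. All names here are illustrative.
try:
    import torch
    HAS_TORCH = True
except ImportError:
    HAS_TORCH = False

def get_device(no_cuda=False):
    """Return "cuda" only when torch is importable and a GPU is usable."""
    if HAS_TORCH and not no_cuda and torch.cuda.is_available():
        return "cuda"
    return "cpu"
```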
    • Aaron suggests using a previous version of torch that is compatible with CUDA 10.0. All the previous versions are here, and this is the line that should work (the huggingface transformers repo is “tested on Python 3.5+, PyTorch 1.0.0+ and TensorFlow 2.0.0-rc1”):
      pip install torch==1.2.0 torchvision==0.4.0 -f

Phil 12.26.19

ASRC PhD 7:00 – 4:00

  • Dissertation
    • Limitations
  • GPT-2 agents setup – set up the project, but in the process of getting the huggingface transformers, I wound up setting up that project as well
    • Following the installation directions:
      • pip install transformers
      • git clone
        • cd transformers
        • pip install .
      • pip install -e .[testing]
        • make test – oops. My GNU Make wasn’t on the path – fixed it
        • running tests
          • Some passed, some failed. Errors like: tests/ Fatal Python error: Aborted
          • Sure is keeping the processor busy… Like bringing the machine to its knees busy….
          • Finished – 14 failed, 10 passed, 196 skipped, 20 warnings in 1925.12s (0:32:05)
  • Fixed the coffee maker
  • Dealt with stupid credit card nonsense

Phil 12.25.19

6:30 – 10:30 ASRC PhD

  • Put together a list of journals for Antonio’s transportation paper
  • Looking at Moby-Dick as an entrance to the limitations section. It may be too big a reach
  • Got Rachel’s chapter 5 as a template
  • Dissertation –
    • Added my research question to the Hypothesis introduction.
    • H4 – done!
    • H4a – Done!

Phil 12.24.19

ASRC PhD 6:30 – 9:30

  • The Worldwide Web of Chinese and Russian Information Controls
    • The global diffusion of Chinese and Russian information control technology and techniques has featured prominently in the headlines of major international newspapers.1 Few stories, however, have provided a systematic analysis of both the drivers and outcomes of such diffusion. This paper does so – and finds that these information controls are spreading more efficiently to countries with hybrid or authoritarian regimes, particularly those that have ties to China or Russia. Chinese information controls spread more easily to countries along the Belt and Road Initiative; Russian controls spread to countries within the Commonwealth of Independent States. In arriving at these findings, this working paper first defines the Russian and Chinese models of information control and then traces their diffusion to the 110 countries within the countries’ respective technological spheres, which are geographical areas and spheres of influence to which Russian and Chinese information control technology, techniques of handling information, and law have diffused.
  • Wrote up some preliminary thoughts on Antonio’s Autonomous Shuttles concept. Need to share the doc
  • Listening to World Affairs Council, and the idea of B-Corporations came up, which are a kind of contractual mechanism for diversity injection?
    • Certified B Corporations are a new kind of business that balances purpose and profit. They are legally required to consider the impact of their decisions on their workers, customers, suppliers, community, and the environment. This is a community of leaders, driving a global movement of people using business as a force for good.
    • Deciding to leave this out of the dissertation, since I’m more focused on individual interfaces with global effects as opposed to corporate legal structures. It’s just too tangential.
  • Dissertation
    • H3 conclusions – done!


Phil 12.23.19

7:00 – 4:30 ASRC

  • 2020 International Conference on Social Computing, Behavioral-Cultural Modeling, & Prediction and Behavior Representation in Modeling and Simulation
    • SBP-BRiMS is an interdisciplinary computational social science conference focused on both modeling complex socio-technical systems and using computational techniques to reason about and study complex socio-technical systems. The participants in this conference take part in forming the conversation on how computation is shaping the modern world and helping us to better understand and reason about human behavior. Both papers addressing basic research and those addressing applied research are accepted. All methodological approaches are encouraged; however, the vast majority of papers use computer simulation, network analysis or machine learning as the method of choice in addressing human social and behavioral activities. At the conference, these paper presentations are complemented by data science challenge problems, demonstrations of new technologies, and a government funding panel.
    • Regular Paper Submission (10-page max): 21-February-2020 (Midnight EST)
    • Tuesday, July 14, 2020 – Friday, July 17, 2020 George Washington University, Washington DC, USA
  • Dissertation
    • More conclusions. Got through H2
  • Evolver
    • Figuring out how to merge changes from develop onto master. Hooray – success! The IntelliJ directions (here) were very helpful.
    • And everything is now visible on GitHub
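    • For reference, a command-line version of that merge, run in a throwaway repo (the config values and file names are placeholders; in a real repo, a `git push origin master` would follow):

```shell
# Throwaway demo of merging develop back onto master -- the CLI
# equivalent of the IntelliJ steps.
cd "$(mktemp -d)"
git init -q .
git config "Phil"
git config "phil@example.com"
echo "v1" > notes.txt
git add notes.txt
git commit -qm "initial commit"
git branch -M master            # normalize the default branch name
git checkout -qb develop        # do some work on develop
echo "v2" >> notes.txt
git commit -qam "work on develop"
git checkout -q master
git merge -q develop            # fast-forwards master to develop
```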

Phil 12.20.19

ASRC GOES 7:00 – 4:30

Phil 12.19.19

7:00 – 4:30 ASRC GOES

  • Dissertation
    • Conclusions – got through the intro and starting the hypothesis section
  • NASA GitHub
  • Evolver
    • More documentation for sure, maybe more debugging?
    • Had to update my home system
    • Looks like the fix is working. I ran it again, and no problems
    • A little more documentation before heading down to the NSOF
  • Simulations
    • Meeting with Isaac – Lots of discussion. The question is how to handle the simulations. NOAA is used to these and has extremely high fidelity ones, but we need sims that can train on many permutations. Here’s an IEEE article on augmented reality training robocars that should be cited
      • industry must augment road testing with other strategies to bring out as many edge cases as possible. One method now in use is to test self-driving vehicles in closed test facilities where known edge cases can be staged again and again.
      • Computer simulation provides a way around the limitations of physical testing. Algorithms generate virtual vehicles and then move them around on a digital map that corresponds to a real-world road. If the data thus generated is then broadcast to an actual vehicle driving itself on the same road, the vehicle will interpret the data exactly as if it had come from its own sensors. Think of it as augmented reality tuned for use by a robot.
  • NSOF Meeting
    • UI demonstrations
    • Got my card activated!

Phil 12.18.19

7:00 – 5:30 ASRC GOES

  • Recalls V46 and VB2/NHTSA 19V-818
  • Fireplace
  • Dissertation
    • Pull in Rachel’s comments – done
    • Begin conclusions!
  • More documentation.
    • Creating the readme for the TF2_opt_example
    • Created the new file, and verifying that everything works – looking good
    • Whoops! I was still using
      from tensorflow_core.python.keras import layers
    • instead of
      from tensorflow.keras import layers
    • which gave me a tensorflow/core/common_runtime InUse error, at least according to this. Going to have to update the library.
    • Nope – that didn’t work. Trying to clear the GPU directly using CUDA libraries as described here 
      • That causes the execution to stop. I think you have to do something to re-open the GPU
    • Trying Keras clear_session(). It’s tricky, because it can’t be in the GPU context. Seeing if it works in the loop that creates the TFOptimizerTest object.
      • That worked! Just worried that it might have to do with the complexity of the model. This time, the evolver came up with a 980-neuron, one-layer architecture. Last time, it choked on 800 X 5. Rerunning.
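    • The pattern that worked, boiled down (a sketch with illustrative names; in the evolver the cleanup call is tensorflow.keras.backend.clear_session(), invoked outside the GPU device context):

```python
# Sketch: release framework state between candidate models so one big
# architecture can't leave the GPU allocator in an InUse state.
# evaluate_candidates and build_and_score are illustrative names.
def evaluate_candidates(candidates, build_and_score, cleanup=lambda: None):
    scores = []
    for params in candidates:
        scores.append(build_and_score(params))
        cleanup()  # e.g. tf.keras.backend.clear_session(), outside the GPU context
    return scores
```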
  • More on hyperparameter optimization (HPO). These articles goes into the scikit libraries
  • An alternate take: An Introductory Example of Bayesian Optimization in Python with Hyperopt A hands-on example for learning the foundations of a powerful optimization framework
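  • The core idea behind these frameworks can be sketched without any of them – random search over a parameter space, keeping the best result (the space and objective below are made up for illustration; this is not the scikit/hyperopt API):

```python
import random

def random_search(objective, space, n_trials=100, seed=0):
    """Minimal random-search HPO sketch: sample each parameter from its
    candidate list, keep the lowest-loss configuration seen."""
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(n_trials):
        params = {k: rng.choice(v) for k, v in space.items()}
        loss = objective(params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

# hypothetical space and objective (a quadratic bowl whose optimum is in the grid)
space = {"lr": [0.001, 0.01, 0.1], "layers": [1, 2, 3]}
best, loss = random_search(lambda p: (p["lr"] - 0.01) ** 2 + (p["layers"] - 2) ** 2, space)
```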
  • Deploy to PiPy
  • Mission Drive meetings
    • Satellite tool kit? STK’s physics-based, multi-domain modeling, simulation, and analysis environment supports the fast, cost-effective, and responsive approaches needed to realize the full value of digital engineering.
    • What’s new in STK 11.7
    • Set up a one hour meeting tomorrow before the main meeting at the NSOF with Isaac. Something about how to recognize the pattern of switching from one satellite ground station to another.
  • In general, Bing directs users to conspiracy-related content, even if they aren’t explicitly looking for it. For example, if you search Bing for comet ping pong, you get Pizzagate-related content in its top 50 results. If you search for fluoride, you get content accusing the U.S. government of poisoning its population. And if you search for sandy hook shooting, you will find sources claiming that the event was a hoax. Google does not show users conspiracy-related content in its top 50 results for any of these queries. (Stanford Internet Observatory)
  • In 2000, Lucas Introna and Helen Nissenbaum published a paper called “Shaping the Web: Why the Politics of Search Engines Matters.” Examining how the internet had developed to that point and where it was likely to go next, Introna and Nissenbaum identified a specific threat facing the public: search engines, they argued, could conceivably be “colonized by specialized interests at the expense of the public good” and cease to be reliable, more or less transparent sources of information. If the authors’ fears of rampant commercialism affecting the way search engines operate were prophetic, it has also become clear that commercial interests are only part of the problem. If Google became a public utility tomorrow, societies would still have to come up with ethical standards for how to deal with harmful content and the vectors, such as data voids, by which it reaches users. 
    • Add cite to the “diversity is algorithmically crowded out” line in the ethical considerations section?

Phil 12.17.19

7:00 – 3:30 ASRC GOES

  • Dissertation. Added in the framing of the ethics setup that I think Aaron was asking for
    • Got some edits back from Rachel!
    • Asked Thom about a copy editor
  • GOES
    • Sent Isaac a note to set up a meeting
    • Working on file for the evolver, with a tutorial on how to use the module
    • Done with the EvolutionaryOptomizer


7:00 – 5:00 ASRC GOES

  • Recalls V46 and VB2/NHTSA 19V-818
  • Fireplace
  • Dissertation – took a hammer to the discussion intro and rewrote it. I think it’s better?
  • Gen2 Schedule – done
    • Add database to the plan
  • More PyDoc – Success! Here’s how you do it:
    • Run “python -m pydoc -b”. This will fire up the browser
    • Find your package in the browser listing
    • You can now navigate all the classes and methods. Can’t figure out how to save the whole module though.
    • I think I’m going to try
      • Web scraping and archiving tool written in Python Archive any online website and its assets, css, js and images for offline reading, storage or whatever reasons. It’s easy with pywebcopy.
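    • Before reaching for a scraper, note that pydoc can also write its HTML straight to disk – pydoc.writedoc() saves one <name>.html per module, and pydoc.writedocs(dir) walks a whole source directory:

```python
import os
import pydoc
import tempfile

# Write pydoc's HTML page for a module directly, no -b server needed.
os.chdir(tempfile.mkdtemp())
pydoc.writedoc("json")                  # writes json.html to the cwd
print(os.path.exists("json.html"))
```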
  • Write a tutorial/quickstart to using the libraries
  • Meeting with Aaron M 6:00 –
  • This looks interesting:  Call for Abstracts: Robots, recommenders and responsibility: where should the media go with AI?
    • The integration of AI-driven tools into the journalistic process raises not only a host of challenging professional, technical and organizational questions. Intense debates about filter bubbles, privacy, shifting power dynamics, gatekeeping, editorial independence and the metrification of journalistic values and fundamental rights also touch upon the legal, ethical, societal and democratic implications that the use of AI in the media can have. So far, much of the discussion has centered around social media platforms. But what are the implications of the introduction of AI-driven tools for the legacy media and its role of informing, being a critical watchdog and providing a forum for public debate? What are the implications of the ongoing trend to automatisation for the realisation of public values and fundamental rights? How do new legal frameworks, such as the GDPR or the plans of the EC to regulate AI affect the media? And are the existing journalistic codes and professional principles useful to guide journalists and editors in an age of AI?

Phil 12.13.19

7:00 – 5:00 ASRC Research & GOES

  • Dissertation. Ethics, ecologies, and Judas Goats
  • Installing project at work. It’s… very different since the last time I used it. Seems better, which is interesting
  • Working on setting up the NextGen AIMS, but making newbie mistakes
  • Submitting a Hadoop/Accumulo feature request for similarity queries

Phil 12.12.19

7:00 – 7:00 ASRC Research

  • 1st International Conference on Autonomic Computing and Self-Organizing Systems – ACSOS 2020
    • Washington DC from August 17 to August 21, 2020
    • Important Dates (tentative)
      • April 1, 2020: Abstract submission deadline
      • April 8, 2020: Paper submission deadline
      • June 8, 2020: Notification to authors
      • July 8, 2020: Camera Ready Deadline
  •  Dissertation
    • Starting on Ethics
    • A Framework for Making Ethical Decisions
      • Decisions about right and wrong permeate everyday life. Ethics should concern all levels of life: acting properly as individuals, creating responsible organizations and governments, and making our society as a whole more ethical. This document is designed as an introduction to making ethical decisions.  It recognizes that decisions about “right” and “wrong” can be difficult, and may be related to individual context. It first provides a summary of the major sources for ethical thinking, and then presents a framework for decision-making.
    • Archipelago-Wide Island Restoration in the Galápagos Islands: Reducing Costs of Invasive Mammal Eradication Programs and Reinvasion Risk
      • Invasive alien mammals are the major driver of biodiversity loss and ecosystem degradation on islands. Over the past three decades, invasive mammal eradication from islands has become one of society’s most powerful tools for preventing extinction of insular endemics and restoring insular ecosystems. As practitioners tackle larger islands for restoration, three factors will heavily influence success and outcomes: the degree of local support, the ability to mitigate for non-target impacts, and the ability to eradicate non-native species more cost-effectively. Investments in removing invasive species, however, must be weighed against the risk of reintroduction. One way to reduce reintroduction risks is to eradicate the target invasive species from an entire archipelago, and thus eliminate readily available sources. We illustrate the costs and benefits of this approach with the efforts to remove invasive goats from the Galapagos Islands. Project Isabela, the world’s largest island restoration effort to date, removed >140,000 goats from >500,000 ha for a cost of US$10.5 million. Leveraging the capacity built during Project Isabela, and given that goat reintroductions have been common over the past decade, we implemented an archipelago-wide goat eradication strategy.
    • Galápagos Monday: When Conservation Means Killing
    • Galápagos Redux: When Is It OK to Kill Goats?
  • Flynn’s proposal defense 11:30 – 1:30
    • Qualitative study of mental models with respect to security?
    • Limited qualitative studies in this area
    • How do you transfer a sophisticated user to a more naive one?
    • The profit model incentivised insecure design
    • Biometric adoption (what about legal?)
    • Experts are more disposed to use biometrics!
    • Government guidance is broad, technical, and hard to use
    • Commercial guidance is narrow and easier, but has a price
    • What was the sampling technique?
    • What does “technical” mean? Technospeak?
    • What about a validation study to show that the approach works better than it does for untrained small-business users? What about confounding variables, like whether companies that participate are more likely to be security-aware?