Author Archives: pgfeldman

Phil 12.20.19

ASRC GOES 7:00 – 4:30

Phil 12.19.19

7:00 – 4:30 ASRC GOES

  • Dissertation
    • Conclusions – got through the intro and starting the hypothesis section
  • NASA GitHub
  • Evolver
    • More documentation for sure, maybe more debugging?
    • Had to update my home system
    • Looks like the fix is working. I ran it again, and no problems
    • A little more documentation before heading down to the NSOF
  • Simulations
    • Meeting with Isaac – Lots of discussion. The question is how to handle the simulations. NOAA is used to these and has extremely high-fidelity ones, but we need sims that can train on many permutations. Here’s an IEEE article on using augmented reality to train robocars that should be cited
      • industry must augment road testing with other strategies to bring out as many edge cases as possible. One method now in use is to test self-driving vehicles in closed test facilities where known edge cases can be staged again and again.
      • Computer simulation provides a way around the limitations of physical testing. Algorithms generate virtual vehicles and then move them around on a digital map that corresponds to a real-world road. If the data thus generated is then broadcast to an actual vehicle driving itself on the same road, the vehicle will interpret the data exactly as if it had come from its own sensors. Think of it as augmented reality tuned for use by a robot.
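The “train on many permutations” requirement above can be sketched as a scenario generator. This is a minimal sketch; the parameter names below are made up for illustration and do not come from any real NOAA simulation:

```python
import itertools

# Hypothetical scenario axes for a training simulation; the names are
# illustrative only, not from any real system.
weather = ["clear", "rain", "solar_storm"]
link_quality = ["nominal", "degraded"]
ground_station = ["station_a", "station_b"]

# Every combination of axis values becomes one training scenario.
scenarios = list(itertools.product(weather, link_quality, ground_station))
print(len(scenarios))  # 3 * 2 * 2 = 12 scenarios
```

Adding one more axis (or one more value on an axis) multiplies the scenario count, which is the point: coverage of edge cases comes from cheap permutation rather than hand-staged tests.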
  • NSOF Meeting
    • UI demonstrations
    • Got my card activated!

Phil 12.18.19

7:00 – 5:30 ASRC GOES

  • Recalls V46 and VB2/NHTSA 19V-818
  • Fireplace
  • Dissertation
    • Pull in Rachel’s comments – done
    • Begin conclusions!
  • More documentation.
    • Creating the readme for the TF2_opt_example
    • Created the new file, and verifying that everything works – looking good
    • Whoops! I was still using
      from tensorflow_core.python.keras import layers
    • instead of
      from tensorflow.keras import layers
    • which gave me a tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at error, at least according to this. Going to have to update the library.
    • Nope – that didn’t work. Trying to clear the GPU directly using CUDA libraries as described here 
      • That causes the execution to stop. I think you have to do something to re-open the GPU
    • Trying Keras clear_session(). It’s tricky, because it can’t be in the GPU context. Seeing if it works in the loop that creates the TFOptimizerTest object.
      • That worked! Just worried that it might have to do with the complexity of the model. This time, the evolver came up with a 980-neuron, one-layer architecture. Last time, it choked on 800 × 5. Rerunning.
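The working pattern above – clearing Keras state between trials, outside the GPU context – can be sketched as a plain-Python loop. The trial and cleanup functions here are stand-ins; in the TF2 case the cleanup passed in would be tf.keras.backend.clear_session:

```python
def run_trials(build_and_train, cleanup, n_trials):
    # Run each build/train trial, then clean up between trials so state
    # (e.g. GPU memory held by the previous model) doesn't leak into the
    # next one. With TF2 Keras, cleanup would be
    # tf.keras.backend.clear_session, called outside any device scope.
    results = []
    for _ in range(n_trials):
        results.append(build_and_train())
        cleanup()  # between trials, not inside the GPU context
    return results

# Stand-in trial and cleanup functions for illustration.
calls = {"train": 0, "cleanup": 0}

def fake_trial():
    calls["train"] += 1
    return calls["train"]

def fake_cleanup():
    calls["cleanup"] += 1

print(run_trials(fake_trial, fake_cleanup, 3))  # → [1, 2, 3]
```

The key design point is that cleanup is sequenced with the loop that creates each optimizer-test object, not buried inside the model-building code where a device context is active.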
  • More on hyperparameter optimization (HPO). These articles go into the scikit libraries
  • An alternate take: An Introductory Example of Bayesian Optimization in Python with Hyperopt – A hands-on example for learning the foundations of a powerful optimization framework
  • Deploy to PyPI
  • Mission Drive meetings
    • Satellite tool kit? STK’s physics-based, multi-domain modeling, simulation, and analysis environment supports the fast, cost-effective, and responsive approaches needed to realize the full value of digital engineering.
    • What’s new in STK 11.7
    • Set up a one hour meeting tomorrow before the main meeting at the NSOF with Isaac. Something about how to recognize the pattern of switching from one satellite ground station to another.
  • In general, Bing directs users to conspiracy-related content, even if they aren’t explicitly looking for it. For example, if you search Bing for comet ping pong, you get Pizzagate-related content in its top 50 results. If you search for fluoride, you get content accusing the U.S. government of poisoning its population. And if you search for sandy hook shooting, you will find sources claiming that the event was a hoax. Google does not show users conspiracy-related content in its top 50 results for any of these queries. (Stanford Internet Observatory)
  • In 2000, Lucas Introna and Helen Nissenbaum published a paper called “Shaping the Web: Why the Politics of Search Engines Matters.” Examining how the internet had developed to that point and where it was likely to go next, Introna and Nissenbaum identified a specific threat facing the public: search engines, they argued, could conceivably be “colonized by specialized interests at the expense of the public good” and cease to be reliable, more or less transparent sources of information. If the authors’ fears of rampant commercialism affecting the way search engines operate were prophetic, it has also become clear that commercial interests are only part of the problem. If Google became a public utility tomorrow, societies would still have to come up with ethical standards for how to deal with harmful content and the vectors, such as data voids, by which it reaches users. 
    • Add cite to the “diversity is algorithmically crowded out” line in the ethical considerations section?

Phil 12.17.19

7:00 – 3:30 ASRC GOES

  • Dissertation. Added in the framing of the ethics setup that I think Aaron was asking for
    • Got some edits back from Rachel!
    • Asked Thom about a copy editor
  • GOES
    • Sent Isaac a note to set up a meeting
    • Working on readme.md file for the evolver, with a tutorial on how to use the module
    • Done with the EvolutionaryOptimizer

Phil 12.16.19

7:00 – 5:00 ASRC GOES

  • Recalls V46 and VB2/NHTSA 19V-818
  • Fireplace
  • Dissertation – took a hammer to the discussion intro and rewrote it. I think it’s better?
  • Gen2 Schedule – done
    • Add database to the plan
  • More PyDoc – Success! Here’s how you do it:
    • Run “python -m pydoc -b”. This will fire up the browser
    • Find your package:

[screenshot: the DocGen1 package listed in the pydoc browser]

    • You can now navigate all the classes and methods. Can’t figure out how to save the whole module though.
    • I think I’m going to try pypi.org/project/pywebcopy/
      • Web scraping and archiving tool written in Python Archive any online website and its assets, css, js and images for offline reading, storage or whatever reasons. It’s easy with pywebcopy.
  • Write a tutorial/quickstart to using the libraries
  • Meeting with Aaron M 6:00 –
  • This looks interesting:  Call for Abstracts: Robots, recommenders and responsibility: where should the media go with AI?
    • The integration of AI-driven tools into the journalistic process raises not only a host of challenging professional, technical and organizational questions. Intense debates about filter bubbles, privacy, shifting power dynamics, gatekeeping, editorial independence and the metrification of journalistic values and fundamental rights also touch upon the legal, ethical, societal and democratic implications that the use of AI in the media can have. So far, much of the discussion has centered around social media platforms. But what are the implications of the introduction of AI-driven tools for the legacy media and its role of informing, being a critical watchdog and providing a forum for public debate? What are the implications of the ongoing trend to automatisation for the realisation of public values and fundamental rights? How do new legal frameworks, such as the GDPR or the plans of the EC to regulate AI affect the media? And are the existing journalistic codes and professional principles useful to guide journalists and editors in an age of AI?

Phil 12.13.19

7:00 – 5:00 ASRC Research & GOES

  • Dissertation. Ethics, ecologies, and Judas Goats
  • Installing project at work. It’s… very different since the last time I used it. Seems better, which is interesting
  • Working on setting up the NextGen AIMS, but making newbie mistakes
  • Submitting a Hadoop/Accumulo feature request for similarity queries

Phil 12.12.19

7:00 – 7:00 ASRC Research

  • 1st International Conference on Autonomic Computing and Self-Organizing Systems – ACSOS 2020
    • Washington DC from August 17 to August 21, 2020
    • Important Dates (tentative)
      • April 1, 2020: Abstract submission deadline
      • April 8, 2020: Paper submission deadline
      • June 8, 2020: Notification to authors
      • July 8, 2020: Camera Ready Deadline
  •  Dissertation
    • Starting on Ethics
    • A Framework for Making Ethical Decisions
      • Decisions about right and wrong permeate everyday life. Ethics should concern all levels of life: acting properly as individuals, creating responsible organizations and governments, and making our society as a whole more ethical. This document is designed as an introduction to making ethical decisions.  It recognizes that decisions about “right” and “wrong” can be difficult, and may be related to individual context. It first provides a summary of the major sources for ethical thinking, and then presents a framework for decision-making.
    • Archipelago-Wide Island Restoration in the Galápagos Islands: Reducing Costs of Invasive Mammal Eradication Programs and Reinvasion Risk
      • Invasive alien mammals are the major driver of biodiversity loss and ecosystem degradation on islands. Over the past three decades, invasive mammal eradication from islands has become one of society’s most powerful tools for preventing extinction of insular endemics and restoring insular ecosystems. As practitioners tackle larger islands for restoration, three factors will heavily influence success and outcomes: the degree of local support, the ability to mitigate for non-target impacts, and the ability to eradicate non-native species more cost-effectively. Investments in removing invasive species, however, must be weighed against the risk of reintroduction. One way to reduce reintroduction risks is to eradicate the target invasive species from an entire archipelago, and thus eliminate readily available sources. We illustrate the costs and benefits of this approach with the efforts to remove invasive goats from the Galapagos Islands. Project Isabela, the world’s largest island restoration effort to date, removed >140,000 goats from >500,000 ha for a cost of US$10.5 million. Leveraging the capacity built during Project Isabela, and given that goat reintroductions have been common over the past decade, we implemented an archipelago-wide goat eradication strategy.
    • Galápagos Monday: When Conservation Means Killing
    • Galápagos Redux: When Is It OK to Kill Goats?
  • Flynn’s proposal defense 11:30 – 1:30
    • Qualitative study of mental models with respect to security?
    • Limited qualitative studies in this area
    • How do you transfer a sophisticated user to a more naive one?
    • The profit model incentivized insecure design
    • Biometric adoption (what about legal?)
    • Experts are more disposed to use biometrics!
    • Government guidance is broad, technical, and hard to use
    • Commercial guidance is narrow and easier, but has a price
    • What was the sampling technique?
    • What does “technical” mean? Technospeak?
    • What about a validation study to show that the approach works better than untrained small business users do? What about confounding variables, like whether companies that participate are more likely to be security aware?

Phil 12.11.19

7:00 – 5:30 ASRC GOES

  • Call dentist – done!
  • Dissertation – finished designing for populations. Ethics are next


  • Evolver
    • Looking at Keras-Tuner (github) to compare Evolver against
    • Installing. Wow. Big. 355MB?
    • Installed the new optevolver whl. No more timeseriesml2 for tuning! Fixed many broken links in code that used timeseriesml2
    • Tried getting the keras-tuner package installed, but it seems to make the GPU invisible? Anyway, it broke everything. After figuring out that “cpu:0” worked just fine but “gpu:0” didn’t (which required setting up some quick code to prove all that), I cleaned out all the TF packages (tensorflow-gpu, tensorboard, and keras-tuner) and reinstalled tensorflow-gpu. Everything is humming happily again, but I need a less destructive Bayesian system.
    • Maybe this? An Introductory Example of Bayesian Optimization in Python with Hyperopt – A hands-on example for learning the foundations of a powerful optimization framework
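As a less destructive baseline than another full tuner install, a pure-Python random search over the same hyperparameter space is one point of comparison. A minimal sketch, where the objective is a toy stand-in for a model’s validation loss (names and ranges are illustrative):

```python
import random

def toy_loss(params):
    # Stand-in for validation loss; minimum at lr=0.01, layers=3.
    return (params["lr"] - 0.01) ** 2 + (params["layers"] - 3) ** 2

random.seed(0)  # deterministic for the sketch
best = None
for _ in range(50):
    # Draw a random candidate from the search space.
    candidate = {"lr": random.uniform(0.0001, 0.1),
                 "layers": random.randint(1, 10)}
    if best is None or toy_loss(candidate) < toy_loss(best):
        best = candidate
print(best)
```

A Bayesian tuner (Hyperopt, keras-tuner) should beat this baseline by spending trials where the loss surface looks promising instead of uniformly; the evolver can be evaluated against both on the same models and data.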
  • Meetings at Mission
    • Erik was stuck at a luncheon for the first meeting
    • Some new commits from Vadim, but he couldn’t make the meeting
    • Discussion about the Artificial Intelligence and Machine Learning, Technology Summit in April, and the AI Tech Connect Spring. Both are very aligned with industry (like AI + 3D Printing), which is not my thing, so I passed. I did suggest that IEEE ICTAI 2020 might be a good fit. Need to send info to John.
    • Still need to get started on the schedule for version 2 development. Include conferences and prep, and minimal assistance.

Phil 12.10.19

7:00 – ASRC GOES

  • Dissertation – got through the stories and games section. Then de-emphasizing lists, etc.
  • LMN prep (done) and demo
  • Evolver
    • Migrate to cookie cutter – done
    • Github – done
    • Try to make a package – done!
    • Start on paper/tutorial for IEEE ICTAI 2020. Need to compare against Bayesian system. Maybe just use the TF optimizer? Same models, same data, and they are very simple

Phil 12.9.19

7:00 – 8:00 ASRC

  • Saw this on Twitter this morning: Training Agents using Upside-Down Reinforcement Learning
    • Traditional Reinforcement Learning (RL) algorithms either predict rewards with value functions or maximize them using policy search. We study an alternative: Upside-Down Reinforcement Learning (Upside-Down RL or UDRL), that solves RL problems primarily using supervised learning techniques. Many of its main principles are outlined in a companion report [34]. Here we present the first concrete implementation of UDRL and demonstrate its feasibility on certain episodic learning problems. Experimental results show that its performance can be surprisingly competitive with, and even exceed that of traditional baseline algorithms developed over decades of research.
  • I wonder how it compares with Stuart Russell’s paper Cooperative Inverse Reinforcement Learning
    • For an autonomous system to be helpful to humans and to pose no unwarranted risks, it needs to align its values with those of the humans in its environment in such a way that its actions contribute to the maximization of value for the humans. We propose a formal definition of the value alignment problem as cooperative inverse reinforcement learning (CIRL). A CIRL problem is a cooperative, partial- information game with two agents, human and robot; both are rewarded according to the human’s reward function, but the robot does not initially know what this is. In contrast to classical IRL, where the human is assumed to act optimally in isolation, optimal CIRL solutions produce behaviors such as active teaching, active learning, and communicative actions that are more effective in achieving value alignment. We show that computing optimal joint policies in CIRL games can be reduced to solving a POMDP, prove that optimality in isolation is suboptimal in CIRL, and derive an approximate CIRL algorithm.
  • Dissertation
    • In the Ethics section, change ‘civilization’ to ‘culture’, and frame it in terms of the simulation – done
    • Last slide should be ‘Thanks for coming to my TED talk’
    • Ping Don’s composer and choreographer, if I can find them
    • Cool! A T-O style universe map (Unmismoobjetivo, via Wikipedia). The logarithmic distance effect is something that I need to look into.
  • Evolver
    • Quickstart
    • User’s guide
    • Finished commenting!
    • Flailing on getting the documentation tools to work.
  • ML Seminar
    • Double Crab Cake Platter (2) – 2 Vegetables – $34.00
    • Went over the Evolver. The Ensemble charts really make an impression, but overall, the code walkthrough is too difficult – there are too many moving parts. I need to write a paper with screengrabs that walks through the whole process. I’ll need to evaluate against Bayesian tuners, but I also have architecture search
    • The venue could be IEEE ICTAI 2020: The IEEE International Conference on Tools with Artificial Intelligence (ICTAI) is a leading Conference of AI in the Computer Society providing a major international forum where the creation and exchange of ideas related to artificial intelligence are fostered among academia, industry, and government agencies. It will be in Baltimore, I think.
  • Meeting with Aaron. He thinks that part of the ethics discussion needs to be an addressing of the status quo

Phil 12.7.19

You can now have an AI DM. AI Dungeon 2. Here’s an article about it: You can do nearly anything you want in this incredible AI-powered game. It looks like a GPT-2 model trained on choose-your-own-adventure stories. Here’s the “how we did it”. Wow

The Toxins We Carry (Whitney Phillips)

  • My proposal is that we begin thinking ecologically, an approach I explore with Ryan Milner, a communication scholar, in our forthcoming book You Are Here: A Field Guide for Navigating Polluted Information. From an ecological perspective, Wardle’s term “information pollution” makes perfect sense. Building on Wardle’s definition, we use the inverted form “polluted information” to emphasize the state of being polluted and to underscore connections between online and offline toxicity. One of the most important of these connections is just how little motives matter to outcomes. Online and off, pollution still spreads, and still has consequences downstream, whether it’s introduced to the environment willfully, carelessly, or as the result of sincere efforts to help. The impact of industrial-scale polluters online—the bigots, abusers, and chaos agents, along with the social platforms that enable them—should not be minimized. But less obvious suspects can do just as much damage. The truth is one of them.
  • Taking an ecological approach to misinformation

Phil 12.5.19

ASRC GOES 7:00 – 4:30, 6:30 – 7:00

  • Write up something for Erik and John?
  • Send gdoc link to Bruce – done
  • Apply for TF Dev invite – done
  • Schedule physical! – done
  • Dissertation – more Designing for populations
  • Evolver
    • Comment EvolutionaryOptimizer – almost done
    • Comment ModelWriter
    • Quickstart
    • User’s guide
    • Comment the excel utils?
  • Waikato meeting with Alex and Panos

Phil 12.4.19

7:00 – 8:00 ASRC GOES

  • Dissertation – back to designing for populations
  • Timesheet revisions
  • Applying for MS Project
  • Evolver – more documentation
  • GOES Meeting
    • Bought a copy of MS Project for $15
    • Send Erik a note about permission to charge for TF Dev Conf
    • Good chat with Bruce about many things, including CASSIE as a Cloud service
    • Re-send links to common satellite dictionary
    • Vadim got a pendulum working
  • Meeting with Roger
    • Got a tour of the new building
    • Lots of VR discussion
    • Some academic future options

Phil 12.3.19

7:00 – 4:00 ASRC GOES

  • Dissertation – reworked the last paragraph of the Reflection and reflex section
  • Evolver – more documentation
  • Send this out to the HCC mailing list: The introvert’s academic “alternative networking” guide
  • Arpita’s proposal defense
    • Stanford: Open information extraction (open IE) refers to the extraction of relation tuples, typically binary relations, from plain text, such as (Mark Zuckerberg; founded; Facebook). The central difference from other information extraction is that the schema for these relations does not need to be specified in advance; typically the relation name is just the text linking two arguments. For example, Barack Obama was born in Hawaii would create a triple (Barack Obama; was born in; Hawaii), corresponding to the open domain relation was-born-in(Barack-Obama, Hawaii).
    • Open Information Extraction 5
    • UKG Open Information Extraction
    • Supervised Ensemble of Open IE
    • Datasets
      • AW-OIE
      • AW-OIE-C
      • WEB
      • NYT
      • PENN
    • Why the choice of 100 dimensions for your semantic embedding? How does it compare to other dimensions?
    • Contextual embedding for NLP?
    • Input-Output Hidden Markov Model (version on GitHub)
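The Stanford open IE definition above – the relation name is just the text linking two arguments – can be illustrated with a toy extractor. This is only a sketch of the triple shape; real open IE systems work from dependency parses, and the pattern list below is made up for illustration:

```python
import re

# Tiny fixed pattern list standing in for learned relation phrases.
RELATIONS = ["was born in", "founded", "lives in"]

def toy_open_ie(sentence):
    # Return (subject, relation, object) for the first matching relation,
    # mimicking open IE output like (Barack Obama; was born in; Hawaii).
    for rel in RELATIONS:
        m = re.match(rf"(.+?)\s+{rel}\s+(.+)", sentence)
        if m:
            return (m.group(1), rel, m.group(2))
    return None

print(toy_open_ie("Barack Obama was born in Hawaii"))
# → ('Barack Obama', 'was born in', 'Hawaii')
```

The key property of open IE shows up even in the toy: no schema is fixed in advance, so `was-born-in(Barack-Obama, Hawaii)` and `founded(Mark-Zuckerberg, Facebook)` fall out of the same mechanism.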