Category Archives: thesis

Phil 11.12.19

7:00 – 4:00 ASRC GOES

  • Dissertation – Human study discussion
    • “Degrees of freedom” are different from “dimensions”. A dimension, as used in machine learning, is a single parameter that can be varied, discretely or continuously. Degrees of freedom define a continuous space that can contain things not captured by the dimensions. Latitude and longitude do not define the globe; they serve as a way to show relationships between regions on the globe.
  • How news media are setting the 2020 election agenda: Chasing daily controversies, often burying policy
    • Our topic analysis of ~10,000 news articles on the 2020 Democratic candidates, published between March and October in an ideologically diverse range of 28 news outlets, reveals that political coverage, at least this cycle, tracks with the ebbs and flows of scandals, viral moments and news items, from accusations of Joe Biden’s inappropriate behavior towards women to President Trump’s phone call with Ukraine. (A big thanks to Media Cloud.)
  • Neat visualization – a heatmap plus a mean. I’d like to try adding things like variance to this. From Large scale and information effects on cooperation in public good games. Looks like the Seaborn library might be able to do this (a rough sketch is below the image).

Heatmap
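
A minimal Seaborn sketch of the idea: a heatmap of the raw values with a mean-and-variance panel underneath. The data, group sizes, and the variance band are my own stand-ins, not values from the paper.

  import numpy as np
  import pandas as pd
  import seaborn as sns
  import matplotlib.pyplot as plt
  
  # Fake "contribution over rounds" data: rows are rounds, columns are group sizes.
  # These numbers are made up purely to exercise the plot.
  rng = np.random.default_rng(0)
  rounds = 20
  group_sizes = [2, 4, 8, 16, 32]
  data = pd.DataFrame(
      rng.random((rounds, len(group_sizes))).cumsum(axis=0) / np.arange(1, rounds + 1)[:, None],
      columns=group_sizes)
  
  fig, (ax_heat, ax_mean) = plt.subplots(2, 1, figsize=(6, 6))
  
  # Heatmap of the raw values
  sns.heatmap(data.T, ax=ax_heat, cmap="viridis", cbar_kws={"label": "contribution"})
  ax_heat.set_xlabel("round")
  ax_heat.set_ylabel("group size")
  
  # Mean (and a ±1 std band) across group sizes for each round
  mean = data.mean(axis=1)
  std = data.std(axis=1)
  ax_mean.plot(data.index, mean, label="mean")
  ax_mean.fill_between(data.index, mean - std, mean + std, alpha=0.3, label="±1 std")
  ax_mean.set_xlabel("round")
  ax_mean.set_ylabel("contribution")
  ax_mean.legend()
  
  plt.tight_layout()
  plt.show()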

  • Evolver – more GPU allocation and threading
    • Training – load and unload GPUs using thread pools
      • Updating EvolutionaryOptimizer
        • got threads working
        • Added enums, which meant that I had to handle enum key values in my ExcelUtils class
        • Updated the TimeSeriesML2 whl
        • Started folding gpu management into PyBullet. Making sure that everything still works first… It does!
      • Ok, back to TimeSeriesML2 to make nested genomes
        • Added a parent/child relationship to EvolveAxis so that it’s possible for a top-level parent (self.parent == None) to step down the tree of all the children to get the new appropriate values. These will need to be assembled into an argument string. Figure that part out tomorrow (a rough sketch of the idea is at the end of this list).
    • Predicting – load and use models in real time
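
As a note to self, the traversal might look something like the sketch below. EvolveAxis here is a toy stand-in for the real class in TimeSeriesML2, and the argument-string assembly is exactly the part that still needs to be figured out.

  import random
  from typing import List, Optional
  
  class EvolveAxis:
      """Toy stand-in: one evolvable parameter that can own child axes."""
      def __init__(self, name: str, values: List, parent: Optional["EvolveAxis"] = None):
          self.name = name
          self.values = values
          self.cur_value = values[0]
          self.parent = parent
          self.children: List["EvolveAxis"] = []
          if parent is not None:
              parent.children.append(self)
  
      def step(self):
          """Pick a new value for this axis, then recurse down to all children."""
          self.cur_value = random.choice(self.values)
          for child in self.children:
              child.step()
  
      def to_arg_string(self) -> str:
          """Assemble this axis and its descendants into a command-line-style argument string."""
          s = "--{}={}".format(self.name, self.cur_value)
          for child in self.children:
              s += " " + child.to_arg_string()
          return s
  
  if __name__ == "__main__":
      # only the top-level parent (self.parent == None) gets stepped directly
      root = EvolveAxis("optimizer", ["adam", "sgd"])
      EvolveAxis("lr", [0.01, 0.001, 0.0001], parent=root)
      EvolveAxis("layers", [1, 2, 3], parent=root)
      root.step()
      print(root.to_arg_string())  # e.g. --optimizer=sgd --lr=0.001 --layers=2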

Phil 11.11.19

7:00 –  8:00 PhD

Phil 11.8.19

7:00 – 3:00 ASRC GOES

  • Dissertation
    • Usability study! Done!
    • Discussion. This is going to take some framing. I want to tie it back to earlier navigation, particularly the transition from stories and mappaemundi to isotropic maps of Ptolemy and Mercator.
  • Sent Don and Danilo sql file
  • Start satellite component list
  • Evolver
    • Adding threads to handle the GPU. This looks like what I want (from here):
      import logging
      import concurrent.futures
      import threading
      import time
      
      def thread_function(name):
          logging.info("Task %s: starting on thread %s", name, threading.current_thread().name)
          time.sleep(2)
          logging.info("Task %s: finishing on thread %s", name, threading.current_thread().name)
      
      if __name__ == "__main__":
          num_tasks = 5
          num_gpus = 1
          format = "%(asctime)s: %(message)s"
          logging.basicConfig(format=format, level=logging.INFO,
                              datefmt="%H:%M:%S")
      
          with concurrent.futures.ThreadPoolExecutor(max_workers=num_gpus) as executor:
              result = executor.map(thread_function, range(num_tasks))
      
          logging.info("Main    : all done")

      As you can see, it’s possible to have a thread for each GPU while having the threads iterate over a larger set of tasks. Now I need to extract the GPU name from the thread info. In other words, ThreadPoolExecutor-0_0 needs to map to gpu:1.

    • Ok, this seems to do everything I need, with less cruft:
      import concurrent.futures
      import threading
      import time
      from typing import Dict
      import re
      
      last_num_in_str_re = r'(\d+)(?!.*\d)'  # matches the last number in a string
      prog = re.compile(last_num_in_str_re)
      
      def thread_function(args: Dict):
          num = prog.search(threading.current_thread().name) # get the last number in the worker thread's name
          gpu_str = "gpu:{}".format(int(num.group(0))+1)
          print("{}: starting on  {}".format(args["name"], gpu_str))
          time.sleep(args["value"])
          print("{}: finishing on  {}, after sleeping {} seconds".format(args["name"], gpu_str, args["value"]))
      
      if __name__ == "__main__":
          num_tasks = 5
          num_gpus = 2  # two worker threads, one per GPU (matches the output below)
          task_list = []
          for i in range(num_tasks):
              task = {"name":"task_{}".format(i), "value":2+(i/10)}
              task_list.append(task)
          with concurrent.futures.ThreadPoolExecutor(max_workers=num_gpus) as executor:
              result = executor.map(thread_function, task_list)
      
          print("Finished Main")

      And that gives me:

      task_0: starting on  gpu:1
      task_1: starting on  gpu:2
      task_0: finishing on  gpu:1, after sleeping 2.0 seconds
      task_2: starting on  gpu:1
      task_1: finishing on  gpu:2, after sleeping 2.1 seconds
      task_3: starting on  gpu:2
      task_2: finishing on  gpu:1, after sleeping 2.2 seconds
      task_4: starting on  gpu:1
      task_3: finishing on  gpu:2, after sleeping 2.3 seconds
      task_4: finishing on  gpu:1, after sleeping 2.4 seconds
      Finished Main

      So the only thing left is to integrate this into TimeSeriesML2.

Phil 11.7.19

7:00 – 5:00 ASRC GOES

  • Dissertation
  • ML+Sim
    • Save actual and inferred efficiency to excel and plot
    • Create an illustration that shows how the network is trained, validated against the sim, then integrated into the operating system. (maybe show a physical testbed for evaluation?)
    • Demo at the NSOF
      • Went ok. The next step is a sufficiently realistic model that can interpret an actual malfunction
      • Put together a Google Doc/Sheet that has the common core elements with which we can model most satellites (LEO, MEO, GEO, and HEO?). What are the common components between cubesats and the James Webb?
      • Detection of station-keeping failure is a possibility
      • Also, high-dynamic phases, like orbit injection, might be low-ish fruit
    • Tomorrow, continue on the GPU assignment in the evolver

Phil 11.6.19

7:00 – 3:00 ASRC GOES

  • Simulation for training ML at UMD: Improved simulation system developed for self-driving cars 
    • University of Maryland computer scientist Dinesh Manocha, in collaboration with a team of colleagues from Baidu Research and the University of Hong Kong, has developed a photo-realistic simulation system for training and validating self-driving vehicles. The new system provides a richer, more authentic simulation than current systems that use game engines or high-fidelity computer graphics and mathematically rendered traffic patterns.
  • Dissertation
    • Send out email setting the date/time to Feb 21, from 11:00 – 1:00. Ask if folks could move the time earlier or later for Wayne – done
    • More human study – I think I finally have a good explanation of the text convergence.
  • Maybe work on the evolver?
    • Add nested variables
    • Look at keras-tuner code to see how GPU assignment is done
      • So it looks like they are using gRPC as a way to communicate between processes? grpc
      • I mean, like separate processes, communicating via ports grpc2
      • Oh. This is why. From the tf.distribute documentation tf.distribute
      • No – wait. This is from the TF distributed training overview page: tf.distribute2
      • And that seems to straight up work (assuming that multiple GPUs can be called). Here’s an example of training:
        # sequence_length, num_funcs, rows_per_func, model_name, and generate_train_test()
        # are defined elsewhere in the project
        import tensorflow as tf
        from tensorflow.keras import layers
        
        strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
        with strategy.scope():
            model = tf.keras.Sequential()
            # Adds a densely-connected layer with 64 units to the model:
            model.add(layers.Dense(sequence_length, activation='relu', input_shape=(sequence_length,)))
            # Add another:
            model.add(layers.Dense(200, activation='relu'))
            model.add(layers.Dense(200, activation='relu'))
            # Add a layer with 10 output units:
            model.add(layers.Dense(sequence_length))
        
            loss_func = tf.keras.losses.MeanSquaredError()
            opt_func = tf.keras.optimizers.Adam(0.01)
            model.compile(optimizer= opt_func,
                          loss=loss_func,
                          metrics=['accuracy'])
        
            noise = 0.0
            full_mat, train_mat, test_mat = generate_train_test(num_funcs, rows_per_func, noise)
        
            model.fit(train_mat, test_mat, epochs=70, batch_size=13)
            model.evaluate(train_mat, test_mat)
        
            model.save(model_name)

        And here’s an example of predicting

        # model_name, num_funcs, rows_per_func, noise, and generate_train_test()
        # are defined elsewhere in the project
        import random
        import numpy as np
        import tensorflow as tf
        
        strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
        with strategy.scope():
            model = tf.keras.models.load_model(model_name)
            full_mat, train_mat, test_mat = generate_train_test(num_funcs, rows_per_func, noise)
        
            predict_mat = model.predict(train_mat)
        
            # Let's try some immediate inference
            for i in range(10):
                pitch = random.random()/2.0 + 0.5
                roll = random.random()/2.0 + 0.5
                yaw = random.random()/2.0 + 0.5
                inp_vec = np.array([[pitch, roll, yaw]])
                eff_mat = model.predict(inp_vec)
                print("input: pitch={:.2f}, roll={:.2f}, yaw={:.2f}  efficiencies: pitch={:.2f}%, roll={:.2f}%, yaw={:.2f}%".
                      format(inp_vec[0][0], inp_vec[0][1], inp_vec[0][2], eff_mat[0][0]*100, eff_mat[0][1]*100, eff_mat[0][2]*100))
    • Look at TF code to see if it makes sense to add to the project. Doesn’t look like it, but I think I can make a nice hyperparameter/architecture search API using this, once validated (a rough sketch of how the pieces might combine is below this list)
  • Mission Drive meeting and demo – went ok. Will Demo at NSOF tomorrow
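
A minimal sketch of what that might look like: a ThreadPoolExecutor with one worker per GPU, where each worker maps its thread name to a tf.distribute.OneDeviceStrategy device. Everything here is a stand-in — the genome dicts, the throwaway model, and the assumption of two visible GPUs.

  import concurrent.futures
  import re
  import threading
  
  import tensorflow as tf
  
  NUM_GPUS = 2  # assumption: two visible GPUs
  prog = re.compile(r'(\d+)(?!.*\d)')  # last number in the worker thread's name
  
  def train_task(task: dict):
      # ThreadPoolExecutor-0_0 -> /gpu:0, ThreadPoolExecutor-0_1 -> /gpu:1, ...
      worker_num = int(prog.search(threading.current_thread().name).group(0))
      device = "/gpu:{}".format(worker_num % NUM_GPUS)
      strategy = tf.distribute.OneDeviceStrategy(device=device)
      with strategy.scope():
          # throwaway model standing in for whatever the evolver hands this worker
          model = tf.keras.Sequential([tf.keras.layers.Dense(task["units"], activation="relu"),
                                       tf.keras.layers.Dense(1)])
          model.compile(optimizer="adam", loss="mse")
      return task["name"], device
  
  if __name__ == "__main__":
      tasks = [{"name": "genome_{}".format(i), "units": 32 * (i + 1)} for i in range(5)]
      with concurrent.futures.ThreadPoolExecutor(max_workers=NUM_GPUS) as executor:
          for name, device in executor.map(train_task, tasks):
              print("{} built on {}".format(name, device))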

Phil 11.5.19

“Everything that we see is a shadow cast by that which we do not see.” – Dr. King

misinfo

Transformer

ASRC GOES 7:00 – 4:30

  • Dissertation – more human study. Pretty smooth progress right now!
  • Cleaning up the sim code for tomorrow – done. All the prediction and manipulation to change the position data for the RWs and the vehicle are done in the inference section, while the updates to the drawing nodes are separated.
  • I think this is the code to generate GPT-2 Agents?: github.com/huggingface/transformers/blob/master/examples/run_generation.py

Phil 11.4.19

7:00 – 9:00 ASRC GOES

  • Cool thing: Our World in Data
    • The goal of our work is to make the knowledge on the big problems accessible and understandable. As we say on our homepage, Our World in Data is about Research and data to make progress against the world’s largest problems.
  • Dissertation – more human study
  • This is super-cool: The Future News Pilot Fund: Call for ideas
    • Between February and June 2020 we will fund and support a community of changemakers to test their promising ideas, technologies and models for public interest news, so communities in England have access to reliable and accurate news about the issues that matter most to them.
  • October status report
  • Sim + ML next steps:
    • I can’t do ensemble realtime inference because I’d need a gpu for each model. This means that I need to get the best “best” model and use that
    • Run the evolver to see if something better can be found
    • Add “flywheel mass” and “vehicle mass” to dictionary and get rid of the 0.05 value – done
    • Set up a second model that uses the inferred efficiency to move in accordance with the actual commands. Have them sit on either side of the origin
      • Graphics are done
      • Need to make second control system and ‘sim’ that uses inferred efficiency. Didn’t have to do all that. What I’m really doing is calculating RW angles based on the voltage and inferred efficiency. I can take the commands from the control system for the ‘actual’ satellite (a toy version of that calculation is sketched below the screenshot).

SimAndInferred
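
The core of that calculation is small enough to sketch here. This is my own stand-in, not the actual sim code: a reaction wheel angle is advanced by the commanded voltage scaled by the inferred efficiency.

  # Minimal sketch of driving a "shadow" reaction wheel from the actual commands
  # plus the inferred efficiency, instead of running a second control system.
  VOLTS_TO_RAD_PER_SEC = 0.05  # stand-in for the hard-coded 0.05 value mentioned above
  
  def step_rw_angle(angle: float, commanded_volts: float, inferred_eff: float, dt: float) -> float:
      """Advance a reaction wheel angle one timestep using the inferred efficiency."""
      rate = commanded_volts * inferred_eff * VOLTS_TO_RAD_PER_SEC
      return angle + rate * dt
  
  if __name__ == "__main__":
      angle = 0.0
      for step in range(5):
          # commands come from the control system of the 'actual' satellite
          angle = step_rw_angle(angle, commanded_volts=1.0, inferred_eff=0.8, dt=0.1)
          print("step {}: angle = {:.4f}".format(step, angle))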

  • ML seminar
    • Showed the sim, which runs on the laptop. Then everyone’s status reports
  • Meeting with Aaron
    • Really good discussion. I think I have a handle on the paper/chapter. Added it to the ethical considerations section

Phil 11.3.19

Listening to the On Being interview with angel Kyodo williams

We are in this amazing moment of evolving, where the values of some of us are evolving at rates that are faster than can be taken in and integrated for peoples that are oriented by place and the work that they’ve inherited as a result of where they are.

This really makes me think of the Wundt curve (fMRI analysis here?), and of how misalignment can develop between a bourgeois class (think elites) and a proletarian class. Without day-to-day existence constraints, it’s possible for elites to move individually and in small groups through less-traveled belief spaces. Proletarian concerns have more “red queen” elements, so you need larger workers’ movements to make progress.

Phil 11.1.19

7:00 – 3:00 ASRC GOES

KerasTuner

  • Hugging Face: State-of-the-Art Natural Language Processing in ten lines of TensorFlow 2.0
    • Hugging Face is an NLP-focused startup with a large open-source community, in particular around the Transformers library. 🤗/Transformers is a python-based library that exposes an API to use many well-known transformer architectures, such as BERT, RoBERTa, GPT-2 or DistilBERT, that obtain state-of-the-art results on a variety of NLP tasks like text classification, information extraction, question answering, and text generation. Those architectures come pre-trained with several sets of weights.
  • Dissertation
    • Starting on Human Study section!
    • For once there was something there that I could work with pretty directly. Fleshing out the opening
  • OODA paper:
    • Maximin (Cass Sunstein)
      • For regulation, some people argue in favor of the maximin rule, by which public officials seek to eliminate the worst worst-cases. The maximin rule has not played a formal role in regulatory policy in the United States, but in the context of climate change or new and emerging technologies, regulators who are unable to conduct standard cost-benefit analysis might be drawn to it. In general, the maximin rule is a terrible idea for regulatory policy, because it is likely to reduce rather than to increase well-being. But under four imaginable conditions, that rule is attractive.
        1. The worst-cases are very bad, and not improbable, so that it may make sense to eliminate them under conventional cost-benefit analysis.
        2. The worst-case outcomes are highly improbable, but they are so bad that even in terms of expected value, it may make sense to eliminate them under conventional cost-benefit analysis.
        3. The probability distributions may include “fat tails,” in which very bad outcomes are more probable than merely bad outcomes; it may make sense to eliminate those outcomes for that reason.
        4. In circumstances of Knightian uncertainty, where observers (including regulators) cannot assign probabilities to imaginable outcomes, the maximin rule may make sense. (It may be possible to combine (3) and (4).) With respect to (3) and (4), the challenges arise when eliminating dangers also threatens to impose very high costs or to eliminate very large gains. There are also reasons to be cautious about imposing regulation when technology offers the promise of “moonshots,” or “miracles,” offering a low probability or an uncertain probability of extraordinarily high payoffs. Miracles may present a mirror-image of worst-case scenarios.
  • Reaction wheel efficiency inference
    • Since I have this spiffy accurate model, I think I’m going to try using it before spending a lot of time evolving an ensemble
    • Realized that I only trained it with a voltage of +1, so I’ll need to abs(delta)
    • It’s working!

WorkingInference

  • Next steps:
    • I can’t do ensemble realtime inference because I’d need a gpu for each model. This means that I need to get the best “best” model and use that
    • Run the evolver to see if something better can be found
    • Add “flywheel mass” and “vehicle mass” to dictionary and get rid of the 0.05 value
    • Set up a second model that uses the inferred efficiency to move in accordance with the actual commands. Have them sit on either side of the origin
  • Committed everything. I think I’m done for the day

Phil 10.31.19

8:00 – 4:00 ASRC

  • Got my dissertation paperwork in!
  • To Persuade As an Expert, Order Matters: ‘Information First, then Opinion’ for Effective Communication
    • Participants whose stated preference was to follow the doctor’s opinion had significantly lower rates of antibiotic requests when given “information first, then opinion” compared to “opinion first, then information.” Our evidence suggests that “information first, then opinion” is the most effective approach. We hypothesize that this is because it is seen by non-experts as more trustworthy and more respectful of their autonomy.
    • This matters a lot because what is presented, and the order of presentation, is itself an opinion. Maps lay out the information in a way that provides a larger, less edited selection of information.
  • Working on RW training set. Got the framework methods working. Here’s a particularly good run – 99% accuracy for 50 “functions” repeated 20 times each:
  • Tomorrow I’ll roll them into the optimizer. I’ve already built the subclass, but had to flail a bit to find the right way to structure and scale the data

Phil 10.30.19

7:00 – 5:00 GOES

starbird

  • Dissertation – finish up the maps chapter – done!
  • Try writing up more expensive information thoughts (added to discussion section as well)
    • Game theory comes from an age of incomplete information. Now we have access to mostly complete, but potentially expensive information
      • Expense in time – throwing the breakers on high-frequency trading
      • Expense in $$ – Buying the information you need from available resources
      • Expense in resources – developing the hardware and software to obtain the information (Operation Hummingbird to TPU/DNN development)
    • By handing the information management to machines, we create a human-machine social structure, governed by the rules of dense/sparse, stiff/slack networks
      • AI combat is a very good example of an extremely stiff network (varies in density) and the associated time expense. Combat has to happen as fast as possible, due to OODA loop constraints. But if the system does not have designed-in capacity to negotiate a ceasefire (on both/all sides!), there may be no way to introduce it in human time scales, even though the information that one side is losing is readily apparent.
      • Online advertising is a case where existing information is hidden from the target of the advertiser, but available to the platform and, to a lesser degree, the client. Because of this information asymmetry, the user’s behavior/beliefs are more likely to be exploited in a way that denies the user agency, while granting maximum agency to the platform and clients.
      • Deepfakes, spam and the costs of identifying deliberate misinformation
      • Call to action: the creation of an information environment impact body that can examine these issues and determine costs. This is too complex a process for the creators to do on their own, and there would be rampant conflict of interest anyway. But an EPA-like structure, where experts in this topic act as a counterbalance to unconstrained development and exploitation of the information ecosystem, could fill that role.
  • The Knowledge, Analytics, Cognitive and Cloud Computing (KnACC) lab in the Information Systems department in UMBC aims to address challenging issues at the intersection of Data Science and Cloud Computing. We are located in ITE 415.
  • GOES
    • Start creating NN that takes pitch/roll/yaw star tracker deltas and tries to calculate reaction wheel efficiency (a toy version of the network is sketched after this list)
      • input vector is dp, dr, dy. Assume a fixed timestep
      • output vector is effp, effr, effy
      • once everything trains up, try running the inferencer on the running sim and display “inferred RW efficiency” for each RW
      • Broke out the base class parts of TF2OptimizerTest. I just need to generate the test/train data for now, no sim needed
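
A toy version of that network, as a minimal sketch: three deltas in, three efficiencies out. The training data here is fabricated just to show the input/output shapes and is not how the real sim generates dp/dr/dy.

  import numpy as np
  import tensorflow as tf
  from tensorflow.keras import layers
  
  # Toy training data: input is (dp, dr, dy) star tracker deltas over a fixed timestep,
  # output is (effp, effr, effy). The "true" efficiencies here are made up.
  rng = np.random.default_rng(1)
  true_eff = rng.uniform(0.5, 1.0, size=(1000, 3))
  deltas = true_eff * 0.05  # pretend the delta is proportional to efficiency
  
  model = tf.keras.Sequential([
      layers.Dense(64, activation='relu', input_shape=(3,)),
      layers.Dense(64, activation='relu'),
      layers.Dense(3)  # effp, effr, effy
  ])
  model.compile(optimizer=tf.keras.optimizers.Adam(0.01), loss='mse')
  model.fit(deltas, true_eff, epochs=10, batch_size=32, verbose=0)
  
  # once trained, the inferencer could be called from the running sim like this:
  print(model.predict(np.array([[0.04, 0.03, 0.045]])))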

Twitter

big ending news for the day

Phil 10.29.19

7:00 – 5:00 ASRC GOES

  • Dissertation – more maps
  • CTO presentation at 2:00
    • Delayed a bit, but I think it went well. A lot of the things that Eric tried to put in place look to be resurfacing. I wonder if it will work this time?
  • Meeting with Wayne? Yep – got his signature! Need to email Nicole and see where this goes – done
  • Starting to negotiate the new-ish “expensive information” paper with Aaron M

Phil 10.28.19

Language

Capacity, Bandwidth, and Compositionality in Emergent Language Learning

  • Many recent works have discussed the propensity, or lack thereof, for emergent languages to exhibit properties of natural languages. A favorite in the literature is learning compositionality. We note that most of those works have focused on communicative bandwidth as being of primary importance. While important, it is not the only contributing factor. In this paper, we investigate the learning biases that affect the efficacy and compositionality of emergent languages. Our foremost contribution is to explore how capacity of a neural network impacts its ability to learn a compositional language. We additionally introduce a set of evaluation metrics with which we analyze the learned languages. Our hypothesis is that there should be a specific range of model capacity and channel bandwidth that induces compositional structure in the resulting language and consequently encourages systematic generalization. While we empirically see evidence for the bottom of this range, we curiously do not find evidence for the top part of the range and believe that this is an open question for the community.

Radiolab: Tit for Tat

  • In the early 60s, Robert Axelrod was a math major messing around with refrigerator-sized computers. Then a dramatic global crisis made him wonder about the space between a rock and a hard place, and whether being good may be a good strategy. With help from Andrew Zolli and Steve Strogatz, we tackle the prisoner’s dilemma, a classic thought experiment, and learn about a simple strategy to navigate the waters of cooperation and betrayal. Then Axelrod, along with Stanley Weintraub, takes us back to the trenches of World War I, to the winter of 1914, and an unlikely Christmas party along the Western Front.
    • Need to send a note for them to look into Axelrod’s “bully” saddle point

7:00 – ASRC GOES

  • Dissertation – Nearly done with the agent cartography section?
  • CTO Rehearsal – 10:30 – 12:00 done
  • ML Dinner – 4:30 fun! 20191028_173214
  • Meeting With Aaron M
    • More thinking about what to do with the paper. We decided to try for the CHI4EVIL workshop, and then try something like IEEE Spectrum. I think I’d like to reframe it around the concept of Expensive Information and Automation. Try to tie together AI weapons, spam filters, and deepfakes
      • Automation makes negotiation more difficult, locks in trajectories
      • Handing off responsibility to automation amplifies opportunities and destructive potential
      • OODA loop could be generalized if you look at it from the perspective of attention.

Phil 10.26.19

The dynamics of norm change in the cultural evolution of language

  • What happens when a new social convention replaces an old one? While the possible forces favoring norm change—such as institutions or committed activists—have been identified for a long time, little is known about how a population adopts a new convention, due to the difficulties of finding representative data. Here, we address this issue by looking at changes that occurred to 2,541 orthographic and lexical norms in English and Spanish through the analysis of a large corpora of books published between the years 1800 and 2008. We detect three markedly distinct patterns in the data, depending on whether the behavioral change results from the action of a formal institution, an informal authority, or a spontaneous process of unregulated evolution. We propose a simple evolutionary model able to capture all of the observed behaviors, and we show that it reproduces quantitatively the empirical data. This work identifies general mechanisms of norm change, and we anticipate that it will be of interest to researchers investigating the cultural evolution of language and, more broadly, human collective behavior.

When Hillclimbers Beat Genetic Algorithms in Multimodal Optimization

  • It has been shown in the past that a multistart hillclimbing strategy compares favourably to a standard genetic algorithm with respect to solving instances of the multimodal problem generator. We extend that work and verify if the utilization of diversity preservation techniques in the genetic algorithm changes the outcome of the comparison. We do so under two scenarios: (1) when the goal is to find the global optimum, (2) when the goal is to find all optima.
    A mathematical analysis is performed for the multistart hillclimbing algorithm and a thorough empirical study is conducted for solving instances of the multimodal problem generator with increasing number of optima, both with the hillclimbing strategy as well as with genetic algorithms with niching. Although niching improves the performance of the genetic algorithm, it is still inferior to the multistart hillclimbing strategy on this class of problems.
    An idealized niching strategy is also presented and it is argued that its performance should be close to a lower bound of what any evolutionary algorithm can do on this class of problems.

Phil 10.25.19

7:00 – 4:00 ASRC GOES