Category Archives: Simulation

Phil 1.16.20

Optuna: An open source hyperparameter optimization framework to automate hyperparameter search

  • Medium writeup. It looks like this is Bayesian, and is better than hyperopt?

7:00 – 5:00 ASRC GOES

  • Dissertation
    • Starting to add Wayne’s comments
    • Finished the intro, starting motivation
  • NSOF Meeting with Isaac & Bruce
    • Still looking at the optimal scenario to use the current simulators (running over a weekend) to generate data
    • Data sets are used to train and evaluate models, then progressively simplified until the models can no longer recognize the source data. This will let us estimate the fidelity of the simulations we need.
  • JuryRoom meeting. Looking into adding UX faculty. Meeting is expanding to 6:00 – 8:00
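The progressive-simplification idea above can be sketched end to end. Everything here is illustrative, not the project's actual code: a toy sinusoid "simulator", moving-average smoothing as the simplification, and a nearest-centroid classifier standing in for the real trained models:

```python
# Sketch: train a recognizer on data from two sources, then simplify the
# data more and more until recognition drops toward chance. The
# simplification level where that happens bounds the fidelity the
# simulator actually needs. All names here are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def simulate(freq, n=200, length=64):
    """Toy stand-in simulator: noisy sinusoids at a characteristic frequency."""
    t = np.linspace(0, 1, length)
    return np.sin(2 * np.pi * freq * t) + rng.normal(0, 0.3, (n, length))

def simplify(data, window):
    """Progressively simplify the data with moving-average smoothing."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda row: np.convolve(row, kernel, "same"), 1, data)

# Two sources the model should tell apart
a, b = simulate(3.0), simulate(5.0)
centroid_a, centroid_b = a.mean(axis=0), b.mean(axis=0)

accs = []
for window in (1, 4, 16, 64):
    sa, sb = simplify(a, window), simplify(b, window)
    # Nearest-centroid "recognition" accuracy on the simplified data
    hit_a = np.linalg.norm(sa - centroid_a, axis=1) < np.linalg.norm(sa - centroid_b, axis=1)
    hit_b = np.linalg.norm(sb - centroid_b, axis=1) < np.linalg.norm(sb - centroid_a, axis=1)
    accs.append((hit_a.mean() + hit_b.mean()) / 2)
    print("smoothing window {:2d}: accuracy {:.2f}".format(window, accs[-1]))
```

In the real pipeline the recognizer would be a trained network and the simplification would be whatever fidelity knobs the simulators expose, but the accuracy-vs-simplification curve is the estimate being described.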

Phil 12.24.19

ASRC PhD 6:30 – 9:30

  • The Worldwide Web of Chinese and Russian Information Controls
    • The global diffusion of Chinese and Russian information control technology and techniques has featured prominently in the headlines of major international newspapers.1 Few stories, however, have provided a systematic analysis of both the drivers and outcomes of such diffusion. This paper does so – and finds that these information controls are spreading more efficiently to countries with hybrid or authoritarian regimes, particularly those that have ties to China or Russia. Chinese information controls spread more easily to countries along the Belt and Road Initiative; Russian controls spread to countries within the Commonwealth of Independent States. In arriving at these findings, this working paper first defines the Russian and Chinese models of information control and then traces their diffusion to the 110 countries within the countries’ respective technological spheres, which are geographical areas and spheres of influence to which Russian and Chinese information control technology, techniques of handling information, and law have diffused.
  • Wrote up some preliminary thoughts on Antonio’s Autonomous Shuttles concept. Need to share the doc
  • Listening to World Affairs Council, and the idea of B-Corporations came up, which are a kind of contractual mechanism for diversity injection?
    • Certified B Corporations are a new kind of business that balances purpose and profit. They are legally required to consider the impact of their decisions on their workers, customers, suppliers, community, and the environment. This is a community of leaders, driving a global movement of people using business as a force for good.
    • Deciding to leave this out of the dissertation, since I’m more focused on individual interfaces with global effects as opposed to corporate legal structures. It’s just too tangential.
  • Dissertation
    • H3 conclusions – done!

 

Phil 12.5.19

ASRC GOES 7:00 – 4:30, 6:30 – 7:00

  • Write up something for Erik and John?
  • Send gdoc link to Bruce – done
  • apply for TF Dev invite – done
  • Schedule physical! – done
  • Dissertation – more Designing for populations
  • Evolver
    • Comment EvolutionaryOptimizer – almost done
    • Comment ModelWriter
    • Quickstart
    • User’s guide
    • Comment the excel utils?
  • Waikato meeting with Alex and Panos

Phil 11.8.19

7:00 – 3:00 ASRC GOES

  • Dissertation
    • Usability study! Done!
    • Discussion. This is going to take some framing. I want to tie it back to earlier navigation, particularly the transition from stories and mappaemundi to isotropic maps of Ptolemy and Mercator.
  • Sent Don and Danilo sql file
  • Start satellite component list
  • Evolver
    • Adding threads to handle the GPU. This looks like what I want (from here):
      import logging
      import concurrent.futures
      import threading
      import time
      
      def thread_function(name):
          logging.info("Task %s: starting on thread %s", name, threading.current_thread().name)
          time.sleep(2)
          logging.info("Task %s: finishing on thread %s", name, threading.current_thread().name)
      
      if __name__ == "__main__":
          num_tasks = 5
          num_gpus = 1
          format = "%(asctime)s: %(message)s"
          logging.basicConfig(format=format, level=logging.INFO,
                              datefmt="%H:%M:%S")
      
          with concurrent.futures.ThreadPoolExecutor(max_workers=num_gpus) as executor:
              result = executor.map(thread_function, range(num_tasks))
      
          logging.info("Main    : all done")

      As you can see, it’s possible to have a thread for each GPU, while having them iterate over a larger set of tasks. Now I need to extract the GPU name from the thread info. In other words, ThreadPoolExecutor-0_0 needs to map to gpu:1.

    • Ok, this seems to do everything I need, with less cruft:
      import concurrent.futures
      import threading
      import time
      from typing import Dict
      import re
      
      last_num_in_str_re = r'(\d+)(?!.*\d)'  # raw string; matches the last number in a string
      prog = re.compile(last_num_in_str_re)
      
      def thread_function(args:Dict):
          num = prog.search(threading.current_thread().name) # get the last number in the thread name
          gpu_str = "gpu:{}".format(int(num.group(0))+1)
          print("{}: starting on  {}".format(args["name"], gpu_str))
          time.sleep(args["value"])
          print("{}: finishing on  {}, after sleeping {} seconds".format(args["name"], gpu_str, args["value"]))
      
      if __name__ == "__main__":
          num_tasks = 5
          num_gpus = 2
          task_list = []
          for i in range(num_tasks):
              task = {"name":"task_{}".format(i), "value":2+(i/10)}
              task_list.append(task)
          with concurrent.futures.ThreadPoolExecutor(max_workers=num_gpus) as executor:
              result = executor.map(thread_function, task_list)
      
          print("Finished Main")

      And that gives me:

      task_0: starting on  gpu:1
      task_1: starting on  gpu:2
      task_0: finishing on  gpu:1, after sleeping 2.0 seconds
      task_2: starting on  gpu:1
      task_1: finishing on  gpu:2, after sleeping 2.1 seconds
      task_3: starting on  gpu:2
      task_2: finishing on  gpu:1, after sleeping 2.2 seconds
      task_4: starting on  gpu:1
      task_3: finishing on  gpu:2, after sleeping 2.3 seconds
      task_4: finishing on  gpu:1, after sleeping 2.4 seconds
      Finished Main

      So the only thing left is to integrate this into TimeSeriesMl2.

Phil 11.7.19

7:00 – 5:00 ASRC GOES

  • Dissertation
  • ML+Sim
    • Save actual and inferred efficiency to excel and plot
    • Create an illustration that shows how the network is trained, validated against the sim, then integrated into the operating system. (maybe show a physical testbed for evaluation?)
    • Demo at the NSOF
      • Went ok. Next steps are a sufficiently realistic model that can interpret an actual malfunction
      • Put together a Google Doc/Sheet that has the common core elements with which we can model most satellites (LEO, MEO, GEO, and HEO?). What are the common components between cubesats and the James Webb?
      • Detection of station-keeping failure is a possibility
      • Also, high-dynamic phases, like orbit injection might be low-ish fruit
    • Tomorrow, continue on the GPU assignment in the evolver

Phil 11.6.19

7:00 – 3:00 ASRC GOES

  • Simulation for training ML at UMD: Improved simulation system developed for self-driving cars 
    • University of Maryland computer scientist Dinesh Manocha, in collaboration with a team of colleagues from Baidu Research and the University of Hong Kong, has developed a photo-realistic simulation system for training and validating self-driving vehicles. The new system provides a richer, more authentic simulation than current systems that use game engines or high-fidelity computer graphics and mathematically rendered traffic patterns.
  • Dissertation
    • Send out email setting the date/time to Feb 21, from 11:00 – 1:00. Ask if folks could move the time earlier or later for Wayne – done
    • More human study – I think I finally have a good explanation of the text convergence.
  • Maybe work on the evolver?
    • Add nested variables
    • Look at keras-tuner code to see how GPU assignment is done
      • So it looks like they are using gRPC as a way to communicate between processes? grpc
      • I mean, like separate processes, communicating via ports grpc2
      • Oh. This is why. From the tf.distribute documentation tf.distribute
      • No – wait. This is from the TF distributed training overview page tf.distribute2
      • And that seems to straight up work (assuming that multiple GPUs can be called). Here’s an example of training:
        import tensorflow as tf
        from tensorflow.keras import layers
        
        strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
        with strategy.scope():
            model = tf.keras.Sequential()
            # Adds a densely-connected layer with 64 units to the model:
            model.add(layers.Dense(sequence_length, activation='relu', input_shape=(sequence_length,)))
            # Add another:
            model.add(layers.Dense(200, activation='relu'))
            model.add(layers.Dense(200, activation='relu'))
            # Add a layer with 10 output units:
            model.add(layers.Dense(sequence_length))
        
            loss_func = tf.keras.losses.MeanSquaredError()
            opt_func = tf.keras.optimizers.Adam(0.01)
            model.compile(optimizer= opt_func,
                          loss=loss_func,
                          metrics=['accuracy'])
        
            noise = 0.0
            full_mat, train_mat, test_mat = generate_train_test(num_funcs, rows_per_func, noise)
        
            model.fit(train_mat, test_mat, epochs=70, batch_size=13)
            model.evaluate(train_mat, test_mat)
        
            model.save(model_name)

        And here’s an example of predicting

        import random
        
        import numpy as np
        import tensorflow as tf
        
        strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
        with strategy.scope():
            model = tf.keras.models.load_model(model_name)
            full_mat, train_mat, test_mat = generate_train_test(num_funcs, rows_per_func, noise)
        
            predict_mat = model.predict(train_mat)
        
            # Let's try some immediate inference
            for i in range(10):
                pitch = random.random()/2.0 + 0.5
                roll = random.random()/2.0 + 0.5
                yaw = random.random()/2.0 + 0.5
                inp_vec = np.array([[pitch, roll, yaw]])
                eff_mat = model.predict(inp_vec)
                print("input: pitch={:.2f}, roll={:.2f}, yaw={:.2f}  efficiencies: pitch={:.2f}%, roll={:.2f}%, yaw={:.2f}%".
                      format(inp_vec[0][0], inp_vec[0][1], inp_vec[0][2], eff_mat[0][0]*100, eff_mat[0][1]*100, eff_mat[0][2]*100))
    • Look at TF code to see if it makes sense to add to the project. Doesn’t look like it, but I think I can make a nice hyperparameter/architecture search API using this, once validated
  • Mission Drive meeting and demo – went ok. Will Demo at NSOF tomorrow
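One way the pieces above could fit together: reuse the worker-thread naming trick from the earlier thread-pool experiment to derive a device string that `tf.distribute.OneDeviceStrategy` could consume (TF device strings are zero-indexed, so no +1 here). TF itself is stubbed out with comments so the plumbing runs on its own; `build_model` and the task fields are hypothetical:

```python
# Sketch of per-thread GPU assignment for the evolver. Each worker thread
# maps its own name (ThreadPoolExecutor-N_W) to a device string; with TF,
# that string would feed a OneDeviceStrategy wrapped around training.
import concurrent.futures
import re
import threading
from typing import Dict, List

prog = re.compile(r'(\d+)(?!.*\d)')  # last number in the thread name

def device_for_current_thread() -> str:
    """Map ThreadPoolExecutor-N_W to the TF-style device string /gpu:W."""
    match = prog.search(threading.current_thread().name)
    return "/gpu:{}".format(int(match.group(0)))

def train_task(task: Dict) -> str:
    device = device_for_current_thread()
    # With TF this would be something like:
    #   strategy = tf.distribute.OneDeviceStrategy(device=device)
    #   with strategy.scope():
    #       model = build_model(task)   # hypothetical model factory
    #       model.fit(...)
    return "{} -> {}".format(task["name"], device)

def run_all(tasks: List[Dict], num_gpus: int) -> List[str]:
    # One worker thread per GPU; tasks queue up across them
    with concurrent.futures.ThreadPoolExecutor(max_workers=num_gpus) as executor:
        return list(executor.map(train_task, tasks))

if __name__ == "__main__":
    tasks = [{"name": "task_{}".format(i)} for i in range(5)]
    for line in run_all(tasks, num_gpus=2):
        print(line)
```

This keeps the executor in charge of scheduling, so the hyperparameter search loop only has to hand it a task list.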

Phil 10.25.19

7:00 – 4:00 ASRC GOES

Phil 10.16.19

7:00 – ASRC GOES

  • Listening to Rachel Maddow on City Arts and Lectures. She’s talking about the power of oil and gas, and how they are essentially anti-democratic. I think that may be true for most extracting industries. They are incentivised to take advantage of the populations that are the gatekeepers to the resource. Which is why you get corruption – it’s cost effective. This also makes me wonder about advertising, which regards consumers as the source to extract money/votes/etc from.
  • Dissertation:
    • Something to add to the discussion section. Primordial jumps are not made only by individuals moving on a fitness landscape. Sometimes the landscape itself can change, as with a natural disaster. The survivors are presented with an entirely new fitness landscape, often devoid of competition, that they can now climb.
    • This implies that sustained stylistic change creates sophisticated ecosystems, while primordial change disrupts that, and sets the stage for the creation of new ecosystems.
    • Had a really scary moment. Everything with \includegraphics wouldn’t compile. It seems to be a problem with MikTex, as described here. The fix is to place this code after \documentclass:
      \makeatletter
      \def\set@curr@file#1{%
      	\begingroup
      	\escapechar\m@ne
      	\xdef\@curr@file{\expandafter\string\csname #1\endcsname}%
      	\endgroup
      }
      \def\quote@name#1{"\quote@@name#1\@gobble""}
      \def\quote@@name#1"{#1\quote@@name}
      \def\unquote@name#1{\quote@@name#1\@gobble"}
      \makeatother
    • Finished the intro simulation description and results. Next is robot stampedes, then adversarial herding
  • Evolver
    • Check on status
    • Write abstract for GSAW if things worked out
  • GOES-R AI/ML Meeting
    • Lots of AIMS deployment discussion. Config files, version control, etc.
  • AIMS / A2P Meeting
    • Walked through report
    • Showed Vadim’s physics
    • Showed video of the Deep Mind robot Rubik’s cube to drive home the need for simulation
    • Send an estimate for travel expenses for 2020
    • Put together a physics roadmap with Vadim and Bruce

Phil 8.29.19

ASRC GOES – 7:00 – 4:00

  • Find out who I was talking to yesterday at lunch (Boynton?)
  • Contact David Lazar about RB
  • Participating as an anonymous fish in JuryRoom. Started the discussion
  • Dissertation – started the State section
  • Working on Control and sim diagrams
    • Putting this here because I keep on forgetting how to add an outline/border to an image in Illustrator:

OutlineAI

  1. Place and select an image in the Illustrator document.
  2. Once selected, open the Appearance panel and, from its flyout menu, choose Add New Stroke:
  3. With the Stroke highlighted in the Appearance panel, choose Effect -> Path -> Outline Object.
  • Anyway, back to our regularly scheduled program.
  • Made a control system diagram
  • Made a control system inheritance diagram
  • Made a graphics inheritance diagram
  • Need to stick them in the ASRC Dev Pipeline document
  • Discovered JabRef: JabRef is an open source bibliography reference manager. The native file format used by JabRef is BibTeX, the standard LaTeX bibliography format. JabRef is a desktop application and runs on the Java VM (version 8), and works equally well on Windows, Linux, and Mac OS X.
  • Tomorrow we get started with TF 2.0

Phil 8.23.19

7:00 – 4:00 ASRC GOES

  • More Dissertation
    • Continuing lit review
  • Rework BlueSky paper for air traffic? Meeting with T at 10:00
  • Simulation
    • Need to discuss with Aaron the best way to use the data to train the NN and round-trip the outputs: given the outputs of one model, the NN should issue RCS commands that cause the same outputs in a separate model
  • Wow. It knows/finds syntactically correct Java. From TalkToTransformer.com:
  • Wow

Phil 8.22.19

7:00 – ASRC GOES

ScottW

  • Dissertation
    • Lit review
    • This, from Colin Martindale’s Cognitive Psychology: A Neural-Network Approach, is the central piece:
      • it turns out that language is almost entirely metaphorical (Hobbs,
        1983; Lakoff, 1987; Lakoff & Johnson, 1980). Many of these metaphors are
        spatial. Look back at the last sentence. I asked you to think things through. I told you that something turned out. We bring up topics. We put them on the table. If you could argue with me, Lakoff and Johnson (1980) point out that we would have a war: you might try to attack and shoot down my arguments. I would try to defend them by trying to demolish your position and counterattacking. Lakoff’s argument is that if we took all the metaphors out of language, there would be virtually nothing left. (p 212)
  • More control systems – first pass is working!

RunningSim

InputVector

Phil 8.21.19

City Arts & Lectures: Privacy and Technology

  • This week, a conversation about privacy, ethics, and organizing in the world of technology. Who benefits from the lack of diversity in the tech industry? Does artificial intelligence reflect the biases of those who create it? How can we push for regulation and transparency?  These are some of the questions discussed by our guests, Meredith Whittaker, co-founder of AI Now at NYU and the founder of Google’s Open Research Institute; and Kade Crockford, Director of the ACLU Massachusetts’ Technology and Liberty Program. They appeared at the Sydney Goldstein Theater in San Francisco on June 7, 2019.

7:00 – 8:00 ASRC GOES

  • Printed out some business cards for JuryRoom
  • Antonio has submitted the manuscript – created a TAAS account and verified that it’s there
  • Dissertation
    • Finished 0.5 pass at chapter 1!
  • Goddard today
    • See if I can get a permanent card? Done!
    • More control system work
  • Meeting with Wayne
    • Send the as-delivered TAAS paper and cover letter. Done
    • Work on getting the ML/Weapons paper reformatted tomorrow
    • Send chapter one of the dissertation
    • I’ll then start sending the chapters as I “complete” them, and we’ll see how it’s going. If the dissertation seems to be coming together well, then we might switch strategies from a content-centric to a coherence-centric approach.

Phil 8.20.19

Chores calls!

Trump, Qanon and an impending judgment day: Behind the Facebook-fueled rise of The Epoch Times

  • By the numbers, there is no bigger advocate of President Donald Trump on Facebook than The Epoch Times. The small New York-based nonprofit news outlet has spent more than $1.5 million on about 11,000 pro-Trump advertisements in the last six months, according to data from Facebook’s advertising archive — more than any organization outside of the Trump campaign itself, and more than most Democratic presidential candidates have spent on their own campaigns. Those video ads — in which unidentified spokespeople thumb through a newspaper to praise Trump, peddle conspiracy theories about the “Deep State,” and criticize “fake news” media — strike a familiar tone in the online conservative news ecosystem. The Epoch Times looks like many of the conservative outlets that have gained followings in recent years.

7:00 – 4:00 ASRC GOES

  • Dissertation
    • I just found out that the UMBC, UMD, and TSU are Jan 2 – 22. I’ll be doing my defense in there somewhere
    • Working on the TACJ section
  • Add controllers for the reaction wheels so that a “go to AZ=30, EL=20, R= -10” can work
    • Getting close:
    • Control
  • Next task is to subclass the ReactionWheelController to pitch, roll, and yaw controllers, so that they can manipulate their data separately
    • Voltage should control the velocity and direction of the wheel
    • Simulator can be told to add drag to a wheel
  • I think Communications of the ACM is the next place to try the AI Weapons paper. It’s 5,000-ish words, so it looks like it should fit:
    • CACM
  • Finished TAAS article and letter. Notified Antonio
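A minimal sketch of the subclassing plan above. The real ReactionWheelController isn’t shown in these notes, so the class names and the simple proportional voltage-to-velocity model are stand-ins; what it illustrates is the intent: voltage sets wheel speed and direction, pitch/roll/yaw subclasses keep their state separate, and the simulator can inject drag on a single wheel:

```python
# Illustrative reaction-wheel controller hierarchy. A proportional
# controller stands in for whatever the real control law is.
class ReactionWheelController:
    """Voltage (the control output) drives wheel velocity toward a target."""

    def __init__(self, gain: float = 0.5, drag: float = 0.0):
        self.gain = gain
        self.drag = drag          # simulator can add drag to a wheel
        self.velocity = 0.0       # current wheel velocity (arbitrary units)

    def step(self, target_velocity: float) -> float:
        error = target_velocity - self.velocity
        voltage = self.gain * error            # sign of error sets direction
        # Voltage changes velocity; drag bleeds some of it back off
        self.velocity += voltage - self.drag * self.velocity
        return voltage

# Per-axis subclasses so each axis manipulates its data separately
class PitchController(ReactionWheelController):
    pass

class RollController(ReactionWheelController):
    pass

class YawController(ReactionWheelController):
    pass

wheels = {"pitch": PitchController(), "roll": RollController(), "yaw": YawController()}
wheels["yaw"].drag = 0.1    # tell the "simulator" to add drag to one wheel
for _ in range(50):
    for ctrl in wheels.values():
        ctrl.step(10.0)     # "go to" a commanded wheel velocity
```

With drag injected, the yaw wheel settles below the commanded velocity while pitch and roll converge on it, which is the kind of signature the ML model would be trained to detect.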

Phil 8.19.19

ASRC GOES 7:00 – 4:00

  • Adding the journalism framing example to the introduction
  • Continue to work all data transfers into the Data Dictionary so that effective, tagged training in the sim can happen
    • done-ish? Need to do another layer and figure out how to set up the command and response buffers – done

DataDictionary

Data Dictionary as a spreadsheet