Category Archives: Tensorflow

Phil 4.23.20

Transformer Architecture: The Positional Encoding

  • In this article, I don’t plan to explain its architecture in depth as there are currently several great tutorials on this topic (here, here, and here), but alternatively, I want to discuss one specific part of the transformer’s architecture – the positional encoding.

D20

  • Add centroids for states – done
  • Return the number of neighbors as an argument – done
  • Chatted with Aaron and Zach. More desire to continue than abandon

ACSOS

  • More revisions. Swap steps for discussion and future work

GOES

    • IRS proposal went in yesterday
    • Continue with GANs
    • Using the VGG model now with much better results. Also figured out how to load weights and read the probabilities in the output layer: [screenshot: vgg]
    • Same thing using the pre-trained model from Keras:
      from tensorflow.keras.applications.vgg16 import VGG16
      # prebuild model with pre-trained weights on imagenet
      model = VGG16(weights='imagenet', include_top=True)
      model.compile(optimizer='sgd', loss='categorical_crossentropy')

      [screenshot: vggPretrained]
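    • As a sanity check on reading those output probabilities, here’s a minimal sketch (not the book’s code) that runs one example image through the pre-trained model and decodes the top-5 ImageNet classes; the image file name is just a placeholder:
      import numpy as np
      from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
      from tensorflow.keras.preprocessing.image import load_img, img_to_array
      
      model = VGG16(weights='imagenet', include_top=True)
      
      # load an example image at VGG16's expected 224x224 input size
      img = load_img('cat.jpg', target_size=(224, 224))
      x = preprocess_input(np.expand_dims(img_to_array(img), axis=0))
      
      # the output layer is a softmax over the 1000 ImageNet classes
      probs = model.predict(x)
      print(probs.shape)  # (1, 1000)
      for _, label, p in decode_predictions(probs, top=5)[0]:
          print("{}: {:.3f}".format(label, p))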

    • Trying to visualize a layer using this code. And using that code as a starting point, I had to explore how to slice up the tensors in the right way. A CNN layer has a set of “filters” that each contain a square grid of pixels. The data is stored as an array of values at each (x, y) coordinate, so I had to figure out how to pull out one image at a time. Here’s my toy:
      import numpy as np
      import matplotlib.pyplot as plt
      
      n_rows = 4
      n_cols = 8
      depth = 4
      
      my_list = []
      
      # build a (n_rows-1, n_cols-1, depth) nested list; rows and cols start at 1
      # so each value reads as digits: hundreds = row, tens = col, ones = depth
      for r in range(1, n_rows):
          row = []
          my_list.append(row)
          for c in range(1, n_cols):
              cell = []
              row.append(cell)
              for d in range(depth):
                  cell.append(d+c*10+r*100)
      
      print(my_list)
      nl = np.array(my_list)
      # slice out one depth layer at a time across all (row, col) positions
      for d in range(depth):
          print("\nlayer {} = \n{}".format(d, nl[:, :, d]))
          plt.figure(d)
          plt.imshow(nl[:, :, d], aspect='auto', cmap='plasma')
      
      plt.show()
    • This gets features from a cat image at one of the pooling layers. The color map is completely arbitrary (one way to set up model and x for this is sketched below):
      # get the features from this block
      features = model.predict(x)
      print(features.shape)
      farray = np.array(features[0])
      print("{}".format(farray[:, :, 0]))
      
      for d in range(4):
         plt.figure(d)
         plt.imshow(farray[:, :, d], aspect='auto', cmap='plasma')
    • But we get some cool pix!
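    • For reference, here’s a hedged sketch of one way to produce those features; the post doesn’t record which pooling layer was used, so block5_pool is just an assumed choice:
      import numpy as np
      import matplotlib.pyplot as plt
      from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
      from tensorflow.keras.preprocessing.image import load_img, img_to_array
      from tensorflow.keras.models import Model
      
      full = VGG16(weights='imagenet', include_top=True)
      # truncate the network at a pooling layer (block5_pool is an assumed example)
      model = Model(inputs=full.input, outputs=full.get_layer('block5_pool').output)
      
      img = load_img('cat.jpg', target_size=(224, 224))
      x = preprocess_input(np.expand_dims(img_to_array(img), axis=0))
      
      features = model.predict(x)  # (1, 7, 7, 512) for block5_pool
      farray = np.array(features[0])
      for d in range(4):
          plt.figure(d)
          plt.imshow(farray[:, :, d], aspect='auto', cmap='plasma')
      plt.show()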

Phil 4.22.20

  • Amsterdam, 24 April 2020
  • This workshop aims to bring together researchers and practitioners from the emerging fields of Graph Representation Learning and Geometric Deep Learning. The workshop will feature invited talks and a poster session. There will be ample opportunity for discussion and networking.
  • Invited talks will be live-streamed on YouTube: https://www.youtube.com/watch?v=Zf_nLR4kMo4
  • Looking for an online seminar that presents the latest advances in reinforcement learning theory? You just found it! We aim to bring you a virtual seminar (approximately) every Tuesday at 5pm UTC featuring the latest work in theoretical reinforcement learning.

D20

  • Added P-threshold to json file. I’m concerned that everyone is too busy to participate any more. Aaron hasn’t even asked about the project since he got better and is complaining about how overworked he is. Zach seems to be equally busy. If no one steps up by the end of the week, I think it’s time to either take over the project entirely or shut it down.

ACSOS

  • Started working on Antonio’s changes
  • Changed the MappApp so that the trajectory lines are blue

GOES

  • Finish CNN chapter
  • Enable Tensorflow profiling
    • Installed the plugin: pip install tensorboard_plugin_profile
    • Updated setup_tensorboard():
      import os
      import shutil
      from typing import List
      
      import tensorflow as tf
      
      def setup_tensorboard(dir_str: str, windows_slashes: bool = True) -> List:
          if windows_slashes:
              dir_str = dir_str.replace("/", "\\")
          try:
              # clear out any old logs so each run starts fresh
              shutil.rmtree(dir_str)
          except OSError:
              print("no file {} at {}".format(dir_str, os.getcwd()))
      
          # use TensorBoard, princess Aurora!
          callbacks = [tf.keras.callbacks.TensorBoard(log_dir=dir_str, profile_batch='500,510')]
          return callbacks
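    • Usage is just passing the returned callbacks to fit(). A minimal sketch (the model and data names are placeholders, not the book’s code):
      callbacks = setup_tensorboard('logs/profile_test/')
      # 'model', x_train, y_train, x_test, y_test are assumed to exist already
      model.fit(x_train, y_train,
                epochs=10,
                batch_size=64,
                validation_data=(x_test, y_test),
                callbacks=callbacks)
      # then run: tensorboard --logdir=logs and open the Profile tab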
  • Huh. Looks like scipy.misc.imresize() and scipy.misc.imread() are both deprecated and out of the library. Trying opencv
    • pip install opencv-python
    • Here’s how I did it, with some debugging thrown in to verify that everything was working correctly:
      import cv2
      import numpy as np
      
      img_names = ['cat.jpg', 'steam-locomotive.jpg']
      img_list = []
      for name in img_names:
          img = cv2.imread(name)  # note: OpenCV loads images as BGR
          res = np.array(cv2.resize(img, dsize=(32, 32), interpolation=cv2.INTER_CUBIC))
          cv2.imwrite(name.replace(".jpg", "_32x32.jpg"), res)
          img_list.append(res)
      
      # swap rows and cols on each image, then scale to [0, 1]
      imgs = np.transpose(img_list, (0, 2, 1, 3))
      imgs = np.array(imgs) / 255
  • This forced me to go down a transpose()-in-multiple-dimensions rabbit hole that’s worth documenting. First, here’s code that takes some tiny images in an array and transposes them:
    import numpy as np
    
    img_list = [
        # image 1
        [[[10, 20, 30],
          [11, 21, 31],
          [12, 22, 32],
          [13, 23, 33]],
    
         [[255, 255, 255],
          [48, 45, 58],
          [101, 150, 205],
          [255, 255, 255]],
    
         [[255, 255, 255],
          [43, 56, 75],
          [77, 110, 157],
          [255, 255, 255]],
    
         [[255, 255, 255],
          [236, 236, 238],
          [76, 104, 139],
          [255, 255, 255]]],
        # image 2
        [[[100, 200, 300],
          [101, 201, 301],
          [102, 202, 302],
          [103, 203, 303]],
    
         [[159, 146, 145],
          [89, 74, 76],
          [207, 207, 210],
          [212, 203, 203]],
    
         [[145, 155, 164],
          [52, 40, 36],
          [166, 160, 163],
          [136, 132, 134]],
    
         [[61, 56, 60],
          [36, 32, 35],
          [202, 195, 195],
          [172, 165, 177]]]]
    
    np_imgs = np.array(img_list)
    print("np_imgs shape = {}".format(np_imgs.shape))
    
    imgs = np.transpose(img_list, (0, 2, 1, 3))
    print("imgs shape = {}".format(imgs.shape))
    #imgs = np.array(imgs) / 255
    
    print("pix 0: \n{}".format(np_imgs[0]))
    print("transposed pix 0: \n{}".format(imgs[0]))
    print("\n------------------------\n")
    print("pix 1: \n{}".format(np_imgs[1]))
    print("transposed pix 1: \n{}".format(imgs[1]))
  • So, this is a four-dimensional array with a shape of (2, 4, 4, 3). What we want to do is transpose each image (the inner 4, 4), swapping its rows and columns. The way to understand NumPy’s transpose() is that it interchanges axes. The trick is understanding how.
  • For this matrix, applying a transpose that does nothing means writing this:
    imgs = np.transpose(img_list, (0, 1, 2, 3))
  • Think of it as an identity transpose. What we want to do is reverse the order of the inner 4, 4, which we do like this:
    imgs = np.transpose(img_list, (0, 2, 1, 3))
  • That’s it! Now the second “4” will be transposed with the first “4”. You can do this with any of the elements. So
    imgs = np.transpose(img_list, (3, 2, 1, 0))
  • Reverses everything!
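  • A quick way to sanity-check which axes moved is to print the shapes before and after. Here’s a small sketch using a non-square (2, 4, 8, 3) array so the swap shows up in the shape:
    import numpy as np
    
    a = np.zeros((2, 4, 8, 3))                   # (images, rows, cols, channels)
    print(np.transpose(a, (0, 1, 2, 3)).shape)   # (2, 4, 8, 3) -- identity, nothing moves
    print(np.transpose(a, (0, 2, 1, 3)).shape)   # (2, 8, 4, 3) -- rows and cols swap
    print(np.transpose(a, (3, 2, 1, 0)).shape)   # (3, 8, 4, 2) -- everything reversed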
  • Ok, so things are working, but the results are crap. Not really worrying about it for now because it’s CIFAR-10 and I always have this problem:
    ./images\airplane.jpg = [8] ship
    ./images\automobile.jpg = [0] airplane 
    ./images\bird.jpg = [4] deer
    ./images\cat.jpg = [0] airplane 
    ./images\cat2.jpg = [6] frog
    ./images\cat3.jpg = [8] ship
    ./images\deer.jpg = [8] ship
    ./images\dog.jpg = [2] bird
    ./images\horse.jpg = [8] ship
    ./images\ship.jpg = [0] airplane 
    ./images\steam-locomotive.jpg = [2] bird
    ./images\truck.jpg = [3] cat
    [8 0 4 0 6 8 8 2 8 0 2 3]

  • Meeting

Phil 4.21.20

D20

  • Add P-threshold value to the json meta values. Items lower than this should be pale colors with black text

ML Seminar

  • finish reading paper, put together some quick slides – Done! Had to turn one table into charts
  • Good discussion. It makes me think that the way to have agents play games is to feed in the game text up to the latest point in the game as the probe, and then add the question to the end of the probe, something like “<game text> + <character name>”. I have to try that with some novel text and Talk to Transformer. Here’s an example from the beginning of O. Henry’s “The Skylight Room”. The text begins: First Mrs. Parker would show you the double parlours. You would not dare to interrupt her description of their advantages and of the merits of the gentleman who had occupied them for eight years. Then you would manage to stammer forth the confession that you were neither a doctor nor a dentist. Mrs. Parker’s…
    • O Henry: manner of receiving the admission was such that you could never afterward entertain the same feeling toward your parents, who had neglected to train you up in one of the professions that fitted Mrs. Parker’s parlours.
    • TTT 1: eyes would turn on you with the ferocity of her surgeon’s. “If I see a cripple with no teeth, I get him to be my apprentice, and he stays a month. I put him to work in a laundry. I give him a salary of seven hundred pounds a year, with a board and everything to live upon.”
    • TTT 2: memory is rather dim, and she was not accustomed to be questioned directly; and she was anxious to see that you could remember as little as possible. You felt sure that by telling a story from childhood that had so often caught your fancy, you would escape this forced duty. So you went in, decided to visit the fair each evening, and spent an hour or two putting it off.
    • TTT 3: vicarage would be a long, square room overlooking a grassy acre. Not less than four hundred acres were required to cover the floor of the park with his striking landscape. The grounds were monotonous, both in their beauty and in their repetition. To name a single flower would be so pedestrian an exercise that I am surprised that any of my country neighbours ever took that course. 
  • So there is something there, but working out the right probe is still an issue.

GOES

  • CNNs
  • The frontier of simulation-based inference
    • Many domains of science have developed complex simulations to describe phenomena of interest. While these simulations provide high-fidelity models, they are poorly suited for inference and lead to challenging inverse problems. We review the rapidly developing field of simulation-based inference and identify the forces giving new momentum to the field. Finally, we describe how the frontier is expanding so that a broad audience can appreciate the profound change these developments may have on science.

Phil 4.20.20

GOES

  • Reading the Distill article on Gaussian processes (highlighted page here)
  • Copy over neural-tangents code from notebook to IDE
  • Working on regression
  • Ran into a problem with Tensorboard
    Traceback (most recent call last):
      File "d:\program files\python37\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "d:\program files\python37\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "D:\Program Files\Python37\Scripts\tensorboard.exe\__main__.py", line 7, in 
      File "d:\program files\python37\lib\site-packages\tensorboard\main.py", line 75, in run_main
        app.run(tensorboard.main, flags_parser=tensorboard.configure)
      File "d:\program files\python37\lib\site-packages\absl\app.py", line 299, in run
        _run_main(main, args)
      File "d:\program files\python37\lib\site-packages\absl\app.py", line 250, in _run_main
        sys.exit(main(argv))
      File "d:\program files\python37\lib\site-packages\tensorboard\program.py", line 289, in main
        return runner(self.flags) or 0
      File "d:\program files\python37\lib\site-packages\tensorboard\program.py", line 305, in _run_serve_subcommand
        server = self._make_server()
      File "d:\program files\python37\lib\site-packages\tensorboard\program.py", line 409, in _make_server
        self.flags, self.plugin_loaders, self.assets_zip_provider
      File "d:\program files\python37\lib\site-packages\tensorboard\backend\application.py", line 183, in standard_tensorboard_wsgi
        flags, plugin_loaders, data_provider, assets_zip_provider, multiplexer
      File "d:\program files\python37\lib\site-packages\tensorboard\backend\application.py", line 272, in TensorBoardWSGIApp
        tbplugins, flags.path_prefix, data_provider, experimental_plugins
      File "d:\program files\python37\lib\site-packages\tensorboard\backend\application.py", line 345, in __init__
        "Duplicate plugins for name %s" % plugin.plugin_name
    ValueError: Duplicate plugins for name projector
  • After poking around a bit online with the “Duplicate plugins for name %s” % plugin.plugin_name / ValueError: Duplicate plugins for name projector error, I found this diagnostic, which basically asked me to reinstall everything*. That didn’t work, so I went into Python37\Lib\site-packages and deleted the duplicates by hand. Tensorboard now runs, but now I need to upgrade CUDA so that I have cudart64_101.dll
    • Installed the minimum set of items from the Nvidia Package Launcher (cuda_10.1.105_418.96_win10.exe)
    • Installed the cuDNN drivers from here: https://developer.nvidia.com/rdp/cudnn-download
    • The regular (e.g. MNIST) demos work, but when I tried the distribution code I got this error: tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op ‘NcclAllReduce’. It turns out that there are only two viable MirroredStrategy cross-device ops for Windows, and the default is not one of them. These are the valid calls:
      distribution = tf.distribute.MirroredStrategy(cross_device_ops=tf.distribute.ReductionToOneDevice())
      distribution = tf.distribute.MirroredStrategy(cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
    • And this call is not valid on Windows:
      # distribution = tf.distribute.MirroredStrategy(cross_device_ops=tf.distribute.NcclAllReduce()) # <-- not valid for Windows
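    • For context, the strategy only applies to things built inside its scope. A minimal sketch (not the book’s code; the model, data, and sizes are placeholders):
      import tensorflow as tf
      
      distribution = tf.distribute.MirroredStrategy(
          cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
      with distribution.scope():
          # anything created in here (model, optimizer) is mirrored across the GPUs
          model = tf.keras.Sequential([
              tf.keras.layers.Dense(64, activation='relu', input_shape=(784,)),
              tf.keras.layers.Dense(10, activation='softmax')])
          model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy')
      model.fit(x_train, y_train, epochs=5, batch_size=128)  # x_train / y_train assumed to exist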
  • Funny thing. After reinstalling and getting everything to work, I tried the diagnostic again. It seems it always says to reinstall everything
  • And Tensorboard is working! Here’s the call that puts data in the directory:
    linear_est = tf.estimator.LinearRegressor(feature_columns=feature_columns, model_dir = 'logs/boston/')
  • And when launched on the command line pointing at the same directory:
    D:\Development\Tutorials\Deep Learning with TensorFlow 2 and Keras\Chapter 3>tensorboard --logdir=.\logs\boston
    2020-04-20 11:36:42.999208: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
    W0420 11:36:46.005735 18544 plugin_event_accumulator.py:300] Found more than one graph event per run, or there was a metagraph containing a graph_def, as well as one or more graph events.  Overwriting the graph with the newest event.
    W0420 11:36:46.006743 18544 plugin_event_accumulator.py:312] Found more than one metagraph event per run. Overwriting the metagraph with the newest event.
    Serving TensorBoard on localhost; to expose to the network, use a proxy or pass --bind_all
    TensorBoard 2.1.1 at http://localhost:6006/ (Press CTRL+C to quit)
  • I got this! [screenshot: tensorboard]
  • Of course, we’re not done yet. When attempting to use the Keras callback, I get the following error: tensorflow.python.eager.profiler.ProfilerNotRunningError: Cannot stop profiling. No profiler is running. It turns out that you have to specify the log folder like this
      • command line:
        tensorboard --logdir=.\logs
      • in code:
        logpath = '.\\logs'
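      • and the callback has to point at that same folder; a minimal sketch tying the two together (model and data are placeholders):
        logpath = '.\\logs'
        callbacks = [tf.keras.callbacks.TensorBoard(log_dir=logpath, profile_batch='500,510')]
        model.fit(x_train, y_train, epochs=10, callbacks=callbacks)  # x_train / y_train assumed to exist
        # launched separately with: tensorboard --logdir=.\logs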

  • That seems to be working! [screenshot: RunningTBNN]
  • Finished regression chapter

ASRC

  • Submitted RFI response for review

ACSOS

  • Got Antonio’s comments back

D20

  • Need to work on the math to find second bumps
    • If the rate has been < x% (maybe 2.5%), calculate an offset that leaves a value of 100 for each day. When the rate jumps by more than y% (e.g. going from 100 to 120 is a 20% jump), freeze that number until the rate settles down again and repeat the process
    • Change the number of samples to be the last x days
  • Work with Zach to get maps up?

ML seminar

Phil 4.19.20

This is interesting: Online Town

  • Online Town is a video-calling space that lets multiple people hold separate conversations in parallel. It lets you walk in, out and around those conversations just as easily as you would in real life.

More JAX and infinite-width networks. Get the code from the notebook and get it working in the IDE

Phil 4.18.20

Cross-Platform State Propaganda: Russian Trolls on Twitter and YouTube during the 2016 U.S. Presidential Election

  • This paper investigates online propaganda strategies of the Internet Research Agency (IRA)—Russian “trolls”—during the 2016 U.S. presidential election. We assess claims that the IRA sought either to (1) support Donald Trump or (2) sow discord among the U.S. public by analyzing hyperlinks contained in 108,781 IRA tweets. Our results show that although IRA accounts promoted links to both sides of the ideological spectrum, “conservative” trolls were more active than “liberal” ones. The IRA also shared content across social media platforms, particularly YouTube—the second-most linked destination among IRA tweets. Although overall news content shared by trolls leaned moderate to conservative, we find troll accounts on both sides of the ideological spectrum, and these accounts maintain their political alignment. Links to YouTube videos were decidedly conservative, however. While mixed, this evidence is consistent with the IRA’s supporting the Republican campaign, but the IRA’s strategy was multifaceted, with an ideological division of labor among accounts. We contextualize these results as consistent with a pre-propaganda strategy. This work demonstrates the need to view political communication in the context of the broader media ecology, as governments exploit the interconnected information ecosystem to pursue covert propaganda strategies

JAX Paper

D20

  • Get centroids working – done!
    • Fixed names
    • For each country
  • Work on a “score” that looks at countries with larger(?) populations whose projected days-to-zero is less than 15. Do a distribution and then score

ML Seminar

  • Started to look at the neural tangents library.
  • Installed
  • Did a first pass through the Colab notebook. Need to put this in my IDE

Phil 4.16.20

Fix siding!

SageMath. More on SageTeX here

D20

  • Playing around with something to indicate the linear fit to the data. Trying P value
  • Updated UI code so that the P value will display on the next build
  • Hopefully we try the world map code today?

GOES

IMDB_embedding

  • Learning more about multiple inputs to embeddings, and had to get keras.utils.plot_model working, which failed with this error: ImportError: Failed to import pydot. You must install pydot and graphviz for `pydotprint` to work. So I pip installed both and had the same problem.
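  • For reference, the call I was trying to get working looks like this (hedged sketch; the file name is a placeholder and ‘model’ is whatever multi-input model is being built):
    from tensorflow.keras.utils import plot_model
    
    # writes a diagram of the model graph; needs both the pydot package and
    # the Graphviz binaries available on the PATH
    plot_model(model, to_file='imdb_embedding_model.png', show_shapes=True)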
  • Had problems running the distribution samples. Upgraded tf to version 2.1. No problems and better performance
  • Finished chapter 2

ACSOS

  • Struggled with picture placement. Moving on.
  • Finished first pass. I need to add more ABM text, but I’m down to 10 pages plus references!

Multi-input and multi-output models

  • Here’s a good use case for the functional API: models with multiple inputs and outputs. The functional API makes it easy to manipulate a large number of intertwined datastreams. Let’s consider the following model. We seek to predict how many retweets and likes a news headline will receive on Twitter. The main input to the model will be the headline itself, as a sequence of words, but to spice things up, our model will also have an auxiliary input, receiving extra data such as the time of day when the headline was posted, etc. The model will also be supervised via two loss functions. Using the main loss function earlier in a model is a good regularization mechanism for deep models.
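  • A minimal sketch of that headline model using the functional API (the layer sizes and names are mine, not the docs’):
    from tensorflow.keras import layers, Model
    
    # main input: the headline as a sequence of word ids
    main_input = layers.Input(shape=(100,), dtype='int32', name='main_input')
    x = layers.Embedding(input_dim=10000, output_dim=64)(main_input)
    lstm_out = layers.LSTM(32)(x)
    
    # auxiliary output supervised directly on the LSTM features (the regularizing loss)
    aux_output = layers.Dense(1, activation='sigmoid', name='aux_output')(lstm_out)
    
    # auxiliary input: extra data such as the time of day the headline was posted
    aux_input = layers.Input(shape=(5,), name='aux_input')
    x = layers.concatenate([lstm_out, aux_input])
    x = layers.Dense(64, activation='relu')(x)
    main_output = layers.Dense(1, activation='sigmoid', name='main_output')(x)
    
    model = Model(inputs=[main_input, aux_input], outputs=[main_output, aux_output])
    model.compile(optimizer='rmsprop',
                  loss={'main_output': 'binary_crossentropy', 'aux_output': 'binary_crossentropy'},
                  loss_weights={'main_output': 1.0, 'aux_output': 0.2})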


Phil 4.15.20

Fix siding from wind!

D20

  • Talked to Aaron about taking a derivative of the regression slope to see what it looks like. There may be common features in the pattern of rates, or of the slopes of the regressions changing over time
  • Still worried about countries that don’t report well. I’d like to be able to use rates from neighboring countries as some kind of check
  • Got the first pass on a world map json file done
  • Spread of SARS-CoV-2 in the Icelandic Population
    • As of April 4, a total of 1221 of 9199 persons (13.3%) who were recruited for targeted testing had positive results for infection with SARS-CoV-2. Of those tested in the general population, 87 (0.8%) in the open-invitation screening and 13 (0.6%) in the random-population screening tested positive for the virus. In total, 6% of the population was screened. Most persons in the targeted-testing group who received positive tests early in the study had recently traveled internationally, in contrast to those who tested positive later in the study. Children under 10 years of age were less likely to receive a positive result than were persons 10 years of age or older, with percentages of 6.7% and 13.7%, respectively, for targeted testing; in the population screening, no child under 10 years of age had a positive result, as compared with 0.8% of those 10 years of age or older. Fewer females than males received positive results both in targeted testing (11.0% vs. 16.7%) and in population screening (0.6% vs. 0.9%). The haplotypes of the sequenced SARS-CoV-2 viruses were diverse and changed over time. The percentage of infected participants that was determined through population screening remained stable for the 20-day duration of screening.

ACSOS

  • Finished first pass of the lit review. Now at 13 pages

GOES

  • Start looking at GANs. Also work on fixing Optevolver for multiple CPUs
    • Starting Deep Learning with TensorFlow 2 and Keras: Regression, ConvNets, GANs, RNNs, NLP, and more with TensorFlow 2 and the Keras API, 2nd Edition. Chapter six is GANs, which is what I’m interested in, but I’m ok with getting some review in first.
    • Working on embeddings with the IMDB sentiment analysis project. It’s the first time I’ve seen an embedding layer which is 1) Cool, and 2) Something to play with. I’d noticed when I was working with Word2Vec for my research that embeddings didn’t seem to change shape much as a function of the number of dimensions. It seemed like a lot of information was being kept at very low dimensions, like three, rather than the more accepted 128 or so:

[chart: place-embeddings]

    • Well, this example gave me an opportunity to test that with some accuracy numbers. Here’s what I get:

[chart: EmbeddingDimensions]

    • That is super interesting. It basically means that model building, testing, and visualization can happen at low embedding dimensions. That makes everything faster, with roughly a 10% accuracy improvement still available from increasing the dimension as one of the last steps.
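    • A hedged sketch of the kind of sweep that produced those numbers (the architecture, epochs, and dimension list are placeholders, not the book’s exact setup):
      import tensorflow as tf
      from tensorflow.keras.datasets import imdb
      from tensorflow.keras.preprocessing.sequence import pad_sequences
      
      max_words, max_len = 10000, 200
      (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_words)
      x_train = pad_sequences(x_train, maxlen=max_len)
      x_test = pad_sequences(x_test, maxlen=max_len)
      
      for dim in [2, 3, 8, 32, 128]:   # embedding dimensions to compare
          model = tf.keras.Sequential([
              tf.keras.layers.Embedding(max_words, dim, input_length=max_len),
              tf.keras.layers.GlobalAveragePooling1D(),
              tf.keras.layers.Dense(1, activation='sigmoid')])
          model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
          model.fit(x_train, y_train, epochs=5, batch_size=128, verbose=0)
          _, acc = model.evaluate(x_test, y_test, verbose=0)
          print("dim = {}: test accuracy = {:.3f}".format(dim, acc))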
    • Continuing with book.
  • Wrote up a response to Mike M’s questions about the white paper. Probably pointless, and has pretty much wasted my afternoon. And it was pointless! Now what?
  • Slides for John?

Phil 4.14.20

Fix siding from wind!

D20

  • I want to try taking a second derivative of the rates to see what it looks like. There may be common features in the pattern of rates, or of the slopes of the regressions changing over time
  • I’m also getting worried about countries that don’t report well. I’d like to be able to use rates from neighboring countries as some kind of check
  • Work with Zach on cleanup and map integration?

COVID Twitter

  • Finished ingesting the new data. It took almost 24 hours

ACSOS

  • Finished first pass of the introduction. Still at 14 pages

GOES

Phil 4.13.20

That was a very solitary weekend. I fixed some bikes, planted some herbs and vegetables, cleaned house, and procrastinated about pretty much everything else. I pinged Don and Wayne about D20 ideas, and got a ping for more info from Don, then silence. Everyone seems to be wrapped up tight in their worlds.

And for good reason. Maryland is looking grim:

[chart: Maryland_4_13_2020]

D20

  • Worked with Zach to get states in. It’s working!

[screenshot: D20USA]

COVID Twitter

  • Went looking for new data to ingest, but didn’t see anything new? It wasn’t there yet. Ingesting now
  • 1:30 Meeting

ACSOS

  • Reading through paper and pulling out all the parts from Simple Trick
  • Ping Antonio to let him know I’m working

GOES

  • Get absolute queries working in InfluxDB2. It took some looking, but here’s an example from the API reference on range(). Done!
    • Everything is in GMT. As usual, the parser is picky about the format, which is ISO-8601:
      range_args = "start:2020-04-13T13:30:00Z, stop:2020-04-13T13:30:10Z"
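    • And a hedged sketch of how that range string gets used in a Flux query from the Python client (the URL, token, org, bucket, and measurement names are all placeholders):
      from influxdb_client import InfluxDBClient
      
      client = InfluxDBClient(url="http://localhost:9999", token="my-token", org="my-org")
      range_args = "start: 2020-04-13T13:30:00Z, stop: 2020-04-13T13:30:10Z"
      # absolute (GMT, ISO-8601) range instead of a relative one like -5m
      query = 'from(bucket: "waveforms") |> range({}) |> filter(fn: (r) => r._measurement == "sin_wave")'.format(range_args)
      tables = client.query_api().query(query)
      for table in tables:
          for record in table.records:
              print(record.get_time(), record.get_value())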
  • Start on TF2/GANs for converting square waves to noisy sin waves of varying frequencies using saved InfluxDB data
    • First, pull a square, sin, and noisy sin and plot using matplotlib so we know we have good vectors. Success!

[chart: Waveforms]

Fika

Phil 3.19.20

I found the data sources for the dashboard in the previous few posts. Yes, everything still looks grim:

So rather than working on my dissertation, I thought I’d take a look at the data for the last 9(!) days in Excel:

This is for the USA. The data is sorted based on the cumulative total of new cases confirmed. If you look at the chart on the right, everything is in line with a pandemic in exponential growth. However, that’s not the whole story.

I like to color code the cells in my spreadsheets because colors help me visualize patterns in the data that I wouldn’t otherwise see. And one of the things that really stands out here is the red rows with one yellow cell on the left. These are all cases where the rate of confirmed new cases dropped to zero overnight. And they’re not near each other. They are in WA, NY, and CA. Is this a measuring problem or is something going right in these places?

Maybe we’ll find out more in the next few days. Now that I know how to get the data, I can do some of my own visualizations that look for outliers. I can also train up some sequence-to-sequence ML models to extrapolate trends.

One more thing. I had heard earlier (Twitter, I think?) that Vietnam was handling the crisis well. And it looks like it was, but things are back to being bad:

Ok, back to work

8:00 – 4:30 ASRC PhD, GOES

  • Working on the process section – done!
  • Working on the TACJ bookend – done! Made a new figure:
  • Submitted to Wayne. Here’s hoping it doesn’t fall through the cracks
  • Neuroevolution of Self-Interpretable Agents
    • Inattentional blindness is the psychological phenomenon that causes one to miss things in plain sight. It is a consequence of the selective attention in perception that lets us remain focused on important parts of our world without distraction from irrelevant details. Motivated by selective attention, we study the properties of artificial agents that perceive the world through the lens of a self-attention bottleneck. By constraining access to only a small fraction of the visual input, we show that their policies are directly interpretable in pixel space. We find neuroevolution ideal for training self-attention architectures for vision-based reinforcement learning tasks, allowing us to incorporate modules that can include discrete, non-differentiable operations which are useful for our agent. We argue that self-attention has similar properties as indirect encoding, in the sense that large implicit weight matrices are generated from a small number of key-query parameters, thus enabling our agent to solve challenging vision based tasks with at least 1000x fewer parameters than existing methods. Since our agent attends to only task-critical visual hints, they are able to generalize to environments where task irrelevant elements are modified while conventional methods fail.

Phil 2.28.20

7:00 – ASRC GOES

AirSim is a simulator for drones, cars and more, built on Unreal Engine (we now also have an experimental Unity release). It is open-source, cross platform, and supports hardware-in-loop with popular flight controllers such as PX4 for physically and visually realistic simulations. It is developed as an Unreal plugin that can simply be dropped into any Unreal environment. Similarly, we have an experimental release for a Unity plugin.

  • Added notes for the dissertation revisions
  • Working on the GVSETS paper – meeting at 3:00. Got everything into SVN and coordinated across machines.
  • Got Deep Learning with TensorFlow 2 and Keras to start boning up on before the conference
  • Need to set some time aside for dissertation revisions
  • Keyword search for Shakespeare
  • Still need to fix the race conditions on file write and directory change
  • IRAD meeting. Signed up for Sim as a service, and exploit spaces white paper. Got John to pay for an Overleaf account

Phil 1.17.20

An ant colony has memories that its individual members don’t have

  • Like a brain, an ant colony operates without central control. Each is a set of interacting individuals, either neurons or ants, using simple chemical interactions that in the aggregate generate their behaviour. People use their brains to remember. Can ant colonies do that? 

7:00 – ASRC

  •  Dissertation
    • More edits
    • Changed all the overviews so that they also reference the section by name. It reads better now, I think
    • Meeting with Thom
  • GPT-2 Agents
  • GSAW Slide deck

Phil 1.15.20

I got invited to the TF Dev conference!

The HKS Misinformation Review is a new format of peer-reviewed, scholarly publication. Content is produced and “fast-reviewed” by misinformation scientists and scholars, released under open access, and geared towards emphasizing real-world implications. All content is targeted towards a specialized audience of researchers, journalists, fact-checkers, educators, policy makers, and other practitioners working in the information, media, and platform landscape.

  • For the essays, a length of 1,500 to 3,000 words (excluding footnotes and methodology appendix) is appropriate, but the HKS Misinformation Review will consider and publish longer articles. Authors of articles with more than 3,000 words should consult the journal’s editors before submission.

7:00 – ASRC GOES

  •  Dissertation
    • It looks like I fixed my LaTeX problems. I went to C:\Users\phil\AppData\Roaming\MiKTeX\2.9\tex\latex, and deleted the ifvtex folder. Re-ran, things installed, and all is better now
    • Slides
  • GOES
    • Pinged Isaac about the idea of creating scenarios that incorporate the NASA simulators
    • Meeting
  • GSAW
    • Slides
    • Speakers presenting in a plenary session are scheduled to speak for 15 minutes, with five additional minutes allowed for questions and answers from the audience
    • Our microphones work best when the antenna unit is clipped to a belt and the microphone is attached near the center of your chest.
    • We are NOT providing network capabilities such as WiFi. If you require WiFi, you are responsible for purchasing it from the hotel and ensuring that it works for the presentation.
    • Charts produced by the PC version of Microsoft PowerPoint 2013, 2016 or 365 are preferred
    • In creating your slides, note that the presentation room is large and you should consider this in your selection of larger fonts, diagram size, etc. At a minimum, a 20-point font is recommended
  • GPT-2 – Maybe do something with Aaron today?