Category Archives: Tensorflow

Phil 9.5.19

7:00 –

  • David Manheim (scholar)
    • I work on existential risk mitigation, computational modelling, and epidemiology. I spend time talking about Goodhart’s Law, and have been a #Superforecaster with the Good Judgement Project since 2012.
  • Goodhart’s law is an adage named after economist Charles Goodhart, which has been phrased by Marilyn Strathern as “When a measure becomes a target, it ceases to be a good measure.”[1] One way in which this can occur is individuals trying to anticipate the effect of a policy and then taking actions that alter its outcome.
  • Dissertation
  • Continuing TF 2.0 Keras tutorial
    • Had a weird problem where
      from tensorflow import keras

      made IntelliJ complain, even though the Python interpreter ran it fine. I then installed keras as its own package, and IJ stopped complaining. The versions check out as identical, even though I can see that there is a new keras directory in D:\Program Files\Python37\Lib\site-packages. And we know that the interpreter and the IDE are pointing to the same place:

      "D:\Program Files\Python37\python.exe" D:/Development/Sandboxes/PyBullet/src/TensorFlow/HelloKeras.py
      2019-09-05 11:30:04.694327: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
      tf.keras version = 2.2.4-tf
      keras version = 2.2.4-tf

Keras

    • This has the implication that instead of:
      from tensorflow.keras import layers

      I need to have:

      from keras import layers

      I mean, it works, but it’s weird and makes me think that something subtle may be busted…
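
One way to diagnose this kind of IDE/interpreter disagreement is to ask the interpreter directly where each module name resolves (stdlib only, no TensorFlow import needed; `module_origin` is just a helper name I'm using here):

```python
import importlib.util

def module_origin(name: str) -> str:
    """Return the file a top-level module resolves to, or a note if absent."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec and spec.origin else f"{name} not installed"

# If IntelliJ and the interpreter disagree, these paths show which
# site-packages directory each name actually resolves to.
for mod in ("keras", "tensorflow"):
    print(f"{mod}: {module_origin(mod)}")
```

Running this from both the console and the IDE's run configuration makes any path mismatch obvious.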

Phil 9.3.19 (including install directions for Tensorflow 2.0rc1 on Windows 10)

7:00 – 4:30 ASRC GOES

  • Dissertation – Working on the Orientation section, where I compare Moby Dick to Dieselgate
  • Uninstalling all previous versions of CUDA, which should hopefully allow 10 to be installed
  • Still flailing on getting TF 2.0 working. Grrrrr. Success! Added guide below
  • Spent some time discussing mapping the GPT-2 with Aaron

Installing Tensorflow 2.0rc1 to Windows 10, a temporarily accurate guide

  • Uninstall any previous version of Tensorflow (e.g. “pip uninstall tensorflow”)
  • Uninstall all your NVIDIA crap
  • Install JUST THE CUDA LIBRARIES for versions 9.0 and 10.0. You don’t need anything else

[screenshot: NVIDIA1]

[screenshot: NVIDIA2]

  • Then install the latest Nvidia graphics drivers. When you’re done, your install should look something like this (this worked on 9.3.19):

[screenshot: NVIDIA3]

Edit your system variables so that the CUDA 9 and CUDA 10 directories are on your path:

[screenshot: NVIDIA4]

One more part is needed from NVIDIA: cudnn64_7.dll

In order to download cuDNN, ensure you are registered for the NVIDIA Developer Program.

  1. Go to: NVIDIA cuDNN home page
  2. Click “Download”.
  3. Remember to accept the Terms and Conditions.
  4. Select the cuDNN version you want to install from the list. This opens up a second list of target OS installs. Select cuDNN Library for Windows 10.
  5. Extract the cuDNN archive to a directory of your choice. The important part (cudnn64_7.dll) is in the cuda\bin directory. Either add that directory to your path, or copy the dll into the Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10\bin directory

[screenshot: NVIDIA6]
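
A quick sanity check that the dll actually became visible: scan the directories on PATH for it (plain stdlib sketch; `find_on_path` is my own helper name):

```python
import os

def find_on_path(filename: str):
    """Return the first PATH directory that contains filename, else None."""
    for d in os.environ.get("PATH", "").split(os.pathsep):
        if d and os.path.isfile(os.path.join(d, filename)):
            return d
    return None

# TensorFlow 2.0 loads these by name at import time, so they must be on PATH
for dll in ("cudart64_100.dll", "cudnn64_7.dll"):
    print(dll, "->", find_on_path(dll) or "NOT FOUND on PATH")
```

If either prints NOT FOUND, the import step below will fail with the same dlerror shown in the 8.30 entry.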

Then open up a console window (cmd) as admin, and install tensorflow:

  • pip install tensorflow-gpu==2.0.0-rc1
  • verify that it works by opening the python console and typing the following:

[screenshot: NVIDIA5]

If that works, the following script should also run:

import tensorflow as tf
print("tf version = {}".format(tf.__version__))
mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)

model.evaluate(x_test, y_test)

The results should look something like:

"D:\Program Files\Python37\python.exe" D:/Development/Sandboxes/PyBullet/src/TensorFlow/HelloWorld.py
2019-09-03 15:09:56.685476: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
tf version = 2.0.0-rc0
2019-09-03 15:09:59.272748: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2019-09-03 15:09:59.372341: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531
pciBusID: 0000:01:00.0
2019-09-03 15:09:59.372616: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-09-03 15:09:59.373339: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-09-03 15:09:59.373671: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-09-03 15:09:59.376010: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531
pciBusID: 0000:01:00.0
2019-09-03 15:09:59.376291: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-09-03 15:09:59.376996: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-09-03 15:09:59.951116: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-09-03 15:09:59.951317: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0 
2019-09-03 15:09:59.951433: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N 
2019-09-03 15:09:59.952189: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9607 MB memory) -> physical GPU (device: 0, name: TITAN X (Pascal), pci bus id: 0000:01:00.0, compute capability: 6.1)
Train on 60000 samples
Epoch 1/5
2019-09-03 15:10:00.818650: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll

   32/60000 [..............................] - ETA: 17:07 - loss: 2.4198 - accuracy: 0.0938
  736/60000 [..............................] - ETA: 48s - loss: 1.7535 - accuracy: 0.4891
... per-batch progress elided ...
59552/60000 [============================>.] - ETA: 0s - loss: 0.2941 - accuracy: 0.9154
60000/60000 [==============================] - 4s 65us/sample - loss: 0.2930 - accuracy: 0.9158
... epochs pass ...
10000/1 [==========] - 1s 61us/sample - loss: 0.0394 - accuracy: 0.9778

Phil 8.30.19

7:00 – 4:00 ASRC GOES

  • Dentist!
  • Sent notes to David Lazar and Erika M-T. Still need to ping Stuart Shulman.
  • Did my part for JuryRoom (Eero Mäntyranta)
  • Dissertation – more on State
  • TF 2.0 today? (release notes)
  • Installed! Well – it didn’t blow up…
    C:\WINDOWS\system32>pip3 install tensorflow-gpu==2.0.0-rc0
    Collecting tensorflow-gpu==2.0.0-rc0
      Downloading tensorflow_gpu-2.0.0rc0-cp37-cp37m-win_amd64.whl (285.1MB)
    Collecting absl-py>=0.7.0 (from tensorflow-gpu==2.0.0-rc0)
    Collecting gast>=0.2.0 (from tensorflow-gpu==2.0.0-rc0)
    Collecting google-pasta>=0.1.6 (from tensorflow-gpu==2.0.0-rc0)
    Collecting wrapt>=1.11.1 (from tensorflow-gpu==2.0.0-rc0)
    Collecting grpcio>=1.8.6 (from tensorflow-gpu==2.0.0-rc0)
    Collecting termcolor>=1.1.0 (from tensorflow-gpu==2.0.0-rc0)
    Collecting tf-estimator-nightly<1.14.0.dev2019080602,>=1.14.0.dev2019080601 (from tensorflow-gpu==2.0.0-rc0)
    Collecting keras-preprocessing>=1.0.5 (from tensorflow-gpu==2.0.0-rc0)
    Collecting tb-nightly<1.15.0a20190807,>=1.15.0a20190806 (from tensorflow-gpu==2.0.0-rc0)
    Collecting wheel>=0.26 (from tensorflow-gpu==2.0.0-rc0)
    Requirement already satisfied: numpy<2.0,>=1.16.0 in d:\program files\python37\lib\site-packages (from tensorflow-gpu==2.0.0-rc0) (1.16.4)
    Collecting protobuf>=3.6.1 (from tensorflow-gpu==2.0.0-rc0)
    Requirement already satisfied: six>=1.10.0 in d:\program files\python37\lib\site-packages (from tensorflow-gpu==2.0.0-rc0) (1.12.0)
    Collecting opt-einsum>=2.3.2 (from tensorflow-gpu==2.0.0-rc0)
    Collecting astor>=0.6.0 (from tensorflow-gpu==2.0.0-rc0)
    Collecting keras-applications>=1.0.8 (from tensorflow-gpu==2.0.0-rc0)
    Collecting markdown>=2.6.8 (from tb-nightly<1.15.0a20190807,>=1.15.0a20190806->tensorflow-gpu==2.0.0-rc0)
    Collecting setuptools>=41.0.0 (from tb-nightly<1.15.0a20190807,>=1.15.0a20190806->tensorflow-gpu==2.0.0-rc0)
    Collecting werkzeug>=0.11.15 (from tb-nightly<1.15.0a20190807,>=1.15.0a20190806->tensorflow-gpu==2.0.0-rc0)
    Collecting h5py (from keras-applications>=1.0.8->tensorflow-gpu==2.0.0-rc0)
    Installing collected packages: absl-py, gast, google-pasta, wrapt, grpcio, termcolor, tf-estimator-nightly, keras-preprocessing, setuptools, markdown, werkzeug, wheel, protobuf, tb-nightly, opt-einsum, astor, h5py, keras-applications, tensorflow-gpu
    Successfully installed absl-py-0.8.0 astor-0.8.0 gast-0.2.2 google-pasta-0.1.7 grpcio-1.23.0 h5py-2.9.0 keras-applications-1.0.8 keras-preprocessing-1.1.0 markdown-3.1.1 opt-einsum-3.0.1 protobuf-3.9.1 setuptools-41.2.0 tb-nightly-1.15.0a20190806 tensorflow-gpu-2.0.0rc0 termcolor-1.1.0 tf-estimator-nightly-1.14.0.dev2019080601 werkzeug-0.15.5 wheel-0.33.6 wrapt-1.11.2


  • Oops: Python 3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import tensorflow as tf
    2019-08-30 10:23:30.632254: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
    >>>
  • Spent the rest of the day trying to get “import tensorflow as tf” to work

Phil 8.29.19

ASRC GOES – 7:00 – 4:00

  • Find out who I was talking to yesterday at lunch (Boynton?)
  • Contact David Lazar about RB
  • Participating as an anonymous fish in JuryRoom. Started the discussion
  • Dissertation – started the State section
  • Working on Control and sim diagrams
    • Putting this here because I keep on forgetting how to add an outline/border to an image in Illustrator:

[screenshot: OutlineAI]

  1. Place and select an image in the Illustrator document.
  2. Once selected, open the Appearance panel and, from the panel’s flyout menu, choose Add New Stroke:
  3. With the Stroke highlighted in the Appearance panel, choose Effect -> Path -> Outline Object.
  • Anyway, back to our regularly scheduled program.
  • Made a control system diagram
  • Made a control system inheritance diagram
  • Made a graphics inheritance diagram
  • Need to stick them in the ASRC Dev Pipeline document
  • Discovered JabRef: JabRef is an open source bibliography reference manager. The native file format used by JabRef is BibTeX, the standard LaTeX bibliography format. JabRef is a desktop application and runs on the Java VM (version 8), and works equally well on Windows, Linux, and Mac OS X.
  • Tomorrow we get started with TF 2.0

Phil 9.27.19

7:00 – 7:00 ASRC GOES

  • Replied to Antonio with a plan for the software paper
  • Dissertation
    • More Lit Review – finished Dimension Reduction!
  • Set up TF 2.0 – Nope
  • Started writeup of simulator and sent a status to Erik
  • Waikato meeting
    • First room is running
    • Asked Chris to change “Maximum Anonymity” to “Improved Anonymity”
    • Some discussion about experimental design

Phil 8.26.19

7:00 – ASRC GOES

  • Dissertation – working my way through the lit review section
  • Antonio sent a note about Software Impacts, which provides a scholarly reference to software that has been used to address a research challenge. The journal disseminates impactful and re-usable scientific software through Original Software Publications (OSP) which describe the application of the software to research and the published outputs.
    • Submissions to Software Impacts consist of two major parts:
      • A short descriptive paper of about three pages including an Impact Overview and references to publications where the software has been used
      • An open source software distribution with support material.
    • So, to get things to fit on GitHub, I worked on getting GPM to work with a smaller library – done
  • Discussions with Aaron about using TF 2.0 xformer on GOES sim data
  • Security training – an hour or so
  • Copied the Waikato JuryRoom proposal to PolarizationGame folder

Phil 5.10.19

7:00 – 4:00 ASRC NASA GOES

  • Tensorflow Graphics? TF-Graphics
  • An End-to-End AutoML Solution for Tabular Data at KaggleDays
  • More dissertation writing. Added a bit on The Sorcerer’s Apprentice and finished my first pass at Moby-Dick
  • Add pickling to MatrixScalar – done!
    import pickle

    def save_class(the_class, filename:str):
        print("save_class")
        # It's important to use binary mode. 'wb' (not 'ab'), so a
        # re-save overwrites instead of appending a second pickle
        # that restore_class would never read
        with open(filename, 'wb') as dbfile:
            pickle.dump(the_class, dbfile)


    def restore_class(filename:str) -> MatrixScalar:
        print("restore_class")
        # binary mode is also required for reading
        with open(filename, 'rb') as dbfile:
            return pickle.load(dbfile)
  • Added flag to allow unlimited input buffer cols. It automatically sizes to the max if no arg for input_size
  • NOTE: Add a “notes” dict that is added to the setup tab for run information
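
That notes dict might look something like this (a sketch only; the real setup tab is a spreadsheet, and stdlib JSON stands in for it here):

```python
import json

def write_setup(filename: str, params: dict, notes: dict):
    """Persist run parameters plus a free-form notes dict beside them."""
    with open(filename, "w", encoding="utf-8") as f:
        json.dump({"params": params, "notes": notes}, f, indent=2)

write_setup("setup.json",
            {"input_size": None},  # None -> auto-size to the max, per the flag above
            {"purpose": "smoke test", "run_info": "unlimited input buffer cols"})
```

The point is just that run information travels with the run output instead of living in my head.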


Phil 4.19.19

8:00 – 4:00 ASRC TL

  • Updating working copies of the paper based on the discussion with Aaron M last night.
  • Based on the diagrams of the weights that I could make with the MNIST model, I think I want to try to make a layer neuron/weight visualizer. This one is very pretty
  • Need to start on framework for data generation and analysis with Zach this morning
  • Got Flask working (see above for rant on how).
  • Flask-RESTful provides an extension to Flask for building REST APIs. Flask-RESTful was initially developed as an internal project at Twilio, built to power their public and internal APIs.
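
Since Flask is now working, the REST surface could be sketched like this (endpoint name and payload are hypothetical, not the actual framework's API):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/status")
def status():
    # A trivial JSON endpoint of the kind the data-generation framework could expose
    return jsonify(ok=True)

if __name__ == "__main__":
    app.run(port=5000)
```

Flask-RESTful's Resource classes layer on top of exactly this kind of app object.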

Phil 3.14.19

ASRC AIMS 7:00 – 4:00, PhD ML, 4:30 –

Phil 3.11.19

7:00 – 10:00 ASRC PhD. Fun, long day.

Phil 3.1.19

7:00 – ASRC

  • Got accepted to the TF dev conference. The flight out is expensive… Sent Eric V. a note asking for permission to go, but bought tix anyway given the short fuse
  • Downloaded the full slack data
  • Working on white paper. The single file was getting unwieldy, so I broke it up
  • Found Speeding up Parliamentary Decision Making for Cyber Counter-Attack, which argues for the possibility of pre-authorizing automated response
  • Up to six pages. In the middle of the cyberdefense section

Phil 2.22.19

7:00 – 4:00 ASRC

  • Running Ellen’s dungeon tonight
  • Wondering what to do next. Look at text analytics? List is in this post.
  • But before we do that, I need to extract from the DB posts as text. And now I have something to do!
    • Sheesh – tried to update the database and had all kinds of weird problems. I wound up re-ingesting everything from the Slack files. This seems to work fine, so I exported that to replace the .sql file that may have been causing all the trouble.
  • Here’s a thing using the JAX library, which I’m becoming interested in: Meta-Learning in 50 Lines of JAX
    • The focus of Machine Learning (ML) is to imbue computers with the ability to learn from data, so that they may accomplish tasks that humans have difficulty expressing in pure code. However, what most ML researchers call “learning” right now is but a very small subset of the vast range of behavioral adaptability encountered in biological life! Deep Learning models are powerful, but require a large amount of data and many iterations of stochastic gradient descent (SGD). This learning procedure is time-consuming and once a deep model is trained, its behavior is fairly rigid; at deployment time, one cannot really change the behavior of the system (e.g. correcting mistakes) without an expensive retraining process. Can we build systems that can learn faster, and with less data?
  • Meta-Learning: Learning to Learn Fast
    • A good machine learning model often requires training with a large number of samples. Humans, in contrast, learn new concepts and skills much faster and more efficiently. Kids who have seen cats and birds only a few times can quickly tell them apart. People who know how to ride a bike are likely to discover the way to ride a motorcycle fast with little or even no demonstration. Is it possible to design a machine learning model with similar properties — learning new concepts and skills fast with a few training examples? That’s essentially what meta-learning aims to solve.
  • Meta learning is everywhere! Learning to Generalize from Sparse and Underspecified Rewards
    • In “Learning to Generalize from Sparse and Underspecified Rewards“, we address the issue of underspecified rewards by developing Meta Reward Learning (MeRL), which provides more refined feedback to the agent by optimizing an auxiliary reward function. MeRL is combined with a memory buffer of successful trajectories collected using a novel exploration strategy to learn from sparse rewards.
  • Lingvo: A TensorFlow Framework for Sequence Modeling
    • While Lingvo started out with a focus on NLP, it is inherently very flexible, and models for tasks such as image segmentation and point cloud classification have been successfully implemented using the framework. Distillation, GANs, and multi-task models are also supported. At the same time, the framework does not compromise on speed, and features an optimized input pipeline and fast distributed training. Finally, Lingvo was put together with an eye towards easy productionisation, and there is even a well-defined path towards porting models for mobile inference.
  • Working on white paper. Still reading Command Dysfunction and making notes. I think I’ll use the idea of C&C combat as the framing device of the paper. Started to write more bits
  • What, if anything, can the Pentagon learn from this war simulator?
    • It is August 2010, and Operation Glacier Mantis is struggling in the fictional Saffron Valley. Coalition forces moved into the valley nine years ago, but peace negotiations are breaking down after a series of airstrikes result in civilian casualties. Within a few months, the Coalition abandons Saffron Valley. Corruption sapped the reputation of the operation. Troops are called away to a different war. Operation Glacier Mantis ends in total defeat.
  • Created a post for Command Dysfunction here. Finished.
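
The "extract posts from the DB as text" step above might start like this (sqlite3 stands in for the real database here, and the table and column names are hypothetical, not the actual Slack DB schema):

```python
import sqlite3

def dump_posts_as_text(db_path: str, out_path: str):
    """Write one post per line from a hypothetical 'posts' table."""
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute("SELECT text FROM posts ORDER BY rowid")
        with open(out_path, "w", encoding="utf-8") as f:
            for (text,) in rows:
                # Flatten newlines so each line of the file is one post
                f.write(text.replace("\n", " ") + "\n")
    finally:
        con.close()
```

One-post-per-line output feeds straight into most text-analytics tooling.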

Phil 1.11.18

7:00 – 5:00 ASRC NASA

  • The Philosopher Redefining Equality (New Yorker profile of Elizabeth Anderson)
    • She takes great pleasure in arranging information in useful forms; if she weren’t a philosopher, she thinks, she’d like to be a mapmaker, or a curator of archeological displays in museums.
  • Trolling the U.S.: Q&A on Russian Interference in the 2016 Presidential Election
    • Ryan Boyd and researchers from Carnegie Mellon University and Microsoft Research analyzed Facebook ads and Twitter troll accounts run by Russia’s Internet Research Agency (IRA) to determine how people with differing political ideologies were targeted and pitted against each other through this “largely unsophisticated and low-budget” operation. To learn more about the study and its findings, we asked Boyd the following questions:
    • Boyd is an interesting guy. Here’s his twitter profile: Social/Personality Psychologist, Computational Social Scientist, and Occasional Software Developer.
  • Applied for an invite to the TF Dev summit
  • Work on text analytics?
    • Extract data by groups, group, user and start looking at cross-correlations
      • Started modifying post_analyzer.py
    • PHP “story” generator?
    • Updating IntelliJ
  • More DB work
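
The cross-correlation step mentioned above could start from a plain Pearson correlation helper before anything fancier (a sketch, not post_analyzer.py's actual code):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Perfectly linear series correlate at 1.0; reversed ones at -1.0
print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0
print(pearson([1, 2, 3, 4], [4, 3, 2, 1]))   # -1.0
```

Per-user or per-group activity counts over time would be the series to feed in.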