Category Archives: Machine Learning

Phil 9.12.19

7:00 – 4:30 ASRC GOES

  • FractalNet: Ultra-Deep Neural Networks without Residuals
    • We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer.
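    • As a rough illustration of the expansion rule described above (my own sketch using tf.keras, not the authors' code; the filter count and the averaging join are just placeholders):

      import tensorflow as tf
      from tensorflow.keras import layers

      def fractal_block(x, depth, filters):
          # Base case: a single conv + nonlinearity, with no pass-through connections.
          if depth == 1:
              return layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
          # Expansion rule: join a short path (one conv) with a long path made of two
          # stacked fractal blocks of depth - 1; here the join is an element-wise average.
          short_path = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
          long_path = fractal_block(fractal_block(x, depth - 1, filters), depth - 1, filters)
          return layers.Average()([short_path, long_path])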
  • Structural diversity in social contagion
    • The concept of contagion has steadily expanded from its original grounding in epidemic disease to describe a vast array of processes that spread across networks, notably social phenomena such as fads, political opinions, the adoption of new technologies, and financial decisions. Traditional models of social contagion have been based on physical analogies with biological contagion, in which the probability that an individual is affected by the contagion grows monotonically with the size of his or her “contact neighborhood”—the number of affected individuals with whom he or she is in contact. Whereas this contact neighborhood hypothesis has formed the underpinning of essentially all current models, it has been challenging to evaluate it due to the difficulty in obtaining detailed data on individual network neighborhoods during the course of a large-scale contagion process. Here we study this question by analyzing the growth of Facebook, a rare example of a social process with genuinely global adoption. We find that the probability of contagion is tightly controlled by the number of connected components in an individual’s contact neighborhood, rather than by the actual size of the neighborhood. Surprisingly, once this “structural diversity” is controlled for, the size of the contact neighborhood is in fact generally a negative predictor of contagion. More broadly, our analysis shows how data at the size and resolution of the Facebook network make possible the identification of subtle structural signals that go undetected at smaller scales yet hold pivotal predictive roles for the outcomes of social processes.
    • Add this to the discussion section – done
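    • A toy illustration of the “structural diversity” measure (my sketch with networkx; the graph and neighborhood are made up, not from the paper):

      import networkx as nx

      # Toy contact network: nodes are users, edges are friendships.
      g = nx.Graph([("a", "b"), ("b", "c"), ("d", "e"), ("f", "g")])

      # An individual's contact neighborhood: the already-affected users they are connected to.
      neighborhood = {"a", "b", "c", "d", "e", "f"}

      # Structural diversity = number of connected components in the subgraph induced
      # by the neighborhood (3 here), as opposed to the neighborhood's raw size (6 here).
      sub = g.subgraph(neighborhood)
      print(nx.number_connected_components(sub), len(neighborhood))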
  • Dissertation
    • Started on the theory section, then realized the background section didn’t set it up well, so I worked on the background instead. I added a good deal of material on how individuals and groups interact with the environment differently, and how social interaction amplifies individual contributions through networking.
  • Quick meetings with Don and Aaron
  • Time prediction (sequence to sequence) with Keras perceptrons
  • This was surprisingly straightforward
    • There was some initial trickiness in getting the IDE to work with the TF2.0 RC0 package:
      import tensorflow as tf
      from tensorflow import keras
      from tensorflow_core.python.keras import layers

      The first coding step is to generate the data. In this case I’m building a numpy matrix with ten variations on math.sin(), using our timeseriesML utils code. A loop creates a new frequency for each function and sends it off to get back a pandas DataFrame that in this case has 10 rows of 200 samples (2 * sequence_length) each; each row is later split into a 100-sample input half and a 100-sample output half. First, we set the global sequence_length:

      sequence_length = 100

      then we create the function that will build and concatenate our numpy matrices:

      def generate_train_test(num_functions, rows_per_function, noise=0.1) -> (np.ndarray, np.ndarray, np.ndarray):
          ff = FF.float_functions(rows_per_function, 2*sequence_length)
          npa = None
          for i in range(num_functions):
              mathstr = "math.sin(xx*{})".format(0.005*(i+1))
              #mathstr = "math.sin(xx)"
              df2 = ff.generateDataFrame(mathstr, noise=noise)
              npa2 = df2.to_numpy()
              if npa is None:
                  npa = npa2
              else:
                  ta = np.append(npa, npa2, axis=0)
                  npa = ta
      
          split = np.hsplit(npa, 2)
          return npa, split[0], split[1]

      Now, we build the model. We’re using keras from the TF 2.0 RC0 build, so things look slightly different:

      model = tf.keras.Sequential()
      # Add a densely-connected layer with sequence_length (100) units to the model:
      model.add(layers.Dense(sequence_length, activation='relu', input_shape=(sequence_length,)))
      # Add another:
      model.add(layers.Dense(200, activation='relu'))
      # Add a linear output layer with sequence_length units:
      model.add(layers.Dense(sequence_length))
      
      loss_func = tf.keras.losses.MeanSquaredError()
      opt_func = tf.keras.optimizers.Adam(0.01)
      model.compile(optimizer= opt_func,
                    loss=loss_func,
                    metrics=['accuracy'])

      We can now fit the model to the generated data. Note that train_mat holds the first half of each sequence (the model input) and test_mat holds the second half (the target):

      full_mat, train_mat, test_mat = generate_train_test(10, 10)
      
      model.fit(train_mat, test_mat, epochs=10, batch_size=2)

      There is noise in the data (and accuracy is not a very meaningful metric for a regression like this), so the reported accuracy is not exact, but the loss is low. We can see this better in the plots above, which were created using this function:

      def plot_mats(mat:np.ndarray, cluster_size:int, title:str, fig_num:int):
          plt.figure(fig_num)
      
          i = 0
          for row in mat:
              cstr = "C{}".format(int(i/cluster_size))
              plt.plot(row, color=cstr)
              i += 1
      
          plt.title(title)

      This function is called just before the program completes:

      if show_plots:
          plot_mats(full_mat, 10, "Full Data", 1)
          plot_mats(train_mat, 10, "Input Vector", 2)
          plot_mats(test_mat, 10, "Output Vector", 3)
          plot_mats(predict_mat, 10, "Predict", 4)
          plt.show()
    • That’s it! Full listing below:
import tensorflow as tf
from tensorflow import keras
from tensorflow_core.python.keras import layers
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import timeseriesML.generators.float_functions as FF


sequence_length = 100

def generate_train_test(num_functions, rows_per_function, noise=0.1) -> (np.ndarray, np.ndarray, np.ndarray):
    ff = FF.float_functions(rows_per_function, 2*sequence_length)
    npa = None
    for i in range(num_functions):
        mathstr = "math.sin(xx*{})".format(0.005*(i+1))
        #mathstr = "math.sin(xx)"
        df2 = ff.generateDataFrame(mathstr, noise=noise)
        npa2 = df2.to_numpy()
        if npa is None:
            npa = npa2
        else:
            ta = np.append(npa, npa2, axis=0)
            npa = ta

    split = np.hsplit(npa, 2)
    return npa, split[0], split[1]

def plot_mats(mat:np.ndarray, cluster_size:int, title:str, fig_num:int):
    plt.figure(fig_num)

    i = 0
    for row in mat:
        cstr = "C{}".format(int(i/cluster_size))
        plt.plot(row, color=cstr)
        i += 1

    plt.title(title)

model = tf.keras.Sequential()
# Add a densely-connected layer with sequence_length (100) units to the model:
model.add(layers.Dense(sequence_length, activation='relu', input_shape=(sequence_length,)))
# Add another:
model.add(layers.Dense(200, activation='relu'))
# Add a linear output layer with sequence_length units:
model.add(layers.Dense(sequence_length))

loss_func = tf.keras.losses.MeanSquaredError()
opt_func = tf.keras.optimizers.Adam(0.01)
model.compile(optimizer= opt_func,
              loss=loss_func,
              metrics=['accuracy'])

full_mat, train_mat, test_mat = generate_train_test(10, 10)

model.fit(train_mat, test_mat, epochs=10, batch_size=2)
model.evaluate(train_mat, test_mat)

# test against freshly generated data
full_mat, train_mat, test_mat = generate_train_test(10, 10)
predict_mat = model.predict(train_mat)

show_plots = True
if show_plots:
    plot_mats(full_mat, 10, "Full Data", 1)
    plot_mats(train_mat, 10, "Input Vector", 2)
    plot_mats(test_mat, 10, "Output Vector", 3)
    plot_mats(predict_mat, 10, "Predict", 4)
    plt.show()



Phil 9.10.19

ASRC GOES 7:00 – 5:30

  • Got a mention in an article on Albawaba – When the Only Option is ‘Not to Play’? Autonomous Weapons Systems Debated in Geneva 
  • Dissertation – more SIH
  • Just saw this: On Extractive and Abstractive Neural Document Summarization with Transformer Language Models
    • We present a method to produce abstractive summaries of long documents that exceed several thousand words via neural abstractive summarization. We perform a simple extractive step before generating a summary, which is then used to condition the transformer language model on relevant information before being tasked with generating a summary. We show that this extractive step significantly improves summarization results. We also show that this approach produces more abstractive summaries compared to prior work that employs a copy mechanism while still achieving higher rouge scores. Note: The abstract above was not written by the authors, it was generated by one of the models presented in this paper.
  • Working on packaging timeseriesML. I think it’s working! (A minimal packaging sketch is at the end of this entry.)

TimeSeriesML

  • I’ll try it out when I get back after lunch
  • Meeting with Vadim
    • Showed him around and provided svn access
  • Model:DLG3501W SKU:6181264
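  • For reference, the packaging setup is roughly the shape below (a minimal sketch; the actual package layout, version, and dependency list for timeseriesML may differ). It can be built with “python setup.py sdist bdist_wheel” and pip-installed from the resulting wheel:

    # setup.py (illustrative only)
    from setuptools import setup, find_packages

    setup(
        name="timeseriesML",
        version="0.1.0",                  # placeholder version
        packages=find_packages(),         # picks up timeseriesML.generators, etc.
        install_requires=[                # assumed dependencies
            "numpy",
            "pandas",
            "matplotlib",
        ],
    )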

Phil 9.3.19 (including install directions for Tensorflow 2.0rc1 on Windows 10)

7:00 – 4:30 ASRC GOES

  • Dissertation – Working on the Orientation section, where I compare Moby Dick to Dieselgate
  • Uninstalling all previous versions of CUDA, which should hopefully allow 10 to be installed
  • Still flailing on getting TF 2.0 working. Grrrrr. Success! Added guide below
  • Spent some time discussing mapping the GPT-2 with Aaron

Installing Tensorflow 2.0rc1 to Windows 10, a temporary accurate guide

  • Uninstall any previous version of Tensorflow (e.g. “pip uninstall tensorflow”)
  • Uninstall all your NVIDIA crap
  • Install JUST THE CUDA LIBRARIES for version 9.0 and 10.0. You don’t need anything else

NVIDIA1

NVIDIA2

  • Then install the latest Nvidia graphics drivers. When you’re done, your install should look something like this (this worked on 9.3.19):

NVIDIA3

Edit your system variables so that the CUDA 9 and CUDA 10 directories are on your path:

NVIDIA4

One more part is needed from NVIDIA: cudnn64_7.dll

In order to download cuDNN, ensure you are registered for the NVIDIA Developer Program.

  1. Go to: NVIDIA cuDNN home page
  2. Click “Download”.
  3. Remember to accept the Terms and Conditions.
  4. Select the cuDNN version you want to install from the list. This opens a second list of target OS installs. Select cuDNN Library for Windows 10.
  5. Extract the cuDNN archive to a directory of your choice. The important part (cudnn64_7.dll) is in the cuda\bin directory. Either add that directory to your path, or copy the dll into the Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10\bin directory.

NVIDIA6

Then open up a console window (cmd) as admin, and install tensorflow:

  • pip install tensorflow-gpu==2.0.0-rc1
  • Verify that it works by opening the python console and typing the following:

NVIDIA5
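
The check in the screenshot amounts to roughly the following (my paraphrase, not an exact transcription of the image):

import tensorflow as tf
print(tf.__version__)
print(tf.test.is_gpu_available())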

If that works, the following script should also run:

import tensorflow as tf
print("tf version = {}".format(tf.__version__))
mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)

model.evaluate(x_test, y_test)

The results should look something like:

"D:\Program Files\Python37\python.exe" D:/Development/Sandboxes/PyBullet/src/TensorFlow/HelloWorld.py
2019-09-03 15:09:56.685476: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
tf version = 2.0.0-rc0
2019-09-03 15:09:59.272748: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2019-09-03 15:09:59.372341: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531
pciBusID: 0000:01:00.0
2019-09-03 15:09:59.372616: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-09-03 15:09:59.373339: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-09-03 15:09:59.373671: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-09-03 15:09:59.376010: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531
pciBusID: 0000:01:00.0
2019-09-03 15:09:59.376291: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-09-03 15:09:59.376996: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-09-03 15:09:59.951116: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-09-03 15:09:59.951317: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0 
2019-09-03 15:09:59.951433: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N 
2019-09-03 15:09:59.952189: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9607 MB memory) -> physical GPU (device: 0, name: TITAN X (Pascal), pci bus id: 0000:01:00.0, compute capability: 6.1)
Train on 60000 samples
Epoch 1/5
2019-09-03 15:10:00.818650: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll

   32/60000 [..............................] - ETA: 17:07 - loss: 2.4198 - accuracy: 0.0938
... batches pass ...
60000/60000 [==============================] - 4s 65us/sample - loss: 0.2930 - accuracy: 0.9158
... epochs pass ...
10000/1 [==========] - 1s 61us/sample - loss: 0.0394 - accuracy: 0.9778

Phil 8.29.19

ASRC GOES – 7:00 – 4:00

  • Find out who I was talking to yesterday at lunch (Boynton?)
  • Contact David Lazar about RB
  • Participating as an anonymous fish in JuryRoom. Started the discussion
  • Dissertation – started the State section
  • Working on Control and sim diagrams
    • Putting this here because I keep on forgetting how to add an outline/border to an image in Illustrator:

OutlineAI

  1. Place and select an image in the Illustrator document.
  2. Once selected, open the Appearance panel and, from its flyout menu, choose Add New Stroke.
  3. With the Stroke highlighted in the Appearance panel, choose Effect -> Path -> Outline Object.
  • Anyway, back to our regularly scheduled program.
  • Made a control system diagram
  • Made a control system inheritance diagram
  • Made a graphics inheritance diagram
  • Need to stick them in the ASRC Dev Pipeline document
  • Discovered JabRef: JabRef is an open source bibliography reference manager. The native file format used by JabRef is BibTeX, the standard LaTeX bibliography format. JabRef is a desktop application and runs on the Java VM (version 8), and works equally well on Windows, Linux, and Mac OS X.
  • Tomorrow we get started with TF 2.0

Phil 8.26.19

7:00 – ASRC GOES

  • Dissertation – working my way through the lit review section
  • Antonio sent a note about Software Impacts, which provides a scholarly reference to software that has been used to address a research challenge. The journal disseminates impactful and re-usable scientific software through Original Software Publications (OSP) which describe the application of the software to research and the published outputs.
    • Submissions to Software Impacts consist of two major parts:
      • A short descriptive paper of about three pages including an Impact Overview and references to publications where the software has been used
      • An open source software distribution with support material.
    • So, to get things to fit on GitHub, I worked on getting GPM to work with a smaller library – done
  • Discussions with Aaron about using TF 2.0 xformer on GOES sim data
  • Security training – an hour or so
  • Copied the Waikato JuryRoom proposal to PolarizationGame folder

Phil 8.23.19

7:00 – 4:00 ASRC GEOS

  • More Dissertation
    • Continuing lit review
  • Rework BlueSky paper for air traffic? Meeting with T at 10:00
  • Simulation
    • Need to discuss with Aaron the best way to use the data to train the NN and round-trip the outputs, so that the ML model can issue commands to the RCS system: given the outputs of one model, the NN should create commands that cause the same outputs in a separate model
  • Wow. It knows/finds syntactically correct Java. From TalkToTransformer.com:
  • Wow

Phil 8.1.19

7:00 – 3:30 ASRC GEOS

  • Cancel service at Bob’s – done
  • Scan hotel receipt and fill out expense report
  • Write up some USPTO thoughts – done
  • School reimbursement and approval for 899 – forms filled out, waiting for signatures
  • Write down thoughts on inhibition and excitation in groups. Basically, when a group is engaged in discussion, some links are excitatory – a small subset engages actively in the discussion – while others are inhibited and participate less or not at all. These kinds of discussions are almost always mediated by an explicit or implicit leader. The consensus that develops is greatly influenced by who is excited and who is inhibited.
  • July progress email – done
  • Dissertation
    • Work on flowchart(s)
  • Distributed Memory and the Representation of General and Specific Information
    • We describe a distributed model of information processing and memory and apply it to the representation of general and specific information. The model consists of a large number of simple processing elements which send excitatory and inhibitory signals to each other via modifiable connections. Information processing is thought of as the process whereby patterns of activation are formed over the units in the model through their excitatory and inhibitory interactions. The memory trace of a processing event is the change or increment to the strengths of the interconnections that results from the processing event. The traces of separate events are superimposed on each other in the values of the connection strengths that result from the entire set of traces stored in the memory. The model is applied to a number of findings related to the question of whether we store abstract representations or an enumeration of specific experiences in memory. The model simulates the results of a number of important experiments which have been taken as evidence for the enumeration of specific experiences. At the same time, it shows how the functional equivalent of abstract representations (prototypes, logogens, and even rules) can emerge from the superposition of traces of specific experiences, when the conditions are right for this to happen. In essence, the model captures the structure present in a set of input patterns; thus, it behaves as though it had learned prototypes or rules, to the extent that the structure of the environment it has learned about can be captured by describing it in terms of these abstractions.
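    • A tiny numpy sketch of the superposition idea described above (my illustration, not the paper's simulation; the pattern count, size, and noise level are arbitrary):

      import numpy as np

      # Each "processing event" increments the connection weights by the outer product
      # of the unit activations; all traces are superimposed in the same weight matrix.
      rng = np.random.default_rng(0)
      patterns = np.sign(rng.standard_normal((5, 64)))   # five specific experiences
      W = np.zeros((64, 64))
      for p in patterns:
          W += np.outer(p, p) / len(p)

      # A noisy probe settles toward the stored structure (prototype-like behavior).
      probe = np.where(rng.random(64) < 0.2, -patterns[0], patterns[0])
      recalled = np.sign(W @ probe)
      print("overlap with the original pattern:", float(recalled @ patterns[0]) / 64)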
  • Leveraging Meta Information in Short Text Aggregation
    • Analysing topics in short texts (e.g., tweets and news headlines) is a challenging task because short texts often contain insufficient word co-occurrence information, which is important to learn good topics in conventional topic models. To deal with the insufficiency, we propose a generative model that aggregates short texts into clusters by leveraging the associated meta information. Our model can generate more interpretable topics as well as document clusters. We develop an effective Gibbs sampling algorithm favoured by the fully local conjugacy in the model. Extensive experiments demonstrate that our model achieves better performance in terms of document clustering and topic coherence.

Phil 7.24.19

7:00 – 4:00 ASRC GEOS

  • Write up my impression of yesterday’s game – done
  • Put together a Google Form to get everyone else’s impression – done
    • Understanding the map
    • Using the map
    • The effect of the map on gameplay and enjoyment
  • Send Don routes for ebike – done
  • Maybe get started on Martindale?
  • Start setting up Heera’s github – done
  • More graphics at Mission drive (bring fixee from home!)
    • Adding class to handle mouse button events – done
    • Refactoring the classes out of the Primitives.py file
    • Working on caps for the cylinder
  • Send Chris and Panos the anonymized sql, and rank the questions for difficulty
  • Various meetings

Phil 7.22.19

7:00 – 5:00 ASRC GEOS

conformity

Today’s timeline serendipity

  • The TdF is very exciting this year!
  • Met with Heera to discuss her work. I’m going to set up a GitHub project and add a parser that reads in an XML config file and then splits CSV files into (a rough sketch follows this list):
    • Spreadsheet for evaluation
    • Split-up csv files for analysis
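    • Roughly what I have in mind (a sketch only; the config element names and file layout are invented placeholders):

      import xml.etree.ElementTree as ET
      import pandas as pd

      def split_csv(config_file: str):
          # The XML config names the source csv and the column to split on
          # (the element and attribute names here are placeholders).
          cfg = ET.parse(config_file).getroot()
          src = cfg.find("source").get("file")
          split_col = cfg.find("split").get("column")

          df = pd.read_csv(src)
          # One spreadsheet for evaluation...
          df.to_excel(src.replace(".csv", "_eval.xlsx"), index=False)
          # ...and one csv per group for analysis.
          for name, group in df.groupby(split_col):
              group.to_csv("{}_{}.csv".format(split_col, name), index=False)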
  • Pick up air filter and oil change kit for the bike
  • Ran the random binary network code and generated figures for the text
  • Remind all the players about the run tomorrow – done
  • Getting the tmesh working – success!
  • Getting better camera controls

Phil 7.19.19

7:00 – 4:30 ASRC GEOS

StanfordNLP

  • Still looking at what’s wrong with my NK model. I found Random Boolean Networks when looking for “random binary networks kauffman example”. It also has a bibliography that looks helpful
    • Introduction to Random Boolean Networks
      • The goal of this tutorial is to promote interest in the study of random Boolean networks (RBNs). These can be very interesting models, since one does not have to assume any functionality or particular connectivity of the networks to study their generic properties. Like this, RBNs have been used for exploring the configurations where life could emerge. The fact that RBNs are a generalization of cellular automata makes their research a very important topic. The tutorial, intended for a broad audience, presents the state of the art in RBNs, spanning over several lines of research carried out by different groups. We focus on research done within artificial life, as we cannot exhaust the abundant research done over the decades related to RBNs.
      • I can add a display that shows this: Trajectory
      • Got that working
      • Rewrote the code so that there is an evolve step without a fitness test. Trying to set up transition patterns like this: Transitions
      • The thing is, I don’t see how the K part works here…
      • I think I got it working! (A minimal sketch of the update step is at the end of this entry.)
    • Complex and Adaptive Dynamical Systems: A Primer
      • A thorough introduction is given at an introductory level to the field of quantitative complex system science, with special emphasis on emergence in dynamical systems based on network topologies. Subjects treated include graph theory and small-world networks, a generic introduction to the concepts of dynamical system theory, random Boolean networks, cellular automata and self-organized criticality, the statistical modeling of Darwinian evolution, synchronization phenomena and an introduction to the theory of cognitive systems. 
        It includes chapters on Graph Theory and Small-World Networks, Chaos, Bifurcations and Diffusion, Complexity and Information Theory, Random Boolean Networks, Cellular Automata and Self-Organized Criticality, Darwinian evolution, Hypercycles and Game Theory, Synchronization Phenomena and Elements of Cognitive System Theory.
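  • For reference, the basic N-K Boolean update I ended up circling around looks roughly like this (a simplified sketch, not the actual project code; each node reads K randomly chosen inputs and looks its next state up in its own random Boolean table):

    import numpy as np

    def make_rbn(n, k, rng):
        # Each node gets K distinct input nodes and a random lookup table of size 2^K.
        inputs = np.array([rng.choice(n, size=k, replace=False) for _ in range(n)])
        tables = rng.integers(0, 2, size=(n, 2 ** k))
        return inputs, tables

    def step(state, inputs, tables):
        # Each node's K input bits form an index into that node's table.
        idx = [int("".join(map(str, state[inputs[i]])), 2) for i in range(len(state))]
        return tables[np.arange(len(state)), idx]

    rng = np.random.default_rng(1)
    inputs, tables = make_rbn(n=8, k=2, rng=rng)
    state = rng.integers(0, 2, size=8)
    for _ in range(10):                      # watch the trajectory settle into an attractor
        state = step(state, inputs, tables)
        print(state)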

Phil 7.18.19

7:00 – 5:00 ASRC GEOS

  • Started to fold Wayne’s comments in
  • Working on the Kauffman section
  • Tried making it so that K can be higher than N by resampling, and I still can’t keep the system from converging, which makes me think that there is something wrong with the code. (A minimal NK sketch for comparison is at the end of this entry.)
  • Send reviews to Antonio – done
  • Back to work on the physics model. Make sure to include a data dictionary mapping system to support Bruce’s concept
  • Sent links to Panda3D to Vadim
  • Code autocompletion using deep learning
  • A lot of flailing today but no good progress:

N_20_K_6
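  • For comparison while debugging, a standard Kauffman NK fitness evaluation looks roughly like this (a sketch, not the project code; epistatic neighbors are drawn without replacement here, so K has to stay below N):

    import numpy as np

    def make_nk(n, k, rng):
        # Each gene's contribution depends on itself plus K randomly chosen other genes.
        neighbors = np.array([rng.choice([j for j in range(n) if j != i], size=k, replace=False)
                              for i in range(n)])
        contribs = rng.random((n, 2 ** (k + 1)))        # one random contribution table per gene
        return neighbors, contribs

    def fitness(genome, neighbors, contribs):
        total = 0.0
        for i in range(len(genome)):
            bits = np.concatenate(([genome[i]], genome[neighbors[i]]))
            idx = int("".join(map(str, bits)), 2)       # index into gene i's table
            total += contribs[i, idx]
        return total / len(genome)

    rng = np.random.default_rng(0)
    neighbors, contribs = make_nk(n=20, k=6, rng=rng)
    genome = rng.integers(0, 2, size=20)
    print(fitness(genome, neighbors, contribs))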

Phil 7.17.19

7:00 – 7:00 ASRC GEOS

  • Got some nice NK model network plots working:
  • Added a long jump mutation when plateaus are hit:
  • Generally, fixed a lot of bugs in the code, but I think I understand the NK model thing. I do want to try and find how they did the traveling salesman problem
  • AI/ML Meeting
    • NASA (or the Air Force?) is putting together a reinforcement learning model for autonomous spacecraft control, which requires a simulator.
  • Meeting with Wayne
    • Lots of work on the dissertation
    • Walked through JuryRoom prototype