Category Archives: thesis

Phil 7,11,19

7:00 – 4:30 ASRC GEOS

  • Ping Antonio – Done
  • Dissertation
    • More Bones in a Hut. Found the online version of the Hi-Lo chapter from BIC. Explaining why Hi-Lo is different from IPD.
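    • The contrast can be shown with toy payoff tables (a sketch; the numbers are illustrative, not Bacharach’s). In Hi-Lo the players’ interests coincide and the only question is which equilibrium to coordinate on; in the Prisoner’s Dilemma, defection dominates no matter what the other player does:

```python
# Toy payoff tables: (row payoff, column payoff). The numbers are made up
# for illustration; the structure, not the values, carries the argument.
HI_LO = {  # pure coordination: matching the other player is always best
    ("Hi", "Hi"): (2, 2), ("Lo", "Lo"): (1, 1),
    ("Hi", "Lo"): (0, 0), ("Lo", "Hi"): (0, 0),
}
PD = {     # Prisoner's Dilemma: defecting (D) is best whatever the other does
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def best_reply(game, my_moves, their_move):
    """The row player's payoff-maximizing move against a fixed opponent move."""
    return max(my_moves, key=lambda m: game[(m, their_move)][0])
```

      In Hi-Lo the best reply tracks the opponent (Hi against Hi, Lo against Lo), so there are two equilibria and nothing in the payoffs alone selects Hi. In the PD the best reply is always D.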
  • More reaction wheel modeling
  • Get flight and hotel for July 30 trip – Done
  • So this is how you should install Panda3d and examples:
    • First, make sure that you have Python 3.7 (64 bit! The default is 32 bit). Make sure that your path environment points to this, and not any other 3.x versions that you’re hoarding on your machine.
    • pip install panda3d==1.10.3
    • Then download the installer and DON’T select the python 3.7 support: Panda3d_install
    • Finish the install and verify that the demos run (e.g. python \Panda3D-1.10.3-x64\samples\asteroids\main.py): asteroids
    • That’s it!
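    • A quick stdlib-only sanity check for the first step (my own snippet, not from the Panda3D docs):

```python
import struct
import sys

def panda_env_problems():
    """Return a list of mismatches between this interpreter and what
    Panda3D 1.10.3 expects (Python 3.7, 64-bit)."""
    problems = []
    if sys.version_info[:2] != (3, 7):
        problems.append("expected Python 3.7, found %d.%d" % sys.version_info[:2])
    if struct.calcsize("P") * 8 != 64:
        problems.append("expected a 64-bit interpreter (the default download is 32-bit)")
    return problems
```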
  • Discussed DARPA solicitation HR001119S0053 with Aaron. We know the field of study – logistics and supply chain, but BD has totally dropped the ball on deadlines and setting up any kind of relationship with the points of contact. We countered that we could write a paper and present at a venue to gain access and credibility that way.
    • There is a weekly XML from the FBO. Downloading this week’s to see if it’s easy to parse and search
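    • If the weekly file is well-formed XML, a first pass doesn’t need the schema at all; a sketch (the function name and structure are mine, since the real FBO fields are unknown here):

```python
import xml.etree.ElementTree as ET

def grep_notices(xml_text, keyword):
    """First-pass search of the weekly export: parse it as generic XML and
    return the text of every element mentioning keyword (case-insensitive).
    No FBO field names are assumed; this just checks that the file parses
    and is searchable at all."""
    root = ET.fromstring(xml_text)
    kw = keyword.lower()
    return [el.text.strip() for el in root.iter()
            if el.text and kw in el.text.lower()]
```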

Phil 7.10.19

7:00 – 5:00 ASRC

  • BP&S is up! Need to ping Antonio
  • Need to fix DfS to de-emphasize the mapping part. Including things like, uh, changing the title…
  • Pix at HQ – Done
  • Greenbelt today, which means getting Panda3D up and running on my laptop – Done. Had to point the IDE at the python in the install.
  • Need to add some thoughts to JuryRoom concepts
  • Send dungeon invites for the 23rd, and ping Aaron M. Done. Wayne can’t make it! Drat!
  • Dissertation working on the Bacharach section
  • Got the sim working on the laptop. I realize that the reaction wheel can be modeled as weights on a stick. Long discussion with Bruce T
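  • The “weights on a stick” model reduces to two standard formulas; a sketch (variable names are mine, not from the discussion):

```python
def stick_inertia(masses, radii):
    """Moment of inertia of point masses on a massless stick: I = sum(m * r^2)."""
    return sum(m * r * r for m, r in zip(masses, radii))

def body_rate(i_body, i_wheel, wheel_rate):
    """Conservation of angular momentum from rest: spinning the wheel one way
    turns the spacecraft body the other way, scaled by the inertia ratio."""
    return -i_wheel * wheel_rate / i_body
```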

Phil 7.9.19

7:00 – 5:30 ASRC GEOS

  • BP&S is “on hold” in ArXiv. Hoping that it’s overlap with DfS. I took the mapping text out of the DfS paper and resubmitted. Once that’s done I can send Antonio a link and get advice.
  • Code review with Chris
  • Contact David and see if he’s ok with July 23 – Nope. Trying Aaron M. as a replacement
  • More dissertation. Folded in most of the BP&S paper
  • Look! More mapping of latent spaces! Unsupervised word embeddings capture latent knowledge from materials science literature
    • Here we show that materials science knowledge present in the published literature can be efficiently encoded as information-dense word embeddings11,12,13 (vector representations of words) without human labelling or supervision. Without any explicit insertion of chemical knowledge, these embeddings capture complex materials science concepts such as the underlying structure of the periodic table and structure–property relationships in materials. Furthermore, we demonstrate that an unsupervised method can recommend materials for functional applications several years before their discovery. This suggests that latent knowledge regarding future discoveries is to a large extent embedded in past publications. Our findings highlight the possibility of extracting knowledge and relationships from the massive body of scientific literature in a collective manner, and point towards a generalized approach to the mining of scientific literature.
  • More Panda3D
    • Intervals and sequences
    • Panda3D forum
    • Programming with Panda3D
      • Well, this is looking a lot like the way I would have written it
      • You can convert a NodePath into a “regular” pointer at any time by calling nodePath.node(). However, there is no unambiguous way to convert back. That’s important: sometimes you need a NodePath, sometimes you need a node pointer. Because of this, it is recommended that you store NodePaths, not node pointers. When you pass parameters, you should probably pass NodePaths, not node pointers. The callee can always convert the NodePath to a node pointer if it needs to.
      • Nodepath
    • Huh. It looks like there is no support for procedurally generated primitives. Well, I know what I’m going to be doing…
      • Origin – done
      • Grid
      • Cube (x, y, z size), color (texture?), Boolean for endcaps
      • Cylinder (radius+steps, length), color
      • Sphere  (radius+steps), color
      • Skybox (texture)
      • Then try making a satellite from parts
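    • The vertex bookkeeping for the first couple of primitives is plain Python before any Panda3D geometry objects get involved; a sketch (function names are hypothetical):

```python
def cube_corners(sx, sy, sz):
    """The eight corners of an axis-aligned cube centered on the origin."""
    hx, hy, hz = sx / 2.0, sy / 2.0, sz / 2.0
    return [(x, y, z) for x in (-hx, hx) for y in (-hy, hy) for z in (-hz, hz)]

def grid_segments(n, spacing):
    """Endpoint pairs for an (n x n)-cell grid of lines in the XY plane."""
    half = n * spacing / 2.0
    segments = []
    for i in range(n + 1):
        p = -half + i * spacing
        segments.append(((p, -half, 0.0), (p, half, 0.0)))   # parallel to Y
        segments.append(((-half, p, 0.0), (half, p, 0.0)))   # parallel to X
    return segments
```

      Feeding lists like these to a GeomVertexWriter is then mostly mechanical.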
    • JuryRoom Meeting
      • A lot of discussion on UI issues – how to vote for/against, the right panel layout, and the questions that should be asked for Chris’ study

Phil 7.8.19

7:00 – 4:30 ASRC GEOS

  • Read and commented on Shimei’s proposal. It’s interesting to see how she’s weaving all these smaller threads together into one larger narrative. I find that my natural approach is to start with an encompassing vision and figure out how to break it down into its component parts. Which sure seems like stylistic vs. primordial. Interestingly, this implies that stylistic is more integrative? Transdisciplinary, primordial work, because it has no natural home, is more disruptive. It makes me think of this episode of Shock of the New about Paul Cezanne.
  • Working on getting BP&S into one file for ArXiv, then back to the dissertation.
    • Flailed around with some package mismatches, and had an upper/lowercase (.PNG vs. .png) problem. Submitted!
  • Need to ping Antonio about BP&S potential venues
  • The Redirect Method uses Adwords targeting tools and curated YouTube videos uploaded by people all around the world to confront online radicalization. It focuses on the slice of ISIS’ audience that is most susceptible to its messaging, and redirects them towards curated YouTube videos debunking ISIS recruiting themes. This open methodology was developed from interviews with ISIS defectors, respects users’ privacy and can be deployed to tackle other types of violent recruiting discourses online.
  • Pushed TimeSeriesML to the git repo, so we’re redundantly backed up. Did not send data yet
  • Starting on the PyBullet tutorial
    • Trying to install pybullet.
      • Got this error: error: command ‘C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe’ failed with exit status 1158
      • Updating my Visual Studio (suggested here), in the hope that it fixes that. Soooooo Slooooow
      • Link needs rc.exe/rc.dll
      • Copied the most recent rc.exe and rcdll.dll into C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\
      • Giving up
  • Trying Panda3d
    • Downloaded and ran the installer. It couldn’t tell that I had Python 3.7.x, but otherwise was fine. Maybe that’s because my Python is on my D: drive?
    • Ran:
      pip install panda3d==1.10.3

      Which worked just fine

    • Had to add the D:\Panda3D-1.10.3-x64\bin and D:\Panda3D-1.10.3-x64\panda3d to the path to get all the imports to work right. This could be because I’m using a global, separately installed Python 3.7.x
    • Hmmm. Getting ModuleNotFoundError: No module named 'panda3d.core.Point3'; 'panda3d.core' is not a package. The IDE can find it though…
    • In a very odd sequence of events, I tried using
      • from pandac.PandaModules import Point3, which worked, but gave me a deprecated warning.
    • Then, while fooling around, I tried the preferred
      • from panda3d.core import Point3, which now works. No idea what fixed it. Here’s the config that I’m using to run: Panda3dConfig
    • Nice performance, too: Pandas3D
    • And it has bullet in it, so maybe it will work here?
    • Starting on the manual

Phil 7.5.19

7:00 – 5:00 ASRC GEOS

  • Got a desk reject from JASSS. Finding a home for this is turning out to be hard
  • Adjust Belief Places and Spaces for a straight ArXiv submission (article, endquote, fix cites). I’m doing this partially out of spite – I don’t want to see JASSS looking back at me in my svn repo. But I also need to get all the parts fixed so that it can be folded into the dissertation. \citep doesn’t play well, and I need to replace all the quotes with \enquote{}.
  • Start folding BP&S into dissertation
  • Look for Collective Intelligence venue?
  • Updated Pandas, which was where I got hung up on Tuesday. Now I can use DataFrame.to_numpy() instead of DataFrame.values
  • Continuing on TimeSeriesNormalizer – done! Below is the original file with the data in columns (left) and the normalized file with the data in rows (right): Normalized
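  • The core of that transform (columns in, normalized rows out) fits in a few lines; this is a sketch of the idea, not TimeSeriesNormalizer’s actual code:

```python
def normalize_series(series):
    """Scale one series to [0, 1]; a constant series maps to all zeros."""
    lo, hi = min(series), max(series)
    if hi == lo:
        return [0.0] * len(series)
    return [(v - lo) / (hi - lo) for v in series]

def columns_to_normalized_rows(table):
    """table: file rows where each time series occupies a column.
    Returns one normalized list per series (i.e., the transpose)."""
    return [normalize_series(col) for col in zip(*table)]
```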
  • Learning about PyBullet
  • Thought for the day. Find the snippet for each room/group with the most positive and most negative sentiment, and use that instead of the three words.

Phil 7.3.19

Continuing with the ICML 2019 Tutorial: Recent Advances in Population-Based Search for Deep Neural Networks. Wow. Lots of implications for diversity science. They need to read Martindale though.

This also looks good, using the above concepts of Quality Diversity to create map-like structures in low dimensions

  • Autonomous skill discovery with Quality-Diversity and Unsupervised Descriptors
    • Quality-Diversity optimization is a new family of optimization algorithms that, instead of searching for a single optimal solution to solving a task, searches for a large collection of solutions that all solve the task in a different way. This approach is particularly promising for learning behavioral repertoires in robotics, as such a diversity of behaviors enables robots to be more versatile and resilient. However, these algorithms require the user to manually define behavioral descriptors, which are used to determine whether two solutions are different or similar. The choice of a behavioral descriptor is crucial, as it completely changes the solution types that the algorithm derives. In this paper, we introduce a new method to automatically define this descriptor by combining Quality-Diversity algorithms with unsupervised dimensionality reduction algorithms. This approach enables robots to autonomously discover the range of their capabilities while interacting with their environment. The results from two experimental scenarios demonstrate that robots can autonomously discover a large range of possible behaviors, without any prior knowledge about their morphology and environment. Furthermore, these behaviors are deemed to be similar to handcrafted solutions that use domain knowledge and significantly more diverse than when using existing unsupervised methods.

Back to the Dissertation

  • Added notes from Monday’s dungeon run
  • Added adversarial herding
  • At 111 pages!

Phil 7.2.19

7:00 – 3:30 ASRC GEOS

  • Wrote up some preliminary notes about the run yesterday
  • Need to set up a schedule for another run in the last half of July
  • Added a “Bones in a Hut” section that follows the main lit review
  • Clustering today
    • Wrote up a workflow.txt file for the whole process.
    • Building TimeSeriesNormalizer class
    • Looks like I lost admin. Stalled

Phil 7/1/19

7:00 –  6:00 ASRC

  • Updated ADP Job performance
  • Submitted JASSS Paper
  • Discovered the ComSES Network. It’s a repository for models, a job board, and a schedule with deadlines for ABM-related conferences and workshops!
  • First Tymora run with the map today!
    • It went well, I think. My main takeaway is that the place terms are working fine, but the space terms may need more context. It may be that an excerpt from the story that reflects the most common terms may be better than the terms in isolation.
    • The players stayed away from the stairs, and actually turned them to their advantage, using them as a defensive position.
    • The players went through the orb room very fast, and seemed to base their thinking on aspects of the map.
    • The crypticness of the map seems to help the game: the terms are interpretable, and the ambiguity supports the agency of the players

Phil 6.28.19

7:00 – 8:00 ASRC GEOS

  • Early timesheet
  • Sent Aaron a writeup on the clustering results

8:00 – 5:00 PhD

  • Made a poster of the map for the run on Monday
  • Dissertation for the rest of the day
    • Created a chapter and equation directory. Things are getting messy
    • Working on the simulation study chapter – I think the main part of the work is in there. Now I need to add the adversarial herding

Phil 6.27.19

7:00 – 5:00 ASRC GEOS

  • Still working on getting a fourth player
  • More dissertation. Starting to fold in bits of previous papers. Using the iConference and CHIIR full paper for the basic theory to start with. Then I think sections on mapping and adversarial herding, though I’m not sure of the order…
  • More cluster membership today – Done! Clustering
  • Nice chat with Tanisha

Phil 6.26.19

ASRC GEOS 7:00 – 5:00

word_frequency

  • Still trying to get a 4th player for Monday
  • Sent Aaron his 100-word bio
  • Continuing to bang together parts of the first draft dissertation. At 71 pages, and I can launch Acrobat again. Wheee!
  • Mission Drive today
    • Working on the ClusterMembership class

Phil 6.25.19

7:00 – 7:00 ASRC GEOS

  • Scheduled the map run for Monday, July 1, 12:30 – 4:30
  • Asked Wayne for a 100-word bio by the end of the month. Working on mine today
  • What use are computational models of cognitive processes?
    • Computational modelers are not always explicit about their motivations for constructing models, nor are they always explicit about the theoretical implications of their models once constructed. Perhaps in part due to this, models have been criticized as “black-box” exercises which can play little or no role in scientific explanation. This paper argues that models are useful, and that the motivations for constructing computational models can be made clear by considering the roles that tautologies can play in the development of explanatory theories. From this, additionally, I propose that although there are diverse benefits of model building, only one class of benefits — those which relate to explanation — can provide justification for the activity.
  • DTW run looks good. It took 8 1/2 hours to run: fulldtw
  • Fixed a lot of things to get the clustering to behave, but it all looks good
  • Spend a while arguing online about ketchup vs. mustard with Aaron JuryRoom
  • Waikato meeting
    • Test with a few thousand posts using lorem ipsum
    • Maybe double the character count
    • Scroll to offscreen posts
    • Context-sensitive text
    • Toggle vote button
    • 500 default chars, variable
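  • For reference, the textbook recurrence behind that DTW run; a minimal, unoptimized version (the 8½-hour job presumably adds windowing and parallelism):

```python
def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance with |x - y|
    as the local cost. D[i][j] holds the best alignment cost of a[:i], b[:j]."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```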

Phil 6.10.19

ASRC GEOS 7:00 – 3:00

  • I’ve been thinking about the implications of this article: Training a single AI model can emit as much carbon as five cars in their lifetimes
    • There is something in this that has to do with the idea of cost. NN architectures have no direct concept of cost. Inevitably, the “current best network” takes a building full of specialized processors 200 hours to train. This has been true for Inception, AmoebaNet, and AlphaGo. I wonder what would happen if a cost for computation were part of the fitness function?
    • My sense is that evolution has two interrelated parameters
      • a mutation needs to “work better” (whatever that means in the context) than the current version
      • the organism that embodies the mutation has to reproduce
    • In other words, neural structures in our brains have an unbroken chain of history back to the initial sensory neurons in multicellular organisms. Mutations that didn’t work never lived to have an effect, and those that weren’t able to reproduce didn’t get passed on.
    • Randomness is important too. Systems that are too similar are fragile, like aspen trees that have given up on sexual reproduction and are essentially all clones reproducing by rhizome. These live long enough to have an impact on the environment, particularly where they can crowd out other species, but the species itself is doomed.
    • I’d like to see an approach to developing NNs that involves more of the constraints of “natural” evolution. I think it would lead to better, and potentially less destructive results.
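    • The “cost in the fitness function” idea can be made concrete with a toy penalty term (the exchange rate between accuracy and GPU-hours below is entirely made up):

```python
def cost_aware_fitness(accuracy, gpu_hours, penalty=0.001):
    """Accuracy minus a charge for compute, so a network must earn its training bill.
    penalty is an arbitrary exchange rate between accuracy and GPU-hours."""
    return accuracy - penalty * gpu_hours

def select_survivors(population, k):
    """Keep the k candidates whose accuracy best justifies their training cost."""
    return sorted(population,
                  key=lambda c: cost_aware_fitness(c["acc"], c["hours"]),
                  reverse=True)[:k]
```

      Under this scheme a marginal accuracy gain bought with a building full of processors loses to a nearly-as-good network that trained cheaply.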
  • SHAP (SHapley Additive exPlanations) is a unified approach to explain the output of any machine learning model. SHAP connects game theory with local explanations, uniting several previous methods [1-7] and representing the only possible consistent and locally accurate additive feature attribution method based on expectations (see our papers for details).
  • Working on clustering. I’ve been going around in circles on how to take a set of relative distance measures and use them as a basis for clustering. To revisit, here’s a screenshot of a spreadsheet containing the DTW distances from every sequence to every other sequence: DTW
  • My approach is to treat each line of relative distances as a high-dimensional coordinate ( in this case, 50 dimensions), and cluster with respect to the point that defines. This takes care of the problem that the data in this case is very symmetric about the diagonal. Using this approach, an orange/green coordinate is in a different location from the mirrored green/orange coordinate. It’s basically the difference between (1, 2) and (2, 1). That should be a reliable clustering mechanism. Here are the results:
           cluster_id
    ts_0            0
    ts_1            0
    ts_2            0
    ts_3            0
    ts_4            0
    ts_5            0
    ts_6            0
    ts_7            0
    ts_8            0
    ts_9            0
    ts_10           0
    ts_11           0
    ts_12           0
    ts_13           0
    ts_14           0
    ts_15           0
    ts_16           0
    ts_17           0
    ts_18           0
    ts_19           0
    ts_20           0
    ts_21           0
    ts_22           0
    ts_23           0
    ts_24           0
    ts_25           1
    ts_26           1
    ts_27           1
    ts_28           1
    ts_29           1
    ts_30           1
    ts_31           1
    ts_32           1
    ts_33           1
    ts_34           1
    ts_35           1
    ts_36           1
    ts_37           1
    ts_38           1
    ts_39           1
    ts_40           1
    ts_41           1
    ts_42           1
    ts_43           1
    ts_44           1
    ts_45           1
    ts_46           1
    ts_47           1
    ts_48           1
    ts_49           1
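  • The row-as-coordinate idea needs nothing fancier than vanilla k-means over the 50-dimensional rows; a dependency-free sketch (the actual run may have used a library implementation):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means. Each point is one full row of the DTW distance matrix,
    so the mirrored entries at [a][b] and [b][a] land in different coordinates
    of different points -- the (1, 2) vs. (2, 1) distinction above."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)

    def nearest(p):
        return min(range(k),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))

    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[nearest(p)].append(p)
        centers = [[sum(col) / len(col) for col in zip(*members)] if members else centers[i]
                   for i, members in enumerate(clusters)]
    return [nearest(p) for p in points]
```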
  • First-Order Adversarial Vulnerability of Neural Networks and Input Dimension
    • Carl-Johann Simon-Gabriel, Yann Ollivier, Bernhard Scholkopf, Leon Bottou, David Lopez-Paz
    • Over the past few years, neural networks were proven vulnerable to adversarial images: Targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs. Surprisingly, vulnerability does not depend on network topology: For many standard network architectures, we prove that at initialization, the l1-norm of these gradients grows as the square root of the input dimension, leaving the networks increasingly vulnerable with growing image size. We empirically show that this dimension-dependence persists after either usual or robust training, but gets attenuated with higher regularization.
  • More JASSS paper. Worked through the corrections up to the Results section. Kind of surprised to be leaning so hard on Homer, but I need a familiar story from before world maps.
  • Oh yeah, the Age of Discovery correlates with the development of the Mercator projection and usable world maps

Phil 6.7.19

7:00 – 4:30 ASRC GEOS

  • Expense report
  • learned how to handle overtime
  • Dissertation. At 68 pages into the Very Horrible First Draft (VHFD)
  • Meeting with Wayne. Walked though JASSS paper and CHIPLAY reviews
  • Set arguments to DTW systems so that a specified number of rows can be evaluated to support parallelization – done: Split
  • Start clustering? Nope. Wrote up report instead
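  • The row-splitting argument amounts to carving [0, n_rows) into contiguous chunks; one way to sketch that arithmetic (not the actual DTW code):

```python
def row_ranges(n_rows, n_workers):
    """Split n_rows into n_workers contiguous (start, end) half-open ranges,
    with any remainder spread over the first few workers."""
    base, extra = divmod(n_rows, n_workers)
    ranges, start = [], 0
    for i in range(n_workers):
        end = start + base + (1 if i < extra else 0)
        ranges.append((start, end))
        start = end
    return ranges
```

      Each worker then evaluates only its own rows of the distance matrix, and the pieces concatenate back in order.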