Category Archives: Machine Learning

Phil 7.17.19

7:00 – ASRC GEOS

  • Got some nice NK model network plots working:

Phil 7.11.19

7:00 – 4:30 ASRC GEOS

  • Ping Antonio – Done
  • Dissertation
    • More Bones in a Hut. Found the online version of the Hi-Lo chapter from BIC. Explaining why Hi-Lo is different from the IPD (Iterated Prisoner's Dilemma).
  • More reaction wheel modeling
  • Get flight and hotel for July 30 trip – Done
  • So this is how you should install Panda3d and examples:
    • First, make sure that you have Python 3.7 (64-bit! The default is 32-bit). Make sure that your PATH environment variable points to it, and not any of the other 3.x versions that you're hoarding on your machine.
    • pip install panda3d==1.10.3
    • Then download the installer and DON'T select the Python 3.7 support (image: Panda3d_install)
    • Finish the install and verify that the demos run (e.g. python \Panda3D-1.10.3-x64\samples\asteroids\main.py) (image: asteroids)
    • That’s it!
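    • A quick sanity check that the pip package resolves under the 64-bit Python (just a sketch, not from the official docs):

      import panda3d
      from panda3d.core import Point3
      # Both imports resolving means the right interpreter found the right install
      print(panda3d.__file__, Point3(1, 2, 3))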
  • Discussed DARPA solicitation HR001119S0053 with Aaron. We know the field of study – logistics and supply chain – but BD has totally dropped the ball on deadlines and on setting up any kind of relationship with the points of contact. We countered that we could write a paper and present it at a venue to gain access and credibility that way.
    • There is a weekly XML feed from the FBO. Downloading this week's to see if it's easy to parse and search (first-pass sketch below)
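    • A first pass could be as simple as this, standard library only (the tag layout and file name are assumptions until I look at the real schema):

      import xml.etree.ElementTree as ET

      root = ET.parse("fbo_weekly.xml").getroot()
      # Walk every element and grep its text for the domain keywords
      for elem in root.iter():
          if elem.text and "logistics" in elem.text.lower():
              print(elem.tag, elem.text[:80])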

Phil 7.8.19

7:00 – 4:30 ASRC GEOS

  • Read and commented on Shimei’s proposal. It’s interesting to see how she’s weaving all these smaller threads together into one larger narrative. I find that my natural approach is to start with an encompassing vision and figure out how to break it down into its component parts. Which sure seems like stylistic vs. primordial. Interestingly, this implies that stylistic is more integrative? Transdisciplinary, primordial work, because it has no natural home, is more disruptive. It makes me think of this episode of Shock of the New about Paul Cezanne.
  • Working on getting BP&S into one file for arXiv, then back to the dissertation.
    • Flailed around with some package mismatches, and had an upper/lowercase (.PNG vs. .png) problem. Submitted!
  • Need to ping Antonio about BP&S potential venues
  • The Redirect Method uses Adwords targeting tools and curated YouTube videos uploaded by people all around the world to confront online radicalization. It focuses on the slice of ISIS’ audience that is most susceptible to its messaging, and redirects them towards curated YouTube videos debunking ISIS recruiting themes. This open methodology was developed from interviews with ISIS defectors, respects users’ privacy and can be deployed to tackle other types of violent recruiting discourses online.
  • Pushed TimeSeriesML to the git repo, so we're redundantly backed up. Did not send data yet
  • Starting on the PyBullet tutorial
    • Trying to install pybullet.
      • Got this error: error: command ‘C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe’ failed with exit status 1158
      • Updating my Visual Studio (suggested here), in the hope that it fixes that. Soooooo Slooooow
      • Link needs rc.exe/rc.dll
      • Copied the most recent rc.exe and rcdll.dll into C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\
      • Giving up
  • Trying Panda3d
    • Downloaded and ran the installer. It couldn’t tell that I had Python 3.7.x, but otherwise was fine. Maybe that’s because my Python is on my D: drive?
    • Ran:
      pip install panda3d==1.10.3

      Which worked just fine

    • Had to add the D:\Panda3D-1.10.3-x64\bin and D:\Panda3D-1.10.3-x64\panda3d to the path to get all the imports to work right. This could be because I’m using a global, separately installed Python 3.7.x
    • Hmmm. Getting ModuleNotFoundError: No module named 'panda3d.core.Point3'; 'panda3d.core' is not a package. The IDE can find it, though…
    • In a very odd sequence of events, I tried using
      • from pandac.PandaModules import Point3, which worked, but gave me a deprecation warning.
    • Then, while fooling around, I tried the preferred
      • from panda3d.core import Point3, which now works. No idea what fixed it. (A minimal smoke test is at the end of this list.) Here's the config that I'm using to run (image: Panda3dConfig)
    • Nice performance, too (image: Pandas3D)
    • And it has Bullet in it, so maybe it will work here?
    • Starting on the manual
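    • A minimal ShowBase app that exercises the import that was failing (a sketch; it loads the environment model that ships with the SDK samples):

      from direct.showbase.ShowBase import ShowBase
      from panda3d.core import Point3

      class App(ShowBase):
          def __init__(self):
              ShowBase.__init__(self)
              # Load the bundled environment model and place it with a Point3
              scene = self.loader.loadModel("models/environment")
              scene.reparentTo(self.render)
              scene.setPos(Point3(0, 42, 0))

      App().run()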

Phil 6.28.19

7:00 – 8:00 ASRC GEOS

  • Early timesheet
  • Sent Aaron a writeup on the clustering results

8:00 – 5:00 PhD

  • Made a poster of the map for the run on Monday
  • Dissertation for the rest of the day
    • Created a chapter and equation directory. Things are getting messy
    • Working on the simulation study chapter – I think the main part of the work is in there. Now I need to add the adversarial herding

Phil 6.27.19

7:00 – 5:00 ASRC GEOS

  • Still working on getting a fourth player
  • More dissertation. Starting to fold in bits of previous papers. Using the iConference and CHIIR full paper for the basic theory to start with. Then I think sections on mapping and adversarial herding, though I’m not sure of the order…
  • More cluster membership today – Done! (image: Clustering)
  • Nice chat with Tanisha

Phil 6.12.19

7:00 – 5:30 ASRC GEOS

Phil 6.11.19

ASRC GEOS 7:00 – 5:30

  • Some interesting stuff from ICML 2019
    • The Evolved Transformer
      • Recent works have highlighted the strength of the Transformer architecture on sequence tasks while, at the same time, neural architecture search (NAS) has begun to outperform human-designed models. Our goal is to apply NAS to search for a better alternative to the Transformer. We first construct a large search space inspired by the recent advances in feed-forward sequence models and then run evolutionary architecture search with warm starting by seeding our initial population with the Transformer. To directly search on the computationally expensive WMT 2014 English-German translation task, we develop the Progressive Dynamic Hurdles method, which allows us to dynamically allocate more resources to more promising candidate models. The architecture found in our experiments – the Evolved Transformer – demonstrates consistent improvement over the Transformer on four well-established language tasks: WMT 2014 English-German, WMT 2014 English-French, WMT 2014 English-Czech and LM1B. At a big model size, the Evolved Transformer establishes a new state-of-the-art BLEU score of 29.8 on WMT’14 English-German; at smaller sizes, it achieves the same quality as the original “big” Transformer with 37.6% less parameters and outperforms the Transformer by 0.7 BLEU at a mobile-friendly model size of ~7M parameters.
    • DBSCAN++: Towards fast and scalable density clustering
      • DBSCAN is a classical density-based clustering procedure with tremendous practical relevance. However, DBSCAN implicitly needs to compute the empirical density for each sample point, leading to a quadratic worst-case time complexity, which is too slow on large datasets. We propose DBSCAN++, a simple modification of DBSCAN which only requires computing the densities for a chosen subset of points. We show empirically that, compared to traditional DBSCAN, DBSCAN++ can provide not only competitive performance but also added robustness in the bandwidth hyperparameter while taking a fraction of the runtime. We also present statistical consistency guarantees showing the trade-off between computational cost and estimation rates. Surprisingly, up to a certain point, we can enjoy the same estimation rates while lowering computational cost, showing that DBSCAN++ is a sub-quadratic algorithm that attains minimax optimal rates for level-set estimation, a quality that may be of independent interest.
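      • For comparison, the classical baseline they're speeding up (DBSCAN++ itself isn't in scikit-learn; this is just the standard quadratic-worst-case DBSCAN):

        import numpy as np
        from sklearn.cluster import DBSCAN

        rng = np.random.default_rng(0)
        # Two well-separated Gaussian blobs
        X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
        labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
        print(np.unique(labels))  # cluster ids; -1 marks noise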
    • Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits
      • We propose a bandit algorithm that explores by randomizing its history of rewards. Specifically, it pulls the arm with the highest mean reward in a non-parametric bootstrap sample of its history with pseudo rewards. We design the pseudo rewards such that the bootstrap mean is optimistic with a sufficiently high probability. We call our algorithm Giro, which stands for garbage in, reward out. We analyze Giro in a Bernoulli bandit and derive a bound on its n-round regret, where Δ is the difference in the expected rewards of the optimal and the best suboptimal arms, and K is the number of arms. The main advantage of our exploration design is that it easily generalizes to structured problems. To show this, we propose contextual Giro with an arbitrary reward generalization model. We evaluate Giro and its contextual variant on multiple synthetic and real-world problems, and observe that it performs well.
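      • The core trick is small enough to sketch (my reading of the abstract: pad each arm's history with pseudo rewards, bootstrap it, pull the best bootstrap mean; one optimistic/pessimistic pseudo-reward pair per observation is an assumption):

        import numpy as np

        def giro_choose(histories, rng):
            # histories: one array of observed 0/1 rewards per arm
            means = []
            for i, h in enumerate(histories):
                if len(h) == 0:
                    return i  # pull any arm with no history yet
                # one success and one failure pseudo reward per real observation
                padded = np.concatenate([h, np.ones(len(h)), np.zeros(len(h))])
                boot = rng.choice(padded, size=len(padded), replace=True)
                means.append(boot.mean())
            return int(np.argmax(means))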
    • Guided evolutionary strategies: Augmenting random search with surrogate gradients
      • Many applications in machine learning require optimizing a function whose true gradient is inaccessible, but where surrogate gradient information (directions that may be correlated with, but not necessarily identical to, the true gradient) is available instead. This arises when an approximate gradient is easier to compute than the full gradient (e.g. in meta-learning or unrolled optimization), or when a true gradient is intractable and is replaced with a surrogate (e.g. in certain reinforcement learning applications or training networks with discrete variables). We propose Guided Evolutionary Strategies, a method for optimally using surrogate gradient directions along with random search. We define a search distribution for evolutionary strategies that is elongated along a subspace spanned by the surrogate gradients. This allows us to estimate a descent direction which can then be passed to a first-order optimizer. We analytically and numerically characterize the trade-offs that result from tuning how strongly the search distribution is stretched along the guiding subspace, and use this to derive a setting of the hyperparameters that works well across problems. Finally, we apply our method to example problems, demonstrating an improvement over both standard evolutionary strategies and first-order methods that directly follow the surrogate gradient.
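      • A sketch of the sampling scheme as I read it: perturbations drawn from a covariance that mixes isotropic noise with the subspace spanned by the surrogate gradients (the 0.5 trade-off and scale are placeholder hyperparameters, not the paper's derived setting):

        import numpy as np

        def guided_es_sample(surrogate_grads, n, alpha=0.5, sigma=0.1, rng=None):
            rng = rng or np.random.default_rng()
            # Orthonormal basis U for the k-dimensional guiding subspace
            U, _ = np.linalg.qr(np.asarray(surrogate_grads).T)
            k = U.shape[1]
            iso = rng.normal(size=n)      # isotropic exploration
            sub = U @ rng.normal(size=k)  # exploration along the surrogate gradients
            return sigma * (np.sqrt(alpha / n) * iso + np.sqrt((1 - alpha) / k) * sub)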
    • 2019 Workshop on Human In the Loop Learning (HILL)
      • This workshop is a joint effort between the 4th ICML Workshop on Human Interpretability in Machine Learning (WHI) and the ICML 2019 Workshop on Interactive Data Analysis System (IDAS). We have combined our forces this year to run Human in the Loop Learning (HILL) in conjunction with ICML 2019!
      • The workshop will bring together researchers and practitioners who study interpretable and interactive learning systems with applications in large scale data processing, data annotations, data visualization, human-assisted data integration, systems and tools to interpret machine learning models as well as algorithm designs for active learning, online learning, and interpretable machine learning algorithms. The target audience for the workshop includes people who are interested in using machines to solve problems by having a human be an integral part of the process. This workshop serves as a platform where researchers can discuss approaches that bridge the gap between humans and machines and get the best of both worlds.
  • More JASSS paper
  • Start on clustering hyperparameter search
    • Created ClusterEvaluator. Going to use learning_optimizer as the search space evaluator – Done
  • Waikato meeting
    • Extract data from the PHP and Slack DBs for Tony and JASSS

Phil 6.7.19

7:00 – 4:30 ASRC GEOS

  • Expense report
  • Learned how to handle overtime
  • Dissertation. At 68 pages into the Very Horrible First Draft (VHFD)
  • Meeting with Wayne. Walked through the JASSS paper and CHIPLAY reviews
  • Set arguments to the DTW system so that a specified number of rows can be evaluated, to support parallelization (rough shape sketched below) – done (image: Split)
  • Start clustering? Nope. Wrote up a report instead
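  • The split interface is roughly this shape (flag names are placeholders, not the actual TimeSeriesML arguments):

    import argparse

    p = argparse.ArgumentParser(description="evaluate a slice of rows with DTW")
    p.add_argument("--start_row", type=int, default=0)
    p.add_argument("--num_rows", type=int, default=100)
    args = p.parse_args()
    # Each parallel worker gets its own contiguous slice of the csv
    rows_to_do = range(args.start_row, args.start_row + args.num_rows)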

Phil 6.4.19

7:00 – 4:00 ASRC NASA GEOS

  • Continuing to read Colin Martindale's Cognitive Psychology: A Neural-Network Approach, which is absolutely bonkers for something written decades ago. Ordered two more copies.
  • JASSS Paper. Adding footnotes to figures, which is tricky.
  • Dissertation
    • Took the chapter numbers out of the file names, since these things seem to be sliding around quite a bit
  • Registered for Politics and Computational Social Science (PACSS) Conference
  • GROUP paper?
  • Waveform clustering
    • Adding noise to the float_functions class. Here’s the waveform without and with some (0.1) noise:
    • Installed fastdtw for Python
    • DTW is working on the lines in the csv. Identical lines have zero distance; noisy ones have some. Need to think about some kind of normalizing measure. Maybe divide by the number of points?
    • Need to iterate as nested loops over all the rows. Skip when i == j – done
    • Need to build a DataFrame of distances from one row to the next – done (the whole pass is sketched at the end of this list)
    • Here are the two curves to compare (image: TwoCurves)
    • And here's the DTW result (image: DTW)
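    • The whole pass, roughly, including the path-length normalization idea (a sketch; the csv layout and file name are assumptions):

      import numpy as np
      import pandas as pd
      from fastdtw import fastdtw

      rows = pd.read_csv("waveforms.csv").to_numpy()  # one waveform per row
      n = len(rows)
      dist = np.zeros((n, n))
      for i in range(n):
          for j in range(n):
              if i == j:
                  continue  # skip self-comparisons; they're zero by definition
              d, path = fastdtw(rows[i], rows[j])
              dist[i, j] = d / len(path)  # normalize by warping-path length
      dist_df = pd.DataFrame(dist)  # row-to-row distance matrix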
  • Good Waikato meeting. We’ll try to run a jury next week. Also, meetings have been moved to 6:30 EST

Phil 5.31.19

7:00 – 3:00 NASA GEOS

  • Got a proposal from Panos and his group. Michael Mayo is interested in running Google’s Universal Sentence Encoder on the data
  • Defending Against Neural Fake News
    • Recent progress in natural language generation has raised dual-use concerns. While applications like summarization and translation are positive, the underlying technology also might enable adversaries to generate neural fake news: targeted propaganda that closely mimics the style of real news. 
      Modern computer security relies on careful threat modeling: identifying potential threats and vulnerabilities from an adversary’s point of view, and exploring potential mitigations to these threats. Likewise, developing robust defenses against neural fake news requires us first to carefully investigate and characterize the risks of these models. We thus present a model for controllable text generation called Grover. Given a headline like ‘Link Found Between Vaccines and Autism,’ Grover can generate the rest of the article; humans find these generations to be more trustworthy than human-written disinformation.
    • Developing robust verification techniques against generators like Grover is critical. We find that best current discriminators can classify neural fake news from real, human-written, news with 73% accuracy, assuming access to a moderate level of training data. Counterintuitively, the best defense against Grover turns out to be Grover itself, with 92% accuracy, demonstrating the importance of public release of strong generators. We investigate these results further, showing that exposure bias — and sampling strategies that alleviate its effects — both leave artifacts that similar discriminators can pick up on. We conclude by discussing ethical issues regarding the technology, and plan to release Grover publicly, helping pave the way for better detection of neural fake news.
  • Retooling CHIPLAY for GROUP. Deadline is June 21
  • More JASSS tweaking:
    • Switched the URLs in the paper to antibubbles to anonymize – done

Phil 5.30.19

7:00 – 2:30 NASA GEOS

  • CHI Play reviews should come back today!
    • Darn – rejected. From the reviews, it looks like we are in the same space, but going a different direction – an alignment problem. Need to read the reviews in detail though.
    • Some discussion with Wayne about GROUP
  • More JASSS paper
    • Added some broader thoughts to the conclusion and punched up the subjective/objective map difference
  • Start writing proposal for Bruce
    • Simple simulation baseline for model building
    • Develop models for:
      • Extrapolating multivariate (family) values, including error conditions
      • Classifying errors
      • An explainable model whose sensor inputs drive the controls of a model that produces outputs evaluated against the original inputs using RL
      • “Safer” ML using a Sanhedrin approach
  • EfficientNet: Improving Accuracy and Efficiency through AutoML and Model Scaling
    • In our ICML 2019 paper, “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks”, we propose a novel model scaling method that uses a simple yet highly effective compound coefficient to scale up CNNs in a more structured manner. Unlike conventional approaches that arbitrarily scale network dimensions, such as width, depth and resolution, our method uniformly scales each dimension with a fixed set of scaling coefficients. Powered by this novel scaling method and recent progress on AutoML, we have developed a family of models, called EfficientNets, which surpass state-of-the-art accuracy with up to 10x better efficiency (smaller and faster). (image: EfficientNet)
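    • The compound scaling rule itself is tiny: one coefficient φ grows depth, width, and resolution together (α, β, γ below are the grid-searched base values the paper reports for the B0 baseline):

      # depth scales by alpha^phi, width by beta^phi, resolution by gamma^phi
      alpha, beta, gamma = 1.2, 1.1, 1.15
      for phi in range(5):
          print(f"phi={phi}: depth x{alpha**phi:.2f}, "
                f"width x{beta**phi:.2f}, resolution x{gamma**phi:.2f}")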