Category Archives: Phil

Phil 1.11.20

On the Relationship between Self-Attention and Convolutional Layers

  • Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block. Beyond helping CNNs to handle long-range dependencies, Ramachandran et al. (2019) showed that attention can completely replace convolution and achieve state-of-the-art performance on vision tasks. This raises the question: do learned attention layers operate similarly to convolutional layers? This work provides evidence that attention layers can perform convolution and, indeed, they often learn to do so in practice. Specifically, we prove that a multi-head self-attention layer with sufficient number of heads is at least as powerful as any convolutional layer. Our numerical experiments then show that the phenomenon also occurs in practice, corroborating our analysis. Our code is publicly available.
  • I’ve just started to think about how machines and humans could serve as different attention heads, which may be why we concentrate into populations with shared features. Attention, given the right conditions, may be an emergent phenomenon. Need to look at Kauffman.
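The paper’s core claim (with enough heads, each attending to a single relative offset, multi-head self-attention can reproduce any convolution) can be checked numerically in a toy 1D case. This sketch is mine, not the paper’s construction; the one-hot attention and scalar value projections are simplifying assumptions:

```python
# Toy check in 1D: give each attention "head" a one-hot attention
# distribution fixed on one relative offset, and a scalar value
# projection equal to one kernel tap. Summing the heads then
# reproduces a zero-padded convolution exactly.

def conv1d(x, kernel):
    """Plain zero-padded 1D convolution (correlation form)."""
    k = len(kernel)
    pad = k // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * xp[i + j] for j in range(k)) for i in range(len(x))]

def attention_as_conv(x, kernel):
    """Same result via len(kernel) 'heads', one per relative offset."""
    k = len(kernel)
    pad = k // 2
    out = [0.0] * len(x)
    for j in range(k):                # head j handles offset j - pad
        for i in range(len(x)):
            src = i + j - pad         # the one position this head attends to
            v = x[src] if 0 <= src < len(x) else 0.0
            out[i] += kernel[j] * v   # head's value projection = kernel tap
    return out
```

For x = [1, 2, 3, 4] and kernel [1, 0, -1] the two functions return the same list, which is the theorem in miniature.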

Dissertation

  • More Foreword – done!
  • Dedication – done
  • Acknowledgements – started!
  • Sometime between the end of the foreword and meeting with Aaron, move over to the new template

Phil 1.10.20

7:00 – 4:30 ASRC PhD, BD, GOES

  • Dissertation
    • Stampedes are a form of runaway attention, and precision/recall aid that process
    • Starting on the foreword. Using the Arab Spring and GamerGate as the framing
  • 11:00 VOLPE Meeting
    • Pursuing the resilience proposal was well received. Next, go up and meet with the folks?
  • Install card – done! Passed the smoke test

Phil 1.9.20

7:00 – 5:00 ASRC PhD, GOES

metaphorNLP highlights podcast

Dissertation

  • Fix H3a-c – look at the heatmaps to see if there is some way of showing cell visitation as trustworthy, low border cells as safe, and stampede conditions as untrustworthy. Otherwise, use DTW
  • Helpful information on Excel Histograms
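If DTW turns out to be the way to compare the heatmap time series, the distance itself is small to implement. A textbook O(n·m) sketch; the sequences here are hypothetical stand-ins for the real visitation data:

```python
# Minimal dynamic time warping distance with |a - b| local cost.

def dtw_distance(a, b):
    """DTW distance between two numeric sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```

Unlike a point-by-point comparison, identical shapes at different speeds come out at distance zero, which is the property that matters for comparing runs of different lengths.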

Nomad, flocking, and stampeding heatmaps

  • A border/core ratio explains this nicely: when border dwell time (BDT) > 1, dangerous stampede; when BDT = 1, nomads; when BDT < 1, flocking.
  • Updated the simulation results section. Now I need to update the conclusion hypothesis. – done!
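That border/core rule is simple enough to write down as a classifier. The function name, the tolerance band around 1, and the labels are my hypothetical reading of the rule, not code from the simulation:

```python
def classify_behavior(border_dwell_ratio, tolerance=0.05):
    """Label population behavior from the border/core dwell-time ratio (BDT).
    Hypothetical reading of the rule: BDT > 1 -> stampede,
    BDT ~ 1 -> nomadic, BDT < 1 -> flocking."""
    if border_dwell_ratio > 1.0 + tolerance:
        return "stampede"
    if border_dwell_ratio < 1.0 - tolerance:
        return "flocking"
    return "nomadic"
```

A tolerance band is needed in practice because a measured ratio will never land on exactly 1.0.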

Got my graphics card!

Phil 1.8.20

7:00- 4:00 ASRC PhD, GOES

BREAKING

  • Dissertation
    • Finishing discussion – done
    • Rolling in TACJ from introduction – done
    • Adding conclusions – done
    • Fix H3a-c
  • Reimbursement for fall – done
  • Mission Drive meeting (need to get time for dissertation and GSAW prep)

Phil 1.7.20

ASRC PhD 7:00 – 7:00

WP

  • Dissertation
    • Started the exec summary. I think the formatting is fine and it doesn’t show up in the TOC
    • Started the discussion overview
    • Fixed a bunch of orphan numbers, figure references and other formatting

Phil 1.6.20

7:00 – 8:00 ASRC PhD

  • Dr. Yueh is a Fellow in Economics at St Edmund Hall, Oxford University, and Adjunct Professor of Economics at London Business School.
  • Dissertation
    • Adding more chapter summaries
      • Maps – done
      • Human Study – done
      • Discussion
      • Conclusions
  • Long chat with Aaron M
    • The front matter is your cover letter
    • Search and replace et. al. -> et al., ”. -> .”, and check all footnotes
    • Exec summary can be done as a renumber after main doc
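Those search-and-replace passes are scriptable rather than done by hand in the editor. A rough sketch over the .tex text; the patterns are my guesses at the two fixes Aaron meant:

```python
import re

def clean_tex(text):
    """Two cleanup passes: normalize 'et. al.' to 'et al.' and move a
    trailing period inside a closing curly quote. Patterns are guesses
    at the intended fixes, not a vetted LaTeX linter."""
    text = re.sub(r"\bet\.\s+al\.", "et al.", text)
    text = text.replace("\u201d.", ".\u201d")  # ". -> ."
    return text
```

Running it file-by-file with a diff before saving would be safer than a blind replace, since footnotes still need eyeballing.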

Phil 1.5.20

 

MAGA

  • Roger pointed me at ‘Most advanced, yet acceptable’: Typicality and novelty as joint predictors of aesthetic preference in industrial design
    • Typicality and novelty have often been shown to be related to aesthetic preference of human artefacts. Since a typical product is rarely new and, conversely, a novel product will not often be designated as typical, the positive effects of both features seem incompatible. In three studies it was shown that typicality (operationalized as ‘goodness of example’) and novelty are jointly and equally effective in explaining the aesthetic preference of consumer products, but that they suppress each other’s effect. Direct correlations between both variables and aesthetic preference were not significant, but each relationship became highly significant when the influence of the other variable was partialed out. In Study 2, it was furthermore demonstrated that the expertise level of observers did not affect the relative contribution of novelty and typicality. It was finally shown (Study 3) that a more ‘objective’ measure of typicality, central tendency — operationalized as an exemplar’s average similarity to all other members of the category — yielded the same effect of typicality on aesthetic preference. In sum, all three studies showed that people prefer novel designs as long as the novelty does not affect typicality, or, phrased differently, they prefer typicality given that this is not to the detriment of novelty. Preferred are products with an optimal combination of both aspects.
  • Trust is earned in the smallest of moments. It is earned not through heroic deeds, or even highly visible actions, but through paying attention, listening, and gestures of genuine care and connection. Brené Brown
  • If we share group membership with others across a range of social settings it becomes more likely that the actors will face future exchanges with reversed roles (Resnick, 2002). Repeated interactions with stable identities also allow the trustor to accumulate knowledge about the trustee and to make better predictions about his behavior. Thus, by extrapolating from past behavior, trust in future encounters can grow. The mechanics of trust: A framework for research and design
  • Dissertation
    • Adding more chapter summaries
      • Simulation – done
      • Adversarial Herding – done
      • Maps
      • Human Study
      • Discussion
      • Conclusions
  • Read “I Just Google It”: Folk Theories of Distributed Discovery
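The “partialed out” step in the MAYA abstract above is ordinary partial correlation: correlate preference with novelty after removing the variance shared with typicality, and vice versa. A pure-stdlib sketch; the test data is made up, not the study’s:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

def partial_corr(x, y, z):
    """Correlation of x and y with the z-shared variance removed."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / sqrt((1 - rxz ** 2) * (1 - ryz ** 2))
```

The suppression effect in the paper is exactly the case where rxy is near zero but the partial correlation is large.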

Phil 1.3.20

7:00 – 5:00 ASRC PhD

  • Diversity promotes collective intelligence in large groups but harms small ones
    • Diverse groups are often said to be less susceptible to decision errors resulting from herding and polarization. Thus, the fact that many modern interactions happen in a digital world, where filter bubbles and homophily bring people together, is an alarming yet poorly understood phenomenon. But online interactions are also characterized by unprecedented scale, where thousands of individuals can exchange ideas simultaneously. Evidence in collective intelligence however suggests that small (rather than large) groups tend to do better in complex information environments. Here, we adopt the well-established framework of social learning theory (from the fields of ecology and cultural evolution) to explore the causal link between diversity and performance as a function of group size. In this pre-registered study, we experimentally manipulate both group diversity and group size, and measure individual and group performance in realistic geo-political judgements. We find that diversity hinders the performance of individuals in small groups, but improves it in large groups. Furthermore, aggregating opinions of modular crowds composed of small independent but homogeneous groups achieves better results than using non-modular diverse ones. The results are explained by greater conflict of opinion in diverse groups, which negatively impacts small (but not large) groups. The present work sheds light on the causal mechanisms underlying the success (or lack thereof) of diverse groups in digital environments, and suggests that diversity research can benefit from adopting a wider social learning perspective.
  • “I Just Google It”: Folk Theories of Distributed Discovery
    • A significant minority of people do not follow news regularly, and a growing number rely on distributed discovery (especially social media and search engines) to stay informed. Here, we analyze folk theories of news consumption. On the basis of an inductive analysis of 43 in-depth interviews with infrequent users of conventional news, we identify three complementary folk theories (“news finds me,” “the information is out there,” and “I don’t know what to believe”) that consumers draw on when making sense of their information environment. We show that the notion of folk theories help unpack the different, complementary, sometimes contradictory cultural resources people rely on as they navigate digital media and public affairs, and we argue that studying those who rarely engage directly with news media but do access information via social media and search provides a critical case study of the dynamics of an environment increasingly defined by platforms.
  • Dissertation
    • Working on Lit Review overview
    • Fixed the margins for blockquotes by creating a more flexible changemargin command
      \def\changemargin#1#2{\list{}{\rightmargin#2\leftmargin#1}\item[]}
      \let\endchangemargin=\endlist
    • Which is used like this
      \begin{changemargin}{1.5cm}{1.5cm} 
      	They were one man, not thirty. For as the one ship that held them all; though it was put together of all contrasting things-oak, and maple, and pine wood; iron, and pitch, and hemp-yet all these ran into each other in the one concrete hull, which shot on its way, both balanced and directed by the long central keel; even so, all the individualities of the crew, this man’s valor, that man’s fear; guilt and guiltiness, all varieties were welded into oneness, and were all directed to that fatal goal which Ahab their one lord and keel did point to.
      \end{changemargin}
    • Fixed a bunch of things, including blockquotes
    • Added ch_lit_review_overview.tex
      • Biological Basis – done
      • Human Belief Spaces – done
      • Dimension Reduction – done
      • Orientation – done
      • Velocity – done
      • Social Influence Horizon – done
      • Bones in a hut – started
  • 1:00 Dentist

Phil 1.2.20

7:00 – 4:30 ASRC PhD

  • More highlighting and slides. Once I get through the Background section, I’ll write the overview, then repeat that pattern.
    • I’m tweaking too much text to keep the markup version. Sigh.
    • Finished Background and sent that to Wayne
  • GPT-2 Agents. See if we can get multiple texts generated – nope
    • Build a corpus of .txt files
    • Try running them through LMN
  • No NOAA meeting
  • No ORCA meeting

Phil 1.1.20

7:00 – 11:30, 3:00 – 5:00 PhD

  • More slides. I think I’m going to try saving a snapshot of the PDF that I can highlight and annotate.
    • That works, though every time I want to make an edit, I go back to the source material and forget to use the other pdf.
    • Also, saving out the PDF using Acrobat really shrinks the file size, 50MB down to 2.7MB
    • Finished Motivation and Introduction. Working on Background
  • Nice bike ride to start the year off

Phil 12.31.19

CodeIt

Get tix for ET 2020

7:00 – 4:30 PhD

  • Starting slides as a way to do the chapter overviews and summaries
  • GPT-2 agents
    • Got rid of Huggingface’s transformers library. Too much hidden stuff to understand
    • Aaron found a couple of other projects on GitHub – trying those
    • Downloaded the 715M model and associated files

And I’m a guest editor! IEEE

Phil 12.30.19

7:00 – 7:00 ASRC PhD

ClimateTree

  • Nice visualization, with map-like aspects: The Climate Learning Tree
  •  Dissertation
    • Start JuryRoom section – done!
    • Finished all content!
  • GPT-2 Agents
    • Download big model and try to run it
    • Move models and code out of the transformers project
  • GOES
    • Learning by Cheating (sounds like a mechanism for simulation to work with)
      • Vision-based urban driving is hard. The autonomous system needs to learn to perceive the world and act in it. We show that this challenging learning problem can be simplified by decomposing it into two stages. We first train an agent that has access to privileged information. This privileged agent cheats by observing the ground-truth layout of the environment and the positions of all traffic participants. In the second stage, the privileged agent acts as a teacher that trains a purely vision-based sensorimotor agent. The resulting sensorimotor agent does not have access to any privileged information and does not cheat. This two-stage training procedure is counter-intuitive at first, but has a number of important advantages that we analyze and empirically demonstrate. We use the presented approach to train a vision-based autonomous driving system that substantially outperforms the state of the art on the CARLA benchmark and the recent NoCrash benchmark. Our approach achieves, for the first time, 100% success rate on all tasks in the original CARLA benchmark, sets a new record on the NoCrash benchmark, and reduces the frequency of infractions by an order of magnitude compared to the prior state of the art. For the video that summarizes this work, see this https URL
  • Meeting with Aaron
    • Overview at the beginning of each chapter – look at Aaron’s chapter 5 for example intro and summary.
    • Callouts in text should match the label
    • hfill to right-justify
    • Footnote goes after punctuation
    • Punctuation goes inside quotes
    • for url monospace use \texttt{} (perma.cc)
    • indent blockquotes 1/2 more tab
    • Non breaking spaces on names
    • Increase figure sizes in intro

Phil 12.28.19

Calculating Political Bias and Fighting Partisanship with AI

  • I tried this with the abstract to a paper that Google Scholar has been suggesting I read:
  • The Thorny Challenge of Making Moral Machines: Ethical Dilemmas with Self-Driving Cars
    • The algorithms that control AVs will need to embed moral principles guiding their decisions in situations of unavoidable harm. Manufacturers and regulators are confronted with three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers. The presented moral machine study is a step towards solving this problem as it tries to learn how people all over the world feel about the alternative decisions the AI of self-driving vehicles might have to make. The global study displayed broad agreement across regions regarding how to handle unavoidable accidents. To master the moral challenges, all stakeholders should embrace the topic of machine ethics: this is a unique opportunity to decide as a community what we believe to be right or wrong, and to make sure that machines, unlike humans, unerringly follow the agreed-upon moral preferences. The integration of autonomous cars will require a new social contract that provides clear guidelines about who is responsible for different kinds of accidents, how monitoring and enforcement will be performed, and how trust among all stakeholders can be engendered.

      The online analyzer (https://www.thebipartisanpress.com/analyze-bias/#) found this to have some left bias. I think it’s unable to distinguish between opinion presentation and fact presentation.

Phil 12.27.19

ASRC PhD 7:00 –

  • The difference between “more” (low dimension stampede-ish), and “enough” (grounded and comparative) – from Rebuilding the Social Contract, Part 2
  • Dissertation – finished Limitations!
  • GPT-2
    • Having installed all the transformers-related libraries, I’m testing the evolver to see if it still works. Woohoo! Onward
    • Is this good? It seems to have choked on the Torch examples, which makes sense
      D:\Development\Sandboxes\transformers>make test-examples
      python -m pytest -n auto --dist=loadfile -s -v ./examples/
      ================================================= test session starts =================================================
      platform win32 -- Python 3.7.4, pytest-5.3.2, py-1.8.0, pluggy-0.13.1 -- D:\Program Files\Python37\python.exe
      cachedir: .pytest_cache
      rootdir: D:\Development\Sandboxes\transformers
      plugins: forked-1.1.3, xdist-1.31.0
      [gw0] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw1] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw2] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw3] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw4] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw5] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw6] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw7] win32 Python 3.7.4 cwd: D:\Development\Sandboxes\transformers
      [gw0] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw1] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw2] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw3] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw4] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw5] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw6] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      [gw7] Python 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
      gw0 [0] / gw1 [0] / gw2 [0] / gw3 [0] / gw4 [0] / gw5 [0] / gw6 [0] / gw7 [0]
      scheduling tests via LoadFileScheduling
      
      ======================================================= ERRORS ========================================================
      _____________________________________ ERROR collecting examples/test_examples.py ______________________________________
      ImportError while importing test module 'D:\Development\Sandboxes\transformers\examples\test_examples.py'.
      Hint: make sure your test modules/packages have valid Python names.
      Traceback:
      examples\test_examples.py:23: in 
          import run_generation
      examples\run_generation.py:25: in 
          import torch
      E   ModuleNotFoundError: No module named 'torch'
      _________________________ ERROR collecting examples/summarization/test_utils_summarization.py _________________________
      ImportError while importing test module 'D:\Development\Sandboxes\transformers\examples\summarization\test_utils_summarization.py'.
      Hint: make sure your test modules/packages have valid Python names.
      Traceback:
      examples\summarization\test_utils_summarization.py:18: in 
          import torch
      E   ModuleNotFoundError: No module named 'torch'
      ================================================== 2 errors in 1.57s ==================================================
      make: *** [test-examples] Error 1
    • Hmm. run_generation.py seems to need Torch. This sets off a whole bunch of issues. First, installing Torch from here provides a cool little tool to determine what to install: Torch
    • Note that the available versions of CUDA are 9.2 and 10.1. This is a problem, because at the moment, TF only works with 10.0. Mostly because the user community hates upgrading drivers. TFCuda
    • That being said, it may be true that the release candidate TF is using CUDA 10.1: TFCuda10.1
    • I think I’m going to wait until Aaron shows up to decide if I want to jump down this rabbit hole. In the meantime, I’m going to look at other TF implementations of the GPT-2. Also, the actual use of Torch seems pretty minor, so maybe it’s avoidable?
      • It appears to be just this method
        def set_seed(args):
            np.random.seed(args.seed)
            torch.manual_seed(args.seed)
            if args.n_gpu > 0:
                torch.cuda.manual_seed_all(args.seed)
      • And the code that calls it
            args.device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu")
            args.n_gpu = torch.cuda.device_count()
        
            set_seed(args)
    • Aaron suggested using a previous version of torch that is compatible with CUDA 10.0. All the previous versions are here, and this is the line that should work (the huggingface transformers repo “is tested on Python 3.5+, PyTorch 1.0.0+ and TensorFlow 2.0.0-rc1“):
      pip install torch==1.2.0 torchvision==0.4.0 -f https://download.pytorch.org/whl/torch_stable.html
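If the actual use of Torch really is just the seeding code above, one untested way to avoid the hard dependency is to guard the import. This is a sketch of mine, not the repo’s code; the adapted signature and return value (used only to see which path ran) are my additions:

```python
import random

def set_seed(seed, n_gpu=0):
    """Seed the stdlib RNG always, and torch's RNGs only when torch
    is importable. Sketch for dodging a hard torch requirement."""
    random.seed(seed)
    try:
        import torch
        torch.manual_seed(seed)
        if n_gpu > 0:
            torch.cuda.manual_seed_all(seed)
        return "torch"
    except ImportError:
        return "stdlib-only"
```

With this, a CPU-only or torch-free environment still gets reproducible runs, and the CUDA 9.2/10.0/10.1 tangle can be deferred.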

Phil 12.26.19

ASRC PhD 7:00 – 4:00

  • Dissertation
    • Limitations
  • GPT-2 agents setup – set up the project, but in the process of getting the huggingface transformers, I wound up setting up that project as well
    • Following directions for
      • pip install transformers
      • git clone https://github.com/huggingface/transformers
        • cd transformers
        • pip install .
      • pip install -e .[testing]
        • make test – oops. My GNU Make wasn’t on the path – fixed it
        • running tests
          • Some passed, some failed. Errors like: tests/test_modeling_tf_t5.py::TFT5ModelTest::test_compile_tf_model Fatal Python error: Aborted
          • Sure is keeping the processor busy… Like bringing the machine to its knees busy….
          • Finished – 14 failed, 10 passed, 196 skipped, 20 warnings in 1925.12s (0:32:05)
  • Fixed the coffee maker
  • Dealt with stupid credit card nonsense