Author Archives: pgfeldman

Phil 8.17.18

7:00 – 4:30 ASRC MKT

Phil 8.16.18

7:00 – 4:30 ASRC MKT

  • R2D3 is an experiment in expressing statistical thinking with interactive design. Find us at @r2d3us
  • Foundations of Temporal Text Networks
    • Davide Vega (Scholar)
    • Matteo Magnani (Scholar)
    • Three fundamental elements to understand human information networks are the individuals (actors) in the network, the information they exchange, that is often observable online as text content (emails, social media posts, etc.), and the time when these exchanges happen. An extremely large amount of research has addressed some of these aspects either in isolation or as combinations of two of them. There are also more and more works studying systems where all three elements are present, but typically using ad hoc models and algorithms that cannot be easily transferred to other contexts. To address this heterogeneity, in this article we present a simple, expressive and extensible model for temporal text networks, that we claim can be used as a common ground across different types of networks and analysis tasks, and we show how simple procedures to produce views of the model allow the direct application of analysis methods already developed in other domains, from traditional data mining to multilayer network mining.
      • Ok, I’ve been reading the paper and if I understand it correctly, it’s pretty straightforward and also clever. It relates a lot to the way that I do term-document matrices, and then extends the concept to include time, agents, and implicitly anything else you want. To illustrate, here’s a picture of a tensor-as-matrix (tensorIn2D). The important thing to notice is that there are multiple dimensions represented in a single square matrix. We have:
        • agents
        • documents
        • terms
        • steps
      • This picture in particular is of an undirected adjacency matrix. I think there are ways to handle in-degree and out-degree, though that’s probably better handled by having one matrix for in-degree and one for out-degree.
      • Because it’s a square matrix, we can calculate the number of steps between any pair of nodes, and the centrality, simply by squaring the matrix and keeping track of the steps until the eigenvector settles. We can also weight a node by multiplying that node’s row and column by a scalar. That changes the centrality, but not the connectivity. We can also drop out components (steps, for example) to see how that changes the underlying network properties.
      • If we want to see how time affects the development of the network, we can start with all the step nodes set to a zero weight, then add them in sequentially. This means, for example, that clustering could be performed on the nonzero nodes.
      • Some or all of the elements could be factorized using NMF, resulting in smaller, faster matrices.
      • Network embedding could be useful too. We get distances between nodes. And this looks really important: Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and node2vec
      • I think I can use any and all of the above methods on the network tensor I’m describing. This is very close to a mapping solution.
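As a sanity check, the squaring-and-weighting steps above can be sketched in a few lines of numpy. The labels and the toy adjacency matrix are my own illustration, not anything from the paper:

```python
import numpy as np

# Toy undirected adjacency matrix over mixed node types
# (agents, documents, terms, steps), one row/column per node.
labels = ["agent_a", "agent_b", "doc_1", "term_x", "step_0"]
A = np.array([
    [0, 0, 1, 0, 1],
    [0, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
], dtype=float)

# Squaring the matrix counts the two-step paths between every
# pair of nodes; repeated multiplication reaches longer paths.
A2 = A @ A

# Eigenvector centrality by power iteration: keep multiplying and
# normalizing until the vector settles.
v = np.ones(len(A))
for _ in range(100):
    v_next = A @ v
    v_next /= np.linalg.norm(v_next)
    if np.allclose(v, v_next, atol=1e-10):
        break
    v = v_next

# Weighting a node means scaling its row and column by a scalar;
# this changes the centrality scores but not which nodes connect.
W = A.copy()
W[2, :] *= 2.0  # up-weight doc_1
W[:, 2] *= 2.0
```

Zeroing a node’s row and column the same way gives the drop-out and sequential time-step experiments described above.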
  • The Shifting Discourse of the European Central Bank: Exploring Structural Space in Semantic Networks (cited by the above paper)
    • Convenient access to vast and untapped collections of documents generated by organizations is a valuable resource for research. These documents (e.g., Press releases, reports, speech transcriptions, etc.) are a window into organizational strategies, communication patterns, and organizational behavior. However, the analysis of such large document corpora does not come without challenges. Two of these challenges are 1) the need for appropriate automated methods for text mining and analysis and 2) the redundant and predictable nature of the formalized discourse contained in these collections of texts. Our article proposes an approach that performs well in overcoming these particular challenges for the analysis of documents related to the recent financial crisis. Using semantic network analysis and a combination of structural measures, we provide an approach that proves valuable for a more comprehensive analysis of large and complex semantic networks of formal discourse, such as the one of the European Central Bank (ECB). We find that identifying structural roles in the semantic network using centrality measures jointly reveals important discursive shifts in the goals of the ECB which would not be discovered under traditional text analysis approaches.
  • Comparative Document Analysis for Large Text Corpora
    • This paper presents a novel research problem, Comparative Document Analysis (CDA), that is, joint discovery of commonalities and differences between two individual documents (or two sets of documents) in a large text corpus. Given any pair of documents from a (background) document collection, CDA aims to automatically identify sets of quality phrases to summarize the commonalities of both documents and highlight the distinctions of each with respect to the other informatively and concisely. Our solution uses a general graph-based framework to derive novel measures on phrase semantic commonality and pairwise distinction, where the background corpus is used for computing phrase-document semantic relevance. We use the measures to guide the selection of sets of phrases by solving two joint optimization problems. A scalable iterative algorithm is developed to integrate the maximization of phrase commonality or distinction measure with the learning of phrase-document semantic relevance. Experiments on large text corpora from two different domains—scientific papers and news—demonstrate the effectiveness and robustness of the proposed framework on comparing documents. Analysis on a 10GB+ text corpus demonstrates the scalability of our method, whose computation time grows linearly as the corpus size increases. Our case study on comparing news articles published at different dates shows the power of the proposed method on comparing sets of documents.
  • Social and semantic coevolution in knowledge networks
    • Socio-semantic networks involve agents creating and processing information: communities of scientists, software developers, wiki contributors and webloggers are, among others, examples of such knowledge networks. We aim at demonstrating that the dynamics of these communities can be adequately described as the coevolution of a social and a socio-semantic network. More precisely, we will first introduce a theoretical framework based on a social network and a socio-semantic network, i.e. an epistemic network featuring agents, concepts and links between agents and between agents and concepts. Adopting a relevant empirical protocol, we will then describe the joint dynamics of social and socio-semantic structures, at both macroscopic and microscopic scales, emphasizing the remarkable stability of these macroscopic properties in spite of a vivid local, agent-based network dynamics.
  • Tensorflow 2.0 feedback request
    • Shortly, we will hold a series of public design reviews covering the planned changes. This process will clarify the features that will be part of TensorFlow 2.0, and allow the community to propose changes and voice concerns. Please join developers@tensorflow.org if you would like to see announcements of reviews and updates on process. We hope to gather user feedback on the planned changes once we release a preview version later this year.

Phil 8.12.18

7:00 – 4:00 ASRC MKT

  • Having an interesting chat on recommenders with Robin Berjon on Twitter
  • Long, but looks really good: Neural Processes as distributions over functions
    • Neural Processes (NPs) caught my attention as they essentially are a neural network (NN) based probabilistic model which can represent a distribution over stochastic processes. So NPs combine elements from two worlds:
      • Deep Learning – neural networks are flexible non-linear functions which are straightforward to train
      • Gaussian Processes – GPs offer a probabilistic framework for learning a distribution over a wide class of non-linear functions

      Both have their advantages and drawbacks. In the limited-data regime, GPs are preferable due to their probabilistic nature and ability to capture uncertainty. This differs from (non-Bayesian) neural networks, which represent a single function rather than a distribution over functions. However, the latter might be preferable in the presence of large amounts of data, as training NNs is computationally much more scalable than inference for GPs. Neural Processes aim to combine the best of these two worlds.
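The uncertainty point is easy to see with a minimal GP regression from scratch (RBF kernel, toy data; this is a generic GP sketch, not the NP model from the post):

```python
import numpy as np

def rbf(x1, x2, length=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# A handful of observations of sin(x)
x_train = np.array([-4.0, -2.0, 0.0, 1.0])
y_train = np.sin(x_train)

# One test point near the data, one far away from it
x_test = np.array([0.5, 8.0])

K = rbf(x_train, x_train) + 1e-6 * np.eye(len(x_train))  # jitter
K_s = rbf(x_train, x_test)
K_ss = rbf(x_test, x_test)

# Standard GP posterior mean and covariance
K_inv = np.linalg.inv(K)
mean = K_s.T @ K_inv @ y_train
cov = K_ss - K_s.T @ K_inv @ K_s
std = np.sqrt(np.diag(cov))

# std[0] (near the data) is small; std[1] (far away) reverts to
# the prior, i.e. the model knows what it doesn't know.
```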

  • How The Internet Talks (Well, the mostly young and mostly male users of Reddit, anyway)
    • To get a sense of the language used on Reddit, we parsed every comment since late 2007 and built the tool above, which enables you to search for a word or phrase to see how its popularity has changed over time. We’ve updated the tool to include all comments through the end of July 2017.
  • Add breadcrumbs to slides
  • Download videos – done! Put these in the ppt backup
  • Fix the DTW emergent population chart on the poster and in the slides. Print!
  • Set up the LaTeX Army BAA framework
  • Olsson
  • Slide walkthrough. Good timing. Working on the poster some more (AdversarialHerding2)

Phil 8.14.18

7:00 – 4:30 ASRC MKT

  • Presented LaTeX talk/workshop. I think it needs to be a more focused SIGCHI workshop that steps through the transition from a template document to a document with all the needed parts
    • Will’s document then becomes a resource for how to do a particular task.
  • Promoted The Radio in Fascist Italy as a Phlog post. Need to add a takeaway section
  • Georgetown Law Technology Review (Vol 2, Issue 2)
  • More poster work (AdversarialHerding2)
  • BAA work? Lots, actually. Dug through the Army’s and found many good leads
  • Add to the list of things to read: How social media took us from Tahrir Square to Donald Trump
    • To understand how digital technologies went from instruments for spreading democracy to weapons for attacking it, you have to look beyond the technologies themselves.

Phil 8.13.18

7:00 – 4:30 ASRC MKT

Phil 8.11.18

The Communicative Constitution of Hate Organizations Online: A Semantic Network Analysis of “Make America Great Again”

  • In the context of the 2016 U.S. Presidential Election, President Donald Trump’s use of Twitter to connect with followers and supporters created unprecedented access to Trump’s online political campaign. In using the campaign slogan, “Make America Great Again” (or its acronym “MAGA”), Trump communicatively organized and controlled media systems by offering his followers an opportunity to connect with his campaign through the discursive hashtag. In effect, the strategic use of these networks over time communicatively constituted an effective and winning political organization; however, Trump’s political organization was not without connections to far-right and hate groups that coalesced in and around the hashtag. Semantic network analyses uncovered how the textual nature of #MAGA organized connections between hashtags, and, in doing so, exposed connections to overtly White supremacist groups within the United States and the United Kingdom throughout late November 2016. Cluster analyses further uncovered semantic connections to White supremacist and White nationalist groups throughout the hashtag networks connected to the central slogan of Trump’s presidential campaign. Theoretically, these findings contribute to the ways in which hashtag networks show how Trump’s support developed and united around particular organizing processes and White nationalist language, and provide insights into how these networks discursively create and connect White supremacists’ organizations to Trump’s campaign.


Phil 8.10.18

7:00 – ASRC MKT

  • Finished the first pass through the SASO slides. Need to start working on timing (25 min + 5 min questions)
  • Start on poster (A0 size)
  • Sent Wayne a note to get permission for 899
  • Started setting up laptop. I hate this part. Google drive took hours to synchronize
    • Java
    • Python/Nvidia/Tensorflow
    • Intellij
    • Visual Studio
    • MikTex
    • TexStudio
    • Xampp
    • Vim
    • TortoiseSVN
    • WinSCP
    • 7-zip
    • Creative Cloud
      • Acrobat
      • Reader
      • Illustrator
      • Photoshop
    • Microsoft suite
    • Express VPN

Phil 8.9.18

7:00 – 3:00 ASRC MKT

  • Working on the herding slide
  • Animals Teach Robots to Find Their Way
    • Michael Milford – “I always regard spatial intelligence as a gateway to understanding higher-level intelligence. It’s the mechanism by which we can build on our understanding of how the brain works.”
  • Direct recordings of grid-like neuronal activity in human spatial navigation
    • Grid cells in the entorhinal cortex appear to represent spatial location via a triangular coordinate system. Such cells, which have been identified in rats, bats, and monkeys, are believed to support a wide range of spatial behaviors. By recording neuronal activity from neurosurgical patients performing a virtual-navigation task we identified cells exhibiting grid-like spiking patterns in the human brain, suggesting that humans and simpler animals rely on homologous spatial-coding schemes. (Human grid cells)
  • The cognitive map in humans: spatial navigation and beyond
    • The ‘cognitive map’ hypothesis proposes that the brain builds a unified representation of the spatial environment to support memory and guide future action. Forty years of electrophysiological research in rodents suggest that cognitive maps are neurally instantiated by place, grid, border and head direction cells in the hippocampal formation and related structures. Here we review recent work that suggests a similar functional organization in the human brain and yields insights into how cognitive maps are used during spatial navigation. Specifically, these studies indicate that (i) the human hippocampus and entorhinal cortex support map-like spatial codes, (ii) posterior brain regions such as parahippocampal and retrosplenial cortices provide critical inputs that allow cognitive maps to be anchored to fixed environmental landmarks, and (iii) hippocampal and entorhinal spatial codes are used in conjunction with frontal lobe mechanisms to plan routes during navigation. We also discuss how these three basic elements of cognitive map based navigation—spatial coding, landmark anchoring and route planning—might be applied to nonspatial domains to provide the building blocks for many core elements of human thought.
  • Spatial scaffold effects in event memory and imagination
    • Jessica Robin
    • Spatial context is a defining feature of episodic memories, which are often characterized as being events occurring in specific spatiotemporal contexts. In this review, I summarize research suggesting a common neural basis for episodic and spatial memory and relate this to the role of spatial context in episodic memory. I review evidence that spatial context serves as a scaffold for episodic memory and imagination, in terms of both behavioral and neural effects demonstrating a dependence of episodic memory on spatial representations. These effects are mediated by a posterior-medial set of neocortical regions, including the parahippocampal cortex, retrosplenial cortex, posterior cingulate cortex, precuneus, and angular gyrus, which interact with the hippocampus to represent spatial context in remembered and imagined events. I highlight questions and areas that require further research, including differentiation of hippocampal function along its long axis and subfields, and how these areas interact with the posterior-medial network.
  • Identifying the cognitive processes underpinning hippocampal-dependent tasks (preprint, not peer-reviewed)
    • Autobiographical memory, future thinking and spatial navigation are critical cognitive functions that are thought to be related, and are known to depend upon a brain structure called the hippocampus. Surprisingly, direct evidence for their interrelatedness is lacking, as is an understanding of why they might be related. There is debate about whether they are linked by an underlying memory-related process or, as has more recently been suggested, because they each require the endogenous construction of scene imagery. Here, using a large sample of participants and multiple cognitive tests with a wide spread of individual differences in performance, we found that these functions are indeed related. Mediation analyses further showed that scene construction, and not memory, mediated (explained) the relationships between the functions. These findings offer a fresh perspective on autobiographical memory, future thinking, navigation, and also on the hippocampus, where scene imagery appears to play a highly influential role.
  • Home early to wait for FedEx. And here’s a fun thing: dkgpgukx0aatbal

Phil 8.8.18

7:00 – 4:00 ASRC MKT

  • Oh, look, a new Tensorflow (1.10). Time to break things. I like the BigTable integration though.
  • Learning Meaning in Natural Language Processing — A Discussion
    • Last week a tweet by Jacob Andreas triggered a huge discussion on Twitter that many people have called the meaning/semantics mega-thread. Twitter is a great medium for having such a discussion; replying to any comment allows you to revive the debate from the most promising point when it’s stuck in a dead-end. Unfortunately, Twitter also makes the discussion very hard to read afterwards, so I made three entry points to explore this fascinating mega-thread:

      1. a summary of the discussion that you will find below,
      2. an interactive view to explore the trees of tweets, and
      3. a commented map to get an overview of the main points discussed:
  • The Current Best of Universal Word Embeddings and Sentence Embeddings
    • This post is thus a brief primer on the current state-of-the-art in Universal Word and Sentence Embeddings, detailing a few

      • strong/fast baselines: FastText, Bag-of-Words
      • state-of-the-art models: ELMo, Skip-Thoughts, Quick-Thoughts, InferSent, MILA/MSR’s General Purpose Sentence Representations & Google’s Universal Sentence Encoder.

      If you want some background on what happened before 2017 😀, I recommend the nice post on word embeddings that Sebastian wrote last year and his intro posts.
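For reference, the bag-of-words baseline mentioned above is just averaging word vectors. Here is a toy sketch with random vectors standing in for pretrained (e.g. FastText) embeddings; the vocabulary and vectors are made up for illustration:

```python
import numpy as np

# Random vectors stand in for real pretrained word embeddings.
rng = np.random.default_rng(42)
dim = 8
vocab = {w: rng.standard_normal(dim)
         for w in ["the", "cat", "sat", "on", "mat", "dog"]}

def sentence_embedding(sentence):
    """Bag-of-words baseline: average the word vectors."""
    vecs = [vocab[w] for w in sentence.lower().split() if w in vocab]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

e1 = sentence_embedding("The cat sat on the mat")
e2 = sentence_embedding("The dog sat on the mat")
sim = cosine(e1, e2)  # the two sentences share five of six tokens
```

The state-of-the-art models in the list replace the averaging step with learned encoders, but this is the baseline they are compared against.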

  • Treeverse is a browser extension for navigating burgeoning Twitter conversations. right_pane
  • Detecting computer-generated random responding in questionnaire-based data: A comparison of seven indices
    • With the development of online data collection and instruments such as Amazon’s Mechanical Turk (MTurk), the appearance of malicious software that generates responses to surveys in order to earn money represents a major issue, for both economic and scientific reasons. Indeed, even if paying one respondent to complete one questionnaire represents a very small cost, the multiplication of botnets providing invalid response sets may ultimately reduce study validity while increasing research costs. Several techniques have been proposed thus far to detect problematic human response sets, but little research has been undertaken to test the extent to which they actually detect nonhuman response sets. Thus, we proposed to conduct an empirical comparison of these indices. Assuming that most botnet programs are based on random uniform distributions of responses, we present and compare seven indices in this study to detect nonhuman response sets. A sample of 1,967 human respondents was mixed with different percentages (i.e., from 5% to 50%) of simulated random response sets. Three of the seven indices (i.e., response coherence, Mahalanobis distance, and person–total correlation) appear to be the best estimators for detecting nonhuman response sets. Given that two of those indices—Mahalanobis distance and person–total correlation—are calculated easily, every researcher working with online questionnaires could use them to screen for the presence of such invalid data.
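Both of the "easy" indices really are a few lines of numpy. A sketch on simulated data (the simulation setup here is mine, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
n_items = 20

# Human-like responders: item means differ, plus a per-person offset.
item_means = rng.uniform(2, 4, size=n_items)
humans = np.clip(np.round(
    item_means
    + rng.normal(0, 0.5, size=(200, 1))        # person offset
    + rng.normal(0, 0.5, size=(200, n_items))  # item noise
), 1, 5)

# "Bot" responders: uniform random choices on the 1-5 scale.
bots = rng.integers(1, 6, size=(20, n_items)).astype(float)
data = np.vstack([humans, bots])

# Mahalanobis distance of each response set from the sample mean.
mu = data.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(data, rowvar=False))
diff = data - mu
md = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

# Person-total correlation: each person's responses vs. item means.
col_means = data.mean(axis=0)
ptc = np.array([np.corrcoef(row, col_means)[0, 1] for row in data])

# Random responders should show larger distances and near-zero
# person-total correlations compared to the simulated humans.
```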
  • Continuing to work on SASO slides – close to done. Got a lot of adversarial herding FB examples from the House Permanent Committee on Intelligence. Need to add them to the slide. Sobering.
  • And this looks like a FANTASTIC ride out of Trento: ridewithgps.com/routes/27552411
  • Fixed the border menu so that it’s a toggle group

Phil 8.7.18

8:00 – ASRC MKT

  • Looking for discussion transcripts.
  • Podcasts
    • Do you get your heart broken by the Nationals, Wizards, Caps and Redskins every single year but you still come back for more? The DMV Sports Roundtable is the podcast for you – Washington’s sports teams from the fans’ perspective – and plenty of college coverage too.
    • Join UCB Theatre veterans Cody Lindquist & Charlie Todd as they welcome a panel of NYC’s most hilarious comedians, journalists, and politicians to chug two beers on stage and discuss the politics of the week. It’s like Meet The Press, but funnier and with more alcohol. Theme song by Tyler Walker.
    • Rasslin Roundtable: Wrestling podcast centered around the latest PPV
    • TSN 1290 Roundtable: Kevin Olszewski hosts the Donvito Roundtable, airing weekdays from 11am-1pm CT on TSN 1290 Winnipeg. Daily discussion about the Winnipeg Jets, the NHL, and whatever else is on his mind!
    • The Game Design Round Table Focusing on both digital and tabletop gaming, The Game Design Round Table provides a forum for conversation about critical issues to game design.
    • Story Works Round Table Before you can be a successful author, you have to write a great story. Each week, co-hosts Alida Winternheimer, author and writing coach at Word Essential, Kathryn Arnold, emerging writer, & Robert Scanlon, author of the Blood Empire series, have conversations about the craft of writing fiction. They bring diverse experiences and talents to the table from both the traditional and indie worlds. Our goal is for each episode to be a fun, lively discussion of some aspect of story craft that enlightens, as well as entertains.
  • Some good pix of bike-share graveyards in China that would be good stampede pix from The Atlantic (set 1) (set 2) Bicycles of various bike-sharing services are seen in Shanghai.
  • Starting back on the SASO slides. Based on Wayne’s comments, I’m reworking the Stephens’ slide
    • Flashes of Insight: Whole-Brain Imaging of Neural Activity in the Zebrafish (video)(paper)(paper)

Phil 8.6.18

7:00 – 5:00 ASRC CONF

  • Heard about this on the Ted Radio Hour: Crisis Trends
    • Crisis Trends empowers journalists, researchers, school administrators, parents, and all citizens to understand the crises their communities face so we can work together to prevent future crises. Crisis Trends was originally funded by the Robert Wood Johnson Foundation. CurrentTrends
  • Committee talk today!
    • Tweaked the flowchart slides
    • Added pix to either end of the “model(?)” slide showing that the amount of constraint is maximum at either end. On the nomadic side, the environment is the constraint. Imagine a solitary activity in a location so dangerous that any false move would result in death or injury. Think of free climbing: b16-540x354
    • On the other end of the spectrum is the maximum social constraint of totalitarianism, which is summed up nicely in this play on the constitutional basis for English law “Everything not forbidden is allowed” by T. H. White THWhite
    • The presentation went pretty well. There is a consensus that I should look for existing sources of discussions that reach consensus. Since this has to be a repeated discussion about the same topic, I think that sports are the only real option.
  • Added a slide on tracking changes to the LaTeX presentation slides for next week
  • Amusing ourselves to Trump
    • The point of Amusing Ourselves to Death is that societies are molded by the technologies atop which they communicate. Oral cultures teach us to be conversational, typographic cultures teach us to be logical, televised cultures teach us that everything is entertainment. So what is social media culture teaching us?
  • It’s Looking Extremely Likely That QAnon Is A Leftist Prank On Trump Supporters
    • There’s a growing group of Trump supporters who are convinced that the president is secretly trying to save the world from a global pedophilia ring.

Phil 8.3.18

7:00 – 3:30 ASRC MKT

  • Slides and walkthrough – done!
  • Ramping up on SASO
  • Textricator is a tool for extracting text from computer-generated PDFs and generating structured data (CSV or JSON). If you have a bunch of PDFs with the same format (or one big, consistently formatted PDF) and you want to extract the data to CSV or JSON, Textricator can help! It can even work on OCR’ed documents!
  • LSTM links for getting back to things later
  • Who handles misinformation outbreaks?
    • Misinformation attacks— the deliberate and sustained creation and amplification of false information at scale — are a problem. Some of them start as jokes (the ever-present street sharks in disasters) or attempts to push an agenda (e.g. right-wing brigading); some are there to make money (the “Macedonian teens”), or part of ongoing attempts to destabilise countries including the US, UK and Canada (e.g. Russia’s Internet Research Agency using troll and bot amplification of divisive messages).

      Enough people are writing about why misinformation attacks happen, what they look like and what motivates attackers. Fewer people are actively countering attacks. Here are some of them, roughly categorised as:

      • Journalists and data scientists: Make misinformation visible
      • Platforms and governments: Reduce misinformation spread
      • Communities: Directly engage misinformation
      • Adtech: Remove or reduce misinformation rewards

Phil 8.2.18

7:00 – 5:00 ASRC MKT

  • Joshua Stevens (Scholar)
    • At Penn State I researched cartography and geovisual analytics with an emphasis on human-computer interaction, interactive affordances, and big data. My work focused on new forms of map interaction made possible by well constructed visual cues.
  • A Computational Analysis of Cognitive Effort
    • Cognitive effort is a concept of unquestionable utility in understanding human behaviour. However, cognitive effort has been defined in several ways in literature and in everyday life, suffering from a partial understanding. It is common to say “Pay more attention in studying that subject” or “How much effort did you spend in resolving that task?”, but what does it really mean? This contribution tries to clarify the concept of cognitive effort, by introducing its main influencing factors and by presenting a formalism which provides us with a tool for precise discussion. The formalism is implementable as a computational concept and can therefore be embedded in an artificial agent and tested experimentally. Its applicability in the domain of AI is raised and the formalism provides a step towards a proper understanding and definition of human cognitive effort.
  • Efficient Neural Architecture Search with Network Morphism
    • While neural architecture search (NAS) has drawn increasing attention for automatically tuning deep neural networks, existing search algorithms usually suffer from expensive computational cost. Network morphism, which keeps the functionality of a neural network while changing its neural architecture, could be helpful for NAS by enabling a more efficient training during the search. However, network morphism based NAS is still computationally expensive due to the inefficient process of selecting the proper morph operation for existing architectures. As we know, Bayesian optimization has been widely used to optimize functions based on a limited number of observations, motivating us to explore the possibility of making use of Bayesian optimization to accelerate the morph operation selection process. In this paper, we propose a novel framework enabling Bayesian optimization to guide the network morphism for efficient neural architecture search by introducing a neural network kernel and a tree-structured acquisition function optimization algorithm. With Bayesian optimization to select the network morphism operations, the exploration of the search space is more efficient. Moreover, we carefully wrapped our method into an open-source software, namely Auto-Keras for people without rich machine learning background to use. Intensive experiments on real-world datasets have been done to demonstrate the superior performance of the developed framework over the state-of-the-art baseline methods.
  • I think I finished the Dissertation Review slides. Walkthrough tomorrow!