Category Archives: Writing

Phil 8.17.18

7:00 – 4:30 ASRC MKT

Phil 8.16.18

7:00 – 4:30 ASRC MKT

  • R2D3 is an experiment in expressing statistical thinking with interactive design. Find us at @r2d3us
  • Foundations of Temporal Text Networks
    • Davide Vega (Scholar)
    • Matteo Magnani (Scholar)
    • Three fundamental elements to understand human information networks are the individuals (actors) in the network, the information they exchange, that is often observable online as text content (emails, social media posts, etc.), and the time when these exchanges happen. An extremely large amount of research has addressed some of these aspects either in isolation or as combinations of two of them. There are also more and more works studying systems where all three elements are present, but typically using ad hoc models and algorithms that cannot be easily transferred to other contexts. To address this heterogeneity, in this article we present a simple, expressive and extensible model for temporal text networks, that we claim can be used as a common ground across different types of networks and analysis tasks, and we show how simple procedures to produce views of the model allow the direct application of analysis methods already developed in other domains, from traditional data mining to multilayer network mining.
      • Ok, I’ve been reading the paper and if I understand it correctly, it’s pretty straightforward and also clever. It relates a lot to the way that I do term-document matrices, and then extends the concept to include time, agents, and implicitly anything else you want. To illustrate, here’s a picture of a tensor-as-matrix (image: tensorIn2D). The important thing to notice is that there are multiple dimensions represented in a single square matrix. We have:
        • agents
        • documents
        • terms
        • steps
      • This picture in particular is of an undirected adjacency matrix. In-degree and out-degree could be handled as well, probably most cleanly by keeping one matrix for in-degree and one for out-degree.
      • Because it’s a square matrix, we can calculate the steps between any two nodes in the matrix, and the centrality, simply by repeatedly squaring the matrix and keeping track of the steps until the eigenvector settles (see the numpy sketch at the end of this note). We can also weight a node by multiplying that node’s row and column by a scalar. That changes the centrality, but not the connectivity. We can also drop out components (steps, for example) to see how that changes the underlying network properties.
      • If we want to see how time affects the development of the network, we can start with all the step nodes set to a zero weight, then add them in sequentially. This means, for example, that clustering could be performed on the nonzero nodes.
      • Some or all of the elements could be factorized using NMF, resulting in smaller, faster matrices.
      • Network embedding could be useful too. We get distances between nodes. And this looks really important: Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and node2vec
      • I think I can use any and all of the above methods on the network tensor I’m describing. This is very close to a mapping solution.
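      • A minimal numpy sketch of the matrix-power idea above. The adjacency matrix, node labels, and weights here are invented for illustration; the point is just that powers of the matrix count walks between nodes, that power iteration settles on a centrality score, and that NMF gives smaller factor matrices to work with.

        import numpy as np
        from sklearn.decomposition import NMF

        # Toy undirected adjacency matrix over mixed node types
        # (agents, documents, terms, steps), as in the tensor-as-matrix picture.
        nodes = ["agent_a", "agent_b", "doc_1", "term_x", "step_0"]
        A = np.array([
            [0, 1, 1, 0, 1],
            [1, 0, 1, 0, 1],
            [1, 1, 0, 1, 0],
            [0, 0, 1, 0, 1],
            [1, 1, 0, 1, 0],
        ], dtype=float)

        # Powers of A count walks: (A @ A)[i, j] is the number of two-step
        # walks from i to j, so the first power at which an entry becomes
        # nonzero gives the step distance between those two nodes.
        A2 = A @ A

        # Eigenvector centrality by power iteration: keep multiplying
        # until the vector settles.
        v = np.ones(len(A))
        for _ in range(100):
            v_next = A @ v
            v_next /= np.linalg.norm(v_next)
            if np.allclose(v, v_next, atol=1e-9):
                break
            v = v_next
        print(dict(zip(nodes, np.round(v, 3))))

        # Weighting a node: scale its row and column by a scalar. This
        # changes centrality but not which nodes are connected.
        W = A.copy()
        W[2, :] *= 2.0
        W[:, 2] *= 2.0

        # Dropping a component (here the step node) to see how the
        # underlying network properties change.
        Z = A.copy()
        Z[4, :] = 0
        Z[:, 4] = 0

        # NMF factorization of the (nonnegative) matrix into smaller factors.
        factors = NMF(n_components=2, init='random', random_state=0).fit_transform(A)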
  • The Shifting Discourse of the European Central Bank: Exploring Structural Space in Semantic Networks (cited by the above paper)
    • Convenient access to vast and untapped collections of documents generated by organizations is a valuable resource for research. These documents (e.g., Press releases, reports, speech transcriptions, etc.) are a window into organizational strategies, communication patterns, and organizational behavior. However, the analysis of such large document corpora does not come without challenges. Two of these challenges are 1) the need for appropriate automated methods for text mining and analysis and 2) the redundant and predictable nature of the formalized discourse contained in these collections of texts. Our article proposes an approach that performs well in overcoming these particular challenges for the analysis of documents related to the recent financial crisis. Using semantic network analysis and a combination of structural measures, we provide an approach that proves valuable for a more comprehensive analysis of large and complex semantic networks of formal discourse, such as the one of the European Central Bank (ECB). We find that identifying structural roles in the semantic network using centrality measures jointly reveals important discursive shifts in the goals of the ECB which would not be discovered under traditional text analysis approaches.
  • Comparative Document Analysis for Large Text Corpora
    • This paper presents a novel research problem, Comparative Document Analysis (CDA), that is, joint discovery of commonalities and differences between two individual documents (or two sets of documents) in a large text corpus. Given any pair of documents from a (background) document collection, CDA aims to automatically identify sets of quality phrases to summarize the commonalities of both documents and highlight the distinctions of each with respect to the other informatively and concisely. Our solution uses a general graph-based framework to derive novel measures on phrase semantic commonality and pairwise distinction, where the background corpus is used for computing phrase-document semantic relevance. We use the measures to guide the selection of sets of phrases by solving two joint optimization problems. A scalable iterative algorithm is developed to integrate the maximization of phrase commonality or distinction measure with the learning of phrase-document semantic relevance. Experiments on large text corpora from two different domains—scientific papers and news—demonstrate the effectiveness and robustness of the proposed framework on comparing documents. Analysis on a 10GB+ text corpus demonstrates the scalability of our method, whose computation time grows linearly as the corpus size increases. Our case study on comparing news articles published at different dates shows the power of the proposed method on comparing sets of documents.
  • Social and semantic coevolution in knowledge networks
    • Socio-semantic networks involve agents creating and processing information: communities of scientists, software developers, wiki contributors and webloggers are, among others, examples of such knowledge networks. We aim at demonstrating that the dynamics of these communities can be adequately described as the coevolution of a social and a socio-semantic network. More precisely, we will first introduce a theoretical framework based on a social network and a socio-semantic network, i.e. an epistemic network featuring agents, concepts and links between agents and between agents and concepts. Adopting a relevant empirical protocol, we will then describe the joint dynamics of social and socio-semantic structures, at both macroscopic and microscopic scales, emphasizing the remarkable stability of these macroscopic properties in spite of a vivid local, agent-based network dynamics.
  • Tensorflow 2.0 feedback request
    • Shortly, we will hold a series of public design reviews covering the planned changes. This process will clarify the features that will be part of TensorFlow 2.0, and allow the community to propose changes and voice concerns. Please join developers@tensorflow.org if you would like to see announcements of reviews and updates on process. We hope to gather user feedback on the planned changes once we release a preview version later this year.

Phil 8.12.18

7:00 – 4:00 ASRC MKT

  • Having an interesting chat on recommenders with Robin Berjon on Twitter
  • Long, but looks really good: Neural Processes as distributions over functions
    • Neural Processes (NPs) caught my attention as they essentially are a neural network (NN) based probabilistic model which can represent a distribution over stochastic processes. So NPs combine elements from two worlds:
      • Deep Learning – neural networks are flexible non-linear functions which are straightforward to train
      • Gaussian Processes – GPs offer a probabilistic framework for learning a distribution over a wide class of non-linear functions

      Both have their advantages and drawbacks. In the limited data regime, GPs are preferable due to their probabilistic nature and ability to capture uncertainty. This differs from (non-Bayesian) neural networks which represent a single function rather than a distribution over functions. However the latter might be preferable in the presence of large amounts of data as training NNs is computationally much more scalable than inference for GPs. Neural Processes aim to combine the best of these two worlds.
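    • To make that contrast concrete, here’s a minimal scikit-learn sketch (my own illustration, not code from the linked post). The GP returns a predictive mean and a per-point standard deviation, i.e. a distribution over functions; the plain (non-Bayesian) network returns a single fitted function with no uncertainty.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF
      from sklearn.neural_network import MLPRegressor

      # A handful of noisy observations of sin(x): the limited-data regime
      # where the post says GPs are preferable.
      rng = np.random.RandomState(0)
      X = rng.uniform(0, 6, size=(8, 1))
      y = np.sin(X).ravel() + 0.1 * rng.randn(8)
      X_test = np.linspace(0, 6, 50).reshape(-1, 1)

      # GP: a distribution over functions; predict() can return uncertainty.
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01)
      gp.fit(X, y)
      mean, std = gp.predict(X_test, return_std=True)

      # Plain neural network: one function, no distribution.
      nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000)
      nn.fit(X, y)
      point_estimate = nn.predict(X_test)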

  • How The Internet Talks (Well, the mostly young and mostly male users of Reddit, anyway)
    • To get a sense of the language used on Reddit, we parsed every comment since late 2007 and built the tool above, which enables you to search for a word or phrase to see how its popularity has changed over time. We’ve updated the tool to include all comments through the end of July 2017.
  • Add breadcrumbs to slides
  • Download videos – done! Put these in the ppt backup
  • Fix the DTW emergent population chart on the poster and in the slides. Print!
  • Set up the LaTeX Army BAA framework
  • Olsson
  • Slide walkthrough. Good timing. Working on the poster some more (image: AdversarialHerding2)

Phil 8.10.18

7:00 – ASRC MKT

  • Finished the first pass through the SASO slides. Need to start working on timing (25 min + 5 min questions)
  • Start on poster (A0 size)
  • Sent Wayne a note to get permission for 899
  • Started setting up the laptop. I hate this part. Google Drive took hours to synchronize
    • Java
    • Python/Nvidia/Tensorflow
    • Intellij
    • Visual Studio
    • MikTex
    • TexStudio
    • Xampp
    • Vim
    • TortoiseSVN
    • WinSCP
    • 7-zip
    • Creative Cloud
      • Acrobat
      • Reader
      • Illustrator
      • Photoshop
    • Microsoft suite
    • Express VPN

Phil 8.6.18

7:00 – 5:00 ASRC CONF

  • Heard about this on the Ted Radio Hour: Crisis Trends
    • Crisis Trends empowers journalists, researchers, school administrators, parents, and all citizens to understand the crises their communities face so we can work together to prevent future crises. Crisis Trends was originally funded by the Robert Wood Johnson Foundation. (image: CurrentTrends)
  • Committee talk today!
    • Tweaked the flowchart slides
    • Added pix to either end of the “model(?)” slide showing that the amount of constraint is maximum at either end. On the nomadic side, the environment is the constraint. Imagine a solitary activity in a location so dangerous that any false move would result in death or injury. Think of free climbing (image: b16-540x354).
    • On the other end of the spectrum is the maximum social constraint of totalitarianism, which T. H. White summed up nicely by inverting the constitutional basis for English law, “Everything not forbidden is allowed,” into “everything not forbidden is compulsory” (image: THWhite).
    • The presentation went pretty well. The committee agreed that I should look for existing sources of discussions that reach consensus. Since this has to be a repeated discussion about the same topic, I think that sports are the only real option.
  • Added a slide on tracking changes to the LaTeX presentation slides for next week
  • Amusing ourselves to Trump
    • The point of Amusing Ourselves to Death is that societies are molded by the technologies atop which they communicate. Oral cultures teach us to be conversational, typographic cultures teach us to be logical, televised cultures teach us that everything is entertainment. So what is social media culture teaching us?
  • It’s Looking Extremely Likely That QAnon Is A Leftist Prank On Trump Supporters
    • There’s a growing group of Trump supporters who are convinced that the president is secretly trying to save the world from a global pedophilia ring.

Phil 8.3.18

7:00 – 3:30 ASRC MKT

  • Slides and walkthrough – done!
  • Ramping up on SASO
  • Textricator is a tool for extracting text from computer-generated PDFs and generating structured data (CSV or JSON). If you have a bunch of PDFs with the same format (or one big, consistently formatted PDF) and you want to extract the data to CSV or JSON, Textricator can help! It can even work on OCR’ed documents!
  • LSTM links for getting back to things later
  • Who handles misinformation outbreaks?
      Misinformation attacks — the deliberate and sustained creation and amplification of false information at scale — are a problem. Some of them start as jokes (the ever-present street sharks in disasters) or attempts to push an agenda (e.g. right-wing brigading); some are there to make money (the “Macedonian teens”), or part of ongoing attempts to destabilise countries including the US, UK and Canada (e.g. Russia’s Internet Research Agency using troll and bot amplification of divisive messages).

      Enough people are writing about why misinformation attacks happen, what they look like and what motivates attackers. Fewer people are actively countering attacks. Here are some of them, roughly categorised as:

      • Journalists and data scientists: Make misinformation visible
      • Platforms and governments: Reduce misinformation spread
      • Communities: Directly engage misinformation
      • Adtech: Remove or reduce misinformation rewards

Phil 8.2.18

7:00 – 5:00 ASRC MKT

  • Joshua Stevens (Scholar)
    • At Penn State I researched cartography and geovisual analytics with an emphasis on human-computer interaction, interactive affordances, and big data. My work focused on new forms of map interaction made possible by well constructed visual cues.
  • A Computational Analysis of Cognitive Effort
    • Cognitive effort is a concept of unquestionable utility in understanding human behaviour. However, cognitive effort has been defined in several ways in literature and in everyday life, suffering from a partial understanding. It is common to say “Pay more attention in studying that subject” or “How much effort did you spend in resolving that task?”, but what does it really mean? This contribution tries to clarify the concept of cognitive effort, by introducing its main influencing factors and by presenting a formalism which provides us with a tool for precise discussion. The formalism is implementable as a computational concept and can therefore be embedded in an artificial agent and tested experimentally. Its applicability in the domain of AI is raised and the formalism provides a step towards a proper understanding and definition of human cognitive effort.
  • Efficient Neural Architecture Search with Network Morphism
    • While neural architecture search (NAS) has drawn increasing attention for automatically tuning deep neural networks, existing search algorithms usually suffer from expensive computational cost. Network morphism, which keeps the functionality of a neural network while changing its neural architecture, could be helpful for NAS by enabling a more efficient training during the search. However, network morphism based NAS is still computationally expensive due to the inefficient process of selecting the proper morph operation for existing architectures. As we know, Bayesian optimization has been widely used to optimize functions based on a limited number of observations, motivating us to explore the possibility of making use of Bayesian optimization to accelerate the morph operation selection process. In this paper, we propose a novel framework enabling Bayesian optimization to guide the network morphism for efficient neural architecture search by introducing a neural network kernel and a tree-structured acquisition function optimization algorithm. With Bayesian optimization to select the network morphism operations, the exploration of the search space is more efficient. Moreover, we carefully wrapped our method into an open-source software, namely Auto-Keras for people without rich machine learning background to use. Intensive experiments on real-world datasets have been done to demonstrate the superior performance of the developed framework over the state-of-the-art baseline methods.
  • I think I finished the Dissertation Review slides. Walkthrough tomorrow!

Phil 8.1.18

7:00 – 6:00 ASRC MKT

  • I need to add some things to both talks
    • Use Stephens to show how we can build vectors out of ‘positions’ in high-dimensional space, and then measure distances (hypotenuse, cosine similarity, etc.; see the sketch after this list). Also, how the use of stories shows alignment over time and creates a trajectory – done
    • Add slide that shows the spectrum from low-dimensional social space to high-dimensional environmental space.
      • Aligning in social spaces is easier because we negotiate the terrain we interact on
      • Aligning in environmental spaces is harder because there is no negotiation
    • Add slides for each of the main parts
      • Social influence
      • Dimension Reduction
      • Heading
      • Velocity
      • State (what we tend to think about)
    • Add demo slide that walks through each part of the demo – done
      • Single population with different SIH
      • Small explorer population interacting with stampeding groups
      • Adversarial Herding
      • Opposed AH
      • Map building
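    • A minimal sketch of the distance measures from the first item above (the ‘positions’ are made-up 3-d vectors; real belief spaces would have many more dimensions):

      import numpy as np

      # Two hypothetical 'positions' in belief space.
      a = np.array([1.0, 2.0, 0.5])
      b = np.array([0.5, 1.5, 1.0])

      # Euclidean distance: the hypotenuse between the two positions.
      euclidean = np.linalg.norm(a - b)

      # Cosine similarity: alignment of the two positions, ignoring magnitude.
      cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

      # A story told over time is a trajectory: a sequence of positions.
      # The heading at each step is the normalized difference between
      # successive positions.
      trajectory = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.5]])
      headings = np.diff(trajectory, axis=0)
      headings /= np.linalg.norm(headings, axis=1, keepdims=True)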
  • Capturing the interplay of dynamics and networks through parameterizations of Laplacian operators
    • We study the interplay between a dynamical process and the structure of the network on which it unfolds using the parameterized Laplacian framework. This framework allows for defining and characterizing an ensemble of dynamical processes on a network beyond what the traditional Laplacian is capable of modeling. This, in turn, allows for studying the impact of the interaction between dynamics and network topology on the quality-measure of network clusters and centrality, in order to effectively identify important vertices and communities in the network. Specifically, for each dynamical process in this framework, we define a centrality measure that captures a vertex’s participation in the dynamical process on a given network and also define a function that measures the quality of every subset of vertices as a potential cluster (or community) with respect to this process. We show that the subset-quality function generalizes the traditional conductance measure for graph partitioning. We partially justify our choice of the quality function by showing that the classic Cheeger’s inequality, which relates the conductance of the best cluster in a network with a spectral quantity of its Laplacian matrix, can be extended to the parameterized Laplacian. The parameterized Laplacian framework brings under the same umbrella a surprising variety of dynamical processes and allows us to systematically compare the different perspectives they create on network structure.
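    • For orientation, a sketch of the classic special case that the paper parameterizes: the standard Laplacian L = D - A, whose second-smallest eigenvector (the Fiedler vector) gives a spectral bisection, with Cheeger’s inequality bounding the conductance of the best cut. The toy graph is made up.

      import numpy as np

      # Standard graph Laplacian for a small undirected graph.
      A = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
      D = np.diag(A.sum(axis=1))
      L = D - A

      # eigh returns eigenvalues in ascending order; the eigenvector for
      # the second-smallest eigenvalue splits the graph into two clusters.
      vals, vecs = np.linalg.eigh(L)
      fiedler = vecs[:, 1]
      clusters = fiedler > 0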

Phil 7.31.18

7:00 – 6:00 ASRC MKT

  • Thinking that I need to push the opinion dynamics part of the work forward: how heading differs from position, and why that matters
  • Found a nice adversarial herding chart from The Economist (image: Brexit)
  • Why Do People Share Fake News? A Sociotechnical Model of Media Effects
    • Fact-checking sites reflect fundamental misunderstandings about how information circulates online, what function political information plays in social contexts, and how and why people change their political opinions. Fact-checking is in many ways a response to the rapidly changing norms and practices of journalism, news gathering, and public debate. In other words, fact-checking best resembles a movement for reform within journalism, particularly in a moment when many journalists and members of the public believe that news coverage of the 2016 election contributed to the loss of Hillary Clinton. However, fact-checking (and another frequently-proposed solution, media literacy) is ineffectual in many cases and, in other cases, may cause people to “double-down” on their incorrect beliefs, producing a backlash effect.
  • Epistemology in the Era of Fake News: An Exploration of Information Verification Behaviors among Social Networking Site Users
    • Fake news has recently garnered increased attention across the world. Digital collaboration technologies now enable individuals to share information at unprecedented rates to advance their own ideologies. Much of this sharing occurs via social networking sites (SNSs), whose members may choose to share information without consideration for its authenticity. This research advances our understanding of information verification behaviors among SNS users in the context of fake news. Grounded in literature on the epistemology of testimony and theoretical perspectives on trust, we develop a news verification behavior research model and test six hypotheses with a survey of active SNS users. The empirical results confirm the significance of all proposed hypotheses. Perceptions of news sharers’ network (perceived cognitive homogeneity, social tie variety, and trust), perceptions of news authors (fake news awareness and perceived media credibility), and innate intentions to share all influence information verification behaviors among SNS members. Theoretical implications, as well as implications for SNS users and designers, are presented in the light of these findings.
  • Working on plan diagram – done
  • Organizing PhD slides. I think I’m nearly finished
  • Walked through slides with Aaron. Need to practice the demo. A lot.

Phil 7.27.18

Ted Underwood

  • my research is as much about information science as literary criticism. I’m especially interested in applying machine learning to large digital collections
  • Git repo with code for upcoming book: Distant Horizons: Digital Evidence and Literary Change
  • Do topic models warp time?
    • The key observation I wanted to share is just that topic models produce a kind of curved space when applied to long timelines; if you’re measuring distances between individual topic distributions, it may not be safe to assume that your yardstick means the same thing at every point in time. This is not a reason for despair: there are lots of good ways to address the distortion. The mathematics of cosine distance tend to work better if you average the documents first, and then measure the cosine between the averages (or “centroids”).
  • The Historical Significance of Textual Distances
    • Measuring similarity is a basic task in information retrieval, and now often a building-block for more complex arguments about cultural change. But do measures of textual similarity and distance really correspond to evidence about cultural proximity and differentiation? To explore that question empirically, this paper compares textual and social measures of the similarities between genres of English-language fiction. Existing measures of textual similarity (cosine similarity on tf-idf vectors or topic vectors) are also compared to new strategies that use supervised learning to anchor textual measurement in a social context.
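  • A quick numpy sketch of the centroid comparison Underwood describes in “Do topic models warp time?” above (the topic vectors are invented for illustration): average each group’s topic distributions first, then take the cosine between the averages.

    import numpy as np

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    # Hypothetical topic distributions for documents from two time periods.
    period_a = np.array([[0.7, 0.2, 0.1],
                         [0.6, 0.3, 0.1]])
    period_b = np.array([[0.2, 0.5, 0.3],
                         [0.1, 0.6, 0.3]])

    # Average first, then measure the cosine between the centroids...
    centroid_similarity = cosine(period_a.mean(axis=0), period_b.mean(axis=0))

    # ...as opposed to averaging the pairwise cosines between documents,
    # which is more sensitive to the curvature Underwood describes.
    pairwise = np.mean([cosine(u, v) for u in period_a for v in period_b])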

7:00 – 8:00 ASRC MKT

  • Continued on slides. I think I have the basics. Need to start looking for pictures
  • Sent response to the SASO folks about who’s presenting what.

9:00 – ASRC IRAD

Phil 7.25.18

7:00 – 3:00 ASRC

  • Send out email with meeting time
  • Rather than excerpts from the talks, do a demo of the relevant bits with conclusions and implications. Get the laptop running all the pieces. That means Python and TF and all the other bits.
  • Submitted tuition expenses
  • Submitted Fall 2018 approval
  • Got SASO travel approval!
  • More DNN study
    • Finished CNNs
    • Working on embeddings and W2V. Thought I’d try it on the laptop, but Keras can’t find its backend and I’m getting other weird errors. One of the big ones was that I didn’t install tk with Python. Here’s the answer from Stack Overflow (image: python_fix)
    • And now we’re waiting a very long time for a tf ‘hello world’ (sketched at the end of this list) to run… But it did!
    • Also had to install pydot and graphviz-2.38.msi, then add the Graphviz bin directory to the path.
    • But now everything runs on the laptop, which will help with the demos!
    • Skipped the GloVe and pre-trained embeddings. Ready to start on DNNs tomorrow.
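    • For reference, the ‘hello world’ in question is just the stock TensorFlow 1.x smoke test:

      import tensorflow as tf

      # Build a constant op and run it in a session. On a fresh install,
      # the first session creation can take a surprisingly long time.
      hello = tf.constant('Hello, TensorFlow!')
      with tf.Session() as sess:
          print(sess.run(hello))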

Phil 7.23.18

7:00 – ASRC MKT

  • Starting on the SASO slides. Found my diversity injection slide story:
    • Max Hawkins
      • (From NPR’s Invisibilia) “I just started thinking about these loops that we get into,” he says. “And about how the structure of your life … completely determines what happens in it.” Max’s once beautiful routine suddenly seemed unfulfilling. He felt like he was growing closer to people in his own bubble and becoming isolated from those outside of it. “There was something … that just made me feel trapped,” he says. “Like I was reading a story that I’d read before or I was playing out someone else’s script.” As any computer developer would do, Max turned to technology to craft his way out — a series of randomization applications.
    • Reading Review: Totalitarianism: The Revised Standard Version
      • …they have chosen to identify totalitarianism in terms of a set of six interrelated traits or characteristics: Friedrich’s oft-referred-to “totalitarian syndrome” (9-10). The syndrome includes an official ideology (orientation), a single party typically led by one man (dimension reduction), a terroristic police (herding), a communications monopoly (social influence horizon), a weapons monopoly (??) and a centrally directed economy (dimension reduction)
  • Continued to spin up on the LSTM effort. Got my dev environment COMPLETELY up to date. Continued with Deep Learning with Keras

3:00 – 5:00 Fika & meeting with Wayne

  • Worked on the slides for PhD status. I realize that this is actually a good time to have demos with conclusions.
  • Talked about options if IRAD falls through
  • Need to think about what are the best ways for the work to have impact

Phil 7.20.18

Listening to We Can’t Talk Anymore? Understanding the Structural Roots of Partisan Polarization and the Decline of Democratic Discourse in 21st Century America. Very Tajfel

  • David Peritz
  • Political polarization, accompanied by negative partisanship, is a striking feature of the current political landscape. Perhaps these trends were originally confined to politicians and the media, but we recently reached the point where the majority of Americans report they would consider it more objectionable if their children married across party lines than if they married someone of another faith. Where did this polarization come from? And what is it doing to American democracy, which is housed in institutions that were framed to encourage open deliberation, compromise and consensus formation? In this talk, Professor David Peritz will examine some of the deeper forces in the American economy, the public sphere and media, political institutions, and even moral psychology that best seem to account for the recent rise in popular polarization.

Sent out a Doodle to nail down the time for the PhD review

Went looking for something that talks about the cognitive load of TIT-FOR-TAT in the Iterated Prisoner’s Dilemma but couldn’t find anything. I did find this, though, which is kind of interesting: New tack wins prisoner’s dilemma. It’s a collective intelligence approach:

  • Teams could submit multiple strategies, or players, and the Southampton team submitted 60 programs. These, Jennings explained, were all slight variations on a theme and were designed to execute a known series of five to 10 moves by which they could recognize each other. Once two Southampton players recognized each other, they were designed to immediately assume “master and slave” roles – one would sacrifice itself so the other could win repeatedly.
  • Nick Jennings
    • Professor Jennings is an internationally-recognized authority in the areas of artificial intelligence, autonomous systems, cybersecurity and agent-based computing. His research covers both the science and the engineering of intelligent systems. He has undertaken fundamental research on automated bargaining, mechanism design, trust and reputation, coalition formation, human-agent collectives and crowd sourcing. He has also pioneered the application of multi-agent technology; developing real-world systems in domains such as business process management, smart energy systems, sensor networks, disaster response, telecommunications, citizen science and defence.
  • Sarvapali D. (Gopal) Ramchurn
    • I am a Professor of Artificial Intelligence in the Agents, Interaction, and Complexity Group (AIC), in the department of Electronics and Computer Science, at the University of Southampton and Chief Scientist for North Star, an AI startup.  I am also the director of the newly created Centre for Machine Intelligence.  I am interested in the development of autonomous agents and multi-agent systems and their application to Cyber Physical Systems (CPS) such as smart energy systems, the Internet of Things (IoT), and disaster response. My research combines a number of techniques from Machine learning, AI, Game theory, and HCI.
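  • A toy sketch of the handshake scheme described in the article above (my reconstruction of the idea, not the actual Southampton code): each colluding player opens with a fixed move sequence; if the opponent’s opening matches, one side plays master and the other sacrifices itself.

    # Iterated Prisoner's Dilemma moves: 'C' cooperate, 'D' defect.
    HANDSHAKE = ['C', 'D', 'D', 'C', 'D']  # hypothetical 5-move signature

    class ColludingPlayer:
        def __init__(self, is_master):
            self.is_master = is_master
            self.recognized = None  # unknown until the opening is over

        def move(self, turn, their_history):
            # Opening phase: play the fixed signature.
            if turn < len(HANDSHAKE):
                return HANDSHAKE[turn]
            # After the opening, decide once whether the opponent is a teammate.
            if self.recognized is None:
                self.recognized = their_history[:len(HANDSHAKE)] == HANDSHAKE
            # Teammate: master defects and racks up points; slave cooperates,
            # sacrificing itself so the master can win repeatedly.
            if self.recognized:
                return 'D' if self.is_master else 'C'
            # Against outsiders, fall back to plain TIT-FOR-TAT.
            return their_history[-1]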

7:00 – 4:30 ASRC MKT

  • SASO Travel request
  • SASO Hotel – done! Aaaaand I booked for August rather than September. Sent a note to try and fix using their form. If nothing by COB try email.
  • Potential DME repair?
  • Starting Deep Learning with Keras. Done with chapter one
  • Two Seedbank LSTM text examples:
    • Generate Shakespeare using tf.keras
      • This notebook demonstrates how to generate text using an RNN with tf.keras and eager execution. This notebook is an end-to-end example. When you run it, it will download a dataset of Shakespeare’s writing. The notebook will then train a model, and use it to generate sample output.
    • CharRNN
      • This notebook will let you input a file containing the text you want your generator to mimic, train your model, see the results, and save it for future use all in one page.
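  • Sketching the core of what those notebooks do, a minimal character-level model in tf.keras (the file name and hyperparameters are placeholders; the real notebooks add eager execution, checkpointing, and a sampling loop):

    import numpy as np
    import tensorflow as tf

    text = open('shakespeare.txt').read()  # placeholder for the downloaded dataset
    vocab = sorted(set(text))
    char2idx = {c: i for i, c in enumerate(vocab)}
    idx = np.array([char2idx[c] for c in text])

    # Input/target pairs: each target sequence is the input shifted by one
    # character, so the model learns to predict the next character.
    seq_len = 100
    starts = range(0, len(idx) - seq_len - 1, seq_len)
    inputs = np.stack([idx[i:i + seq_len] for i in starts])
    targets = np.stack([idx[i + 1:i + seq_len + 1] for i in starts])

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(len(vocab), 64),
        tf.keras.layers.GRU(256, return_sequences=True),
        tf.keras.layers.Dense(len(vocab), activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
    model.fit(inputs, targets, epochs=3)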