
Phil 8.19.18

7:00 – 5:30 ASRC MKT

  • Had a thought that the incomprehension that comes from the misalignment Stephens describes resembles polarized light. I need to add a slider that enables influence as a function of alignment. Done
    • Getting the direction cosine between the source and target belief
      // dot product of the two unit orientation (belief) vectors
      double interAgentDotProduct = unitOrientVector.dotProduct(otherUnitOrientVector);
      // clamp to [-1, 1] so floating-point error can't push acos() out of range
      double cosTheta = Math.max(-1.0, Math.min(1.0, interAgentDotProduct));
      // angle between beliefs, 0 (aligned) to 180 (opposed) degrees
      double beliefAlignment = Math.toDegrees(Math.acos(cosTheta));
      // normalized alignment: 1.0 = same heading, 0.0 = opposite heading
      double interAgentAlignment = (1.0 - beliefAlignment / 180.0);
    • Adding a global variable that sets how much influence (0% – 100%) an opposing agent has. Just setting it to on/off for now, because the effects are actually pretty subtle (a minimal sketch of the gating is at the end of this list)
  • Add David’s contributions to slide one writeup – done
  • Start slide 2 writeup
  • Find casters for Dad’s walker
  • Submit forms for DME repair
    • Drat – I need the ECU number
  • Practice talk!
    • Need to reduce complexity and add clearly labeled sections, in particular methods
  • I need to start paying attention to attention
  • Also, keeping this on the list: How social media took us from Tahrir Square to Donald Trump, by Zeynep Tufekci
  • Social Identity Threat Motivates Science-Discrediting Online Comments
    • Experiencing social identity threat from scientific findings can lead people to cognitively devalue the respective findings. Three studies examined whether potentially threatening scientific findings motivate group members to take action against the respective findings by publicly discrediting them on the Web. Results show that strongly (vs. weakly) identified group members (i.e., people who identified as “gamers”) were particularly likely to discredit social identity threatening findings publicly (i.e., studies that found an effect of playing violent video games on aggression). A content analytical evaluation of online comments revealed that social identification specifically predicted critiques of the methodology employed in potentially threatening, but not in non-threatening research (Study 2). Furthermore, when participants were collectively (vs. self-) affirmed, identification did no longer predict discrediting posting behavior (Study 3). These findings contribute to the understanding of the formation of online collective action and add to the burgeoning literature on the question why certain scientific findings sometimes face a broad public opposition.
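
Following up on the alignment slider above, here's a minimal sketch of how the alignment value could gate influence from an opposing agent. The names (InfluenceGate, globalOpposingInfluence, gatedInfluence) are placeholders, not the actual simulator code:

    // Minimal sketch, not the actual simulator code: scale the influence one agent
    // exerts on another by their belief alignment.
    public class InfluenceGate {
        // fraction of influence (0.0 - 1.0) a fully opposed agent is allowed to exert
        static double globalOpposingInfluence = 0.0;

        static double gatedInfluence(double[] unitOrient, double[] otherUnitOrient, double baseInfluence) {
            double dot = 0.0;
            for (int i = 0; i < unitOrient.length; i++) {
                dot += unitOrient[i] * otherUnitOrient[i];
            }
            // clamp so acos() never sees a floating-point value outside [-1, 1]
            double cosTheta = Math.max(-1.0, Math.min(1.0, dot));
            double beliefAlignment = Math.toDegrees(Math.acos(cosTheta)); // 0 = aligned, 180 = opposed
            double interAgentAlignment = 1.0 - beliefAlignment / 180.0;   // 1.0 = aligned, 0.0 = opposed

            // fully aligned agents get full influence; fully opposed agents get the global fraction
            double weight = globalOpposingInfluence + (1.0 - globalOpposingInfluence) * interAgentAlignment;
            return baseInfluence * weight;
        }
    }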

Phil 8.18.18

This looks good:

  • Created almost 25 years ago, when the web was in its infancy, Propaganda Critic is dedicated to promoting techniques of propaganda analysis among critically minded citizens.

    In 2018, realizing that traditional approaches to propaganda analysis were not well-suited for making sense out of our contemporary political crisis, we completely overhauled Propaganda Critic to take into account the rise of ‘computational propaganda.’ In addition to updating all of the original content, we added nearly two dozen new articles exploring the rise of computational propaganda, explaining recent research on cognitive biases that influence how we interpret and retain information, and presenting recent case studies of how propaganda techniques have been used to disrupt democracy around the world.

Continuing to work on the SASO writeup – it’s coming along. Slower than I’d like…

This is just too good:

  • Data Organization in Spreadsheets
    • Spreadsheets are widely used software tools for data entry, storage, analysis, and visualization. Focusing on the data entry and storage aspects, this article offers practical recommendations for organizing spreadsheet data to reduce errors and ease later analyses. The basic principles are: be consistent, write dates like YYYY-MM-DD, do not leave any cells empty, put just one thing in a cell, organize the data as a single rectangle (with subjects as rows and variables as columns, and with a single header row), create a data dictionary, do not include calculations in the raw data files, do not use font color or highlighting as data, choose good names for things, make backups, use data validation to avoid data entry errors, and save the data in plain text files.

Phil 8.17.18

7:00 – 4:30 ASRC MKT

Phil 8.16.18

7:00 – 4:30 ASRC MKT

  • R2D3 is an experiment in expressing statistical thinking with interactive design. Find us at @r2d3us
  • Foundations of Temporal Text Networks
    • Davide Vega (Scholar)
    • Matteo Magnani (Scholar)
    • Three fundamental elements to understand human information networks are the individuals (actors) in the network, the information they exchange, that is often observable online as text content (emails, social media posts, etc.), and the time when these exchanges happen. An extremely large amount of research has addressed some of these aspects either in isolation or as combinations of two of them. There are also more and more works studying systems where all three elements are present, but typically using ad hoc models and algorithms that cannot be easily transferred to other contexts. To address this heterogeneity, in this article we present a simple, expressive and extensible model for temporal text networks, that we claim can be used as a common ground across different types of networks and analysis tasks, and we show how simple procedures to produce views of the model allow the direct application of analysis methods already developed in other domains, from traditional data mining to multilayer network mining.
      • Ok, I’ve been reading the paper and, if I understand it correctly, it’s pretty straightforward and also clever. It relates a lot to the way that I do term-document matrices, and then extends the concept to include time, agents, and implicitly anything you want. To illustrate, here’s a picture of a tensor-as-matrix (image: tensorIn2D). The important thing to notice is that there are multiple dimensions represented in a single square matrix. We have:
        • agents
        • documents
        • terms
        • steps
      • This picture in particular is of an undirected adjacency matrix, but I think there are ways to handle in-degree and out-degree; that’s probably better handled by having one matrix for in-degree and one for out-degree.
      • Because it’s a square matrix, we can calculate the number of steps between any pair of nodes, and each node’s centrality, simply by squaring the matrix and keeping track of the steps until the eigenvector settles (see the sketch at the end of this list). We can also weight a node by multiplying that node’s row and column by a scalar. That changes the centrality, but not the connectivity. We can also drop out components (steps, for example) to see how that changes the underlying network properties.
      • If we want to see how time affects the development of the network, we can start with all the step nodes set to a zero weight, then add them in sequentially. This means, for example, that clustering could be performed on the nonzero nodes.
      • Some or all of the elements could be factorized using NMF, resulting in smaller, faster matrices.
      • Network embedding could be useful too. We get distances between nodes. And this looks really important: Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and node2vec
      • I think I can use any and all of the above methods on the network tensor I’m describing. This is very close to a mapping solution.
  • The Shifting Discourse of the European Central Bank: Exploring Structural Space in Semantic Networks (cited by the above paper)
    • Convenient access to vast and untapped collections of documents generated by organizations is a valuable resource for research. These documents (e.g., Press releases, reports, speech transcriptions, etc.) are a window into organizational strategies, communication patterns, and organizational behavior. However, the analysis of such large document corpora does not come without challenges. Two of these challenges are 1) the need for appropriate automated methods for text mining and analysis and 2) the redundant and predictable nature of the formalized discourse contained in these collections of texts. Our article proposes an approach that performs well in overcoming these particular challenges for the analysis of documents related to the recent financial crisis. Using semantic network analysis and a combination of structural measures, we provide an approach that proves valuable for a more comprehensive analysis of large and complex semantic networks of formal discourse, such as the one of the European Central Bank (ECB). We find that identifying structural roles in the semantic network using centrality measures jointly reveals important discursive shifts in the goals of the ECB which would not be discovered under traditional text analysis approaches.
  • Comparative Document Analysis for Large Text Corpora
    • This paper presents a novel research problem, Comparative Document Analysis (CDA), that is, joint discovery of commonalities and differences between two individual documents (or two sets of documents) in a large text corpus. Given any pair of documents from a (background) document collection, CDA aims to automatically identify sets of quality phrases to summarize the commonalities of both documents and highlight the distinctions of each with respect to the other informatively and concisely. Our solution uses a general graph-based framework to derive novel measures on phrase semantic commonality and pairwise distinction, where the background corpus is used for computing phrase-document semantic relevance. We use the measures to guide the selection of sets of phrases by solving two joint optimization problems. A scalable iterative algorithm is developed to integrate the maximization of phrase commonality or distinction measure with the learning of phrase-document semantic relevance. Experiments on large text corpora from two different domains—scientific papers and news—demonstrate the effectiveness and robustness of the proposed framework on comparing documents. Analysis on a 10GB+ text corpus demonstrates the scalability of our method, whose computation time grows linearly as the corpus size increases. Our case study on comparing news articles published at different dates shows the power of the proposed method on comparing sets of documents.
  • Social and semantic coevolution in knowledge networks
    • Socio-semantic networks involve agents creating and processing information: communities of scientists, software developers, wiki contributors and webloggers are, among others, examples of such knowledge networks. We aim at demonstrating that the dynamics of these communities can be adequately described as the coevolution of a social and a socio-semantic network. More precisely, we will first introduce a theoretical framework based on a social network and a socio-semantic network, i.e. an epistemic network featuring agents, concepts and links between agents and between agents and concepts. Adopting a relevant empirical protocol, we will then describe the joint dynamics of social and socio-semantic structures, at both macroscopic and microscopic scales, emphasizing the remarkable stability of these macroscopic properties in spite of a vivid local, agent-based network dynamics.
  • Tensorflow 2.0 feedback request
    • Shortly, we will hold a series of public design reviews covering the planned changes. This process will clarify the features that will be part of TensorFlow 2.0, and allow the community to propose changes and voice concerns. Please join developers@tensorflow.org if you would like to see announcements of reviews and updates on process. We hope to gather user feedback on the planned changes once we release a preview version later this year.
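
Sketching out the “squaring the matrix” idea from the temporal text network notes above. This is purely illustrative, not the paper’s algorithm: it assumes a plain symmetric adjacency matrix whose rows and columns mix agents, documents, terms, and steps, and all names are made up:

    // Illustrative only: square adjacency matrix over mixed node types (agents,
    // documents, terms, steps).
    public class NetworkSketch {

        // adj^2 gives two-step reachability; keep multiplying to extend the step count
        static double[][] multiply(double[][] a, double[][] b) {
            int n = a.length;
            double[][] out = new double[n][n];
            for (int i = 0; i < n; i++)
                for (int k = 0; k < n; k++)
                    for (int j = 0; j < n; j++)
                        out[i][j] += a[i][k] * b[k][j];
            return out;
        }

        // power iteration: the vector this settles to is the eigenvector centrality
        static double[] centrality(double[][] adj, int iterations) {
            int n = adj.length;
            double[] v = new double[n];
            java.util.Arrays.fill(v, 1.0);
            for (int iter = 0; iter < iterations; iter++) {
                double[] next = new double[n];
                double norm = 0.0;
                for (int i = 0; i < n; i++) {
                    for (int j = 0; j < n; j++) next[i] += adj[i][j] * v[j];
                    norm += next[i] * next[i];
                }
                norm = Math.sqrt(norm);
                for (int i = 0; i < n; i++) next[i] = norm > 0.0 ? next[i] / norm : 0.0;
                v = next;
            }
            return v;
        }

        // weight a node by scaling its row and column; scale = 0.0 drops it entirely
        // (e.g. zero out all the step nodes, then add them back in sequentially)
        static void scaleNode(double[][] adj, int node, double scale) {
            for (int j = 0; j < adj.length; j++) {
                adj[node][j] *= scale;
                adj[j][node] *= scale;
            }
        }
    }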

Phil 8.12.18

7:00 – 4:00 ASRC MKT

  • Having an interesting chat on recommenders with Robin Berjon on Twitter
  • Long, but looks really good: Neural Processes as distributions over functions
    • Neural Processes (NPs) caught my attention as they essentially are a neural network (NN) based probabilistic model which can represent a distribution over stochastic processes. So NPs combine elements from two worlds:
      • Deep Learning – neural networks are flexible non-linear functions which are straightforward to train
      • Gaussian Processes – GPs offer a probabilistic framework for learning a distribution over a wide class of non-linear functions

      Both have their advantages and drawbacks. In the limited data regime, GPs are preferable due to their probabilistic nature and ability to capture uncertainty. This differs from (non-Bayesian) neural networks which represent a single function rather than a distribution over functions. However the latter might be preferable in the presence of large amounts of data as training NNs is computationally much more scalable than inference for GPs. Neural Processes aim to combine the best of these two worlds.

  • How The Internet Talks (Well, the mostly young and mostly male users of Reddit, anyway)
    • To get a sense of the language used on Reddit, we parsed every comment since late 2007 and built the tool above, which enables you to search for a word or phrase to see how its popularity has changed over time. We’ve updated the tool to include all comments through the end of July 2017.
  • Add breadcrumbs to slides
  • Download videos – done! Put these in the ppt backup
  • Fix the DTW emergent population chart on the poster and in the slides. Print!
  • Set up the LaTex Army BAA framework
  • Olsson
  • Slide walkthrough. Good timing. Working on the poster some more (image: AdversarialHerding2)

Phil 8.10.18

7:00 – ASRC MKT

  • Finished the first pass through the SASO slides. Need to start working on timing (25 min + 5 min questions)
  • Start on poster (A0 size)
  • Sent Wayne a note to get permission for 899
  • Started setting up laptop. I hate this part. Google drive took hours to synchronize
    • Java
    • Python/Nvidia/Tensorflow
    • Intellij
    • Visual Studio
    • MikTex
    • TexStudio
    • Xampp
    • Vim
    • TortoiseSVN
    • WinSCP
    • 7-zip
    • Creative Cloud
      • Acrobat
      • Reader
      • Illustrator
      • Photoshop
    • Microsoft suite
    • Express VPN

Phil 8.6.18

7:00 – 5:00 ASRC CONF

  • Heard about this on the Ted Radio Hour: Crisis Trends
    • Crisis Trends empowers journalists, researchers, school administrators, parents, and all citizens to understand the crises their communities face so we can work together to prevent future crises. Crisis Trends was originally funded by the Robert Wood Johnson Foundation. (image: CurrentTrends)
  • Committee talk today!
    • Tweaked the flowchart slides
    • Added pix to either end of the “model(?)” slide showing that the amount of constraint is maximum at either end. On the nomadic side, the environment is the constraint. Imagine a solitary activity in a location so dangerous that any false move would result in death or injury. Think of free climbing: (image: b16-540x354)
    • On the other end of the spectrum is the maximum social constraint of totalitarianism, summed up nicely in T. H. White’s play on the English-law principle “Everything not forbidden is allowed” (image: THWhite)
    • The presentation went pretty well. There is a consensus that I should look for existing sources of discussions that reach consensus. Since this has to be a repeated discussion about the same topic, I think that sports are the only real option.
  • Added a slide on tracking changes to the Latex presentation slides for next week
  • Amusing ourselves to Trump
    • The point of Amusing Ourselves to Death is that societies are molded by the technologies atop which they communicate. Oral cultures teach us to be conversational, typographic cultures teach us to be logical, televised cultures teach us that everything is entertainment. So what is social media culture teaching us?
  • It’s Looking Extremely Likely That QAnon Is A Leftist Prank On Trump Supporters
    • There’s a growing group of Trump supporters who are convinced that the president is secretly trying to save the world from a global pedophilia ring.

Phil 8.3.18

7:00 – 3:30 ASRC MKT

  • Slides and walkthrough – done!
  • Ramping up on SASO
  • Textricator is a tool for extracting text from computer-generated PDFs and generating structured data (CSV or JSON). If you have a bunch of PDFs with the same format (or one big, consistently formatted PDF) and you want to extract the data to CSV or JSON, Textricator can help! It can even work on OCR’ed documents!
  • LSTM links for getting back to things later
  • Who handles misinformation outbreaks?
    • Misinformation attacks — the deliberate and sustained creation and amplification of false information at scale — are a problem. Some of them start as jokes (the ever-present street sharks in disasters) or attempts to push an agenda (e.g. right-wing brigading); some are there to make money (the “Macedonian teens”), or part of ongoing attempts to destabilise countries including the US, UK and Canada (e.g. Russia’s Internet Research Agency using troll and bot amplification of divisive messages).

      Enough people are writing about why misinformation attacks happen, what they look like and what motivates attackers. Fewer people are actively countering attacks. Here are some of them, roughly categorised as:

      • Journalists and data scientists: Make misinformation visible
      • Platforms and governments: Reduce misinformation spread
      • Communities: directly engage misinformation
      • Adtech: Remove or reduce misinformation rewards

Phil 8.2.18

7:00 – 5:00 ASRC MKT

  • Joshua Stevens (Scholar)
    • At Penn State I researched cartography and geovisual analytics with an emphasis on human-computer interaction, interactive affordances, and big data. My work focused on new forms of map interaction made possible by well constructed visual cues.
  • A Computational Analysis of Cognitive Effort
    • Cognitive effort is a concept of unquestionable utility in understanding human behaviour. However, cognitive effort has been defined in several ways in literature and in everyday life, suffering from a partial understanding. It is common to say “Pay more attention in studying that subject” or “How much effort did you spend in resolving that task?”, but what does it really mean? This contribution tries to clarify the concept of cognitive effort, by introducing its main influencing factors and by presenting a formalism which provides us with a tool for precise discussion. The formalism is implementable as a computational concept and can therefore be embedded in an artificial agent and tested experimentally. Its applicability in the domain of AI is raised and the formalism provides a step towards a proper understanding and definition of human cognitive effort.
  • Efficient Neural Architecture Search with Network Morphism
    • While neural architecture search (NAS) has drawn increasing attention for automatically tuning deep neural networks, existing search algorithms usually suffer from expensive computational cost. Network morphism, which keeps the functionality of a neural network while changing its neural architecture, could be helpful for NAS by enabling a more efficient training during the search. However, network morphism based NAS is still computationally expensive due to the inefficient process of selecting the proper morph operation for existing architectures. As we know, Bayesian optimization has been widely used to optimize functions based on a limited number of observations, motivating us to explore the possibility of making use of Bayesian optimization to accelerate the morph operation selection process. In this paper, we propose a novel framework enabling Bayesian optimization to guide the network morphism for efficient neural architecture search by introducing a neural network kernel and a tree-structured acquisition function optimization algorithm. With Bayesian optimization to select the network morphism operations, the exploration of the search space is more efficient. Moreover, we carefully wrapped our method into an open-source software, namely Auto-Keras for people without rich machine learning background to use. Intensive experiments on real-world datasets have been done to demonstrate the superior performance of the developed framework over the state-of-the-art baseline methods.
  • I think I finished the Dissertation Review slides. Walkthrough tomorrow!

Phil 8.1.18

7:00 – 6:00 ASRC MKT

  • I need to add some things to both talks
    • Use Stephens to show how we can build vectors out of ‘positions’ in high-dimensional space, and then measure distances (hypotenuse, cosine similarity, etc.). Also, how the use of stories shows alignment over time and creates a trajectory (see the sketch at the end of this list) – done
    • Add slide that shows the spectrum from low-dimensional social space to high-dimensional environmental space.
      • Aligning in social spaces is easier because we negotiate the terrain we interact on
      • Aligning in environmental spaces is harder because there is no negotiation
    • Add slides for each of the main parts
      • Social influence
      • Dimension Reduction
      • Heading
      • Velocity
      • State (what we tend to think about)
    • Add demo slide that walks through each part of the demo – done
      • Single population with different SIH
      • Small explorer population interacting with stampeding groups
      • Adversarial Herding
      • Opposed AH
      • Map building
  • Capturing the interplay of dynamics and networks through parameterizations of Laplacian operators
    • We study the interplay between a dynamical process and the structure of the network on which it unfolds using the parameterized Laplacian framework. This framework allows for defining and characterizing an ensemble of dynamical processes on a network beyond what the traditional Laplacian is capable of modeling. This, in turn, allows for studying the impact of the interaction between dynamics and network topology on the quality-measure of network clusters and centrality, in order to effectively identify important vertices and communities in the network. Specifically, for each dynamical process in this framework, we define a centrality measure that captures a vertex’s participation in the dynamical process on a given network and also define a function that measures the quality of every subset of vertices as a potential cluster (or community) with respect to this process. We show that the subset-quality function generalizes the traditional conductance measure for graph partitioning. We partially justify our choice of the quality function by showing that the classic Cheeger’s inequality, which relates the conductance of the best cluster in a network with a spectral quantity of its Laplacian matrix, can be extended to the parameterized Laplacian. The parameterized Laplacian framework brings under the same umbrella a surprising variety of dynamical processes and allows us to systematically compare the different perspectives they create on network structure.
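
Here’s the sketch of the position/heading distinction mentioned in the Stephens bullet above, with hypothetical names. Successive story “positions” in a high-dimensional belief space form a trajectory; the heading is the unit vector of the latest step, and the cosine between headings measures alignment:

    // Illustrative sketch, not the simulator code: positions vs. headings in belief space.
    public class BeliefTrajectory {

        // straight-line (hypotenuse) distance between two positions
        static double distance(double[] a, double[] b) {
            double sum = 0.0;
            for (int i = 0; i < a.length; i++) {
                double d = a[i] - b[i];
                sum += d * d;
            }
            return Math.sqrt(sum);
        }

        // unit vector pointing from the previous position to the current one
        static double[] heading(double[] prev, double[] curr) {
            double[] h = new double[curr.length];
            double len = distance(prev, curr);
            for (int i = 0; i < curr.length; i++) {
                h[i] = len > 0.0 ? (curr[i] - prev[i]) / len : 0.0;
            }
            return h;
        }

        // cosine between two unit headings: 1 = moving together, -1 = opposed
        static double headingAlignment(double[] h1, double[] h2) {
            double dot = 0.0;
            for (int i = 0; i < h1.length; i++) dot += h1[i] * h2[i];
            return dot;
        }
    }

Tracking headingAlignment between two agents over successive stories is what gives the alignment trajectory over time.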

Phil 7.31.18

7:00 – 6:00 ASRC MKT

  • Thinking that I need to push the opinion dynamics part of the work forward. How heading differs from position and why that matters
  • Found a nice adversarial herding chart from The Economist (image: Brexit)
  • Why Do People Share Fake News? A Sociotechnical Model of Media Effects
    • Fact-checking sites reflect fundamental misunderstandings about how information circulates online, what function political information plays in social contexts, and how and why people change their political opinions. Fact-checking is in many ways a response to the rapidly changing norms and practices of journalism, news gathering, and public debate. In other words, fact-checking best resembles a movement for reform within journalism, particularly in a moment when many journalists and members of the public believe that news coverage of the 2016 election contributed to the loss of Hillary Clinton. However, fact-checking (and another frequently-proposed solution, media literacy) is ineffectual in many cases and, in other cases, may cause people to “double-down” on their incorrect beliefs, producing a backlash effect.
  • Epistemology in the Era of Fake News: An Exploration of Information Verification Behaviors among Social Networking Site Users
    • Fake news has recently garnered increased attention across the world. Digital collaboration technologies now enable individuals to share information at unprecedented rates to advance their own ideologies. Much of this sharing occurs via social networking sites (SNSs), whose members may choose to share information without consideration for its authenticity. This research advances our understanding of information verification behaviors among SNS users in the context of fake news. Grounded in literature on the epistemology of testimony and theoretical perspectives on trust, we develop a news verification behavior research model and test six hypotheses with a survey of active SNS users. The empirical results confirm the significance of all proposed hypotheses. Perceptions of news sharers’ network (perceived cognitive homogeneity, social tie variety, and trust), perceptions of news authors (fake news awareness and perceived media credibility), and innate intentions to share all influence information verification behaviors among SNS members. Theoretical implications, as well as implications for SNS users and designers, are presented in the light of these findings.
  • Working on plan diagram – done
  • Organizing PhD slides. I think I’m getting near finished
  • Walked through slides with Aaron. Need to practice the demo. A lot.

Phil 7.27.18

Ted Underwood

  • my research is as much about information science as literary criticism. I’m especially interested in applying machine learning to large digital collections
  • Git repo with code for upcoming book: Distant Horizons: Digital Evidence and Literary Change
  • Do topic models warp time?
    • The key observation I wanted to share is just that topic models produce a kind of curved space when applied to long timelines; if you’re measuring distances between individual topic distributions, it may not be safe to assume that your yardstick means the same thing at every point in time. This is not a reason for despair: there are lots of good ways to address the distortion. The mathematics of cosine distance tend to work better if you average the documents first, and then measure the cosine between the averages (or “centroids”).
  • The Historical Significance of Textual Distances
    • Measuring similarity is a basic task in information retrieval, and now often a building-block for more complex arguments about cultural change. But do measures of textual similarity and distance really correspond to evidence about cultural proximity and differentiation? To explore that question empirically, this paper compares textual and social measures of the similarities between genres of English-language fiction. Existing measures of textual similarity (cosine similarity on tf-idf vectors or topic vectors) are also compared to new strategies that use supervised learning to anchor textual measurement in a social context.
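
A quick sketch of the centroid trick from the “Do topic models warp time?” note above: average each group of topic (or tf-idf) vectors first, then take the cosine between the averages. Illustrative only, not Underwood’s code:

    // Illustrative: cosine between centroids of two groups of document vectors.
    public class CentroidCosine {

        static double[] centroid(double[][] docs) {
            double[] mean = new double[docs[0].length];
            for (double[] doc : docs)
                for (int i = 0; i < doc.length; i++)
                    mean[i] += doc[i] / docs.length;
            return mean;
        }

        static double cosine(double[] a, double[] b) {
            double dot = 0.0, na = 0.0, nb = 0.0;
            for (int i = 0; i < a.length; i++) {
                dot += a[i] * b[i];
                na += a[i] * a[i];
                nb += b[i] * b[i];
            }
            return dot / (Math.sqrt(na) * Math.sqrt(nb));
        }

        // similarity between two time periods = cosine between their centroid vectors
        static double periodSimilarity(double[][] periodA, double[][] periodB) {
            return cosine(centroid(periodA), centroid(periodB));
        }
    }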

7:00 – 8:00 ASRC MKT

  • Continued on slides. I think I have the basics. Need to start looking for pictures
  • Sent response to the SASO folks about who’s presenting what.

9:00 – ASRC IRAD

Phil 7.25.18

7:00 – 3:00 ASRC

  • Send out email with meeting time
  • Rather than excerpts from the talks, do a demo of the relevant bits with conclusions and implications. Get the laptop running all the pieces. That means Python and TF and all the other bits.
  • Submitted tuition expenses
  • Submitted Fall 2018 approval
  • Got SASO travel approval!
  • More DNN study
    • Finished CNNs
    • Working on embeddings and W2V. Thought I’d try it on the laptop, but keras can’t find its backend and I’m getting other weird errors. One of the big ones was that I didn’t install tk with python. Here’s the answer from stackoverflow: (image: python_fix)
    • And now we’re waiting a very long time for a tf ‘hello world’ to run… But it did!
    • Had to also install pydot and graphviz-2.38.msi. Then add the graphviz bin directory to the path.
    • But now everything runs on the laptop, which will help with the demos!
    • Skipped the GloVe and pre-trained embeddings. Ready to start on DNNs tomorrow.