Category Archives: Writing

Phil 8.24.18

7:00 – 4:00 ASRC MKT

  • Make the Inadvertent Social Information (ISI) and Digital ISI (DISI) distinction more obvious
    • ISI
      • Trails
      • Visual clustering
      • Behavior around the commons (waterholes)
      • Presence of young
      • Mating behavior
      • etc.
    • DISI
      • Words and their overall source (Social media, website content, contributor content, auto-generated, etc)
      • Votes (likes, kudos, karma points)
      • Money (site income, blockchain ledger)
      • Linking (href, retweet, share)
      • Images & videos
  • Work more on behavior patterns of humans and animals
    • Highly organized (soccer match singing, marching, mass dancing events)
    • Wildebeest feeding, defending, migrating, and stampeding
  • AutoKeras is a GitHub project that uses the ENAS algorithm. It can be installed using pip. Since it’s written in Keras, it’s quite easy to control and play with, so you can even dive into the ENAS algorithm and try making some modifications. If you prefer TensorFlow or PyTorch, there are also public code projects for those here and here!
  • From Zeynep’s twitter
    • So, Russian trolls amplified divisive content and helped spread vaccine misinformation.  Look, the challenge before us is to redefine *critical thinking* to include figuring out what to believe, not just how to be skeptical. Personal and institutional.
    • Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate
      •  Whereas bots that spread malware and unsolicited content disseminated antivaccine messages, Russian trolls promoted discord. Accounts masquerading as legitimate users create false equivalency, eroding public consensus on vaccination.
  • Trying to decode podcasts. Here’s my test, and here are the results from Google speech-to-text:
    • We were talking about the choices of who’s you can keep two of these three, I guess Adonis Alexander is along for the ride, huh? I thought I was about to I didn’t know I haven’t I haven’t sent it to him. Well, has he been out there? They might missing some guys got to hand. I kept thinking like if to say, they weren’t having these injuries. Like if they have like us to say, okay, but they have these reason iron Marshall and maybe he maybe he’s not available week one, but they don’t want to put them on IR prn’s things up. So maybe they have to add another running back like you so you have to create a roster spot I could imagine this is just speculation Alexander. Somehow gets the mysterious injury to put them on I are clearly my keys ready, right and they they would have five cornerbacks otherwise and you know, yeah, if you’re not going to be ready to go, but you may have to you know, go get okay. Yeah. I mean the he’s he’s a guy that I think is on based on these the way the wrong.
    • It’s pretty good as long as people aren’t stepping over each other verbally.
    • Good enough to try, I guess. Noisy data is life, right? Look for the bigger signal.
  • Here’s my current plan. It’s a half-assed first approach, but it should provide some insight.
    1. Download a season of a sports podcast and put each podcast into its own document. Here’s the tutorial for Speech-to-Text with REST
    2. Use Corpus Manager to convert to bag-of-words (BOW) and create an ignore list for common words like “the”
    3. Then read all the docs into LMN
    4. Then set the weight of each successive document (in time) so that its top
    5. Take the top ten words and save them to a file
    6. Try building a map
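A minimal Python sketch of steps 2 and 5: bag-of-words counts with an ignore list, then the top ten words per document. The ignore list and the documents below are made-up stand-ins, not the real Corpus Manager output:

```python
from collections import Counter

# stand-in ignore list for common words ("the", etc.)
IGNORE = {"the", "a", "and", "to", "of", "in", "is", "it"}

def top_words(doc: str, n: int = 10):
    """Bag-of-words count of one document, minus ignored words, top n by frequency."""
    words = [w for w in doc.lower().split() if w not in IGNORE]
    return [w for w, _ in Counter(words).most_common(n)]

# hypothetical transcribed episodes, one document each
docs = [
    "the ravens cut the cornerback and the ravens added a running back",
    "injuries to the cornerback mean the roster needs another running back",
]
for i, doc in enumerate(docs):
    print(i, top_words(doc))
```

The per-document word lists could then be saved to files and fed to the map-building step.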

Phil 8.23.18

7:00 – 5:30 ASRC MKT

  • Slides
    • Groups/tribes stay the same, but the topics change
    • Past polarizing topics:
      • Confederate statues
      • Kneeling for the national anthem
      • #blacklivesmatter
      • Hoodies
      • Crack cocaine
      • 1968 Olympics Black Power salute
      • Alabama bus boycott
    • Stiffening a group creates a stampede (In-group high SIH)
    • Adding group-invisible diversity disrupts the velocity and direction of a stampede
    • Arendt/Moscovici slide “So we’re doomed, right! Except…”
    • See what velocity of the disrupted stampede looks like
  • Why Trump Supporters Believe He Is Not Corrupt
    • The answer may lie in how Trump and his supporters define corruption. In a forthcoming book titled How Fascism Works, the Yale philosophy professor Jason Stanley makes an intriguing claim. “Corruption, to the fascist politician,” he suggests, “is really about the corruption of purity rather than of the law. Officially, the fascist politician’s denunciations of corruption sound like a denunciation of political corruption. But such talk is intended to evoke corruption in the sense of the usurpation of the traditional order.”
  • Climate science proposals are being reviewed by Ryan Zinke’s old football buddy. Seriously.
    • But what if the corruption isn’t hidden at all, but right out in the open? What if, when it’s identified, the perpetrator doesn’t apologize, or demonstrate any remorse or shame, and there’s no punishment? What then? We don’t really have good narratives around what happens in that situation, which is why the Trump administration so often leaves us sputtering and gawking. It can’t just be a motley collection of incompetent grifters, each misruling their own little fiefdom, trying to stay in their boss’s good graces, succeeding less through wits than a congenital lack of shame and the unstinting institutional support of GOP donors. Can it?

Phil 8.22.18

7:00 – 4:00 ASRC MKT

Phil 8.21.18

7:00 – 3:00 ASRC MKT

  • Rework the slides
    • Explicit introduction, lit review, methods, results, conclusion and discussion slides
    • Slide for the difference between opinion dynamics & consensus formation as a static end and as part of a dynamic process. (Tribe membership may be static, belief of the tribe is highly dynamic. It’s the story for the group)
    • Revisit stampede/flock/nomad slide in the conclusions
    • Lose the following slides:
      • Belief space
      • Theory slide – replace with a slide that breaks out the two knobs of dimension reduction and social influence horizons. The slide is called “the simple trick.” Explain how herding affects these knobs by presenting simple issues and making the network stiffer through weight and connection
    • Get rid of optical polarization
  • Fanning the Flames of Hate: Social Media and Hate Crime
    • This paper investigates the link between social media and hate crime using Facebook data. We study the case of Germany, where the recently emerged right-wing party Alternative für Deutschland (AfD) has developed a major social media presence. We show that right-wing anti-refugee sentiment on Facebook predicts violent crimes against refugees in otherwise similar municipalities with higher social media usage. To further establish causality, we exploit exogenous variation in major internet and Facebook outages, which fully undo the correlation between social media and hate crime. We further find that the effect decreases with distracting news events; increases with user network interactions; and does not hold for posts unrelated to refugees. Our results suggest that social media can act as a propagation mechanism between online hate speech and real-life violent crime.
  • Facebook is rating the trustworthiness of its users on a scale from zero to 1
    • Facebook has begun to assign its users a reputation score, predicting their trustworthiness on a scale from zero to 1.
    • Tessa Lyons, product manager who is in charge of fighting misinformation (video)
  • Social Science One
    • implements a new type of partnership between academic researchers and private industry to advance the goals of social science in understanding and solving society’s greatest challenges. The partnership enables academics to analyze the increasingly rich troves of information amassed by private industry in responsible and socially beneficial ways. It ensures the public maintains privacy while gaining societal value from scholarly research. And it enables firms to enlist the scientific community to help them produce social good, while protecting their competitive positions.
  • Lost Causes – Is this fashion in economic theory? (found via Twitter)
  • Poster printing – UMBC Commonvision

Phil 8.19.18

7:00 – 5:30 ASRC MKT

  • Had a thought that the incomprehension that comes from misalignment that Stephens shows resembles polarizing light. I need to add a slider that enables influence as a function of alignment. Done
    • Getting the direction cosine between the source and target belief
      double interAgentDotProduct = unitOrientVector.dotProduct(otherUnitOrientVector);
      // clamp to [-1, 1] so floating-point error can't push acos() outside its domain
      double cosTheta = Math.max(-1.0, Math.min(1.0, interAgentDotProduct));
      double beliefAlignment = Math.toDegrees(Math.acos(cosTheta));
      // normalize to [0, 1]: 1.0 = fully aligned, 0.0 = directly opposed
      double interAgentAlignment = (1.0 - beliefAlignment/180.0);
    • Adding a global variable that sets how much influence (0% – 100%) is accepted from an opposing agent. Just setting it to on/off, because the effects are actually pretty subtle
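For reference, a minimal Python sketch of the same alignment math, with a hypothetical global standing in for the opposing-influence setting (the names are mine, not the simulation's):

```python
import math

# hypothetical global: fraction of influence (0.0 to 1.0) accepted from an opposing agent
OPPOSING_INFLUENCE = 0.0  # effectively on/off, since the effects are subtle

def alignment(u, v):
    """Alignment in [0, 1] between two unit heading vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    cos_theta = max(-1.0, min(1.0, dot))          # clamp for acos
    degrees = math.degrees(math.acos(cos_theta))  # 0..180
    return 1.0 - degrees / 180.0

def influence_weight(u, v, threshold=0.5):
    """Full influence from roughly-aligned agents; gated influence from opposed ones."""
    a = alignment(u, v)
    return a if a >= threshold else a * OPPOSING_INFLUENCE
```

With `OPPOSING_INFLUENCE` at 0.0, agents more than 90 degrees apart in belief space simply don't influence each other.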
  • Add David’s contributions to slide one writeup – done
  • Start slide 2 writeup
  • Find casters for Dad’s walker
  • Submit forms for DME repair
    • Drat – I need the ECU number
  • Practice talk!
    • Need to reduce complexity and add clearly labeled sections, in particular methods
  • I need to start paying attention to attention
  • Also, keeping this on the list How social media took us from Tahrir Square to Donald Trump by Zeynep Tufekci
  • Social Identity Threat Motivates Science-Discrediting Online Comments
    • Experiencing social identity threat from scientific findings can lead people to cognitively devalue the respective findings. Three studies examined whether potentially threatening scientific findings motivate group members to take action against the respective findings by publicly discrediting them on the Web. Results show that strongly (vs. weakly) identified group members (i.e., people who identified as “gamers”) were particularly likely to discredit social identity threatening findings publicly (i.e., studies that found an effect of playing violent video games on aggression). A content analytical evaluation of online comments revealed that social identification specifically predicted critiques of the methodology employed in potentially threatening, but not in non-threatening research (Study 2). Furthermore, when participants were collectively (vs. self-) affirmed, identification did no longer predict discrediting posting behavior (Study 3). These findings contribute to the understanding of the formation of online collective action and add to the burgeoning literature on the question why certain scientific findings sometimes face a broad public opposition.

Phil 8.18.18

This looks good:

  • Created almost 25 years ago, when the web was in its infancy, Propaganda Critic is dedicated to promoting techniques of propaganda analysis among critically minded citizens.

    In 2018, realizing that traditional approaches to propaganda analysis were not well-suited for making sense out of our contemporary political crisis, we completely overhauled Propaganda Critic to take into account the rise of ‘computational propaganda.’ In addition to updating all of the original content, we added nearly two dozen new articles exploring the rise of computational propaganda, explaining recent research on cognitive biases that influence how we interpret and retain information, and presenting recent case studies of how propaganda techniques have been used to disrupt democracy around the world.

Continuing to work on the SASO writeup – it’s coming along. Slower than I’d like…

This is just too good:

  • Data Organization in Spreadsheets
    • Spreadsheets are widely used software tools for data entry, storage, analysis, and visualization. Focusing on the data entry and storage aspects, this article offers practical recommendations for organizing spreadsheet data to reduce errors and ease later analyses. The basic principles are: be consistent, write dates like YYYY-MM-DD, do not leave any cells empty, put just one thing in a cell, organize the data as a single rectangle (with subjects as rows and variables as columns, and with a single header row), create a data dictionary, do not include calculations in the raw data files, do not use font color or highlighting as data, choose good names for things, make backups, use data validation to avoid data entry errors, and save the data in plain text files.
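A tiny sketch of a few of those principles in practice (ISO dates, one value per cell, a single rectangle with one header row, plain-text output); the data here is made up:

```python
import csv
import io

# one rectangle: single header row, subjects as rows, variables as columns,
# YYYY-MM-DD dates, one thing per cell, no calculations in the raw data
rows = [
    ["subject_id", "visit_date", "weight_kg"],
    ["S001", "2018-08-17", "72.5"],
    ["S002", "2018-08-18", "81.0"],
]
buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue())
```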

Phil 8.17.18

7:00 – 4:30 ASRC MKT

Phil 8.16.18

7:00 – 4:30 ASRC MKT

  • R2D3 is an experiment in expressing statistical thinking with interactive design. Find us at @r2d3us
  • Foundations of Temporal Text Networks
    • Davide Vega (Scholar)
    • Matteo Magnani (Scholar)
    • Three fundamental elements to understand human information networks are the individuals (actors) in the network, the information they exchange, that is often observable online as text content (emails, social media posts, etc.), and the time when these exchanges happen. An extremely large amount of research has addressed some of these aspects either in isolation or as combinations of two of them. There are also more and more works studying systems where all three elements are present, but typically using ad hoc models and algorithms that cannot be easily transferred to other contexts. To address this heterogeneity, in this article we present a simple, expressive and extensible model for temporal text networks, that we claim can be used as a common ground across different types of networks and analysis tasks, and we show how simple procedures to produce views of the model allow the direct application of analysis methods already developed in other domains, from traditional data mining to multilayer network mining.
      • Ok, I’ve been reading the paper and if I understand it correctly, it’s pretty straightforward and also clever. It relates a lot to the way that I do term-document matrices, and then extends the concept to include time, agents, and implicitly anything you want to. To illustrate, here’s a picture of a tensor-as-matrix (tensorIn2D). The important thing to notice is that there are multiple dimensions represented in a square matrix. We have:
        • agents
        • documents
        • terms
        • steps
      • This picture in particular is of an undirected adjacency matrix, but I think there are ways to handle in-degree and out-degree, though that’s probably better handled by having one matrix for in-degree and one for out-degree.
      • Because it’s a square matrix, we can calculate the steps between any nodes in the matrix, and the centrality, simply by squaring the matrix and keeping track of the steps until the eigenvector settles. We can also weight nodes by multiplying that node’s row and column by a scalar. That changes the centrality, but not the connectivity. We can also drop out components (steps, for example) to see how that changes the underlying network properties.
      • If we want to see how time affects the development of the network, we can start with all the step nodes set to a zero weight, then add them in sequentially. This means, for example, that clustering could be performed on the nonzero nodes.
      • Some or all of the elements could be factorized using NMF, resulting in smaller, faster matrices.
      • Network embedding could be useful too. We get distances between nodes. And this looks really important: Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and node2vec
      • I think I can use any and all of the above methods on the network tensor I’m describing. This is very close to a mapping solution.
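The squaring-until-the-eigenvector-settles idea can be sketched with power iteration on a small undirected adjacency matrix (a pure-Python sketch over a made-up 4-node graph, not the paper's data):

```python
def mat_vec(m, v):
    """Multiply a square matrix by a vector."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def centrality(adj, iters=100):
    """Eigenvector centrality: repeated multiplication until the vector settles."""
    v = [1.0] * len(adj)
    for _ in range(iters):
        v = mat_vec(adj, v)
        norm = max(abs(x) for x in v)
        v = [x / norm for x in v]
    return v

# made-up undirected adjacency over mixed node types (agents, documents, terms, steps)
adj = [
    [0, 1, 1, 1],  # node 0 connects to everything
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
]
print(centrality(adj))
```

Weighting a node would mean scaling its row and column before iterating, which shifts the centrality without changing which nodes are connected.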
  • The Shifting Discourse of the European Central Bank: Exploring Structural Space in Semantic Networks (cited by the above paper)
    • Convenient access to vast and untapped collections of documents generated by organizations is a valuable resource for research. These documents (e.g., Press releases, reports, speech transcriptions, etc.) are a window into organizational strategies, communication patterns, and organizational behavior. However, the analysis of such large document corpora does not come without challenges. Two of these challenges are 1) the need for appropriate automated methods for text mining and analysis and 2) the redundant and predictable nature of the formalized discourse contained in these collections of texts. Our article proposes an approach that performs well in overcoming these particular challenges for the analysis of documents related to the recent financial crisis. Using semantic network analysis and a combination of structural measures, we provide an approach that proves valuable for a more comprehensive analysis of large and complex semantic networks of formal discourse, such as the one of the European Central Bank (ECB). We find that identifying structural roles in the semantic network using centrality measures jointly reveals important discursive shifts in the goals of the ECB which would not be discovered under traditional text analysis approaches.
  • Comparative Document Analysis for Large Text Corpora
    • This paper presents a novel research problem, Comparative Document Analysis (CDA), that is, joint discovery of commonalities and differences between two individual documents (or two sets of documents) in a large text corpus. Given any pair of documents from a (background) document collection, CDA aims to automatically identify sets of quality phrases to summarize the commonalities of both documents and highlight the distinctions of each with respect to the other informatively and concisely. Our solution uses a general graph-based framework to derive novel measures on phrase semantic commonality and pairwise distinction, where the background corpus is used for computing phrase-document semantic relevance. We use the measures to guide the selection of sets of phrases by solving two joint optimization problems. A scalable iterative algorithm is developed to integrate the maximization of phrase commonality or distinction measure with the learning of phrase-document semantic relevance. Experiments on large text corpora from two different domains—scientific papers and news—demonstrate the effectiveness and robustness of the proposed framework on comparing documents. Analysis on a 10GB+ text corpus demonstrates the scalability of our method, whose computation time grows linearly as the corpus size increases. Our case study on comparing news articles published at different dates shows the power of the proposed method on comparing sets of documents.
  • Social and semantic coevolution in knowledge networks
    • Socio-semantic networks involve agents creating and processing information: communities of scientists, software developers, wiki contributors and webloggers are, among others, examples of such knowledge networks. We aim at demonstrating that the dynamics of these communities can be adequately described as the coevolution of a social and a socio-semantic network. More precisely, we will first introduce a theoretical framework based on a social network and a socio-semantic network, i.e. an epistemic network featuring agents, concepts and links between agents and between agents and concepts. Adopting a relevant empirical protocol, we will then describe the joint dynamics of social and socio-semantic structures, at both macroscopic and microscopic scales, emphasizing the remarkable stability of these macroscopic properties in spite of a vivid local, agent-based network dynamics.
  • Tensorflow 2.0 feedback request
    • Shortly, we will hold a series of public design reviews covering the planned changes. This process will clarify the features that will be part of TensorFlow 2.0, and allow the community to propose changes and voice concerns. Please join developers@tensorflow.org if you would like to see announcements of reviews and updates on process. We hope to gather user feedback on the planned changes once we release a preview version later this year.

Phil 8.12.18

7:00 – 4:00 ASRC MKT

  • Having an interesting chat on recommenders with Robin Berjon on Twitter
  • Long, but looks really good Neural Processes as distributions over functions
    • Neural Processes (NPs) caught my attention as they essentially are a neural network (NN) based probabilistic model which can represent a distribution over stochastic processes. So NPs combine elements from two worlds:
      • Deep Learning – neural networks are flexible non-linear functions which are straightforward to train
      • Gaussian Processes – GPs offer a probabilistic framework for learning a distribution over a wide class of non-linear functions

      Both have their advantages and drawbacks. In the limited data regime, GPs are preferable due to their probabilistic nature and ability to capture uncertainty. This differs from (non-Bayesian) neural networks which represent a single function rather than a distribution over functions. However the latter might be preferable in the presence of large amounts of data as training NNs is computationally much more scalable than inference for GPs. Neural Processes aim to combine the best of these two worlds.

  • How The Internet Talks (Well, the mostly young and mostly male users of Reddit, anyway)
    • To get a sense of the language used on Reddit, we parsed every comment since late 2007 and built the tool above, which enables you to search for a word or phrase to see how its popularity has changed over time. We’ve updated the tool to include all comments through the end of July 2017.
  • Add breadcrumbs to slides
  • Download videos – done! Put these in the ppt backup
  • Fix the DTW emergent population chart on the poster and in the slides. Print!
  • Set up the LaTex Army BAA framework
  • Olsson
  • Slide walkthrough. Good timing. Working on the poster some more

Phil 8.10.18

7:00 – ASRC MKT

  • Finished the first pass through the SASO slides. Need to start working on timing (25 min + 5 min questions)
  • Start on poster (A0 size)
  • Sent Wayne a note to get permission for 899
  • Started setting up laptop. I hate this part. Google drive took hours to synchronize
    • Java
    • Python/Nvidia/Tensorflow
    • Intellij
    • Visual Studio
    • MikTex
    • TexStudio
    • Xampp
    • Vim
    • TortoiseSVN
    • WinSCP
    • 7-zip
    • Creative Cloud
      • Acrobat
      • Reader
      • Illustrator
      • Photoshop
    • Microsoft suite
    • Express VPN

Phil 8.6.18

7:00 – 5:00 ASRC CONF

  • Heard about this on the Ted Radio Hour: Crisis Trends
    • Crisis Trends empowers journalists, researchers, school administrators, parents, and all citizens to understand the crises their communities face so we can work together to prevent future crises. Crisis Trends was originally funded by the Robert Wood Johnson Foundation.
  • Committee talk today!
    • Tweaked the flowchart slides
    • Added pix to either end of the “model(?)” slide showing that the amount of constraint is maximum at either end. On the nomadic side, the environment is the constraint. Imagine a solitary activity in a location so dangerous that any false move would result in death or injury. Think of free climbing.
    • On the other end of the spectrum is the maximum social constraint of totalitarianism, which is summed up nicely in this play on the constitutional basis for English law, “Everything not forbidden is allowed,” by T. H. White.
    • The presentation went pretty well. There is a consensus that I should look for existing sources of discussions that reach consensus. Since this has to be a repeated discussion about the same topic, I think that sports are the only real option.
  • Added a slide on tracking changes to the Latex presentation slides for next week
  • Amusing ourselves to Trump
    • The point of Amusing Ourselves to Death is that societies are molded by the technologies atop which they communicate. Oral cultures teach us to be conversational, typographic cultures teach us to be logical, televised cultures teach us that everything is entertainment. So what is social media culture teaching us?
  • It’s Looking Extremely Likely That QAnon Is A Leftist Prank On Trump Supporters
    • There’s a growing group of Trump supporters who are convinced that the president is secretly trying to save the world from a global pedophilia ring.

Phil 8.3.18

7:00 – 3:30 ASRC MKT

  • Slides and walkthrough – done!
  • Ramping up on SASO
  • Textricator is a tool for extracting text from computer-generated PDFs and generating structured data (CSV or JSON). If you have a bunch of PDFs with the same format (or one big, consistently formatted PDF) and you want to extract the data to CSV or JSON, Textricator can help! It can even work on OCR’ed documents!
  • LSTM links for getting back to things later
  • Who handles misinformation outbreaks?
    • Misinformation attacks— the deliberate and sustained creation and amplification of false information at scale — are a problem. Some of them start as jokes (the ever-present street sharks in disasters) or attempts to push an agenda (e.g. right-wing brigading); some are there to make money (the “Macedonian teens”), or part of ongoing attempts to destabilise countries including the US, UK and Canada (e.g. Russia’s Internet Research Agency using troll and bot amplification of divisive messages).

      Enough people are writing about why misinformation attacks happen, what they look like and what motivates attackers. Fewer people are actively countering attacks. Here are some of them, roughly categorised as:

      • Journalists and data scientists: Make misinformation visible
      • Platforms and governments: Reduce misinformation spread
      • Communities: Directly engage misinformation
      • Adtech: Remove or reduce misinformation rewards

Phil 8.2.18

7:00 – 5:00 ASRC MKT

  • Joshua Stevens (Scholar)
    • At Penn State I researched cartography and geovisual analytics with an emphasis on human-computer interaction, interactive affordances, and big data. My work focused on new forms of map interaction made possible by well constructed visual cues.
  • A Computational Analysis of Cognitive Effort
    • Cognitive effort is a concept of unquestionable utility in understanding human behaviour. However, cognitive effort has been defined in several ways in literature and in everyday life, suffering from a partial understanding. It is common to say “Pay more attention in studying that subject” or “How much effort did you spend in resolving that task?”, but what does it really mean? This contribution tries to clarify the concept of cognitive effort, by introducing its main influencing factors and by presenting a formalism which provides us with a tool for precise discussion. The formalism is implementable as a computational concept and can therefore be embedded in an artificial agent and tested experimentally. Its applicability in the domain of AI is raised and the formalism provides a step towards a proper understanding and definition of human cognitive effort.
  • Efficient Neural Architecture Search with Network Morphism
    • While neural architecture search (NAS) has drawn increasing attention for automatically tuning deep neural networks, existing search algorithms usually suffer from expensive computational cost. Network morphism, which keeps the functionality of a neural network while changing its neural architecture, could be helpful for NAS by enabling a more efficient training during the search. However, network morphism based NAS is still computationally expensive due to the inefficient process of selecting the proper morph operation for existing architectures. As we know, Bayesian optimization has been widely used to optimize functions based on a limited number of observations, motivating us to explore the possibility of making use of Bayesian optimization to accelerate the morph operation selection process. In this paper, we propose a novel framework enabling Bayesian optimization to guide the network morphism for efficient neural architecture search by introducing a neural network kernel and a tree-structured acquisition function optimization algorithm. With Bayesian optimization to select the network morphism operations, the exploration of the search space is more efficient. Moreover, we carefully wrapped our method into an open-source software, namely Auto-Keras for people without rich machine learning background to use. Intensive experiments on real-world datasets have been done to demonstrate the superior performance of the developed framework over the state-of-the-art baseline methods.
  • I think I finished the Dissertation Review slides. Walkthrough tomorrow!

Phil 8.1.18

7:00 – 6:00 ASRC MKT

  • I need to add some things to both talks
    • Use Stephens to show how we can build vectors out of ‘positions’ in high-dimension space, and then measure distances (hypotenuse, cosine similarity, etc.). Also, how the use of stories shows alignment over time and creates a trajectory – done
    • Add slide that shows the spectrum from low-dimensional social space to high-dimensional environmental space.
      • Aligning in social spaces is easier because we negotiate the terrain we interact on
      • Aligning in environmental spaces is harder because there is no negotiation
    • Add slides for each of the main parts
      • Social influence
      • Dimension Reduction
      • Heading
      • Velocity
      • State (what we tend to think about)
    • Add demo slide that walks through each part of the demo – done
      • Single population with different SIH
      • Small explorer population interacting with stampeding groups
      • Adversarial Herding
      • Opposed AH
      • Map building
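The position-vs-heading distinction behind the first bullet can be made concrete: Euclidean distance (the "hypotenuse") compares positions, while cosine similarity compares headings regardless of magnitude. A minimal sketch with made-up vectors:

```python
import math

def euclidean(u, v):
    """Straight-line ("hypotenuse") distance between two positions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine_similarity(u, v):
    """Alignment of two headings, independent of magnitude (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# two made-up belief 'positions' in a 3-dimensional space
u, v = (1.0, 2.0, 3.0), (2.0, 4.0, 6.0)
print(euclidean(u, v))          # nonzero: different positions
print(cosine_similarity(u, v))  # same heading, despite the distance
```

Two agents can be far apart in position yet perfectly aligned in heading, which is the distinction the slides need to carry.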
  • Capturing the interplay of dynamics and networks through parameterizations of Laplacian operators
    • We study the interplay between a dynamical process and the structure of the network on which it unfolds using the parameterized Laplacian framework. This framework allows for defining and characterizing an ensemble of dynamical processes on a network beyond what the traditional Laplacian is capable of modeling. This, in turn, allows for studying the impact of the interaction between dynamics and network topology on the quality-measure of network clusters and centrality, in order to effectively identify important vertices and communities in the network. Specifically, for each dynamical process in this framework, we define a centrality measure that captures a vertex’s participation in the dynamical process on a given network and also define a function that measures the quality of every subset of vertices as a potential cluster (or community) with respect to this process. We show that the subset-quality function generalizes the traditional conductance measure for graph partitioning. We partially justify our choice of the quality function by showing that the classic Cheeger’s inequality, which relates the conductance of the best cluster in a network with a spectral quantity of its Laplacian matrix, can be extended to the parameterized Laplacian. The parameterized Laplacian framework brings under the same umbrella a surprising variety of dynamical processes and allows us to systematically compare the different perspectives they create on network structure.
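The paper's subset-quality function generalizes conductance. As a reference point, a minimal sketch of the classic (unparameterized) conductance of a vertex subset, cut edges divided by the smaller side's volume, on a made-up graph of two triangles joined by a bridge:

```python
def conductance(adj, subset):
    """cut(S, complement) / min(vol(S), vol(complement)) on an undirected adjacency matrix."""
    n = len(adj)
    s = set(subset)
    cut = sum(adj[i][j] for i in s for j in range(n) if j not in s)
    vol_s = sum(adj[i][j] for i in s for j in range(n))
    vol_rest = sum(adj[i][j] for i in range(n) if i not in s for j in range(n))
    return cut / min(vol_s, vol_rest)

# two triangles (0-1-2 and 3-4-5) joined by a single bridge edge 2-3
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
print(conductance(adj, {0, 1, 2}))  # low value: the triangle is a good cluster
```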

Phil 7.31.18

7:00 – 6:00 ASRC MKT

  • Thinking that I need to forward the opinion dynamics part of the work. How heading differs from position and why that matters
  • Found a nice adversarial herding chart on Brexit from The Economist
  • Why Do People Share Fake News? A Sociotechnical Model of Media Effects
    • Fact-checking sites reflect fundamental misunderstandings about how information circulates online, what function political information plays in social contexts, and how and why people change their political opinions. Fact-checking is in many ways a response to the rapidly changing norms and practices of journalism, news gathering, and public debate. In other words, fact-checking best resembles a movement for reform within journalism, particularly in a moment when many journalists and members of the public believe that news coverage of the 2016 election contributed to the loss of Hillary Clinton. However, fact-checking (and another frequently-proposed solution, media literacy) is ineffectual in many cases and, in other cases, may cause people to “double-down” on their incorrect beliefs, producing a backlash effect.
  • Epistemology in the Era of Fake News: An Exploration of Information Verification Behaviors among Social Networking Site Users
    • Fake news has recently garnered increased attention across the world. Digital collaboration technologies now enable individuals to share information at unprecedented rates to advance their own ideologies. Much of this sharing occurs via social networking sites (SNSs), whose members may choose to share information without consideration for its authenticity. This research advances our understanding of information verification behaviors among SNS users in the context of fake news. Grounded in literature on the epistemology of testimony and theoretical perspectives on trust, we develop a news verification behavior research model and test six hypotheses with a survey of active SNS users. The empirical results confirm the significance of all proposed hypotheses. Perceptions of news sharers’ network (perceived cognitive homogeneity, social tie variety, and trust), perceptions of news authors (fake news awareness and perceived media credibility), and innate intentions to share all influence information verification behaviors among SNS members. Theoretical implications, as well as implications for SNS users and designers, are presented in the light of these findings.
  • Working on plan diagram – done
  • Organizing PhD slides. I think I’m getting near finished
  • Walked through slides with Aaron. Need to practice the demo. A lot.